“The reality is we’re creating God.”
An interview with the Silicon Valley supergeek who believes we face an apocalyptic threat from AI
(Paywall, but I'll dig up some quotes from other sites)
https://www.thetimes.co.uk/article/can-this-man-save-the-world-from-artificial-intelligence-329dd6zvd?utm_source=twitter&utm_campaign=cc&utm_medium=branded_social
There’s no shortage of AI fearmongers in the tech industry — Elon Musk has repeatedly warned the world about the dangers of AI someday conquering humanity, for example. But that kind of speculative outlook somewhat glosses over the real hazards and harms linked to the AI we’ve already built.
For instance, facial recognition and predictive policing algorithms have caused real harm in underserved communities. Countless algorithms out there continue to propagate and codify institutional racism across the board. Those are problems that can be solved through oversight and regulation — but you wouldn’t know that if you, like Gawdat, think of AI development as the inevitable birth of a vengeful god.
The Egyptian-born entrepreneur was struck by a terrifying revelation about the future of AI after witnessing an eerie moment in Google's R&D labs.
The epiphany came after he saw AI developers collaborating with Google X on dexterous robotic arms.
After what he described as slow progress, one day he witnessed a robotic arm reach down, pick up a ball and hold it up to the researchers.
Even more eerily, Mr Gawdat claimed every single arm could replicate the manoeuvre, and after another two days the arms could pick up just about anything.
Speaking to The Times, he said: "And I suddenly realised, this is really scary.
"Like we had those things for a week. And they're doing what children will take two years to do.
"And then it hit me that they are children. But very, very fast children."
The key difference, he argued, is that machines, even at a very basic level of intelligence, have the potential to learn incredibly quickly.
He added: "The reality is, we're creating God."
When Terminator 2: Judgment Day hit the silver screen in 1991, the film envisioned a dark, post-apocalyptic future in which smart machines ruled the Earth.
In the film, a rogue artificial intelligence known as Skynet has overthrown its human masters and waged a deadly war to wipe humans off the face of the planet.
Arnold Schwarzenegger's Terminator character famously says in the film: "Three billion human lives ended on August 29, 1997.
"The survivors of the nuclear fire called the war Judgment Day.
"They lived only to face a new nightmare: the war against the machines."
Last year Gawdat claimed it would take AI less than five years to overtake humanity.
He said: "My assessment about why AI is overlooked by very smart people is that very smart people do not think a computer can ever be as smart as they are. And this is hubris and obviously false.
"We’re headed toward a situation where AI is vastly smarter than humans and I think that time frame is less than five years from now."
He has also expressed his fears about the line between AI and human consciousness becoming increasingly blurred before the end of the century.