Stephen Fry explains why artificial intelligence 'poses a 70% risk of wiping out humanity'


Aside from his comedy, drama, and literary work, Stephen Fry is widely known for his love of technology: he once wrote a Guardian column on the subject called "Dork Talk." In its first installment, he established his credentials by claiming to have owned the second Macintosh computer sold in Europe ("Douglas Adams bought the first") and never to have "met a smartphone he hadn't bought." But like many of us who were "ignorant of all things digital" at the end of the last century and the beginning of this one, Fry seems skeptical of certain big-tech projects underway today, doubts he airs in the video above.

His skepticism pertains to artificial intelligence in general, and to the "logical AI subgoals of survival, deception, and power" in particular. Even at this relatively early stage of development, we have witnessed AI systems that seem too good at their jobs, to the point of engaging in acts that would be considered deceptive and unethical if the subject were human. (Fry gives the example of a stock-market-investing AI that engaged in insider trading and then lied about it.) Moreover, "as AI agents take on more complex tasks, they create strategies and subgoals that are invisible to us because they are hidden in billions of parameters," and quasi-evolutionary "selection pressures also cause AI to circumvent safeguards."

"Right now we're creating creepy, super-competent, amoral psychopaths who don't sleep, who think way faster than us, who can make copies of themselves, and who lack any sense of humanity whatsoever," Fry quotes MIT physicist and machine-learning researcher Max Tegmark as saying in the video. In a competition between AIs, warns computer scientist Geoffrey Hinton, "the more self-preserving ones will win, the more aggressive ones will win, and we'll have all the problems that we have with excited chimps." Hinton's colleague Stuart Russell explains that "we need to worry about machines not because they are conscious, but because they are capable. They may take preemptive action to help them achieve the objectives we give them," and those actions may not give sufficient consideration to human lives.

Would it be better to shut the whole enterprise down? Fry cites philosopher Nick Bostrom, who argues that "it may be a mistake to stop AI development, because humanity could eventually be destroyed by another problem that AI might have prevented." This seems to prescribe a deliberately cautious form of development, yet "almost all of the AI research funding, which amounts to hundreds of billions of dollars per year, is driving functionality for profit, and safety efforts are tiny in comparison." Although "we don't know whether we can maintain control of a superintelligence," we can still "point it in the right direction, rather than rushing to create one with no moral compass and no clear reason to kill us." As the saying goes, the mind is a good servant but a terrible master. The same, the example of AI reminds us, is true of the mind's creations.

Related content:

Stephen Fry voices a new dystopian short film about artificial intelligence and simulation theory: Watch Escape

Stephen Fry reads Nick Cave’s moving letter on ChatGPT and human creativity: “We are fighting for the soul of the world”

Stephen Fry explains cloud computing in a short animated video

Stephen Fry tells the story of Johannes Gutenberg and the first printing press

Stephen Fry on the power of words in Nazi Germany: How dehumanizing language laid the foundation for genocide

Neural Networks for Machine Learning: A Free Online Course with Geoffrey Hinton

Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the Substack newsletter Books on Cities and the book The Stateless City: A Walk through 21st-Century Los Angeles. Follow him on Twitter at @colinmarshall or on Facebook.
