‘Godfather’ Geoffrey Hinton warns of AI-driven extinction in next 30 years: ‘Evolution allowed baby to control mother…’

Source: Live Mint

Geoffrey Hinton, the British-Canadian computer scientist widely regarded as the “godfather” of artificial intelligence (AI), has raised alarm bells over the potential risks of AI development. In a recent interview on BBC Radio 4’s Today programme, Hinton said the likelihood of AI leading to human extinction within the next three decades has risen to between 10% and 20%.

Hinton Flags Rapid Advancements in AI

Asked on BBC Radio 4’s Today programme if he had changed his analysis of a potential AI apocalypse and the one in 10 chance of it happening, Hinton said: “Not really, 10% to 20%.”

Hinton’s estimate prompted Today’s guest editor, the former chancellor Sajid Javid, to say “you’re going up”, to which Hinton replied: “If anything. You see, we’ve never had to deal with things more intelligent than ourselves before.”

Pressing his concerns about the influence of AI, Hinton added: “And how many examples do you know of a more intelligent thing being controlled by a less intelligent thing? There are very few examples. There’s a mother and baby. Evolution put a lot of work into allowing the baby to control the mother, but that’s about the only example I know of.”

Human Intelligence Compared to AI

London-born Hinton, a professor emeritus at the University of Toronto, said humans would be like toddlers compared with the intelligence of highly powerful AI systems.

“I like to think of it as: imagine yourself and a three-year-old. We’ll be the three-year-olds,” he said.

AI can be loosely defined as computer systems performing tasks that typically require human intelligence.

Hinton’s Resignation from Google

Geoffrey Hinton made headlines last year when he resigned from his position at Google, allowing him to speak more freely about the dangers posed by unregulated AI development.

He expressed concerns that “bad actors” could exploit AI technologies for harmful purposes. This sentiment aligns with broader fears within the AI safety community regarding the emergence of artificial general intelligence (AGI), which could pose existential threats by evading human control.

Reflecting on his career and the trajectory of AI, Hinton remarked, “I didn’t think it would be where we [are] now. I thought at some point in the future we would get here.” His apprehensions have gained traction as experts predict that AI could surpass human intelligence within the next two decades—a prospect he described as “very scary.”

Hinton Stresses Need for AI Regulation

To mitigate these risks, Hinton advocates for government regulation of AI technologies.

He argued that relying solely on profit-driven companies is insufficient to ensure safety: “The only thing that can force those big companies to do more research on safety is government regulation.”
