Are we on the cusp of AGI, as Musk suggests? There’s reason for doubt

Source: Live Mint

The event horizon is the boundary that marks the outer edge of a black hole – the point beyond which nothing can escape, not even light. The AI singularity refers to the moment when artificial intelligence (AI) surpasses human intelligence, triggering rapid, unpredictable technological growth; the level of machine capability associated with it is known as artificial general intelligence, or AGI. By invoking the event horizon, Musk is suggesting that the world is on the cusp of AGI.

His post comes at a time when big tech companies including OpenAI, Google, Meta, Microsoft, DeepSeek, and Musk's own xAI are bending over backwards to promote their reasoning models, also known as chain-of-thought models. Chain-of-thought models show their intermediate reasoning steps, improving transparency and accuracy on complex tasks; non-chain-of-thought models, by contrast, remain common in simpler AI tasks such as image recognition or basic chatbot replies.
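
To make the distinction concrete, here is a minimal, hypothetical sketch in Python contrasting the two prompting styles. The sample question, the prompt wording, and the stand-in `ask` function are illustrative assumptions, not any vendor's actual API.

```python
# Illustrative sketch: a direct prompt versus a chain-of-thought prompt.
# ask() is a stand-in for a call to any hosted language model; it is an
# assumption for illustration, not a real library function.

def ask(prompt: str) -> str:
    """Placeholder for a chat-completion call to a model provider."""
    raise NotImplementedError("wire this up to your model provider")

question = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)

# Non-chain-of-thought: ask for the answer alone. Fast, but the model's
# reasoning stays opaque and errors are harder to catch.
direct_prompt = f"{question}\nReply with only the final answer."

# Chain-of-thought: ask the model to show intermediate reasoning steps
# before answering, which improves transparency and tends to improve
# accuracy on multi-step problems, at the cost of more tokens and time.
cot_prompt = (
    f"{question}\nThink through the problem step by step, "
    "then state the final answer."
)

print(direct_prompt)
print("---")
print(cot_prompt)
```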

As an example, xAI launched its new Grok 3 model on 18 February, which is said to have used 10x more compute than the previous-generation model and will compete with OpenAI's GPT-4o and Google's Gemini 2.0 Pro. These 'reasoning' models differ from 'pre-trained' ones in that they are meant to mimic human-like thinking: they take a little more time to respond to a query but are generally more useful for answering complex questions.

“We at xAI believe (a) pre-trained model is not enough. That’s not enough to build the best AI but the best AI needs to think like a human…,” the xAI team said during the launch.

What exactly is AGI?

Those bullish on AI and generative AI (GenAI) continue to list multiple reasons to convince us that the technology will help society, but conveniently gloss over the limitations and legitimate reservations that sceptics raise.

On the other hand, those who fear the misuse of AI and GenAI go to the other extreme, focusing only on the limitations: hallucinations, deepfakes, plagiarism and copyright violations, the risk to human jobs, heavy power consumption, and a perceived lack of return on investment (ROI).

A group of experts including Yann LeCun, Fei-Fei Li (often called the 'godmother' of AI), and Andrew Ng believes that AI is nowhere close to becoming sentient (read: AGI). They underscore that AI's benefits – powering smartphones, driverless vehicles, low-cost satellites and chatbots, and providing flood forecasts and warnings – far outweigh its perceived risks.

Another AI expert, Mustafa Suleyman, who is CEO of Microsoft AI (earlier co-founder and CEO of Inflection AI, and co-founder of Alphabet unit DeepMind), suggests using Artificial Capable Intelligence (ACI) as a measure of an AI model’s ability to perform complex tasks independently.

They should know what they are talking about. LeCun (now chief AI scientist at Meta), Geoffrey Hinton and Yoshua Bengio received the 2018 Turing Award, often called the 'Nobel Prize of Computing', and all three are referred to as the 'Godfathers of AI'.

Li was chief scientist of AI at Google Cloud, while Ng headed Google Brain and was chief scientist at Baidu before co-founding Coursera and starting DeepLearning.AI.

However, AI experts including Hinton and Bengio, along with the likes of Musk and Masayoshi Son, CEO of SoftBank, insist that the phenomenal growth of GenAI models indicates that machines with AGI will soon think and act like humans.

The fear is that, if left unregulated, machines with AGI could evolve on their own into Skynet-like systems that achieve the AI singularity (some also use the term artificial super intelligence, or ASI) and outsmart us, or even wage war against us, as depicted in the sci-fi movies I, Robot and The Creator. Son has said that ASI would be realised within 20 years and would surpass human intelligence by a factor of 10,000.

AI agentic systems add to the concern, since these models are capable of autonomous decision-making and action in pursuit of specific goals, which means they can work without human intervention. They typically exhibit key characteristics such as autonomy, adaptability, decision-making, and learning, as the sketch below illustrates.
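
As a rough illustration of what that means in practice, the Python below shows the observe-decide-act loop at the core of many agentic systems. The toy environment, goal, and stopping rule are simplified assumptions for illustration, not any particular product's design.

```python
# A minimal, hypothetical agentic loop: the agent repeatedly observes its
# environment, decides on an action toward its goal, and acts, with no
# human in the loop. Real systems wrap planning, memory, tool use, and
# safety checks around this same skeleton.

from dataclasses import dataclass

@dataclass
class State:
    goal: int       # target value the agent is trying to reach
    value: int = 0  # current progress in the toy environment

def observe(state: State) -> int:
    """Autonomy: the agent reads the environment itself."""
    return state.goal - state.value

def decide(gap: int) -> int:
    """Decision-making: choose a step size based on what was observed."""
    return 1 if gap > 0 else 0

def act(state: State, step: int) -> None:
    """Action: change the environment toward the goal."""
    state.value += step

state = State(goal=3)
while observe(state) > 0:  # the loop runs without human intervention
    act(state, decide(observe(state)))

print(f"goal reached: value={state.value}")
```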

Google, for instance, recently introduced Gemini 2.0 – a year after it introduced Gemini 1.0.

“Our next era of models (are) built for this new agentic era,” CEO Sundar Pichai said in a recent blog.

Hinton reiterated in a recent interview on BBC Radio 4’s Today programme that the likelihood of AI leading to human extinction within the next three decades has increased to 10-20%. According to him, humans would be like toddlers compared with the intelligence of highly powerful AI systems.

“I like to think of it as: imagine yourself and a three-year-old. We’ll be the three-year-olds,” he said. Hinton quit his job at Google in May 2023 to warn the world about the dangers of AI technologies.

10 tasks

Some experts have even placed money on the advent of AGI. For instance, in a 30 December newsletter titled 'Where will AI be at the end of 2027? A bet', Gary Marcus (author, scientist, and noted AI sceptic) and Miles Brundage (an independent AI policy researcher who recently left OpenAI and is bullish on AI's progress) wrote: “…If there exist AI systems that can perform 8 of the 10 tasks below by the end of 2027, as determined by our panel of judges, Gary will donate $2,000 to a charity of Miles’ choice; if AI can do fewer than 8, Miles will donate $20,000 to a charity of Gary’s choice….”

The 10 tasks span a range of creative, analytical, and technical work: understanding new movies and novels deeply, summarising them with nuance, and answering detailed questions about plot, characters, and conflicts; and writing accurate biographies, persuasive legal briefs, and large-scale, bug-free code, all without errors or reliance on fabrication.

The bet extends to AI models mastering video games, solving in-game puzzles, independently writing Pulitzer Prize-worthy books and Oscar-calibre screenplays, and making paradigm-shifting scientific discoveries. Finally, it involves translating complex mathematical proofs into symbolic forms for verification – a transformative ability to excel across diverse fields with minimal human input.

Elusive empathy, emotional quotient

The fact remains that most companies are still testing GenAI tools and AI agents rather than using them for full-scale production work, because of inherent limitations such as hallucinations (when these models confidently produce wrong information), biases, copyright issues, intellectual property and trademark violations, poor data quality, power guzzling and, more importantly, a lack of clear ROI.

Even so, as AI models get more efficient with every passing day, many of us wonder when AI will surpass humans. In many areas, AI models already have, but they certainly cannot think or emote like humans.

Perhaps they never will, or may not need to, since machines are likely to “evolve” and “think” differently. DeepMind's proposed framework for classifying the capabilities and behaviour of AGI models, too, notes that current AI models cannot reason. But it acknowledges that an AI model's “emergent” properties could give it capabilities, such as reasoning, that were not explicitly anticipated by its developers.

That said, policymakers can ill afford to wait for a consensus to evolve on AGI. The proverb, ‘It is better to be safe than sorry’, captures this aptly.

This is one reason that Mint argued in an October 2023 edit that ‘Policy need not wait for consensus on AGI’ to put up guardrails around these technologies. Meanwhile, the AGI debate is unlikely to die in a hurry, with emotions running high on either side.
