Strawberry: Can it unlock AI’s reasoning power?

Source: Live Mint

OpenAI plans to release two highly anticipated models. Orion, potentially the new GPT-5 model, is expected to be an advanced large language model (LLM), while Strawberry aims to enhance AI reasoning and problem-solving, particularly in mastering math.

Why are these projects important?

Project Strawberry (earlier dubbed Q*, or Q-Star) is reportedly a secret OpenAI initiative to improve AI’s reasoning and decision-making for more generalized intelligence. OpenAI co-founder Ilya Sutskever’s concerns about its risks led to CEO Sam Altman’s brief ouster. Unlike Orion, which focuses on optimizing existing LLMs such as GPT-4 by cutting computational costs and enhancing performance, Strawberry aims to boost AI’s cognitive abilities, according to The Information and Reuters. OpenAI might even integrate Strawberry into ChatGPT to enhance its reasoning.

If true, how will they impact the tech world?

For autonomous systems such as self-driving cars or robots, Strawberry could improve safety and efficiency. Future iterations may focus on interpretability, making its decision-making processes more transparent. Big tech rivals such as Google and Meta could face heightened competition as clients in healthcare, finance, automobiles and education, which increasingly rely on AI, embrace OpenAI’s newer, enhanced models. Smaller startups, too, could struggle to compete with the new products, affecting their market position and investment prospects.

How can we be sure OpenAI is developing these?

New investors appear keen on OpenAI, which, according to The Wall Street Journal, plans to raise funds in a round led by Thrive Capital that would value it at more than $100 billion. Apple and Nvidia are likely investors in this round. Microsoft has already invested more than $10 billion in OpenAI, reinforcing reports that the company is boosting its AI models.

But can AI models actually reason?

AI still struggles with human-like reasoning. But in March, Stanford and Notbad AI researchers indicated that their Quiet-STaR model could be trained to think before it responds, a step towards AI models learning to reason. DeepMind’s proposed framework for classifying the capabilities and behaviour of Artificial General Intelligence (AGI) models acknowledges that an AI model’s “emergent” properties could give it capabilities, such as reasoning, that its developers did not explicitly anticipate.

Will ethical concerns increase?

Despite claims of safe AI practices, big tech faces scepticism due to past misuse of data and violations of copyright and intellectual property (IP). AI models with enhanced reasoning could fuel misuse, such as the spread of misinformation. The Quiet-STaR researchers admit there are “no safeguards against harmful or biased reasoning”. Sutskever, who proposed what is now Strawberry, has launched Safe Superintelligence Inc., aiming to advance AI’s capabilities “as fast as possible while making sure our safety always remains ahead”.

 


