A battle is raging over the definition of open-source AI

Open-source software—in which a developer releases the source code for a product and allows anyone else to reuse and remix it to their liking—is at the foundation of Google’s Android, Apple’s iOS and all four of the largest web browsers. The encryption of a WhatsApp chat, the compression of a Spotify stream and the format of a saved screenshot are all controlled by open-source code.

Though the open-source movement has its roots in the post-hippy utopianism of 1980s California, it is nevertheless going strong today in part because its ethos is not entirely altruistic. Making software freely available has allowed developers to get help making their code stronger; prove its trustworthiness; earn plaudits from their peers; and, in some cases, make money by selling support to those who use the products for free.

Several model-makers in the world of artificial intelligence (AI), including Meta, a social-media giant, want to follow in this open-source tradition as they develop their suites of powerful products. They hope to corral hobbyists and startups into a force that can rival billion-dollar labs—all while burnishing their reputation.

Unfortunately for them, though, guidelines published last week by the Open Source Initiative (OSI), an American non-profit, have suggested that the modern use of the term by tech giants has become stretched into meaninglessness. Burdened with restrictions and developed in secrecy, these free products are never going to power a true wave of innovation unless something changes, the OSI says. It is the latest salvo in a lively debate: what does open source really mean in the age of AI?

In traditional software, the term is well-defined. A developer will make available the original lines of code used to write a piece of software. Crucially, in doing so, they will disclaim most rights: any other developer can download the code and tweak it as they see fit for their own ends. Often, the original developer will append a so-called “copyleft” licence, requiring the tweaked version to be shared in turn. Eventually, original code can evolve into an entirely new product. The Android operating system, for instance, is the descendant of Linux, originally written to be used on personal computers.

Following in this tradition, Meta proudly claims that its large language model (LLM), Llama 3, is “open source”, sharing the finished product with anyone who wants to build on top of it for free. However, the company also places restrictions on its use, including a ban on using the model to build products with more than 700m monthly active users. Other labs, from France’s Mistral to China’s Alibaba, have also released LLMs for free use, but with similar constraints.

What Meta shares freely—the weights of connections between the artificial neurons in its LLM, rather than all the source code and data that went into making it—is certainly not sufficient for someone to build their own version of Llama 3 from the ground up, as open-source purists would normally demand. That’s because training an AI is very different from normal software development. Engineers amass the data and construct a rough blueprint of the model, but the system in effect assembles itself, processing the training data and updating its own structure until it achieves an acceptable performance.
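To make the distinction concrete, here is a minimal sketch of what using Llama 3’s released weights looks like in practice, assuming the Hugging Face transformers library and Meta’s published checkpoint name; downloading the weights still requires accepting Meta’s licence on the Hugging Face Hub, and an 8bn-parameter model needs tens of gigabytes of memory to run.

```python
# A minimal sketch of what "open weights" means in practice, using the
# Hugging Face transformers library. Downloading this checkpoint requires
# accepting Meta's licence terms on the Hugging Face Hub first.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B"

# What Meta releases: the trained weights and the tokenizer files.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# What Meta withholds: the training corpus and the full training pipeline,
# so the model can be used and fine-tuned, but not rebuilt from scratch.
inputs = tokenizer("Open-source AI is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```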

Because each training step tweaks the model in fundamentally unpredictable ways that only converge to the right solution over time, a model trained using the same data, the same code and the same hardware as Llama 3 would be very similar to the original, but not the same. That wipes out some of the supposed benefits of the open-source approach: inspect the code all you want, but you can never be sure that what you’re using is the same thing that the company offered.
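The effect can be reproduced at toy scale. The sketch below, written in PyTorch and bearing no resemblance to Meta’s actual training code, trains two identical small networks on identical data; both fit the data well, yet their final weights differ. Here the divergence comes from random initialisation, but at frontier scale non-deterministic GPU arithmetic and data ordering produce the same result even when every seed is fixed.

```python
# A toy illustration: two training runs on identical data both converge
# to a low loss, but end up with different weights.
import torch

def train(seed: int) -> torch.nn.Sequential:
    torch.manual_seed(seed)  # only the initial weights differ between runs
    model = torch.nn.Sequential(
        torch.nn.Linear(10, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
    )
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    # Identical synthetic data for every run, via a separately seeded generator.
    x = torch.randn(256, 10, generator=torch.Generator().manual_seed(0))
    y = x.sum(dim=1, keepdim=True)
    for _ in range(500):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    print(f"seed {seed}: final loss {loss.item():.4f}")
    return model

a, b = train(seed=1), train(seed=2)
# Both runs fit the data well, but the learned weights are not identical.
print(torch.allclose(a[0].weight, b[0].weight))  # typically prints: False
```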

Other hurdles also stand in the way of truly open-source AI. Training a “frontier” AI model that stands toe-to-toe with the latest releases from OpenAI or its peers, for example, costs at least $1bn—disincentivising those who have spent such sums from letting others profit. There is also the issue of safety. In the wrong hands, the most powerful models could teach users to build bioweapons or create unlimited child-abuse imagery. Locking their models away behind a carefully constrained access point allows AI labs to control what they can be asked, and dictate the ways in which they are allowed to respond.
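That control is usually enforced at the API layer rather than inside the model itself. The sketch below is a deliberately simplified illustration: the moderated_completion function and the keyword blocklist are hypothetical stand-ins (real labs use trained classifiers, not keyword matching), but the structure, screening both the prompt and the response, is the essence of a gated access point.

```python
# A deliberately simplified sketch of a gated access point. The function
# and the keyword blocklist are hypothetical stand-ins; real labs use
# trained moderation classifiers rather than string matching.
BLOCKED_TOPICS = {"bioweapon", "child-abuse imagery"}

def moderated_completion(prompt: str, model_generate) -> str:
    # Screen the request before the model ever sees it...
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "This request violates the usage policy."
    response = model_generate(prompt)
    # ...and screen the output before the user sees it. With openly
    # released weights, anyone can simply delete both of these checks.
    if any(topic in response.lower() for topic in BLOCKED_TOPICS):
        return "This response was withheld by the usage policy."
    return response
```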

Open and shut

The complexity of the issue has led to disputes over what, exactly, “open-source AI” should mean. “There are lots of different people that have different concepts of what [open source] is,” says Rob Sherman, the vice-president for policy at Meta. More is at stake in this debate than just principles, since those tinkering with open source today could become the industry giants of the future.

In a recent report, the OSI did its best to define the term. It argued that to earn the label, AI systems must offer “four freedoms”: they should be free to use, study, modify and share. Instead of requiring the full release of training data, it called only for labs to describe it in enough detail to allow a “substantially equivalent” system to be built. In any case, sharing all of a model’s training data would not always be desirable—it would in effect prevent, for instance, the creation of open-source medical AI tools, since health records are private to the patients they describe and cannot be shared without restriction.

For those building on top of Llama 3, the question of whether or not it can be labelled open source matters less than the fact that no other major lab has come close to being as generous as Meta. Vincent Weisser, the founder of Prime Intellect, an AI lab based in San Francisco, would prefer it if the model were made “fully open on every dimension”, but still believes Meta’s approach will have long-term positive impacts, leading to cheaper access for end users and increased competition. Since Llama was first published, enthusiasts have squashed it small enough to run on a phone; built specialised hardware chips capable of running it blisteringly fast; and repurposed it for military ends as part of a project by the Chinese army, proving that both the benefits and the dangers of openness are more than theoretical.
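Squashing a model small enough for a phone is mostly a matter of quantisation: storing the weights at lower numerical precision so they occupy less memory. Below is a hedged sketch of one common approach, assuming the transformers and bitsandbytes libraries and a CUDA-capable GPU; the phone ports themselves typically rely on other toolchains, such as llama.cpp.

```python
# One common way to shrink Llama for constrained hardware: load the
# weights quantised to 4 bits via bitsandbytes, cutting memory use by
# roughly 4x versus 16-bit weights. Requires a CUDA-capable GPU; phone
# deployments typically use other toolchains, such as llama.cpp.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(load_in_4bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",
    quantization_config=quant_config,
)
```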

Not everybody is likely to be so willing an adopter. Legally speaking, using true open-source software should come with “no friction”, says Ben Maling, a patent expert at EIP, a law firm in London. Once lawyers are needed to parse the details and consequences of every individual restriction, the engineering freedom so much tech innovation relies on disappears. Companies like Getty Images and Adobe have already sworn off using some AI products for fear of accidentally infringing the terms of their licences. Others will follow.

Precisely how open-source AI is defined will have broad implications. Just as vineyards live or die based on whether they can call their produce champagne or mere sparkling wine, an open-source label may prove critical to a tech firm’s future. If a country lacks a home-grown AI superpower, says Mark Surman, president of Mozilla, an open-source foundation, then it may wish to back the open-source industry as a counterweight to American dominance. The European Union’s AI Act, for instance, already contains carve-outs that ease testing requirements for open-source models. Other regulators around the world are likely to follow suit. As governments seek to establish tight controls on how AI can be built and operated, they will be forced to decide: do they want to ban bedroom tinkerers from operating in the space, or free them from costly burdens?

For now, the closed-off labs are sanguine. Even Llama 3, the most capable of the almost-open-source contenders, has been playing catch-up to the models released by OpenAI, Anthropic and Google. One executive at a major lab told The Economist that the economics involved make this state of affairs inevitable. Though releasing a powerful model that can be accessed at no cost allows Meta to undercut its competitors’ businesses without troubling its own, the lack of direct revenue also limits its desire to spend the sums required to be a leader rather than a fast follower. Freedom is rarely truly free.

© 2024, The Economist Newspaper Ltd. All rights reserved. From The Economist, published under licence. The original content can be found on www.economist.com


