Why open-source AI models are good for the world
Source: Live Mint
Open innovation lies at the heart of the artificial-intelligence (AI) boom. The neural-network “transformer”—the T in GPT—that underpins OpenAI’s chatbot was first published as research by engineers at Google. TensorFlow and PyTorch, the software frameworks used to build such neural networks, were created by Google and Meta, respectively, and shared with the world. Today, some argue that AI is too important and sensitive to be available to everyone, everywhere. Models that are “open-source”—ie, those that make their underlying code available to all, to remix and reuse as they please—are often seen as dangerous.
Several charges are levelled against open-source AI. One is that it is helping America’s rivals. On November 1st it emerged that researchers in China had taken Llama 2, Meta’s open large language model, and adapted it for military purposes. Another argument against open-source AI is its use by terrorists and criminals, who can strip a model of carefully built safeguards against malicious or harmful activity. Anthropic, a model-maker, has called for urgent regulation, warning about the “unique” risks of open models, such as their ability to be “fine-tuned” using data on, say, making a bioweapon.
True, open-source models can be abused, like any other tech. But such thinking puts too much weight on the dangers of open-source AI and too little on the benefits. The information needed to build a bioweapon already exists on the internet and, as Mark Zuckerberg argues, open-source AI done right should help defenders more than attackers. Besides, by some measures, China’s home-grown models are already as good as Meta’s.
Meanwhile, the benefits of open software are plain to see. It underpins the technology sector as a whole, and powers the devices billions of people use every day. The web’s software foundations, whose standards Tim Berners-Lee released into the public domain from CERN, are open-source; so, too, is the Ogg Vorbis compression algorithm that Spotify uses to stream music to millions.
Making software free has long helped developers make their code stronger. It has allowed them to prove the trustworthiness of their work, harness vast amounts of volunteer labour and, in some cases, make money by selling tech support to those who use it. Openness should underpin innovation in AI as well. If the technology has as much potential as its backers say, then openness is a way to ensure that power is not concentrated in the hands of a few Californian firms.
Closed models will have their place, for uses that are sensitive, or tasks that need to be conducted at the cutting edge. But models that are open or partly open will be crucial, too. The Open Source Initiative, an industry body, defines a model as open-source if you can download it and use it as you want, and if a description of the underlying training data is provided. None of the big labs’ open models, such as those from Alibaba and Meta, qualifies. But by offering partially open platforms, the labs provide insight into their models, allowing others to learn from, and sometimes build on, their techniques.
One reason the Open Source Initiative says Meta’s models are not open-source is that access to them is restricted, notably because their use is limited to applications with fewer than 700m monthly users. But Meta may yet find it in its own interest to open up further. The more it does, the more attractive its platform could become to developers, and the more likely that a future superstar application is nurtured on its technology.
Governments, too, should allow open-source AI to thrive, imposing safety regulations uniformly and eschewing restrictions and intellectual-property protections that force research under lock and key. With artificial intelligence, as with a lot of other software, innovation flourishes in the open.
© 2024, The Economist Newspaper Limited. All rights reserved. From The Economist, published under licence. The original content can be found on www.economist.com