An agenda to maximise AI’s benefits and minimise harms, by David Patterson

Source: Live Mint

This was the subject of a spirited conversation I had in February 2024 with Andy Konwinski, a former student of mine and the co-founder of two AI-related startups, Databricks and Perplexity. Andy shared his disbelief that a friend’s son had dropped out of his computer-science programme. This bright student believed AI would soon make programmers obsolete.

He isn’t alone: according to Gallup, three-quarters of Americans say that AI will reduce the total number of jobs within ten years. It reminded me of a similarly promising student who, 20 years ago, abandoned computer science for fear that offshoring would shift virtually all programming jobs to lower-income countries like India. His panic was misplaced: since 2000, both the number of programming jobs in America and programmers’ inflation-adjusted salaries have grown by half.

Our conversation turned to shared frustration over the polarised discourse between AI “accelerationists” and “doomers”. The reality, we agreed, is more nuanced. We concluded that there is an urgent need for computer scientists to take a more active role in both steering research and shaping the narrative. Rather than simply predict what the impact of AI will be given a laissez-faire approach, our goal was to propose what the impact could be given directed efforts to maximise the upsides and minimise the downsides.

We then assembled nine of the world’s leading computer scientists and rising AI stars, from academia, startups and big tech, to explore the pragmatic near-term impact of AI. We also interviewed two dozen other experts about AI’s impact on their specialties, including John Jumper, a winner of this year’s Nobel prize in chemistry, on science; President Barack Obama on governance; his former UN ambassador and national security adviser Susan Rice on security; and Eric Schmidt, a philanthropist and Google’s former chief executive, on several topics. For those interested, we have compiled our findings into a more detailed 30-page paper, entitled “Shaping AI’s Impact on Billions of Lives”.

Five guidelines emerged for harnessing AI for the public good. We believe they should guide our efforts in both the discovery and deployment of this transformative technology.

First, humans and AI systems working as a team do more than either can alone. Applications of AI that focus on human productivity deliver greater benefits than those that focus on human replacement. Tools that make people more productive increase their employability, satisfaction and opportunity. People can also act as safeguards if the AI veers off course in areas for which it is not well trained. In short, focusing on human productivity helps both people and AI succeed.

Second, to increase employment, aim for productivity improvements in fields where they create more jobs. Despite tremendous productivity gains in computing and passenger aviation, America in 2020 had 11 times more programmers and eight times more commercial-airline pilots than in 1970. That is because demand for programming and air transport is, as economists say, elastic. Demand for agriculture, by contrast, is relatively inelastic, so productivity gains meant the number of farming jobs fell by three-quarters in one human lifetime (1940 to 2020). If AI practitioners aim to improve productivity in elastic fields then, despite public fears, AI can actually increase employment.

Third, AI systems should initially aim to remove the drudgery from current tasks. Freeing up time for more valuable work will encourage people to use new AI tools. Doctors and nurses choose their careers because they want to help patients, not do endless documentation. Schoolteachers prefer teaching to grading and record-keeping. High priority should go to AI tools that make people’s current work in hospitals and classrooms more meaningful.

Fourth, the impact of AI varies by geography. Eric Schmidt emphasises that while rich countries worry about AI displacing highly trained professionals, leaner economies face shortages of skilled experts. AI could make such expertise more widely available, potentially enhancing quality of life and economic growth, and becoming as transformative in those regions as the mobile phone has been. For example, an AI system that improved the skills and productivity of nurses and physician assistants would give more patients access to high-quality health care in regions short of doctors. The growing popularity of smartphones in low- and middle-income countries also enables widespread access to multilingual AI models, helping people obtain information, education, media and entertainment in their native languages if they wish. Improvements to local economies and critical services may even provide alternatives to emigration for some in middle-income countries.

And finally, we need better metrics and methods to evaluate AI innovations. At times the marketplace can do this, as with AI tools for professional programmers. In high-stakes domains it cannot, because we cannot risk harming participants. There we need gold-standard tools: A/B testing, randomised controlled trials and natural experiments. Equally urgent is post-deployment monitoring, to evaluate whether AI innovations do what they claim, whether they are safe and whether they have externalities. We also need to measure AI systems continuously in the field so that we can improve them incrementally.

There is no shortage of concerns about the risks and complexities of AI, which we address in the long paper: data privacy and security, intellectual-property rights, bias, information accuracy, threats to humanity from more advanced AI, and energy consumption (though on this last point, AI accounts for under a quarter of 1% of global electricity use, and the International Energy Agency considers AI’s projected increased energy consumption for 2030 to be modest relative to other trends).

Although there are risks, there are also many opportunities, both known and unknown. It can be as big a mistake to ignore AI’s benefits as to ignore its risks. AI moves quickly, and governments must keep pace. Just as government collaborated with industry in the successful development and deployment of chips and cars, we propose a coordinated public-private partnership for AI. Its goal would be to remove bureaucratic roadblocks, ensure safety and provide transparency and education to policymakers and the public.

At this point, readers might expect that we scientists are about to ask for government funding. But we believe that money for these efforts should come from the philanthropy of the technologists who have prospered in the computer industry. Several have already pledged support, and we expect more to join. We think these commitments should be deployed in two ways: to create major inducement prizes to stimulate research and recognise breakthroughs, and to fund ad hoc three- to five-year multidisciplinary research centres.

We brainstormed about an AI moonshot. But which goal? We might create an AI mediator that orchestrates conversations across political chasms, pulling us out of polarisation and back into pluralism. We might leverage the growing prevalence of smartphones to create a tutor app for every child in the world, in their language, for their culture and in their best learning style. We might enable biologists and neuroscientists to make a century of progress in a single decade. But if we create the right blueprint for innovation, and bring experts and users into the conversation, we don’t have to pick just one moon.

David Patterson is the Pardee Professor of Computer Science, Emeritus at the University of California at Berkeley and a Distinguished Engineer at Google.

© 2025, The Economist Newspaper Limited. All rights reserved. From The Economist, published under licence. The original content can be found on www.economist.com


