AI belongs to everybody: Sam Altman

Source: Live Mint

I’m sure you’ve been looking at the announcements that India has made on its AI program. You were here some time back and you made these comments—about how India was better off not trying to do its own frontier model—that became controversial. Has your view changed? And do you think the Indian AI plan is on the right track?

That was in a different context, a different time, when frontier models were super expensive to do. And, you know, now I think the world is in a very different paradigm. I think you can do them at way lower costs and maybe do incredible work. India is an incredible market for AI in general, for us too. It’s our second-biggest market after the US. Users here have tripled in the last year. The innovation that’s happening, what people are building [in India], is really incredible. We’re excited to do much, much more here, and I think it’s (the Indian AI program) a great plan. And India will build great models.

What are your plans in India? Because while everyone looks at the front end of AI, there is this huge back end. What you’re doing in the US now, for instance, in partnership with SoftBank, is creating this huge infrastructure. Do you plan to bring some of that infrastructure to India?

We don’t have anything to announce today, but we are hard at work, and we hope to have something exciting to share soon.

Late 2022 was when you announced ChatGPT, and over the weekend you made the Deep Research announcement. The pace of change seems quite staggering. Microprocessors have Moore’s Law. Is there a similar law for the pace of change here?

Deep Research is the thing that has felt most like ChatGPT in terms of how people are reacting. I was looking online last night and reading—I’ve been very busy for the last couple of days, so I hadn’t gotten to read the reviews—and people look like they’re having a magical experience, like they had when ChatGPT first launched. So this move from chatbots into agents, I think, is having the impact that we dreamed of, and it’s very cool to see people have another moment like that.

Moore’s law is, you know, 2x every 18 months (the processing power of chips doubles every 18 months), and that changed the world. But if you look at the cost curve for AI, we’re able to reduce the cost of a given level of intelligence by about 10x (10 times) every 12 months, which is unbelievably more powerful than Moore’s law. If you compound both of those out over a decade, it’s just a completely different thing. So, although it’s true that the cost of the best frontier models is on a steep, upward, exponential [curve], the rate of cost reduction per unit of intelligence is just incredible. And I think the world has still not quite internalised this.
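[To make the compounding concrete, here is a rough back-of-envelope sketch in Python. It assumes the two rates quoted above hold steadily for ten years; the figures are illustrative, not from the interview.]

    # Compound the two quoted rates over a decade (120 months).
    # Assumptions: Moore's law = 2x every 18 months; the cost of a given
    # level of AI intelligence falls 10x every 12 months.
    MONTHS = 120

    moore_gain = 2 ** (MONTHS / 18)      # ~100x improvement in a decade
    ai_cost_drop = 10 ** (MONTHS / 12)   # 10**10, a ten-billionfold cost drop

    print(f"Moore's law over a decade:       ~{moore_gain:,.0f}x")
    print(f"AI cost reduction over a decade: ~{ai_cost_drop:,.0f}x")

[On those assumptions, a decade of Moore’s law yields roughly a hundredfold gain, while a decade of 10x-per-year cost reduction yields a ten-billionfold drop, which is the “completely different thing” Altman describes.]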

What was your first response when the news of the Chinese model, DeepSeek, came out? At least the headline was that they’d managed to train their model at a much lower cost, although it turned out later that that wasn’t really the case.

I was extremely sceptical of the cost number. It was like, there are some zeros missing. But, yeah, it’s a good model, and we’ll need to make better models, which we will do.

AI appears to be extremely infrastructure intensive and capital intensive. Is that the case? Does that mean there are very few players who can really operate at that scale?

As we talked about earlier, it is changing. To me, the most exciting development of the last year is that we figured out how to make very powerful small models. So, the frontier will continue to be massively expensive and require huge amounts of infrastructure, and that’s why we’re doing this Stargate Project. But, you know, we’ll also get GPT-4-level models running on phones at some point. So, I think you can look at it in either direction.

One of the challenges of being where you are, and who you are, is that your company was the first company that pretty much captured the public imagination when it came to artificial intelligence. When you are the first company, you have a responsibility not just for your company, but also for the industry and how the entire industry interfaces with society. And there, several issues are cropping up…

I think if you’re on the frontier, you have a role as an educator, and a role like a lookout: to tell society what you think is coming and what you think the impact is going to be. It won’t always be right, but it’s not up to us or any other company to say, okay, given this change, here’s what society is supposed to do.

It’s up to us to say: here’s the change we see coming, here are some ideas, here are our recommendations. But society is going to have to decide how we’re going to mitigate the economic impact, how we’re going to broadly distribute the benefits, how we’re going to address the challenges that come with this. So, we are a voice, an important voice, in that. And I also don’t mean to say we don’t have responsibility for the technology we create. Of course we do, but it’s got to be a conversation among all the stakeholders.

If you look at the Indian IT industry, they have done really well at taking stuff that other people have built, building very smart models on top of it, and providing services on top of it, rather than building the models themselves. Is that what you think they should be doing with AI? Or do you think they should do more?

I think India should go for a full stack approach…

…Which will require a lot of capital.

Well, it’s not an inexpensive project, but I think it’s worth it.

You have over 300 million users…

… okay, and what have you learnt in terms of what they are using ChatGPT for?

Can I show you something? Because it’s just a really meaningful thing. I was just looking at X (turns the computer to show the screen). So this guy, we’re not really friends, but I know him a little. Deep Research launched a couple of days ago, and his daughter has a very rare form of cancer, and he kind of stopped his job, I think, or maybe changed his job, and is working super hard. He’s put together a big private research team [to understand her disease]. He’s raised all this money, and Deep Research is giving him better answers than the private research team he hired. And seeing stuff like that is really meaningful to us.

Do you expect President (Donald) Trump to take more steps to protect American leadership in AI? Do you see that happening? Or, to phrase the question differently, is there a national game to be played in AI?

Of course there is. But our mission, which we take super seriously, is for AGI (artificial general intelligence) to benefit all of humanity. I think this is one of those rare things that transcends national borders. AI is like the wheel and fire, the Industrial Revolution, the agricultural revolution; it’s not a country thing. Those don’t belong to nations; they belong to everybody. And I think AI is the next step in that line.

You first spoke about artificial general intelligence a couple of years ago. Have we moved closer to that?

Yes. When I think about what the models are capable of now relative to what they could do a couple of years ago, I think we’re undeniably closer…

Are we also more advanced with our failsafes now?

Think of how much progress we’ve made in model safety and robustness relative to two years ago. You know, look at the hallucination rate of a current model, or its ability to comply with a set of policies; we’re in way better shape than we were two years ago. That doesn’t mean we don’t have to go solve for things like superintelligence (a theoretical construct of AI or intelligence far exceeding human intelligence). Of course we do, but we’ve been on a nice trajectory there.

Have you looked at the Lancet paper on the Swedish breast cancer study that came out yesterday? They used an AI model called Transpara, which I don’t know whether you’re familiar with, and they found that accurate diagnoses increased by 29%, with no increase in false positives…

That’s fantastic. I was thinking the other day, you know, how much better does AI have to be before it’s allowed to drive? How much better does AI have to be as a diagnostician than a human doctor before it’s allowed to diagnose? It clearly has to be better; self-driving cars have to be much safer than human drivers for the world to accept them. But how many more of those studies do we need before we say we want the AI doctor?

Although I just think that when it comes to diagnosis, the bar will be a lot lower than it is for cars… 

I think for cars, maybe subjectively, you want it to be like, 100 times safer. For a diagnosis, it should be much lower.

 


