Arm CEO Rene Haas on the AI chip race, Intel, and what Trump means for tech

Source: The Verge

Earlier this month, I teased an upcoming interview with Rene Haas, CEO of chip design company Arm. I sat down live in Silicon Valley with Rene at an event hosted by AlixPartners, the full version of which is now available.

Rene is a fascinating character in the tech industry. He’s worked at two of the most important chip companies in the world: first Nvidia, and now Arm. That means he’s had a front-row seat to how the industry has changed in the shift from desktop to mobile and how AI is now changing everything all over again.

Arm has been central to these shifts, as the company that designs, though doesn’t build, some of the most important computer chips in the world. Arm’s architectures are behind Apple’s custom iPhone and Mac chips, they’re in electric cars, and they’re powering AWS servers that host huge chunks of the internet.

When he was last on Decoder a couple of years ago, Rene called Arm the “Switzerland of the electronics industry,” thanks to how prevalent its designs are. But his business is getting more complex in the age of AI, as you’ll hear us discuss. There have been rumors that Arm is planning to not only design but also build its own AI chips, which would put it into competition with some of its key customers. I pressed Rene on those rumors quite a bit, and I think it’s safe to say he’s planning something.

Rene was about six months into the CEO job when he was last on Decoder, following Nvidia’s failed bid to buy Arm for $40 billion. After regulatory pressure killed that deal, Rene led Arm through an IPO, which has been tremendously successful for Arm and its majority investor, the Japanese tech giant SoftBank.

I asked Rene about that SoftBank relationship and what it’s like to work with its eccentric CEO, Masayoshi Son. I also made sure to ask Rene about the problems over at Intel. There have been reports that Rene looked at buying part of Intel recently, and I wanted to know what he thinks should happen to the struggling company.

Of course, I also asked about the incoming Trump administration, the US vs. China debate, the threat of tariffs, and all that. Rene is a public company CEO now, so he has to be more careful when answering questions like these. But I think you’ll still find a lot of his answers quite illuminating. I know I did.

Okay, Arm CEO Rene Haas. Here we go.

This transcript has been lightly edited for length and clarity.

Rene Haas, you are the CEO of Arm. Welcome to Decoder.

This is actually your second time on the show. You were last on in 2022. You hadn’t been the CEO for that long. The company had not yet gone public, so a lot has changed. We’re going to get into all of that. You’re also a podcaster now, so the pressure’s on me to do this well. Rene has a show where he interviewed [Nvidia CEO] Jensen [Huang] pretty recently that you all should check out. 

This convo will touch on several things. A lot has changed in the world of AI and policy in the last couple of years. We’re going to get into all that, along with the classic Decoder questions about how you’re running Arm. But first, I wanted to talk about a thing that I bet a lot of people in this room have been talking about this week, which is Intel. We’re going to start with something easy.

What do you think should happen to Intel?

I guess at the highest level, as someone who’s been in the industry my whole career, it is a little sad to see what’s happening from the perspective of Intel as an icon. Intel is an innovation powerhouse, whether it’s around computer architecture, fabrication technology, PC platforms, or servers. So to see the troubles it’s going through is a little sad. But at the same time, you have to innovate in our industry. There are lots of tombstones of great tech companies that didn’t reinvent themselves. I think Intel’s biggest dilemma is how to disassociate from being either a vertical company or a fabless company, to oversimplify it. I think that is the fork in the road that it’s faced for the last decade, to be honest with you. And [former Intel CEO] Pat [Gelsinger] had a strategy that was very clear that vertical was the way to win.

In my opinion, when he took that strategy on in 2021, that was not a three-year strategy. That’s a 5-10-year strategy. So now that he’s gone and a new CEO will be brought in, that’s the decision that has to be made. My personal bias is that vertical integration can be a pretty powerful thing, and if they can get that right, they would be in an amazing position. But the cost associated with it is so high that it may be too big of a hill to climb.

We’re going to talk about vertical integration as it relates to Arm later, but I wanted to reference something you told Ben Thompson earlier this year. You said, “I think there’s a lot of potential benefit down the road between Intel and Arm working together.” And then there were reports more recently that you all actually approached Intel about potentially buying their product division. Do you want to work closer with Intel now considering what’s gone on in the last couple of weeks?

Well, a couple of things with Intel. I’m not going to comment on the rumors that we’re going to buy it. Again, if you’re a vertically integrated company and the power of your strategy is that you have a product and fabs, you have a potentially huge advantage in terms of cost versus the competition. When Pat was the CEO, I did tell him more than once, “You ought to license Arm. If you’ve got your own fabs, fabs are all about volume and we can provide volume.” I wasn’t successful in convincing him to do that, but I do think that it wouldn’t be a bad move for Intel.

On the flip side, in terms of Arm working with Intel, we work really closely with TSMC and Samsung. IFS [Intel Foundry Services] is a very, very large effort for Intel in terms of external customers, so we work with them very closely to ensure that they have access to the latest technology. We also have access to their design kits. We want external partners who want to build at Intel to be able to use the latest and greatest Arm technology. So in that context, we work closely with them.

Turning to policy, there are a few things I want to get into, starting with the news from yesterday. Do you have any reaction to David Sacks being Trump’s AI “czar”? I don’t know if you know him, but do you have any reaction to that?

I do know him a little bit. Kudos to him. I think that’s a pretty good thing. It’s quite fascinating that if you go back eight years to the December ahead of Trump 1.0 as he was starting to fill out his cabinet choices and appointees, it was a bit chaotic. At the time there wasn’t a lot of representation from the tech world. This time around, whether it’s Elon [Musk], David [Sacks], Vivek [Ramaswamy] — I know Larry Ellison has also been very involved in discussions with the administration — I think it’s a good thing, to be honest with you. Having a seat at the table and having access to policy is really good.

I would say few companies face as many geopolitical policy questions as you guys, given all of your customers. How have you advised, or how would you advise, the incoming administration on your business?

I would say it’s not just for our business. Let’s talk about China for a moment. The economies of the two countries are so inextricably tied together that a separation of supply chain and technology is a really difficult thing to architect. So I would just say that as this administration or any administration comes into play and looks at policy around things like export control, they should be mindful that a hard break isn’t as easy as it might look on paper. And there’s just a lot of levers to consider back and forth.

We are one attribute in the supply chain. If you think about what it takes to build a semiconductor chip, there are EDA tools, the IP from Arm, the fabrication, companies like Nvidia and MediaTek that build chips, but then there’s raw materials that go into building the wafers, the ingots, and the substrates. And they come from everywhere. It’s just such a complex problem that’s so inextricably linked together that I don’t believe there’s a one-size-fits-all policy. I think administrations should be open to understanding that there needs to be a lot of balance in terms of any solution that’s put forward.

What’s your China strategy right now? I was reading that you’re maybe working to directly offer your IP licenses in China. You have a subsidiary there as well. Has your strategy in China shifted at all this year?

No. The only thing that’s probably changed for us — and I would say probably for a lot of the world — is that China used to be a very, very rich market for startup companies, and venture capital flowed around very freely. There was a lot of innovation and things of that nature. That has absolutely slowed down. Whether that’s about the exit options for these companies from a stock market standpoint or about getting access to key technology isn’t as well understood, but we’ve definitely seen it slow down.

On the flip side, we’ve seen incredible growth in segments such as automotive. If you look at companies like BYD or even Xiaomi that are building EVs, the technology in those vehicles in terms of their capabilities is just unbelievable. Selfishly for us, they all run on Arm. China’s very pragmatic in terms of how it builds its systems and products, and it relies very heavily on the open source global ecosystem for software, and all of the software libraries that have been tuned for Arm. Whether it’s ADAS, the powertrain, or [In-Vehicle Infotainment], it’s all Arm-based. So our automotive business in China is really strong.

Does President-elect Trump’s rhetoric on China and tariffs specifically worry you at all as it relates to Arm?

Not really. My personal view on this is that the threats of tariffs are a tool to get to the negotiating table. I think President Trump has proven over time that he is a businessman, and tariffs are one lever to start a negotiation. We’ll see where it goes, but I’m not too worried about that.

What do you think about the efforts by the Biden administration with the CHIPS Act to bring more domestic production here? Do you think we need a Manhattan Project for AI, like what OpenAI has been pitching?

I don’t think we need a government, OpenAI, Manhattan Project-type effort. Whether it’s the work being done by OpenAI and Anthropic or the open-source work being driven by Meta with Llama, we’re seeing fantastic innovation there. Can you say the US is a leader in terms of foundation and frontier models? Absolutely. And that’s being done without government intervention. So, I don’t think it’s necessary with AI, personally.

On the subject of fabs, I’ll go back to the question you started me on with Intel spending $30–40 billion a year in CapEx for these leading edge nodes. That is a hard pill to swallow for any company, and that’s why I think the CHIPS Act was a good and necessary thing. Building semiconductors is fundamental to our economic engine. We learned that during COVID when it took 52 weeks to get a key fob replaced thanks to everything going on with the supply chain. I think having supply chain resiliency is super important. It’s super important on a global level, and it’s definitely important on a national level. I was and am in favor of the CHIPS Act.

So even if we have the capital potentially to invest more in domestic production, do we have the talent? That’s a question that I think about and I’ve heard you talk about. You spend a lot of time trying to find talent and it’s scarce. Even if we spend all this money, do we have the people that we need in this country to actually win and make progress?

One of the things that’s happening is a real rise in the visibility of this talent issue, and I think putting more money into semiconductor university programs and semiconductor research is helping. For a number of years, semiconductor degrees, specifically in manufacturing, were not seen as the most attractive to go off and get. A lot of people were looking at software as a service and other areas. I think we need to get back to that at the university level. Now, one could argue that it may help if AI bots and agents can come in and do meaningful work, but building chips and semiconductor processes is very much an art as well as a science, particularly around improving manufacturing yields. I don’t know if we have enough talent, but I know there’s a lot of effort now going towards trying to bolster that.

Let’s turn to Arm’s business. You have a lot of customers — all of the big tech companies — so you’re exposed to AI in a lot of ways. You don’t really break out, as far as I know, exactly how AI contributes to the business, but can you give us a sense of where the growth is that you’re seeing in AI and for Arm?

One of the things we were talking about earlier was how we are now a public company. We were not a public company in 2022. One of the things I’ve learned as a public company is to break out as little as you possibly can so nobody can ask you questions in terms of where things are going.

[Laughs] Yeah, I know you are. So I would say no, we don’t break any of that stuff out. What we are observing — and I think this is only going to accelerate — is that whether you’re talking about an AI data center or an AirPod or a wearable in your ear, there’s an AI workload that’s now running and that’s very clear. This doesn’t necessarily need to be ChatGPT-5 running six months of training to figure out the next level of sophistication, but this could be just running a small level of inference that is helping the AI model run wherever it’s at. We are seeing AI workloads, as I said, running absolutely everywhere. So, what does that mean for Arm?

Our core business is around CPUs, but we also do GPUs, NPUs, and neural processing engines. What we are seeing is the need to add more and more compute capability to accelerate these AI workloads. We’re seeing that as table stakes. Either put a neural engine inside the GPU that can run acceleration or make the CPU more capable to run extensions that can accelerate your AI. We are seeing that everywhere. I wouldn’t even say that’s going to accelerate; that’s going to be the default.

What you’re going to have is an AI workload running on top of everything else you have to do, from the tiniest of devices at the edge to the most sophisticated data centers. So if you look at a mobile phone or a PC, it has to run graphics, a game, the operating system, and apps — by the way, it now needs to run some level of Copilot or an agent. What that means is I need more and more compute capability inside a system that’s already kind of constrained on cost, size, and area. It’s great for us because it gives us a bunch of hard problems to go off and solve, but it’s clear what we’re seeing. So, I’d say AI is everywhere.

There was a lot of chatter going into Apple’s latest iPhone release about an AI super cycle with Apple Intelligence, the idea that Apple Intelligence would reinvigorate iPhone sales at a time when the mobile phone market in general has plateaued. When do you think AI — on-device AI — really does begin to reignite the growth in mobile phones? Because right now it doesn’t feel like it’s happening.

And I think there’s two reasons for that. One is that the models and their capabilities are advancing very fast, which changes how you manage the balance between what runs locally, what runs in the cloud, and things around latency and security. It’s moving at an incredible pace. I was just in a discussion with the OpenAI guys last week. They’re doing the 12 days of Christmas —

12 days of ship-mas, and they’re doing something every day. It takes two or three years to develop a chip. Think about the chips that are in that new iPhone: when they were conceived, when they were designed, and when the features that had to go inside that phone were decided. ChatGPT didn’t even exist at that time. So, this is going to be something that is going to happen gradually and then suddenly. You’re going to see a knee-in-the-curve moment where the hardware is now sophisticated enough, and then the apps rush in.

What is that shift? Is it a new product? Is it a hardware breakthrough, a combination of both? Some kind of wearable?

Well, as I said, whether it’s a wearable, a PC, a phone, or a car, the chips that are being designed are just being stuffed with as much compute capability as possible to take advantage of what might be there. So it’s a bit of chicken-and-egg. You load up the hardware with as much capability hoping that the software lands on it, and the software is innovating at a very, very rapid pace. That intersection will come where suddenly, “Oh my gosh, I’ve shrunk the large language model down to a certain size. The chip that’s going in this tiny wearable now has enough memory to take advantage of that model. As a result, the magic takes over.” That will happen. It will be gradual and then sudden.
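To put rough numbers on the idea of shrinking a model down to a size a small device’s memory can hold, here is a minimal back-of-the-envelope sketch in Python. The parameter counts, quantization widths, overhead factor, and the 2 GB memory budget are illustrative assumptions, not figures from the interview.

```python
# Back-of-the-envelope sketch: does a quantized model fit in a small device's memory?
# All numbers below are illustrative assumptions, not figures from the interview.

def model_memory_gb(num_params: float, bits_per_weight: int, overhead: float = 1.2) -> float:
    """Approximate resident memory for model weights.

    num_params      -- parameter count (e.g., 3e9 for a 3B-parameter model)
    bits_per_weight -- quantization width (16 = fp16; 8 or 4 = integer quantization)
    overhead        -- rough multiplier for KV cache, activations, and runtime buffers
    """
    bytes_for_weights = num_params * bits_per_weight / 8
    return bytes_for_weights * overhead / 1e9

if __name__ == "__main__":
    device_budget_gb = 2.0  # hypothetical memory budget for a wearable-class device
    for num_params, label in [(3e9, "3B"), (1e9, "1B")]:
        for bits in (16, 8, 4):
            needed = model_memory_gb(num_params, bits)
            verdict = "fits" if needed <= device_budget_gb else "does not fit"
            print(f"{label} model at {bits}-bit: ~{needed:.2f} GB -> {verdict} in {device_budget_gb} GB")
```

The takeaway is just the shape of the curve: a 3B-parameter model at 16-bit needs several gigabytes, while the same model quantized to 4-bit lands around 1.8 GB, which is roughly where a constrained device’s memory starts to matter.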

Are you bullish on all these AI wearables that people are working on? I know Arm is in the Meta Ray-Bans, for example, which I’m actually a big fan of. I think that form factor’s interesting. AR glasses, headsets — do you think that is a big market?

Yeah, I do. It’s interesting because in many of the markets that we have been involved in, whether it’s mainframes, PCs, mobile, wearables, or watches, some new form factor drives some new level of innovation. It’s hard to say what that next form factor looks like. I think it’s going to be more of a hybrid situation, whether it’s around glasses or around devices in your home that are more of a push device than a pull device. Instead of asking Alexa or asking Google Assistant what to do, you may have that information pushed to you. You may not want it pushed to you, but it could get pushed to you in such a way that it’s looking around corners for you. I think the form factor that comes in will be somewhat similar to what we’re seeing today, but you may see some of these devices get much more intelligent in terms of the push level.

There have been reports that Masayoshi Son, your boss at SoftBank, has been working with Jony Ive and OpenAI, or some combination of the three, to do hardware. I’ve heard rumors that there could be something for the home. Is there anything there that you’re working on that you can talk about?

I read those same rumors.

Amazon just announced that it’s working on the largest data center for AI with Anthropic, and Arm is really getting into the data center business. What are you seeing there with the hyperscalers and their investments in AI?

The amount of investment is through the roof. You just have to look at the numbers of some of the folks who are in this industry. It’s a very interesting time because we’re still seeing an insatiable investment in training right now. Training is hugely compute intensive and power intensive, and that’s driving a lot of the growth. But the level of compute that will be required for inference is actually going to be much larger. I think better than half, maybe 80 percent over time, will be inference. And the number of inference cases that will need to run is far larger than what we have today.

That’s why you’re seeing companies like CoreWeave, Oracle, and people who are not traditionally in this space now running AI cloud. Well, why is that? Because there’s just not enough capacity with the traditional large hyperscalers: the Amazons, the Metas, the Googles, the Microsofts. I think we’ll continue to see a changing of the landscape — maybe not a changing so much, but certainly opportunities for other players in terms of enabling and accessing this growth.

It’s very, very good for Arm because we’ve seen a very large increase in market share for us in the data center. AWS, which builds its Graviton general-purpose devices based on Arm, was at re:Invent this week. It said that 50 percent of all new deployments are Graviton. So 50 percent of anything new at AWS is Arm, and that’s not going to decrease. That number’s just going to go up.

One of the things we’re seeing is devices like Grace Blackwell from Nvidia, which pairs an Arm-based CPU with an Nvidia GPU. That’s a big benefit for us because what happens is the AI cloud is now running a host node based on Arm. If the data center now has an AI cluster where the general-purpose compute is Arm, they naturally want to have as much of the general-purpose compute that’s not AI running on Arm as well. So what we’re seeing is just an acceleration for us in the data center, whether it’s AI, inference, or general-purpose compute.

Are you worried at all about a bubble with the level of spending that’s going into hyperscaling and the models themselves? It’s an incredible amount of capital, and ROI is not quite there yet. You could argue it is in some places, but do you subscribe to the bubble fear?

On one hand, it would be crazy to say that growth continues unabated, right? We’ve seen that is never really the case. I think what will get very interesting in this particular growth phase is to see at what level real benefit comes from AI that can augment and/or replace certain levels of jobs. Some of the AI models and chatbots today are decent but not great. They supplement work, but they don’t necessarily replace work.

But if you start to get into agents that can do a real level of work and that can replace what people might need to do in terms of thinking and reasoning? Then that gets fairly interesting. And then you say, “Well, how’s that going to happen?” Well, we’re not there yet, so we need to train more models. The models need to get more sophisticated, etc. So I think the training thing continues for a bit, but as AI agents get to a level where they reason close to the way a human does, then I think it asymptotes on some level. I don’t think training can be unabated because at some point in time, you’ll get specialized training models as opposed to general-purpose models, and that requires fewer resources.

I was just at a conference where Sam Altman spoke, and he was really lowering the bar on what AGI will be pretty intentionally, and talked about declaring it next year. I cynically read into that as OpenAI trying to rearrange its profit-sharing agreement with Microsoft. But putting that aside, what do you think about AGI? When we will have it, what will it mean? Is it going to be an all-at-once, Big Bang moment, or is it going to be as Altman is talking about now, more like a whimper?

I know he has his own definitions for AGI, and he has reasons for those definitions. I don’t subscribe so much to the “what is AGI vs. ASI” (artificial superintelligence) debate. I think more about when these AI agents start to think, reason, and invent. To me, that is a bit of a cross-the-Rubicon moment, right? For example, ChatGPT can do a decent job of passing the bar exam, but to some extent, you load enough logic and information into the model, and the answers are there somewhere. To what level is the AI model a stochastic parrot that just repeats everything it’s found over the internet? At the end of the day, you’re only as good as the model that you’ve trained on, which is only as good as the data.

But when the model gets to a point where it can think and reason and invent, create new concepts, new products, new ideas? That’s kind of AGI to me. I don’t know if we’re a year away, but I would say we are a lot closer. If you’d asked me this question a year ago, I would’ve said it’s quite a ways away. You ask me that question now, and I say it’s much closer.

What is much closer? Two years? Three years?

Probably. And I’m probably going to be wrong on that front. Every time I interact with partners who are working on their models, whether it’s at Google or OpenAI, and they show us the demos, it’s breathtaking in terms of the kind of advancements they’re making. So yeah, I think we’re not that far away from getting to a model that can think and reason and invent. 

When you were last on Decoder, you said Arm is known as the Switzerland of the electronics industry, but there have been a lot of reports this year that you were looking at really going up the stack and designing your own chips. I’ve heard you not answer this question many times, and I’m expecting a similar non-answer, but I’m going to try. Why would Arm want to do that? Why would Arm want to go up the value chain?

This is going to sound like one of those “if I did it” answers, right? Why would Arm consider doing something other than what it currently does? I’ll go back to the first discussion we were having relative to AI workloads. What we are seeing consistently is that AI workloads are being intertwined with everything that is taking place from a software standpoint. At our core, we are computer architecture. That’s what we do. We have great products. Our CPUs are wonderful, our GPUs are wonderful, but our products are nothing without software. The software is what makes our engine go.

If you are defining a computer architecture and you’re building the future of computing, one of the things you need to be very mindful of is that link between hardware and software. You need to understand where the trade-offs are being made, where the optimizations are being made, and what the ultimate benefits to consumers are from a chip that has that type of integration. That is easier to do if you’re building something than if you’re licensing IP. If you’re building something, you’re much closer to that interlock and you have a much better perspective on the design trade-offs to make. So, if we were to do something, that would be one of the reasons we might.

Are you worried at all about competing with your customers though?

I mean, my customer is Apple. I don’t plan on building a phone. My customer is Tesla. I’m not going to build a car. My customer is Amazon. I’m not going to build a data center.

What about Nvidia? You used to work for Jensen.

Well, he builds boxes, right? He builds DGX boxes, and he builds all kinds of stuff.

Speaking of Jensen — we were talking about this before we came on — when you were at Nvidia, CUDA was really coming to fruition. You were just talking about the software link. How do you think about software as it relates to Arm? As you’re thinking about going up the stack like this, is it lock-in? What does it mean to have something like a CUDA?

One can look at lock-in as an offensive maneuver that you take: “I’m going to do these things so I can lock people in.” And/or you provide an environment where it’s so easy to use your hardware that, by default, you’re then “locked in.” Let’s go back to the AI workload commentary. So today, if you’re doing general-purpose compute, you’re writing your algorithms in C, JAX, or something of that nature.

Now, let’s say you want to write something in TensorFlow or Python. In an ideal world, what does the software developer want? The software developer wants to be able to write their application at a very high level, whether that is a general-purpose workload or an AI workload, and just have it work on the underlying hardware without really having to know what the attributes of that hardware are. Software people are wonderful. They are inherently lazy, and they want to be able to just have their application run and have it work.

So, as a computer architecture platform, it’s incumbent upon us to make that easy. It’s a big initiative for us to think about providing a heterogeneous platform that’s homogeneous across the software. We’re doing it today. We have a technology called Kleidi, and there are Kleidi libraries for AI and for the CPU. All the goodness that we put inside our CPU products that allows for acceleration uses these libraries, and we make those openly available. There’s no charge. For developers, it just works. Going forward, since the vast majority of the platforms today are Arm-based and the vast majority are going to run AI workloads, we just want to make that really easy for folks.
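To make the “write at a high level and it just works” idea concrete, here is a minimal sketch, not Arm sample code: the application only touches a framework’s API (PyTorch in this hypothetical), and whatever CPU-level acceleration the framework’s backend provides on Arm hardware, which is the layer where libraries like Kleidi plug in, stays invisible to the developer.

```python
# Minimal sketch of the developer experience described above: the application is written
# against a high-level framework (PyTorch here) with no hardware-specific code.
# Any CPU acceleration the framework's backend happens to provide on Arm platforms
# (e.g., via vendor libraries it has integrated) is transparent to this code.
import torch

class TinyClassifier(torch.nn.Module):
    def __init__(self, dim: int = 256, classes: int = 10):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, dim),
            torch.nn.ReLU(),
            torch.nn.Linear(dim, classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = TinyClassifier().eval()
with torch.inference_mode():
    logits = model(torch.randn(8, 256))  # identical code on a laptop, phone SoC, or server CPU
print(logits.shape)  # torch.Size([8, 10])
```

The design point is the one Haas is making: the developer never names the instruction-set extensions or acceleration libraries underneath; the platform’s job is to make sure the fast path gets taken anyway.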

I’m going to ask you about one more thing you can’t really talk about before we get into the fun Decoder questions. I know you’ve got this trial with Qualcomm coming up. You can’t really talk about it. At the same time, I’m sure you feel the concern from investors and partners about what will happen. Address that concern. You don’t have to talk about the trial itself, but address the concern that investors and partners have about this fight that you have.

So the current update is that the case is scheduled to go to trial on Dec. 16, which isn’t very far away. I can appreciate, because we talk to investors and partners, that what they hate the most is uncertainty. But on the flip side, I would say the principles as to why we filed the claim are unchanged, and that’s about all I can say.

All right, more to come there. So Decoder questions. Last time you were here on the podcast, Arm had not yet gone public. I’m curious to know now that you’re a couple years in, what surprised you about being a public company?

I think what surprised me on a personal level is the amount of bandwidth that it takes away from my day because I end up having to think about things that we weren’t thinking about before. But at the highest level, it’s actually not a big change. Arm was public before. We consolidated up through SoftBank when it bought us. So, in terms of being able to report quarterly earnings and have them reconciled within a timeframe, we had good muscle memory on all of that. Operationally for the company, we have great teams. I have a great finance team that’s really good at doing that. For me personally, it was just the appreciation that there’s now a chunk of my week that’s dedicated to activities that I wasn’t really working on before.

Has the structure of Arm organizationally changed at all since you went public?

No. I’m a big believer in not doing a lot of organizational changes. To me, organizational design follows your strategy, and strategy follows your vision. If you think about the way you’ve heard me talk about Arm publicly for the last couple of years, that’s pretty unchanged. As a result, we haven’t done much in terms of changing the organization. I think organization changes are horrendously disruptive. We’re an 8,000-person company, so we’re not gigantic, but if you do a gigantic organization change, it better have followed a big strategy change. Otherwise, you’ve got off-sites, team meetings, and Zoom calls talking about my new leaders. If it’s not in support of a change of strategy, it’s a big waste of time. So I really try hard not to do much of that.

What we talked about earlier, potentially looking at going more vertical and the value there, seems like a big change that could affect the structure.

If we were to do that, that’s right.

Is there a trade-off that you’ve had to make this year in your decision-making that was particularly hard that you can talk about, something that you had to wrestle with? How did you weigh those trade-offs?

I don’t know if there was one specific trade-off. In this job as a CEO — gosh, it’ll be three years in February — you’re constantly doing the mental trade-off of what needs to happen in the day-to-day versus what needs to happen five years from now. My proclivity tends to be to think five years ahead as opposed to one quarter ahead. I don’t know if there’s any major trade-off that I would say I make, but what I’m constantly wrestling with is that balance between what is necessary in the day-to-day versus what needs to happen in the next five years.

I’ve got great teams. The engineering team is fantastic. The finance team is fantastic. The sales and marketing teams are great. In the day-to-day, there isn’t a lot I can do to impact what those jobs are, but the jobs that I can impact are over the next five years. What I try to do is spend my time on work only I can do, and if there’s work that the team can do where I’m not going to add much, I try to stay away from it. But the biggest trade-off I wrestle with is the day-to-day versus the future.

How different does Arm look in five years?

We don’t know that we’ll look very different as a company, but hopefully we continue to be an extremely impactful company in the industry. I have big ambitions for where we can be.

I’d love to know what it’s like to work with [Masayoshi Son]. He’s your largest shareholder and your board chair. I’m sure you talk all the time. Is he as entertaining in the boardroom as he is in public settings?

Yes. He’s a fascinating guy. One of the things I admire a lot about Masa, and I don’t think he gets enough credit for this, is that he is the CEO and founder of a 40-year-old company. And he’s reinvented himself a lot of times. I mean, SoftBank started out as a distributor of software, and he’s reinvented himself from being an operator with SoftBank Mobile to an investor. He’s a joy to work with, to be honest with you. I learn a lot from him. He is very ambitious, obviously, loves to take risks, but at the same time, he has a good handle on the things that matter. I think everything you see about him is accurate. He’s a very entertaining guy.

How involved is he in setting Arm’s long-term future with you?

Well, he’s the chairman of the company, and the chairman of the board. From that perspective, the board’s job is to evaluate the long-term strategy of the company, and with my proclivity towards thinking also in the long term, he and I talk all the time about those kinds of things.

You’ve worked with two very influential tech leaders: Masa and Jensen at Nvidia. What are the traits that make them unique?

That’s a wonderful question. I think people who build a company and are running it 20, 30 years later and drive it with the same level of passion and innovation — Jensen, Masa, Ellison, Jeff Bezos, I’m sure I’m leaving out names — carry a lot of the same traits. They’re very intelligent, obviously brilliant. They look around corners, work incredibly hard, and have an incredible amount of courage. Those ingredients are necessary for people who stay at the top that long.

I’m a big basketball fan, and I’ve always drawn analogies there. If you think about a Michael Jordan or a Kobe Bryant, when people talk about what made them great, obviously their talent was through the roof and they had great athleticism, but it was something in their character and their drive that put them at a different level. And I think Jensen, Son, Ellison, and the other names I mentioned all fall in the same group. Elon Musk too, obviously.

All right, we’re going to leave it there. Rene, thank you so much for joining us.
