Rene Haas is chief executive of Arm, the chip designer behind the processors in 99 per cent of all smartphones. After being bought by SoftBank in 2016, the UK-headquartered company became last year’s biggest initial public offering, in a deal valuing it at $54.5bn on Nasdaq. Since then, Arm’s market capitalisation has nearly tripled to around $140bn, as it has been caught in the updraft of investor excitement about artificial intelligence.
Based in Silicon Valley, Haas has worked in the industry for almost 40 years, including seven years at AI chipmaker Nvidia before joining Arm in 2013. Since becoming chief executive in 2022, he has pushed Arm to diversify further from its mobile phone roots into PCs, automotive and industrial components and, increasingly, servers — all underpinned by the same promise of power efficiency that has kept its technology at the heart of the iPhone.
Arm does not manufacture its own processors — though a recent report suggested that may soon change — instead licensing a growing array of designs to the biggest names in the tech industry, including Microsoft, Nvidia, Apple, Google, Samsung, Amazon, and Taiwan Semiconductor Manufacturing Company.
After Apple switched its Mac processors from Intel to its own Arm-based versions in 2020, Microsoft this year unveiled a series of Arm-powered Windows PCs, hailing a new era of the “AI PC”.
In this conversation with FT global technology correspondent Tim Bradshaw, Haas discusses the growing importance of software to chipmakers and how AI is changing the devices we use.
Tim Bradshaw: Microsoft has been making a big push with Arm-based Windows PCs in the past few weeks but this isn’t the first time Microsoft tried to make that switch. What’s different now compared with the failed efforts of the past, such as 2012’s Windows RT?
Rene Haas: I worked on the very first Windows on Arm PCs back in 2012. And a lot has changed since that time. One of the things that’s probably the biggest difference now is that virtually the entire application ecosystem is native to Arm — meaning that, not only is the performance going to be fantastic, but try to find an application that’s not going to run. If you go back 12 years when Windows on Arm kicked off, it was a completely different world in terms of local apps versus cloud, and Windows on Arm didn’t support a lot of popular applications [such as Apple’s iTunes and Google’s Chrome web browser]. That was a killer blow.
Fast forward to 2024, there’s no issue with application ecosystems. And what’s been proven on the Windows on Arm platforms as an extension of the other ecosystem, macOS, is the experience is phenomenal, when we look at the battery life and the performance that the Macs have . . . It’s going to be a very different game this time.
TB: And now with the extra sales pitch of ‘AI everywhere’. Where do you think we’re up to in finding the right applications for these new AI tools?
RH: Talking about AI PCs, I think it’s very early. I think you have Copilot [Microsoft’s brand for products enhanced by AI assistants recently extended to its latest AI PCs] that has now been introduced. So the feature set that has been talked about, I think it’s going to start to take advantage of the underlying hardware.
We know there’s going to be some other [Windows AI PC] systems coming out in the upcoming years. So, while the first-generation systems are going to be interesting, the second generation systems are going to be even more [so]. And folks who bought the first ones are probably going to be a little bit green with envy when they see what the second ones look like.
TB: Buying version one of any new product is just part of the risk/reward of being an early adopter. Are you an early adopter? What tech are you playing with right now?
RH: Whether it’s game consoles, whether it’s phones . . . I’m very much an early adopter. I probably have every mobile phone in existence. I’m a big foldable phone guy. I think they’re great. Because they are small enough when folded to act like a mobile phone. But when you expand it out, you can look at spreadsheets, you can watch videos. It’s like a mini tablet.
TB: It seems like we’re in another moment where people are experimenting with different form factors for consumer electronics, with folding phones and AI glasses. Have you tried any of those new AI wearables?
RH: I have tried some of them. I do like the Meta Ray-Ban augmented reality glasses. They’re stylish. The video quality is good. They are good sunglasses and they don’t feel bulky or weird. Me, personally, I don’t like something heavy on my head. So that’s why I like the Ray-Bans and they have Arm inside, which is also what I like.
TB: Do you see that becoming a big product category? Because we’ve been here before with Google Glass which — to say the least — was not successful.
RH: I think augmented reality is still emerging in terms of the capabilities of that field. I think there’s a huge opportunity with holograms, with display technology. That is an area that is probably early days still in terms of being figured out. I think it’s a generational thing . . . I think a generation has to grow up being comfortable with wearing things for an extended amount of time. [So] it’s more of a niche item right now.
TB: All of these products, whether AI PCs or smart glasses, are part of a broader trend for moving from AI services that run in the cloud — like the ChatGPT app, which needs an internet connection to work — to systems that run on the “edge” [industry jargon essentially meaning people’s or companies’ own devices, like phones or factory equipment]. There’s much more competition here than in AI chips, where Nvidia totally dominates right now. Do you see the edge becoming a bigger opportunity for chipmakers than the cloud?
RH: We are still in very early days in terms of AI workloads running everywhere. So to your point of, ‘what is an edge device?’ maybe the user would describe that as ‘not the cloud’. So what has to happen is the [AI] models . . . need to evolve. I think the models need to get a little bit smaller, a little bit more efficient to run in these other areas.
Where is Arm going to play? They’re all going to run through Arm because, first off, you have to have a CPU [central processing unit], which is table stakes for any of these end devices, and the installed base is all Arm anyway. So the software ecosystem is going to look to optimise around Arm.
We’re showing some information at Computex [the trade event in Taiwan this week] around compute libraries that will essentially make it very, very easy to run these AI workloads on Arm CPUs. Developers, in the past, did not have access to the Arm CPU when they wanted to run an AI application. Arm will now be making these libraries available to developers. So they can write the application and it takes advantage of the hardware. It could run three times faster, four times faster, at the same power.
TB: These libraries are part of the broader package of Arm products that you describe as the ‘compute subsystem’. This is a core part of Arm’s strategy now, to go beyond designing one single chip for customers to build on. Can you explain more about that — and why you’re doing it?
RH: What really makes Arm unique is we have the most ubiquitous computer architecture on the planet. Our CPUs are in 30bn devices per year, almost 300bn in total. What we are finding is that the chips are becoming increasingly more difficult to build and it takes longer to build them . . . as you get to smaller transistors.
So how can Arm help? Let’s say, in a server, you might have 128 Arm CPUs. And with those 128 Arm CPUs, you have all of the [networking systems] that connect them together. You have a memory mapping system, you have a mesh network . . . Previously, the end customer would have to put all that stuff together and then build their chip. With compute subsystems, we put all that together for them.
We are in mobile phones, we are in PCs, we’re in automotive applications, we are in complex AI training, and we are in general-purpose server[s]. All of those are Arm CPUs [and] areas that we are going to do compute subsystems. So, over time, it’s going to be a very, very large part of our business.
TB: One of your big new customer wins on the data centre side recently was Microsoft which is doing a new Arm-based CPU for its cloud called Cobalt. You’ve now got Amazon, Google, Microsoft — the three biggest cloud computing providers — all running Arm CPUs as part of their cloud platforms. When did that work start from your side to see that come to fruition?
RH: We have been working on this for over 10 years. It’s been a tremendous amount of work [in which] two things had to come together. The CPUs had to get performant enough against the competition. They had to be very efficient. They had to be very high speed. And we had to have all the components around it. And then . . . the software ecosystem had to have everything required that you could just run the servers. So Linux distributions, like Red Hat and SUSE. We were working in parallel to have all the pieces of the software together.
When you combine the software being ready with world-class products and power efficiency, you now have a compelling advantage in terms of the chip. Now, what makes it even more compelling is, by building a custom chip, you can essentially build a custom blade, a custom rack, and a custom server that’s very unique to what Microsoft is running with Azure or what Google is running in Google Cloud or AWS.
TB: Power efficiency is a big part of Arm’s pitch over traditional server chipmakers like Intel and AMD. Microsoft said recently that it’s investing so fast in AI data centres that it’s looking like it might miss some of its climate targets. That must be a problem all the Big Tech companies are facing right now?
RH: Oh, yes, it’s massive. Two things are going to accelerate Arm’s adoption in the cloud. One is just broadly, this power efficiency issue. And secondly, the fact that, on AI, we can greatly reduce power by this customisation. Just look at Nvidia. Nvidia built a chip called Grace Hopper and then they built a chip called Grace Blackwell. They are essentially replacing the Intel or AMD CPU with an Arm CPU, which is called Grace.
TB: One Big Tech company that hasn’t announced an Arm-based chip in its data centres yet is Meta, Facebook’s owner. Its new chip for AI inference [the kind needed to deliver AI services rather than create them], called MTIA, is using an open-source alternative to Arm’s architecture called RISC-V . . . Are they using Arm in other ways or have they decided to go down a different path?
RH: This MTIA chip is an accelerator. And that accelerator has to connect to a CPU. So it can connect to an Arm CPU, or it can connect to an Intel CPU. RISC-V is not interesting from a CPU standpoint, because it’s not running any key software . . . I’ll leave it to Meta to say whether they’re going to connect to Intel or Arm.
TB: The analysts I speak to see big potential growth for RISC-V in areas like automotive, where Arm is also hoping to grow. Do you worry that RISC-V is starting to nibble at the edges?
RH: Where I don’t see it nibbling anywhere is running key software applications. I think there’s a common misunderstanding between the RISC-V architecture as it applies to being a chip and when it’s really running [key] software. Because it’s all about the software.
And, again, back to what makes Arm very unique: every mass popular application you can think of has been ported to and optimised for Arm. It takes a long, long time not only to get the software written, but ported and optimised. There’s no software anywhere for RISC-V in these places. None.
TB: So, if not competition from RISC-V, what does keep you up at night?
RH: The things that I worry about are the stuff that’s inside my control. We have massive opportunity with all these compute subsystems. We have massive opportunity with growth in AI. We have massive opportunity to reduce power to go solve this issue relative to data centres. It’s just making sure that we can execute on the strategies we have, because we are at a magical time in our industry relative to the growth potential.
TB: How much does being a public company keep you awake at night?
RH: Generally speaking, it doesn’t change how I think about running the company because I don’t really think about the company from quarter to quarter. I think about the company from year to year. Most of my discussions that I have with our internal teams or engineers are about 2027, 2028.
TB: Unfortunately, Wall Street does tend to look at things quarter by quarter. You’ve had a lot of stock-price volatility around your quarterly earnings reports. That’s not uncommon for a newly listed company but do you think investors really understand the Arm business?
RH: What I would say about the volatility is we’ve had three quarters of being a public company and each quarter was bigger than the last one. And each quarter that we talked about going forward was larger . . . we basically indicated that we see 20 per cent growth year on year and we see that continue for the next few years.
We achieved $3bn in revenue over this past year. It took us 20 years to get to $1bn. It took us, I think, another 10 to get to $2bn. It took us two years to get to $3bn. And we’re looking to get to $4bn in one year. So the trajectory is in the right place.
We have incredible visibility in terms of our business, [not only because] we get reports from our customers, but because our market share is so high.
TB: Some investors worry about visibility in two parts of your business in particular. One of them is Apple, one of your biggest customers but which is famously not very open with its partners. The other is Arm China. You warned in your IPO prospectus of past problems obtaining “accurate information” from Arm China. What insight do you really have?
RH: We have great insight with Apple. They’re a phenomenal partner for us. They have signed a long-term [contract] extension. They’re very committed to Arm.
Arm China, that’s our joint venture in China. They are essentially a distributor for us. So we have very good visibility in terms of how we work with partners there. With China, the issues that we’ve faced in terms of export controls are no different from those facing other [chip] companies. But, in general, I would say, with Arm China, things are going quite well.
TB: How has being a public company changed your relationship with SoftBank and its chief executive, Masayoshi Son? They’re still a 90 per cent shareholder but you’re more out on your own now. How does that dynamic change?
RH: I think it’s changed in the sense that, as a public company, we now have a board that has independent directors that represent shareholders. So all the things that we have to do from a governance standpoint, that’s a little bit different. I’d say we are certainly more independent in terms of how we think about the company, how we talk about the company. But SoftBank’s our largest shareholder, so obviously they have a big say in terms of things at the boardroom table.
With Masa, I would say the relationship is no different. We talk all the time. He’s a brilliant guy. I think he gets a little bit of a bad rap in the press. He’s a guy who started the company 40 years ago and is still running it. There’s a pretty small group of people who have done that kind of thing, and the company is still broadly successful.
TB: How does Arm fit in with SoftBank’s broader strategic goals around AI?
RH: Clearly, Masa is very bullish on all things AI and — given that it’s pretty hard to talk about an AI application that doesn’t bump into Arm — we’re at the centre of many of those things. For example, SoftBank made an investment into a UK company called Wayve, which is doing some amazing work in LLMs [large language models, the complex AI systems that sit behind applications such as ChatGPT] for full self-driving cars. It’s running on Arm. So there is an area where if Wayve does well, Arm does well.
TB: Does that mean you’re going to move into making your own AI chips, as Nikkei reported recently?
RH: I can’t give you anything on that one. I can’t comment.
TB: Silicon Valley in general, and the chip industry in particular, is full of ‘frenemies’. Nvidia’s biggest customers are all making their own AI chips, for example. Where do you think you can, and can’t, compete with your customers?
RH: I tend to think more about where can we add value and where is the industry going? Back to compute subsystems. When we kicked the idea off, this was a bit controversial because, by doing a full subsystem implementation, some customers might say, ‘Hey, that’s the kind of work I do. Arm, I don’t need to have your finished solution.’ Fast forward, we solve a lot of problems in terms of engineering overhead. We solve a lot of problems relative to time to market. We solve a lot of problems relative to broadening out Arm’s platforms.
So that’s an example of something that might be a frenemy kind of thing where people might look at it and say, ‘That’s my domain’. But I would say it’s worked out far better than we thought. Even the early customers who pushed back at it are adopting it.
TB: Another example of a frenemy for Arm is Intel. At the same time as competing for a lot of Intel’s PC and server business, you’re actually getting closer to them on the foundry side. You were recently on stage at an Intel event — which some people who have been watching this industry for 30 years might have seen as a ‘hell freezing over’ kind of moment. What is the nature of that relationship exactly?
RH: Yeah, that’s a great example of the world moving around. Intel, 10 years ago, probably saw it was very beneficial to see Arm as not a healthy competitor. Fast forward, Intel has a burgeoning business that is trying to grow around Intel Foundry. What does Intel Foundry need? They need volume. Well, who drives the most volume in terms of CPUs? It’s Arm. So they obviously see the size of that opportunity . . . They’ve taken a lot of money from the US government on the Chips Act and they need to put that money to work. I think working with Arm is going to be the fastest way they can do that.
TB: We’ve talked a lot about AI in the abstract. What are the particular applications of AI that you’re most excited about personally?
RH: A really simple AI application that I use is to remove people from photographs. I’ll take pictures of my kids, my grandkids, my friends, and someone will photobomb. And you can just clean that stuff up. With [Google Photos] Magic Eraser, you can do that. Crazy simple, but that’s AI.
But the areas that I personally find far more interesting are drug research and medical. A very simple example: You’re ill, you go to the pharmacy, they prescribe some medicine to you, and you look at the medicine and the side effects are as generic as it can be. That seems like something that, if the doctor knew my DNA genome sequence and would be able to map out exactly which drugs will give me what kind of reaction, knowing exactly my background and profile, that would be compelling. I was meeting this morning with somebody who’s in this industry and was asking that question. With AI, that’s probably three to four years away.
Another interesting example is drug research. How long does it take to develop a new drug? Ten years. That can be cut in half, it can be cut by two-thirds by using AI. That to me is incredibly exciting.
TB: Some AI boosters argue the technology will soon replace all human labour. Do you think your grandchildren will have to work?
RH: I hope so. I hope so. What a life if they don’t.
This transcript has been edited for brevity and clarity