Right around the time I last wrote on Advanced Micro Devices, Inc. (NASDAQ:AMD), a number of major industry disruptions occurred:
- In September 2022, the Ethereum Merge moved the network from proof-of-work mining (done largely on gaming GPUs) to proof-of-stake operation. This led to a large decrease in demand, and price reductions, for gaming GPUs. Since then, gaming GPU sales appear to have stabilized at a lower level (as expected).
- PC sales took a big downturn during Q3 2022. PC manufacturers were already sitting on a lot of inventory, and they quickly stopped buying new components, including CPUs. As a result, PC CPU sales ground nearly to a halt around a year ago. It has taken several quarters, but the inventory correction appears to be over. PC CPU sales have started tracking back upwards for AMD and Intel Corporation (INTC).
- ChatGPT launched on November 30, 2022, setting off shockwaves in the data center and causing server CPU buyers to pause and reconsider the configuration of their infrastructure.
My “Strong Buy” rating from that article still looks good: the stock is up 67% in the 11 months since then. But due to those major industry disruptions, parts of my forecast are unlikely to be achieved at the predicted cadence. It’s now time to update the thesis given how much the landscape has changed.
AI should drive major gains for AMD throughout 2024
AMD’s AI opportunity is huge. Multiple players in the sector have pointed to an opportunity on the order of $150 billion by 2027. For comparison, AMD’s entire revenue for Q2 totaled $5.4 billion. There are many efforts, from both established firms and startups, working on a variety of AI projects. Some of those are likely to take off and scale meaningfully.
AMD’s product lines are well suited to capitalize on the explosion of interest in AI. 2024 should be a very strong AI-powered year of growth for AMD.
AMD Instinct: leadership generative AI accelerators
Nvidia Corporation’s (NVDA) H100 is the most desirable AI product on the market right now, with production runs reportedly sold out into 2024. But AMD Instinct MI300 accelerators, launching in Q4, appear to be very competitive with Nvidia’s H100, with MI300X being significantly superior for large model inference applications.
AMD’s MI300X is a leadership generative AI accelerator with 192GB of HBM3 memory. The MI300X beats Nvidia’s H100 with up to 2.4x the HBM density and up to 1.6x the HBM bandwidth. That capacity allows a single MI300X to run models of up to roughly 80 billion parameters, something the H100 can’t do. AMD demonstrated the 40-billion-parameter Falcon-40B model running on just one MI300X; running that same model requires two of Nvidia’s H100s.
Similarly, AMD calculated that the PaLM 2 model could run on just 4 MI300X accelerators, compared with the 10 H100s needed for the same model. This means the MI300X has a much lower total cost of ownership for scaling inference applications. Given that the H100 costs around $35k, that is a staggering difference in performance per dollar.
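As a sanity check on the capacity claim, here is the back-of-the-envelope arithmetic. This is my own sketch: the 16-bit-weights assumption and the ~15% overhead allowance are mine, not AMD’s disclosure.

```python
# Rough ceiling on the model size that fits in a single GPU's memory,
# assuming 16-bit weights (2 bytes per parameter) and ~15% of memory
# reserved for activations/KV cache. Both assumptions are illustrative.
BYTES_PER_PARAM = 2  # FP16/BF16 weights

def max_params_billions(hbm_gb: float, overhead_fraction: float = 0.15) -> float:
    usable_bytes = hbm_gb * 1e9 * (1 - overhead_fraction)
    return usable_bytes / BYTES_PER_PARAM / 1e9

print(f"MI300X (192 GB): ~{max_params_billions(192):.0f}B parameters")  # ~82B
print(f"H100    (80 GB): ~{max_params_billions(80):.0f}B parameters")   # ~34B
```

Under these assumptions, Falcon-40B needs roughly 80GB just for its weights in 16-bit precision, overflowing a single 80GB H100 but fitting comfortably on one MI300X, which matches AMD’s demo.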
AMD also announced the MI300A, the world’s first APU accelerator for AI and HPC, combining CDNA 3 GPU cores and Zen 4 CPU cores on the same package with shared memory. AMD’s innovative chiplet technology allows the company to offer customized high-powered accelerators tailored to the needs of data center customers.
AMD disclosed that it had more than 20 times as many AI accelerator engagements in Q2 as it did in Q4 2022. Management also disclosed committed customer orders for the MI300, and expects those customers to ramp more quickly than they typically would with server CPU deployments. In the data center alone, AMD management expects the market for AI accelerators to reach $150 billion by 2027. Management indicated that Q4 MI300 sales would be several hundred million dollars and would ramp throughout 2024.
Among the big customers rumored to be ramping MI300 during the next year are Meta Platforms (META), Microsoft’s (MSFT) Azure, and Amazon’s (AMZN) AWS. Azure already has a big deployment of MI250. And Meta has led the way in developing the PyTorch software that allows AMD Instinct chips to run seamlessly in applications originally written for Nvidia chips. Meta could quickly become a gigantic customer for AMD.
Given the strong demand for data center accelerators, and the leadership specs of the MI300 known so far, AMD accelerator revenues could easily exceed $1 billion per quarter during 2024, and keep growing from there. We’ll know a lot more after the MI300 launch in Q4.
AMD EPYC server processors
As a result of the AI wave, the major cloud providers have been redesigning their data center configurations. In the short term, this has resulted in a pause in server CPU purchasing as the cloud providers digested their inventory. But these new designs also allow the cloud providers to move away from their costly and inefficient Intel-based infrastructure and fully embrace the staggeringly dominant advantages of AMD EPYC.
AMD EPYC server processors blow the competition out of the water on performance metrics and total cost of ownership calculations. Just 15 AMD EPYC servers can do the work of 38 Intel servers, or 43 Ampere servers.
On any number of benchmarks, EPYC dominates Intel’s performance.
The AI wave presents problems for data center operators, including power management. Many data centers are limited in the amount of electrical load they can handle, and AI accelerators exacerbate this by consuming far more power while also generating far more waste heat (which in turn increases the power needed for cooling). This makes AMD EPYC’s enormous advantage on a performance-per-watt basis extremely attractive for AI-heavy data center designs.
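To see why performance per watt matters so much under a fixed power budget, consider a toy consolidation example. The 15-vs-38 ratio is the TCO claim cited above; the power budget and per-server wattage are hypothetical round numbers of mine.

```python
# Toy example: a fixed facility power budget with a hypothetical ~1 kW
# per 2-socket server for both vendors. The 38/15 consolidation ratio
# is AMD's TCO claim cited above; everything else is illustrative.
POWER_BUDGET_KW = 100.0
SERVER_KW = 1.0

servers_in_budget = int(POWER_BUDGET_KW / SERVER_KW)  # 100 servers either way
epyc_work_per_server = 38 / 15                         # ~2.5x an Intel server

print(f"Per-server (and per-watt) advantage: ~{epyc_work_per_server:.1f}x")
print(f"Within the same {POWER_BUDGET_KW:.0f} kW budget, {servers_in_budget} "
      f"EPYC servers deliver the work of ~{servers_in_budget * epyc_work_per_server:.0f} Intel servers")
```

Under a hard power cap, a performance-per-watt advantage translates directly into more deliverable compute from the same facility.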
Major hyperscalers, like Meta, will need to refresh their legacy server CPUs in 2024. AMD management disclosed that EPYC has already surpassed 50% market share with some major cloud providers. Given the enormous advantages EPYC provides, it would not be surprising for AMD to surpass 50% share of the entire server CPU market in 2024. Management also guided to having over 900 cloud instances of AMD EPYC servers by the end of 2023, double the number available when AMD’s Data Center segment was doing $1.6 billion in quarterly revenue.
Given the obvious advantages, EPYC should become the default CPU for data centers and quickly make inroads in the enterprise. When data centers finish digesting their inventory and proceed with their new designs, the server CPU market should pick up steam quickly, and EPYC should continue to take share rapidly. It would not be surprising for EPYC to reach $2.5 billion to $3.5 billion per quarter in sales in 2024 and grow from there.
AMD Ryzen AI processors
AMD was the first to integrate AI into PC CPUs with its Ryzen AI product line. While there are not currently many applications that make use of the on-chip inference processing units, I expect such applications to grow in number and scope in the coming year and beyond. As AI models develop, more and more of the processing load will be moved to the edge. This will save money on AI product deployment and push consumers toward AI-powered CPUs.
Due to the PC inventory correction, PC manufacturers essentially paused new PC designs. Now that the correction is ending, we should expect to see new design wins for Ryzen AI processors showing up in volume. As with EPYC server CPUs, AMD’s Ryzen product line outperforms Intel’s, especially on a performance-per-watt basis. That performance-per-watt dominance is the main reason AMD laptop designs have such amazing battery life.
As PC CPU purchases resume and AMD returns to growing market share through the new design wins, the Client segment should begin to grow quickly. It would not be surprising for the Client segment to snap back to $1.5 billion per quarter during 2024 and grow from there. Depending on how the PC market shapes up, figures approaching $2.5 billion per quarter are not out of the question in 2024.
AMD Radeon gaming GPUs
Researchers have learned that AMD Radeon gaming GPUs can be used for AI projects, and are cheaper on a performance per dollar basis than comparable Nvidia GPUs. This is a new area of exploration, and the total addressable market (“TAM”) for this type of activity is unclear. It’s another possible source of upside as 2024 progresses.
From data center accelerators to server CPUs to desktop processing power, AMD has the opportunity for enormous gains from AI.
AI competitive landscape
At this point, any Nvidia bulls still reading the article are probably jumping out of their chairs to talk about software.
For a long time, Nvidia has developed software to support the use of its GPUs. The CUDA software layer is a proprietary application programming interface that allows developers to do general-purpose processing on Nvidia GPUs. Without much competition in the space, CUDA has become a very popular solution. Having developers learn a new platform and porting CUDA code to other platforms are both hurdles to switching to another firm’s hardware. This platform lock-in has led many people to believe that CUDA is part of an impenetrable moat that will keep Nvidia GPUs at near-universal market share in the data center.
But in recent years the landscape has changed. Many firms do not want to be locked into the (expensive) Nvidia platform. As a result, open source frameworks like TensorFlow and PyTorch, which abstract away the underlying hardware, have grown in popularity and attracted significant investment from major computing-oriented firms like Alphabet/Google (GOOG) and Meta.
PyTorch in particular is taking off in a big way. MosaicML and others have worked to make code written for CUDA run seamlessly on AMD’s MI250. Without any optimization, the MI250 was already running CUDA-targeted code inside PyTorch at about 80% of the performance of an Nvidia A100. With optimization, performance could be around parity. And you can be certain that AMD and others are working on exactly those optimizations.
Let me repeat that for clarity: it is now possible to run code written for CUDA, via PyTorch, on AMD GPUs, often right out of the box, with zero code changes and at high performance. That “impenetrable moat” may be more of a small speed bump in many cases.
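To make the “zero code changes” point concrete, here is a minimal sketch of what that portability looks like in practice. This is my illustration, not MosaicML’s code: on ROCm builds of PyTorch, the torch.cuda API is backed by AMD’s HIP runtime, so the same “CUDA-style” script runs unmodified on an Instinct GPU.

```python
# Standard PyTorch written the "CUDA way". On a ROCm build of PyTorch,
# torch.cuda is backed by AMD's HIP runtime, so this same script runs
# on an Instinct GPU with zero code changes. Sizes are illustrative.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
name = torch.cuda.get_device_name(0) if device.type == "cuda" else "CPU"
print(f"Running on: {name}")  # reports whichever Nvidia or AMD device is found

model = nn.Sequential(
    nn.Linear(4096, 4096),
    nn.GELU(),
    nn.Linear(4096, 4096),
).to(device)

x = torch.randn(64, 4096, device=device)
with torch.no_grad():
    y = model(x)
print(y.shape)  # torch.Size([64, 4096]) on either vendor's hardware
```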
Nvidia still provides a strong product ecosystem, but its advantage is eroding quickly. And AMD doesn’t need to be competitive in every application: achieving just 20% market share in AI would send AMD’s earnings per share skyrocketing. Moreover, all the major data center customers want to see AMD with at least 20% data center GPU market share in order to keep Nvidia’s pricing competitive.
We’ll know a lot more about how strong MI300 is after the Q4 launch, when benchmarks and H100 comparisons will be available. But all the signs right now are that MI300 will be a very competitive product, and the market is not expecting that.
Inference vs training
MI300X’s advantage in inference could be key.
AI models have two major phases of development. First, a model is trained by being fed huge amounts of data. For example, ChatGPT is rumored to have been trained and fine-tuned on about 1,000 A100 GPUs over a few months. Then, once a model is trained, providing a service to users requires inference: applying the trained model to new data. For example, when a user types a query into ChatGPT, the system runs the model on that new query.
Model training on enormous datasets is very resource intensive, but not very time sensitive. It’s usually OK if it takes months. On the other hand, inference for one query is usually not that resource intensive, but is often very time sensitive. It’s usually not OK if your ChatGPT response takes 6 weeks.
Inference on a model for one user may not need many resources. But if there are millions of simultaneous users, the hardware demand for inference could be orders of magnitude larger than for training needs.
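A toy fleet-sizing calculation shows how quickly that flips the hardware math. Every input here is a hypothetical of mine except the rumored ~1,000-GPU training cluster mentioned above.

```python
# Illustrative fleet-sizing arithmetic: why inference demand can dwarf
# training demand once a model is popular. All inputs are hypothetical
# except the rumored ~1,000-GPU training cluster cited in the text.
TRAIN_GPUS = 1_000
USERS = 10_000_000             # assumed concurrent users at peak
QUERIES_PER_USER_PER_MIN = 1   # assumed query rate per user
QUERIES_PER_GPU_PER_MIN = 60   # assumed serving throughput per accelerator

peak_queries_per_min = USERS * QUERIES_PER_USER_PER_MIN
inference_gpus = peak_queries_per_min / QUERIES_PER_GPU_PER_MIN
print(f"Inference fleet: ~{inference_gpus:,.0f} GPUs, "
      f"~{inference_gpus / TRAIN_GPUS:.0f}x the training cluster")
# ~166,667 GPUs, i.e. two orders of magnitude more than training
```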
Right now a lot of firms are training models on Nvidia A100s and H100s. If any of those models strike gold, the firms will need to move the applications to a production environment, and that means scale. MI300X’s advantage in inference, potentially 60% lower total cost of ownership in some cases, could translate into a huge amount of demand.
Growth and valuation
Analysts fundamentally misunderstand AMD’s AI potential. CNBC has called Bernstein’s Stacy Rasgon the best semiconductor analyst, yet in an hour-long interview on AI he mentioned AMD only briefly, and dismissively. His view that AMD will have zero AI revenues this year and only a small amount next year shows, in my view, just how far out of touch the analyst community is.
AMD has an extremely strong AI upside opportunity that the market appears to be oblivious to. Intel, which has very little near-term AI upside and generally inferior products across almost all product lines, is trading at a higher forward multiple!
Since ChatGPT kicked off the current AI mania, NVDA has added about $800B to its market cap, roughly 5x AMD’s entire market cap, almost entirely because of AI. AMD has the potential to take a significant share of the AI market, suggesting that AMD’s market cap could more than double as the market wakes up to the opportunity.
There are too many moving pieces to give precise estimates. But I believe that during 2024 we’ll see:
- The Data Center line grow to something in the $3.5 billion to $4.5 billion per quarter range by H2
- Instinct accelerators providing in the neighborhood of $1 billion/quarter sometime in H2
- That Data Center growth would be about 200% from current levels, and these would be very high margin revenues
- The Client line grows to something in the $1.5 billion to $2.5 billion per quarter range by H2, depending on how things develop in the PC market
- Company-wide margins in the 60% neighborhood, powered by the Data Center growth
- Revenues grow to something like $8 billion to $10 billion per quarter by H2
- Non-GAAP EPS grow to something like $2.50/quarter by H2
- Note: AMD’s GAAP earnings are not a useful metric, since they are intentionally kept around $0, minimizing income taxes, by recognizing accounting write-offs from the Xilinx acquisition
If my numbers are even close to correct, the stock price should rocket higher.
Good Risk/Reward for AMD as an AI investment
There are many firms that offer a way to invest in the growth of AI. For some of them, like Google or Microsoft, AI is a small portion of the portfolio and is not likely to move the needle much within their large, established operations. And those firms are already trading at multiples that price in meaningful AI-related growth.
Nvidia is the default AI investment, and the company should make a lot of money over the next handful of years. But the stock price is so high (recently trading at 45x sales) that it will be hard for the fundamentals to catch up to it anytime soon. Yes, sales should grow tremendously and bring those multiples down. But there is already a TON of growth priced in. Given the valuations, the risk/reward ratio for AMD is much stronger than for NVDA.
If my numbers are even close to correct, AMD is trading at about 11x where I expect annualized EPS to be next year. And that would happen in an environment of almost doubling the business YoY, and with a lot of growth coming in an AI sector likely growing at 50% CAGR. With NVDA trading at 53x forward earnings, and INTC trading at 55x forward earnings (despite very poor growth prospects), it’s reasonable to expect AMD’s multiple to be much higher than 11x in that kind of a growth environment.
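For transparency, here is the arithmetic behind that ~11x figure. The EPS forecast comes from the bullets above; the share price is my approximation as of this writing.

```python
# Annualize the H2 2024 non-GAAP EPS forecast from the bullets above,
# then divide it into an approximate current share price (my assumption).
eps_quarterly = 2.50                  # forecast non-GAAP EPS per quarter
eps_annualized = eps_quarterly * 4    # $10.00 per year
share_price = 110.0                   # approximate AMD price at time of writing

print(f"Implied forward multiple: ~{share_price / eps_annualized:.0f}x")  # ~11x
```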
What if I’m wrong?
The growth prospects for AMD are so strong that even if reality turns out to be significantly different from my numbers, the direction is highly likely to be solidly up from where shares are now. There is always a risk that things don’t play out as they should, and one of the biggest risks is that the MI300 is not as strong as I believe it to be. However, the chips have been sampling with customers for months, and both rumors and management have indicated that customers are lining up to buy. Perhaps sales come in lower than I expect, but even then, those lower sales would still be entirely new revenue for the firm.
Summary
I suspect AMD shares will take a big step up in the next several months, when AMD guides for a big sequential increase for Q4 and the MI300 launch wakes the market up to the potential of the product line.
I believe AMD shares will also have a very strong 2024 as the company secures many MI250 and MI300 design wins, gobbles up market share with EPYC, and the Client segment returns to significant profitability and share gains.
It’s reasonable to think AMD could secure over 20% of the data center AI accelerator market share sometime during 2025 as supply catches up to demand and people design AMD into more and more inference build outs.
2024 is likely to be huge for AMD.