AMD introduced the Ryzen 8040 series of laptop processors at the company’s AI event, transforming the discussion about the importance of CPU speed, power consumption and battery life into one that puts AI at the forefront.
In January 2023, AMD introduced the Ryzen 7000 mobile family, within which the Ryzen 7040 marked the first use of what AMD then called the XDNA architecture, which powers Ryzen AI. (When rival Intel introduced its Meteor Lake processor last summer, Intel started calling the AI accelerator an NPU, and the name stuck.)
More than 50 laptop models are already shipping with Ryzen AI, AMD executives say. The focus now shifts to AMD’s next NPU, “Hawk Point,” in the Ryzen 8040. In AMD’s design, the XDNA NPU assists the Zen CPU cores, while the GPU’s Radeon RDNA architecture provides the graphics performance; all three logical components work in concert and contribute to a larger whole.
“We see AI as the most transformative technology of the last decade,” said Dr. Lisa Su, AMD’s Chief Executive, at the start of AMD’s “Advancing AI” presentation on Wednesday.
These applications should benefit from AI
Now the “battle” is being waged on multiple fronts. While Microsoft and Google want AI to be computed in the cloud, all of the heavyweight chipmakers are advocating for it to be processed locally on the PC. That means finding applications that can take advantage of local AI processing, and it means working with software developers to program applications for specific processors. The result is that AMD and its competitors must provide software tools that allow these applications to communicate with their chips.
Of course, AMD, Intel, and Qualcomm each want these applications to run most effectively on their own silicon, so chipmakers have to compete in two ways: they not only have to build the most powerful AI silicon, they also have to ensure that app developers can program for their chips as efficiently as possible.
Chipmakers have been trying to get game developers to do the same for years. And although AI can seem inscrutable from the outside, familiar concepts can be teased out: quantization, for example, can be seen as a form of data compression that reduces the complexity of large language models that normally run on powerful server processors so they can run on local processors like the Ryzen 8000 series. These kinds of tools are critical to the success of “local AI.”
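To make the compression analogy concrete, here is a minimal back-of-the-envelope sketch in Python of how lower-precision weights shrink a model’s memory footprint; the 7-billion-parameter figure is an illustrative assumption, not anything AMD specified.

```python
# Rough memory footprint of an LLM's weights at different precisions.
# The 7-billion-parameter count is an illustrative assumption.

def model_size_gb(num_params: int, bits_per_weight: int) -> float:
    """Approximate weight storage in gigabytes."""
    return num_params * bits_per_weight / 8 / 1e9

params = 7_000_000_000  # e.g. a Llama-2-7B-class model

for label, bits in [("float32", 32), ("float16", 16), ("int8", 8), ("int4", 4)]:
    print(f"{label:>7}: ~{model_size_gb(params, bits):.1f} GB")

# float32: ~28.0 GB -- far more than a typical laptop can spare
# int4:    ~ 3.5 GB -- small enough to fit in a laptop's RAM
```

That difference is the whole point: quantized models are the ones that can realistically run on a laptop.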
The nine mobile Ryzen 8040 processors
AMD has announced the Ryzen 8040 series now, but you won’t see them in laptops until next year.
The Ryzen 8040 series combines AMD’s Zen 4 CPU architecture, its RDNA 3 GPU cores, and (still) the first-generation XDNA architecture. However, the new chips use AMD’s second NPU. The first, “Phoenix,” delivered 10 trillion operations per second (TOPS) from the NPU and 33 TOPS in total, with the rest coming from the CPU and GPU. The 8040 series uses “Hawk Point,” which raises the NPU to 16 TOPS and the total to 39 TOPS.
Donny Woligroski, senior mobile processor technical marketing manager, told members of the press that the CPU uses AVX-512 VNNI instructions to run lightweight AI functions directly on the CPU cores. AI can also run on the GPU, but at high, inefficient power consumption – a stance we also know from Intel.
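As a rough illustration of what those instructions compute, the NumPy sketch below emulates the arithmetic of an AVX-512 VNNI style dot product: unsigned 8-bit activations multiplied by signed 8-bit weights, accumulated in 32-bit integers. This is only a model of the math; in practice the compiler and AI runtime emit the actual instructions.

```python
import numpy as np

# Emulate the arithmetic behind AVX-512 VNNI dot-product instructions:
# unsigned 8-bit activations times signed 8-bit weights, accumulated in int32.
# Illustrative only -- real software gets this from the compiler or AI runtime.

rng = np.random.default_rng(0)
activations = rng.integers(0, 256, size=1024, dtype=np.uint8)  # u8 inputs
weights = rng.integers(-128, 128, size=1024, dtype=np.int8)    # s8 weights

# Widen to int32 before multiplying so nothing overflows, then accumulate.
acc = int(np.dot(activations.astype(np.int32), weights.astype(np.int32)))
print("int8 dot product accumulated in int32:", acc)
```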
“When it comes to efficiency, raw power is not enough,” Woligroski emphasized. “You have to be able to run the whole thing in a laptop.”
There are nine members of the Ryzen 8040 series, and the Hawk Point NPU is included in the seven most powerful of them. They range from 8 cores/16 threads with a boost clock of 5.2 GHz at the top down to 4 cores/8 threads and 4.7 GHz. TDPs start at 15 W at the low end and reach 35-54 W for the top processors.
The nine new chips include:
- Ryzen 9 8945HS: 8 cores/16 threads, 5.2 GHz (boost), Radeon 780M graphics, 35-54 W
- Ryzen 7 8845HS: 8 cores/16 threads, 5.1 GHz (boost), Radeon 780M graphics, 35-54 W
- Ryzen 7 8840HS: 8 cores/16 threads, 5.1 GHz (boost), Radeon 780M graphics, 20-30 W
- Ryzen 7 8840U: 8 cores/16 threads, 5.1 GHz (boost), Radeon 780M graphics, 15-30 W
- Ryzen 5 8645HS: 6 cores/12 threads, 5.0 GHz (boost), Radeon 760M graphics, 35-54 W
- Ryzen 5 8640HS: 6 cores/12 threads, 4.9 GHz (boost), Radeon 760M graphics, 20-30 W
- Ryzen 5 8640U: 6 cores/12 threads, 5.1 GHz (boost), Radeon 760M graphics, 20-30 W
- Ryzen 5 8540U: 6 cores/12 threads, 4.9 GHz (boost), Radeon 740M graphics, 15-30 W
- Ryzen 3 8440U: 4 cores/8 threads, 4.7 GHz (boost), Radeon 740M graphics, 15-30 W
According to AMD’s “decoder ring” model-numbering scheme, which Intel recently described as “snake oil,” all of the new processors use the Zen 4 architecture and will ship in laptops in 2024. AMD offers three integrated GPUs – the 780M (12 cores, up to 2.7 GHz), the 760M (8 cores, up to 2.6 GHz) and the 740M (4 cores, up to 2.5 GHz) – all based on the RDNA 3 graphics architecture with DDR5/LPDDR5 memory support. These iGPUs were already included in the Ryzen 7040 mobile chips introduced earlier this year with the Phoenix NPU.
Interestingly, AMD isn’t announcing any “HX” variants for premium gaming, at least not yet.
AMD also announced a third-generation NPU, “Strix Point,” which will ship sometime later in 2024, presumably in a next-generation Ryzen processor. AMD didn’t reveal specific specifications for Strix Point, but said it will deliver more than three times the generative AI performance of the previous generation.
Ryzen 8040: Performance
AMD has released some general benchmark comparisons pitting the 8945HS against the Intel Core i9-13900H at 1080p and low settings, and claims that its own chip outperforms Intel’s by 1.8x. (AMD used nine games for the comparison and didn’t really say how it arrived at the numbers.) AMD also cites a 1.4x performance advantage for the same chips by somehow combining Cinebench R23 and Geekbench 6 results.
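AMD didn’t say how it rolled nine individual game results into a single 1.8x figure; a common convention for such claims is the geometric mean of per-game speedups, sketched below with purely hypothetical numbers.

```python
import math

# Hypothetical per-game speedups (Ryzen fps / Intel fps) -- illustrative numbers,
# not AMD's data. Vendors typically aggregate ratios with a geometric mean
# rather than a plain average.
speedups = [1.5, 2.1, 1.7, 1.9, 2.3, 1.4, 1.8, 1.6, 2.0]

geo_mean = math.prod(speedups) ** (1 / len(speedups))
print(f"Geometric mean speedup across {len(speedups)} games: {geo_mean:.2f}x")
```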
AMD also claims a 1.4x improvement in AI performance between the 7940HS and the 8840HS, measured in Facebook’s Llama 2 large language model and in vision models.
Ryzen AI software: a new tool for AI enthusiasts
Because AI is still in its infancy, silicon manufacturers don’t have many points of comparison for traditional benchmarks. AMD executives highlighted localized AI-powered experiences, such as the various neural filters in Photoshop, the masking tools in Lightroom, and a suite of tools in Blackmagic’s DaVinci Resolve. The underlying message: this is why you want local AI rather than running it in the cloud.
AMD is also introducing AMD Ryzen AI Software, a tool that allows a model developed in PyTorch or TensorFlow for workstations and servers to run on a local AI-ready Ryzen chip.
The problem with running a really large language model locally, whether a chatbot, speech-processing software, or some other model, is that servers and workstations have more powerful processors and far more available memory. Laptops don’t. The Ryzen AI software’s job is to convert the LLM into a simpler, less demanding version that can run within the more limited memory and processing power of a Ryzen laptop.
In other words, most of what you think of as a chatbot or LLM is actually its weights, or parameters – the learned relationships between different concepts and words. LLMs like GPT-3 have billions of parameters that require enormous computing resources to store and run (inference). Quantization is a bit like image compression: it reduces the size of the weights without meaningfully affecting the “intelligence” of the model.
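To illustrate the image-compression analogy, here is a minimal sketch of symmetric int8 weight quantization in NumPy; the tensor size and scheme are illustrative assumptions, not the specific method AMD’s tool uses.

```python
import numpy as np

# A minimal sketch of symmetric int8 weight quantization (illustrative only).
rng = np.random.default_rng(42)
weights = rng.normal(0.0, 0.02, size=(4096, 4096)).astype(np.float32)

scale = np.abs(weights).max() / 127.0               # map the largest weight to +/-127
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
restored = q.astype(np.float32) * scale             # approximate reconstruction

print(f"storage: {weights.nbytes / 1e6:.0f} MB (fp32) -> {q.nbytes / 1e6:.0f} MB (int8)")
print(f"mean absolute error: {np.abs(weights - restored).mean():.6f}")
```

The stored weights shrink to a quarter of their original size, while the reconstruction error stays tiny relative to the weights themselves.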
AMD and Microsoft use ONNX Runtime, an open-source runtime with built-in optimizations and simple startup scripts. The Ryzen AI software performs this quantization automatically and saves the model in the ONNX format so it can run on a Ryzen chip.
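As a rough sketch of that pipeline (not AMD’s exact tooling), the example below exports a small PyTorch model to ONNX, quantizes it with ONNX Runtime’s standard dynamic quantizer, and loads the result; the “VitisAIExecutionProvider” name is assumed here as the NPU backend Ryzen AI registers with ONNX Runtime, with the CPU provider as fallback.

```python
import torch
import onnxruntime as ort
from onnxruntime.quantization import quantize_dynamic, QuantType

# A toy stand-in for a real PyTorch model (illustrative only).
model = torch.nn.Sequential(
    torch.nn.Linear(512, 512), torch.nn.ReLU(), torch.nn.Linear(512, 128)
).eval()

# 1. Export the model to the ONNX format.
dummy_input = torch.randn(1, 512)
torch.onnx.export(model, dummy_input, "model_fp32.onnx")

# 2. Quantize the weights to int8 using ONNX Runtime's dynamic quantizer.
quantize_dynamic("model_fp32.onnx", "model_int8.onnx", weight_type=QuantType.QInt8)

# 3. Load the quantized model, preferring the (assumed) Ryzen AI NPU backend
#    if it is installed, and falling back to the CPU otherwise.
preferred = ["VitisAIExecutionProvider", "CPUExecutionProvider"]
providers = [p for p in preferred if p in ort.get_available_providers()]
session = ort.InferenceSession("model_int8.onnx", providers=providers)

outputs = session.run(None, {session.get_inputs()[0].name: dummy_input.numpy()})
print(outputs[0].shape)  # (1, 128)
```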
AMD executives said the Ryzen AI Software tool is being released today, giving both independent developers and enthusiasts an easier way to try out AI for themselves.
AMD is also launching the “AMD Pervasive AI Contest,” with prizes for developing robotics AI, generative AI, and PC AI. Prizes for the PC AI category, which involves developing unique voice- or image-processing applications, start at $3,000 and go up to $10,000.
All of this helps propel AMD forward in the fledgling race to establish AI, particularly on the client PC. Next year will show where the individual chip manufacturers stand.
This article originally appeared on PC-WORLD.