AI PCs are Hiding the Truth Behind More TOPS

Lisa Su, AMD’s CEO, quipped to Microsoft’s Pavan Davuluri, “You’re always saying more TOPS, Lisa. More TOPS, what are you doing with all those TOPS? What is your vision?”

Everyone—from HP and ASUS to Acer and Dell—wants to make AI PCs, and wants everyone to own one. AI PCs are also the self-proclaimed future of personal computing on Windows, at least according to these manufacturers, the chipmakers and, of course, Microsoft.

“Everyone should have their own AI PC that allows you to run your model locally and operate on your data locally,” echoed Su at a recent fireside chat in IISc Bangalore.

Speaking along similar lines at Computex 2024, Intel CEO Pat Gelsinger said, “When I think about the PC market, this is the most exciting moment in 25 years since the arrival of WiFi.” Likewise, Qualcomm CEO Cristiano Amon called it the “most important development” since the Windows 95 operating system.

Sure, AI PCs may have a bright future, but for now it’s mostly a spec-sheet arms race, with adoption rates falling short of industry expectations.

The Numbers Game

GPUs primarily drive resource-intensive AI workloads, but it isn’t practical to ship a discrete GPU inside a laptop that is meant to stay lightweight while remaining power efficient.

Hence the NPU, or neural processing unit—a dedicated block on the SoC that handles the calculations for AI-related tasks. Most, if not all, AI PCs are built around Microsoft’s Copilot+ capabilities, and Microsoft specifies a minimum of 16 GB of memory, a 256 GB SSD, and an NPU capable of at least 40 TOPS, or trillion operations per second, to run its ‘world-class’ SLMs.

AMD’s Ryzen AI 9 HX PRO 370 stands on top, albeit marginally, with 50 TOPS, while Qualcomm’s Snapdragon X Elite and X Plus, and Intel’s Lunar Lake processors offer 45 TOPS. That said, Intel notes that its Lunar Lake-based AI PCs can also draw on an additional 60 TOPS from the GPU.

It is also worth noting that TOPS is a theoretical value: an indicator of a chip’s peak throughput under ideal conditions. So while AMD claims an edge, it may not matter much in practice.
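To see why the headline number is a ceiling rather than a promise, here is a back-of-the-envelope sketch of how a peak TOPS figure is typically derived: the number of parallel multiply-accumulate (MAC) units, times two operations per MAC per cycle, times the clock. The unit count, clock speed and utilisation below are assumptions for illustration, not the specifications of any shipping NPU.

# Back-of-the-envelope TOPS estimate for a hypothetical NPU.
# All figures are illustrative assumptions, not any vendor's real specs.

mac_units = 4096          # parallel multiply-accumulate units (assumed)
ops_per_mac = 2           # one multiply + one add per MAC per cycle
clock_hz = 1.5e9          # 1.5 GHz sustained clock (assumed)

peak_ops_per_s = mac_units * ops_per_mac * clock_hz
print(f"Theoretical peak: {peak_ops_per_s / 1e12:.1f} TOPS")   # ~12.3 TOPS

# Real workloads rarely keep every MAC busy every cycle, so the
# sustained figure is some utilisation fraction of that peak.
utilisation = 0.35        # assumed fraction for a memory-bound model
print(f"Sustained estimate: {peak_ops_per_s * utilisation / 1e12:.1f} TOPS")

The marketing number is the first print statement; what a real model sees is closer to the second, and the utilisation fraction depends heavily on memory bandwidth and software support.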

“After all, a high TOPS number alone does not guarantee optimal AI performance; it’s the culmination of various factors working in tandem that genuinely defines an NPU’s prowess,” said Qualcomm in a blog post.

While Apple’s ARM-based M-series MacBooks are equipped with a capable NPU, the company refrains from using the ‘AI’ moniker and has its own branding, ‘Apple Intelligence’.

Although Apple isn’t big on numbers and specifications, AI PC makers and enablers are upselling their products by frequently comparing them with the Cupertino giant’s machines.

Source: Lisa Su, at AMD Computex 2024

Why Do We Need NPUs Now?

The current capabilities seem quite limited, and buying an AI PC may also seem redundant since most of the features are readily available on cloud platforms. However, Microsoft is striving to provide more use cases that can run locally on a Copilot+ PC.

The current use cases on Copilot+ PCs include image generation, background blur during video calls, photo editing features, and so on. Of course, there’s also the controversial Microsoft Recall feature. But at this point, these are baby steps, far from groundbreaking AI features.

However, Microsoft is regularly adding new capabilities. At its Ignite 2024 event, the company announced that Microsoft 365 Copilot, the AI assistant embedded into Microsoft’s Office suite, will use AI models running locally to execute tasks.

Offloading AI-related tasks to the cloud isn’t the best for efficiency, and it raises added concerns around data privacy. With powerful NPUs, most of these tasks are processed locally, with minimal impact on battery life.
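As a concrete illustration of what “processed locally” means, frameworks such as Intel’s OpenVINO expose the NPU as a compile target on Core Ultra machines. The sketch below is a minimal, assumed example: the model file name and input shape are placeholders, not a real Copilot+ workload, and it assumes a recent OpenVINO build with NPU support installed.

# Minimal sketch: running a pre-converted model on an Intel NPU via OpenVINO.
# "model.xml" and the input shape are placeholders for illustration only.
import numpy as np
import openvino as ov

core = ov.Core()
print(core.available_devices)            # typically ['CPU', 'GPU', 'NPU'] on Core Ultra

model = core.read_model("model.xml")     # assumed pre-converted OpenVINO IR file
compiled = core.compile_model(model, device_name="NPU")

dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled([dummy_input])         # inference runs on the local NPU, not the cloud

The only change needed to fall back to the processor or graphics is the device_name string, which is exactly why the same feature can quietly run on the CPU when the NPU or its drivers underdeliver.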

It is also worth noting that NPUs are built for power efficiency rather than peak performance.

Meanwhile, GPU giant NVIDIA had a few harsh words to offer. According to sources, NVIDIA believes that an NPU capable of 40 TOPS is enough only for “basic tasks”. Then again, GPUs and NPUs were never meant to be in the same conversation in the first place.

While an NVIDIA GPU-powered system offers significantly better performance than an NPU-equipped chip, it also ends up drawing around 80-150 W of power, which works out to roughly three to five times the power consumption.


However, the race to a highly capable NPU may just be beginning. At this year’s Computex event, AMD envisioned a future where AI PCs would be capable of running 30B-parameter LLMs on NPUs delivering 100 TOPS.

There’s certainly room for improvement. In an article, a Hugging Face community member revealed that running the Gemma 2B model utilised 63% of the NPU during inference on an Intel Core Ultra-powered AI PC. And, of course, you wouldn’t want a bottleneck when running larger models in the future.

In another instance, Pete Warden, CEO of Useful Sensors Inc. and a founding member of the TensorFlow team, observed a critical limitation in his real-world testing. In an open-source benchmark, where Warden was “trying to get the best” out of these AI PCs on a foundation model, he observed Qualcomm’s NPU processing only 573 billion operations per second, a small fraction of the claimed 45 TOPS (45,000 billion operations per second).

“Unfortunately, I struggled to get anywhere near the advertised performance using the NPU. In fact, in my experience, it was usually significantly slower than the CPU,” said Warden.
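Warden’s benchmark is open source, but the same gap can be sanity-checked with a far cruder method: count the operations in a large matrix multiplication, time it, and compare the effective throughput with the advertised peak. The snippet below is an illustrative sketch of that accounting, run here on the CPU with NumPy for simplicity; it is not Warden’s actual benchmark.

# Crude effective-throughput check, illustrative only (not Warden's benchmark).
# A matmul of shapes (M, K) x (K, N) costs roughly 2 * M * K * N operations.
import time
import numpy as np

M = K = N = 2048
a = np.random.rand(M, K).astype(np.float32)
b = np.random.rand(K, N).astype(np.float32)

a @ b                                     # warm-up run
runs = 10
start = time.perf_counter()
for _ in range(runs):
    a @ b
elapsed = (time.perf_counter() - start) / runs

ops = 2 * M * K * N
print(f"Effective throughput: {ops / elapsed / 1e12:.3f} TOPS")
# Compare the measured figure against an advertised peak such as 45 TOPS
# to see how wide the gap is on a given machine.

The point of the exercise is not the absolute number, which depends on the hardware and libraries involved, but how far below the spec-sheet figure real, sustained throughput usually lands.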

Slowly but Surely

To a certain extent, users are still sceptical. A study commissioned by Intel surveyed 6,000 people in European markets, such as Germany, France, and the UK. About 44% of the respondents believed that AI PCs were a gimmick, and a whopping 86% were “concerned about the privacy and security of their data when using an AI PC”.

The report also said that “a more worrying statistic identified that, on average, consumers who own an AI PC spent longer on computer chores than those who had a normal PC or laptop.”

At the launch of the Copilot+ programme in May, Microsoft CEO Satya Nadella said he expected 50 million AI PC sales over the next twelve months. However, according to a Gartner report, global PC sales have only declined in the past few months.

“Even with a full lineup of Windows-based AI PCs for both Arm and x86 in the third quarter of 2024, AI PCs did not boost the demand for PCs since buyers have yet to see their clear benefits or business value,” said Mikako Kitagawa, director analyst at Gartner.

According to a report from Canalys, only 14% of PCs shipped in Q2 2024 were “AI-capable”. However, the figure grew to 20% in its latest report for Q3. It also reported that AI PCs equipped with Qualcomm Snapdragon X series chips accounted for only 0.8% of all PCs sold in Q3 2024, amounting to just 720,000 units.

That said, forecasts still favour the industry’s ambitions for AI PCs. “We’re projecting AI-enabled PC shipments to grow with a CAGR of 42.1% from 2023 to 2028,” read an IDC report. Yet another report, from Markets and Markets, projects the market to grow from $50.61 billion in 2024 to $231.30 billion by 2030, at a CAGR of 28.82%.
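That CAGR figure can be sanity-checked from its own endpoints: a compound annual growth rate is simply the ratio of the end and start values raised to one over the number of years, minus one.

# Sanity check of the Markets and Markets projection.
start_value = 50.61          # market size in 2024, USD billion
end_value = 231.30           # projected market size in 2030, USD billion
years = 2030 - 2024

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.2%}")   # ~28.8%, matching the cited 28.82%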

AI PCs did, in fact, start with @Qualcomm and Snapdragon X Elite. @AlexKatouzian says they expect ~100M notebooks per year by 2029 at over $500 pic.twitter.com/7HmZ0LW0Co

— Max Weinbach (@MaxWinebach) November 19, 2024

Interestingly, the enterprise arena is expected to be a major driver of AI PC adoption. Another report from Gartner suggests that AI PCs will make up 100% of the enterprise PC market by 2026. The end of support for Windows 10 in October 2025 is also likely to push buyers towards newer, more capable AI PCs.
