AMD and Nvidia talking up local AI for consumers could be bad news for those ChatGPT subscriptions but good news for us PC gamers
It's all a bit doom and gloom in the PC gaming market lately, what with memory shortages and incessant talk about AI above all else. So why not put our idealist hats on for a bit and see that glass as half-full? I'll help us get started: There are slight rumblings of a focus on local rather than cloud AI from the big hardware companies, which could spell good news for the PC gaming industry and, dare I say it, the memory market.
Primarily, these rumblings have come from AMD's CES 2026 keynote, but there's a glimmer of hope from Nvidia, too. On the former front, AMD CEO Dr. Lisa Su actually spent a fair amount of time talking about local AI.
Admittedly, much of this was in the form of bigging up the company's new mobile chips, the Ryzen AI 400-series, which is essentially the Strix Point-based Ryzen AI 300-series with some clock speed bumps. But that's to be expected, and it hardly goes against my point: these companies still want people using devices (preferably ones with their silicon inside), and if AI is all the rage, local AI on those devices is a big selling point.
That was, of course, the idea with the "AI PC", a term Microsoft and co. have repeatedly jammed down our throats ever since the launch of Copilot+ PCs in mid-2024. The local AI industry was still fledgling back then, though, and AI PCs didn't seem all that capable, so the whole thing felt like a bit much.
Since then, we've seen a massive boom in the cloud AI industry, which now has hundreds of billions of dollars wrapped up in it. And Su sees, or perhaps hopes, that it will continue booming, exponentially: "To enable AI everywhere, we need to increase the world's compute capacity another 100 times over the next few years, to more than 10 yottaflops over the next five years." That, she tells us, is 10,000 times more compute than we had in 2022.
Su also highlighted the software side of the equation by bringing on stage Ramin Hasani, cofounder and CEO of Liquid AI. Liquid's whole goal is to make AI that is "processor-optimised" and scales down well to on-device, local AI. Its current model, LFM2.5, uses just 1.2 billion parameters and, according to Hasani, does better than DeepSeek and Gemini 2.5 Pro on-device.
"The goal", Hasani says, "is to substantially reduce the computational cost of intelligence from first principles, without sacrificing quality. That means liquid models deliver frontier model quality right on a device."
Local AI has had less specific airtime over the last few days from Nvidia (unless we're counting DLSS 4.5, that is), but there's still been something to kindle our hopes. Namely, one specific slide from the GeForce On Community Update, which states: "PC models closing the gap with cloud." The graph accompanying that text doesn't list any actual metrics, but it shows local models seemingly converging on cloud models.
The elephant in the room here is that this is presumably all referring to cloud-based AI subscriptions and AI inference, not AI training, which will still leave a huge bulk of work for all those giant servers. It's kind of a have-your-cake-and-eat-it situation for Nvidia, with all the cloud companies using its GPUs in the datacentre and its consumer graphics chips being perfectly suited to accelerating local AI workloads.
Still, the fact that there's at least some focus on local AI could be positive for us puny consumers, especially because running AI models locally requires plenty of memory: either a big pool of system memory, à la Strix Halo, or a ton of VRAM, such as on the RTX 5090.
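To put some rough numbers on that, here's a quick back-of-envelope sketch of how much memory a model needs just to hold its weights. The parameter counts and quantisation levels below are illustrative assumptions on my part, not figures from AMD, Nvidia or Liquid AI:

```python
# Back-of-envelope estimate of the memory needed to hold a model's weights locally.
# Parameter counts and precisions are illustrative assumptions, not vendor figures.

BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def weight_memory_gb(params_billions: float, precision: str) -> float:
    """Approximate GB needed for the weights alone (ignores context/KV cache)."""
    return params_billions * 1e9 * BYTES_PER_PARAM[precision] / 1024**3

examples = [
    ("Small on-device model (~1.2B params)", 1.2, "fp16"),
    ("Mid-size local model (~8B params)", 8.0, "int4"),
    ("Large local model (~70B params)", 70.0, "int4"),
]

for name, params, precision in examples:
    print(f"{name}: ~{weight_memory_gb(params, precision):.1f} GB at {precision}")
```

Even a small model eats a couple of gigabytes before you account for its context, and bigger ones quickly demand the kind of headroom only a large unified memory pool or a flagship GPU's VRAM can offer, which is exactly why memory supply matters so much here.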
So, if these companies want us running local AI, they'd better put the pressure on those memory suppliers to set some aside for us individual consumers. Yeah, I'm not holding my breath either. I'll take my idealist hat off now.