CoreWeave co-founder says even older GPUs are still rising in price and are in demand for AI servers
The AI industry is often seen as pushing the bleeding edge of hardware and tech, but not every AI server is running on the absolute latest accelerator chips. CoreWeave's co-founder and chief development officer (CDO) Brannin McBee recently said as much on the tech and business talk show TBPN (via Ray Wang).
The CDO explained that today we can see "late 2010's SKUs that are available in market on clouds. They're not running that infrastructure unprofitably."
"Our pricing on Ampere went up during 2025, and our pricing on Hopper stayed within 10% or so of where we started the year.
"Why is that happening? I think it's largely driven by inference, and ultimately this concept that not every workload needs the latest, greatest piece of compute... There are different types of workloads that need different types of infrastructure, and the market is simply efficiently pairing the right compute with the workload that they need.
"So, we see incredibly robust demand all the way back to the Ampere generation... We're really not seeing any deterioration of these older SKU demand profiles, while also seeing accelerating demand for latest-generation infrastructure as well."
'Ampere' refers to Nvidia's circa-2020 microarchitecture, which we gamers know as the one that underpins the RTX 30-series. McBee is referring to the server chips, however. Back at launch, the AI boom hadn't really started, but a couple of years later things would take off with the launch of ChatGPT, built on GPT-3.5. Before long, AI servers were training and running inference on Ampere chips like the A100.
[Embedded post from CoreWeave co-founder @branninmcbee, April 1, 2026: older GPUs and servers still have strong demand, even as new generations ramp up.]
Following this we of course had the 'Hopper' architecture and server chips like the H100 (while gamers got the RTX 40-series, built on 'Ada Lovelace'). Then came 'Blackwell', underpinning the RTX 50-series on the consumer side and server chips like the B200, as well as the Grace Blackwell superchips. And later this year we're expecting 'Rubin' server chips.
In other words, things have moved on quite a bit from Ampere. It makes sense that these older chips are still being used, though, because the amount of money pouring into the AI industry doesn't seem to be letting up—something I'm sure we as PC gamers are all aware of thanks to the resultant high memory prices.
Given there's such high demand and chips are flying off the proverbial shelves and being wrapped up tight in ginormous contracts, I expect it's a game of 'take what you can, when you can.' China, for instance, is set to receive a capped amount of Hopper chips, with no Blackwell allowed.
That doesn't mean Hopper is just as good as Blackwell, but rather that anything is better than nothing. And if prices rise, that might just be because demand is rising in a still very supply-constrained market.