This year’s CES (Consumer Electronics Show) proved to be a disappointing event for the average gamer. Intel’s presentation was lackluster, more suitable as a company memo than a keynote, while AMD seemed to spend the majority of its time sucking off its OEMs and industrial partners.
Among its sparse announcements were the Ryzen 9 9950X3D, which carries 3D V-Cache on only one of its two CCDs, blunting its impact, and the Radeon RX 9070 series, whose naming felt convoluted and which arrived with no official performance benchmarks, only a press release.
On the other hand, NVIDIA made a notable impact, albeit in typical monopolistic fashion. The company introduced a new iteration of its DLSS technology, now in its fourth version, with its multi frame generation feature exclusive to the new RTX 5000 series.
The technology’s touted 3:1 frame interpolation was marketed as providing “monumental performance boosts” of up to 10x the framerate. In reality, frame interpolation doesn’t inherently boost performance; it increases input latency and acts as a frame-smoothing algorithm that introduces artifacts and ghosting.
Yet it still appeals to those looking to artificially inflate the FPS counter on their OSD.
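To make the distinction concrete, here is a deliberately simplified toy model in Python, not NVIDIA’s actual pipeline and ignoring mitigations such as Reflex: generated frames multiply what the FPS counter shows, but input is only sampled on rendered frames, and buffering a frame for interpolation adds latency on top.

```python
# Toy model of frame generation (illustrative numbers only): interpolated frames
# inflate the displayed framerate, but responsiveness still tracks the rendered
# framerate, plus the extra frame the pipeline buffers to interpolate between.
def displayed_fps(rendered_fps: float, generated_per_rendered: int) -> float:
    """Frames shown on screen per second, real plus interpolated."""
    return rendered_fps * (1 + generated_per_rendered)

def input_latency_ms(rendered_fps: float, generated_per_rendered: int) -> float:
    """Rough input-to-display latency: one rendered frame time, plus one more
    frame time of buffering when frame generation is active."""
    frame_time = 1000 / rendered_fps
    buffering_penalty = frame_time if generated_per_rendered > 0 else 0
    return frame_time + buffering_penalty

base = 30  # natively rendered frames per second
for n in (0, 1, 3):  # no frame gen, DLSS 3-style 1:1, DLSS 4-style 3:1
    print(f"{n} generated per rendered: "
          f"{displayed_fps(base, n):.0f} FPS shown, "
          f"~{input_latency_ms(base, n):.0f} ms input latency")
```

With these illustrative numbers, 3:1 generation turns 30 rendered FPS into a 120 FPS readout while responsiveness sits at or below what a native 30 FPS presentation delivers.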
Alongside the dubious DLSS 4 announcement, NVIDIA also unveiled the GeForce RTX 5070, 5070 Ti, 5080, and 5090 graphics cards. However, these new offerings come with no concrete performance metrics to help gauge improvements over the previous generation, except for vague references to AI performance and gains in games with both Ray Tracing and DLSS Frame Generation enabled.
Much of the emphasis rests on DLSS 4’s additional frame interpolation, but without clear benchmarks it’s hard to assess any meaningful advancement in rasterization, the one figure that actually measures gaming performance.
NVIDIA attempted to hype up the $1600 RTX 4090 as the ultimate investment for gamers, before making the dubious claim that the $549 RTX 5070 could deliver RTX 4090-level gaming performance. This claim is hard to take seriously given the 5070’s modest specifications: just 12GB of GDDR7 memory and 6,144 CUDA cores, against the 16,384 CUDA cores and roughly 50% greater memory bandwidth of the previous generation’s RTX 4090.
The 5070 matching or exceeding the 4090’s performance is virtually impossible, unless of course NVIDIA has managed to bend the laws of physics to extract a 167% per-core performance leap from one architecture to the next.
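A quick back-of-the-envelope check of that figure, using only the published shader counts and ignoring clocks, bandwidth, and architectural differences (which, if anything, favor the 4090 further):

```python
# Per-core uplift the RTX 5070 would need just to match the RTX 4090
# on shader count alone (clocks, memory bandwidth and architecture ignored).
cores_4090 = 16_384
cores_5070 = 6_144

required_uplift = cores_4090 / cores_5070 - 1
print(f"Required per-core uplift: {required_uplift:.0%}")  # ~167%
```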
To make matters worse, the RTX 4090’s replacement is now priced at a staggering $2000, a fact that CEO Jensen Huang conveniently avoided addressing. For those focusing on deep learning or AI workloads, however, NVIDIA’s Blackwell GPUs stand out, offering meaningful improvements in INT4 performance over the previous Ada Lovelace architecture, but that’s about it.
The AAA gaming industry is increasingly plagued by broken, buggy, and unoptimized PC ports, with developers relying heavily on artificial upscaling technologies like DLSS, FSR, and XeSS to achieve a playable framerate, going as far as to bake upscaling into recommended system specifications.
Worse yet, some games now lean on frame interpolation’s fake frames as a shortcut to “boost” performance, which only makes things look worse in the long run. With upcoming technologies like AMD’s ML-based FSR 4 and NVIDIA’s DLSS 4, this trend is likely to continue.
That said, the real highlight of CES 2025 was AMD’s surprising decision to actually talk about its products, albeit behind closed doors. AMD’s presentation may have been better off not happening at all, but in a roundtable interview, Frank Azor, Chief Architect of Gaming Solutions and Marketing at AMD, told Tom’s Hardware that the overwhelming demand for its fast gaming processors stems from Intel’s Arrow Lake launch.
The latest Core generation from Intel was so underwhelming that it regressed in gaming performance compared with the previous 13th and 14th gen Raptor Lake chips, compounding the damage already done to Intel’s reputation by those chips’ widespread stability issues, silicon degradation, and premature hardware failures.
In keeping with Intel’s own marketing tactics, which often omit key facts, former Intel CEO Pat Gelsinger once claimed that AMD was in the “rearview mirror” in terms of performance leadership. In contrast, AMD executives were far more candid in the interview with Tom’s Hardware.
They stated, “We knew we built a great part. We didn’t know the competitor had built a horrible one.” While AMD’s overall product marketing is woeful dogshit, at least Frank Azor knows how to throw a sharp and truthful jab at the competition.
The AMD Ryzen 7 9800X3D, lauded for its outstanding gaming performance, has faced severe shortages since its release. A major factor contributing to this surge in demand is Intel’s Arrow Lake processors, which have been widely criticized for their lackluster performance, causing consumers to lean towards AMD’s superior offerings.
Meanwhile, AMD has reduced production of the Ryzen 7 7800X3D and the older Zen 3-based X3D processors, further amplifying the scarcity. This shortage and high demand have led retailers to inflate prices while the Ryzen 7 9800X3D remains generally out of stock, making it even more difficult for consumers to secure the highly sought-after CPU at its original retail price.
AMD’s David McAfee told Tom’s Hardware that the company has been “ramping up our manufacturing capacity” significantly, increasing both monthly and quarterly output of X3D parts across the 7000 and 9000 X3D series.
He remarked, “It’s crazy how much we have increased over what we were planning. The demand we’ve seen for the 9800X3D and the 7800X3D has been unprecedented. The demand has been higher than ever.”
He continued, “Building a traditional semiconductor takes about 12 to 13 weeks from the start of the wafer process to when the product is finished. The 3D V-Cache stacking process adds extra time to that, so it’s longer than a quarter horizon (three months) to ramp up output of those products. We’re working hard to catch up with demand, and I believe that throughout the first half of this year, you’ll see us continue to increase X3D output. There’s no secret, X3D has become a much more important part of our CPU portfolio than any of us predicted a year ago, and I expect that trend to continue. We’re ramping capacity to meet the demand as long as customers want those X3D parts.”
For now, Intel is far less competitive than AMD had anticipated, and the situation is further complicated by the extended lead time required to produce an X3D processor. Given the lengthy build cycle of modern high-end processors, forecasting demand is crucial, but AMD’s original production targets have been overwhelmed. As a result, the company is increasing production, though it will take some time for that to show up at retail.
“It’s the issue of the lead time to build the individual CCD wafer, the cache wafer, and the stacking process that follows, which adds significant time. It’s a complex and non-trivial timeline when building X3D products,” McAfee explained.
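In rough numbers, the arithmetic behind that “longer than a quarter” horizon looks like this; the stacking figure below is an assumption, since AMD only says the process “adds extra time” on top of the 12-to-13-week wafer cycle:

```python
# Rough lead-time arithmetic for an X3D part (illustrative week counts).
BASE_WAFER_WEEKS = 13   # standard wafer start to finished product (per AMD: 12-13)
STACKING_WEEKS = 3      # assumed extra time for 3D V-Cache stacking and packaging
QUARTER_WEEKS = 13      # roughly three months

total = BASE_WAFER_WEEKS + STACKING_WEEKS
print(f"X3D lead time: ~{total} weeks "
      f"({'longer' if total > QUARTER_WEEKS else 'shorter'} than a quarter)")
```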
He also mentioned that the new 9900X3D/9950X3D products wouldn’t help alleviate the demand, as customers are favoring the 8-core X3D models 10-to-1 over the higher core-count variants.
Since AMD continues to add 3D V-Cache to only a single CCD, the benefits of opting for the 12-core or 16-core models, such as improved binning and higher frequencies, are outweighed by the difficulty of steering cache-sensitive games onto the CCD chiplet with the added L3 cache. This often requires tools like Process Lasso to “park” cores or pin the game to the correct die complex.
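For illustration, this is roughly what that core steering boils down to; a minimal Python sketch using psutil, assuming Windows, that the V-Cache CCD is CCD0 exposed as logical CPUs 0 through 15, and a hypothetical executable name (the real mapping varies by chip and should be verified):

```python
# Restrict a game process to the logical CPUs belonging to the V-Cache CCD.
# Assumptions: psutil installed, cache CCD = CCD0 = logical CPUs 0-15
# (8 cores with SMT); "game.exe" is a placeholder process name.
import psutil

VCACHE_LOGICAL_CPUS = list(range(16))  # logical CPUs 0-15 -> CCD0 in this example

def pin_to_vcache_ccd(process_name: str) -> None:
    """Set CPU affinity of every matching process to the V-Cache CCD."""
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == process_name:
            proc.cpu_affinity(VCACHE_LOGICAL_CPUS)
            print(f"Pinned PID {proc.pid} ({process_name}) to CPUs {VCACHE_LOGICAL_CPUS}")

pin_to_vcache_ccd("game.exe")  # hypothetical executable name
```

Tools like Process Lasso automate exactly this kind of affinity management; the 8-core X3D parts sidestep the problem entirely because every core sits under the extra cache.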
AMD has now executed successfully on stacked 3D V-Cache X3D processors for three generations, integrating the technology into more products, including mobile devices and servers.
It’s a free performance boost for memory-sensitive applications, while Intel has failed to respond with any comparable technology for years, effectively ceding the high-margin, premium gaming segment to AMD.
AMD’s use of 3D V-Cache takes a page right out of Intel’s 5th-gen Broadwell CPUs, which paired the CPU with a separate 128MB eDRAM die on the package that served as an L4 cache for the cores. Too bad Intel ditched that due to sky-high production costs and zero competition, leaving its lineup stuck in the past with Skylake for almost half a decade.
Sure, Broadwell was based on the old Haswell architecture and came off a messy early 14nm process, but looking back, it still kicks Skylake’s ass in today’s games, and that’s exactly why Intel deserves to be in the position it currently finds itself in. With no official word from Intel and no leaks or industry hints suggesting a competing technology, AMD is poised to continue dominating the gaming CPU market in 2025.
For now, AMD controls the gaming world, as Intel trails behind.