There have been a lot of defining battles in 2020: Trump v Biden; PlayStation 5 v Xbox Series X; Ryan Reynolds v Hugh Jackman. Some of them pale in comparison to NVIDIA's RTX 30-series v AMD's Radeon RX 6000 series, the epic battle that has been shaping up all year between these two behemoths and their newest GPUs. The two tech giants have spent a couple of years developing their newest tech, and months teasing the PC world with rumors of their new products. We in the PC gaming community have collectively spent millions of hours eating it all up, to the benefit of these goliaths' marketing departments.
We’ve had some time to digest NVIDIA’s newest offerings. It’s been a couple of months now since NVIDIA’s Jensen Huang held the company’s highly produced reveal event on Sept 1st. We watched the event, collectively transfixed by all of the details given about its new line of Ampere-based cards, including all of the new software goodies that would be coming along with the new platform. The leaps NVIDIA seemed to have made were enormous, with Huang bragging (much to the chagrin of many a 2080 Ti owner) that their entry-level GeForce RTX 3070 model would outperform the 2080 Ti, and do it at just over a third of the cost. Of course, as difficult as this is to stomach, it’s also a fact of technological advancement, especially in the field of computing. It’s a fact that makes Huang’s bragging about how getting a new Ampere-based RTX 3080 is tantamount to “future-proofing” one’s rig a fallacious joke, at best. (“Future-proofing” is a term best left to technologies that neither need nor are really subject to technological change… like the wheel or the hammer.)
That being said, we WERE blown away by how far NVIDIA had come since its last generation. The advancements are largely the result of vast improvements in ray tracing and AI processing being more heavily integrated into GPUs, along with the continued shrinking of process nodes, allowing larger numbers of cores to be crammed into these processors. We were also surprised by the pricing.
Getting back to NVIDIA’s reveal event, NVIDIA did a damn good job of selling its newest tech. Talking to the other guys on the team here at SCB, I declared that I could see myself switching to the 3080 (as soon as I could get my hands on one), and doing so without remorse over abandoning my last-generation overclocked AMD RX 580 (a Sapphire Nitro+, to be exact). “Discretion is the better part of valor,” said Falstaff in Shakespeare’s Henry IV, and fortunately my ego was able to negotiate down my id’s desire to own one, without care for the large quantity of money I would be spending and wasn’t exactly sitting on. Part of that was tied to the fact that AMD was due to release details in fairly short order on its newest RDNA2-based GPUs. Matched with the reality that I stood about a snowball’s chance in hell of even finding a 3080 Founders Edition card (I take that back, a snowball in hell had a better chance), wait I did for the official specs on AMD’s new offerings.
It’s also worth noting, briefly, that NVIDIA’s new card release has not been… without controversy. Most expected a shortage of cards, but it seemed right from the get-go that NVIDIA did little to alleviate concerns about its inventory being quickly bought up by scalpers looking to turn a tidy profit on eBay and elsewhere by poaching the new cards with the help of bots, then reselling them only days later for twice the price. It’s not like this is our first rodeo; this sort of thing seems to happen with each successive generational release of GPUs.
Got Next Gen Console on the Brain?
For a number of years, it was very questionable what was even going on over at AMD. The company had been slow to react to Intel in terms of its CPU lines, and, though AMD had in the past been at the forefront of graphics tech, there too it seemed to fall behind NVIDIA’s aggressive upgrade and marketing efforts in the world of GPUs. More recently, AMD has started to awaken from its apparent slumber. With its Ryzen CPU line, and now seemingly with its RDNA2 GPU line as well, AMD is giving both Intel and NVIDIA a run for their money. We needn’t look further than the fact that AMD is the sole provider of processing power for Sony’s and Microsoft’s next-gen consoles, contributing a Ryzen processor and an RDNA2-based GPU to both machines due out next week. Where is Intel? Where is NVIDIA? Not in your new PlayStation 5 or Xbox Series X/S.
So last week, when AMD held its reveal event for the new AMD Radeon RX 6000 generation of GPUs, and especially after watching NVIDIA’s show two months ago, I had really high hopes for the AMD reveal. I’m a long-time fan of AMD, and until more recently, I always bought my CPUs and GPUs from them. I’d been somewhat disappointed up until Ryzen came out, though, and switched to Intel with my last build.
To be perfectly honest, AMD’s Radeon RX6000 reveal didn’t really evoke for me the kind of emotional response I got from NVIDIA’s back in September. I mean, I did respond emotionally, but it was more along the lines of “WTF was that.” It may be hyperbole, but I thought it was the most boring thing I’ve watched since the Democratic debates. Frankly, the only saving grace I saw from it was that it was only about 25 minutes long, so not too much time lost. The NVIDIA reveal was SOOO GOOD, and I watched sections of it repeatedly, partly in prepping a story about the reveal for this page. I didn’t find the AMD reveal particularly exciting or compelling. It was missing a lot of the “wow” factor that I seemed to get wrapped up in with the NVIDIA event.
However, after doing some soul-searching and thinking on it, what struck me was this – AMD didn’t NEED the glitzy show. AMD stuck to the tech, the benchmarks, the details, some gameplay video, and it dropped brags about the previously mentioned fact that it had partnered with Xbox and PlayStation for the next-gen consoles. In short, AMD came into the room and wasted no time in getting right to the point: it told us what we were going to hear, told us that stuff, and then told us what we’d just heard. AMD’s reveal was the antithesis of NVIDIA’s, more or less a big “F— your fancy dog and pony show, NVIDIA,” showing NVIDIA the proverbial middle finger by just cutting to the chase. What is that chase they cut to, you ask? AMD is coming out with an equally good top-end GPU, and doing it cheaper… as always.
A Game of Cat and Mouse
It’s a cat-and-mouse game between AMD and Intel, and between AMD and NVIDIA. It’s one in which AMD has slowly and quietly become the cat after being the mouse for so long. What has AMD done to outdo NVIDIA this round? With the Big Navi RDNA2 processor at the heart of its new AMD Radeon RX 6000 series boards, AMD crams 26.8 billion transistors onto a chip built on a 7 nm process that delivers significantly higher speeds while using less power. When we compare the RTX 3090 to the AMD Radeon RX 6900 XT, there are trades made on both sides, and these are some of the highlights I think are worth mentioning.
| | NVIDIA RTX 3090 | Radeon RX 6900 XT |
| --- | --- | --- |
| Boost Clock | 1700 MHz | up to 2250 MHz |
| Base Clock | 1400 MHz | up to 2015 MHz |
| RAM | 24 GB GDDR6X | 16 GB GDDR6 |
| Power Consumed | 350 W | 300 W |
| Dimensions | 313 mm × 3 slots | 267 mm × 2.5 slots |
It gets somewhat challenging to compare apples to apples here, but there are a few things we can glean from the tech specs on NVIDIA’s and AMD’s websites. Perhaps the thing Dr. Lisa Su, President and CEO of AMD, was selling hardest in her presentation was Big Navi’s 7 nm process, and the resulting improvement in performance-per-watt. The AMD Radeon RX 6000 series cards all run on comparatively less power than NVIDIA’s cards, and less than the first-generation RDNA cards, thanks both to the improved process node and to improved compute unit design, all while doubling performance over the last generation.
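To make the performance-per-watt pitch concrete, here’s a back-of-the-envelope sketch using the spec-table numbers above. (Boost clock per watt is a crude stand-in I’m using purely for illustration; it is not a real cross-architecture benchmark.)

```python
# Back-of-the-envelope "efficiency" proxy from the spec table above.
# Boost clock alone is NOT a valid performance metric across different
# architectures; this only illustrates the power-efficiency framing
# AMD leaned on in its presentation.
specs = {
    "RTX 3090":   {"boost_mhz": 1700, "watts": 350},
    "RX 6900 XT": {"boost_mhz": 2250, "watts": 300},
}

for card, s in specs.items():
    ratio = s["boost_mhz"] / s["watts"]
    print(f"{card}: {ratio:.2f} boost MHz per watt")
```

By this (admittedly naive) yardstick, the RX 6900 XT squeezes roughly 50% more boost clock out of each watt than the RTX 3090, which is the gap Dr. Su’s slides were driving at.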
While NVIDIA’s 3090 has the advantage in RAM capacity as well as memory bandwidth, AMD deploys 128 MB of its “Infinity Cache,” cache memory designed to minimize bottlenecking to the GPU. Big Navi and the Radeon card at each tier can also operate at a higher clock speed than the comparable NVIDIA RTX cards. And while the RTX 3090 has a large advantage in VRAM capacity, it’s worth noting that this only holds at the 3090’s level, as the 6900 XT, 6800 XT, and 6800 ALL feature 16 GB of GDDR6, where the RTX 3080 and 3070 drop to 10 GB of GDDR6X and 8 GB of GDDR6, respectively.
Across numerous side-by-side benchmark comparisons between the RTX 3080 and the Radeon 6800 XT, each card has slight advantages over the other depending on the title, but what is apparent is how closely the two perform. It seems pretty evident, at least based on the manufacturer data, that the new AMD Radeon cards are set to perform on par with NVIDIA’s.
There are a few kickers that work in AMD’s favor, though. AMD has an excellent GPU management suite that pairs with its cards (despite my railing against Adrenalin earlier this year). Additionally, the new AMD Radeon RX 6000 cards use less power and fit in a footprint that will still fit inside my older CoolerMaster Storm LAN-party gaming chassis. (Seriously, it’s a tank. I’m pretty sure it has a bulletproof rating against .45-cal rounds.) At just 267 mm for all three of its new cards, an RX 6000 card takes up considerably less space than even the RTX 3070, and that footprint doesn’t grow as the power of the card increases. For owners of a slightly older rig, such as myself, that means far less modification. Let’s not forget cost. It’s on cost, especially, that things really play in AMD’s favor.
- NVIDIA RTX 3070/3080/3090: “starting at” $499/$699/$1,499 (good luck finding them at those prices!)
- AMD Radeon RX 6000 – 6800/6800 XT/6900 XT: $579/$649/$999
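Laying those list prices side by side makes the gap plain. A quick sketch, using the MSRPs above (the tier pairings are my own rough matchups, not anything official from either company):

```python
# MSRP deltas at each tier, pairing each AMD card with its nearest
# NVIDIA competitor. Pairings are rough matchups for illustration.
pairs = [
    # (AMD card, AMD MSRP, NVIDIA card, NVIDIA MSRP)
    ("RX 6800",    579, "RTX 3070",  499),
    ("RX 6800 XT", 649, "RTX 3080",  699),
    ("RX 6900 XT", 999, "RTX 3090", 1499),
]

for amd, amd_usd, nv, nv_usd in pairs:
    diff = nv_usd - amd_usd
    winner = "AMD cheaper" if diff > 0 else "NVIDIA cheaper"
    print(f"{amd} (${amd_usd}) vs {nv} (${nv_usd}): ${abs(diff)} ({winner})")
```

The pattern it surfaces: NVIDIA undercuts AMD by $80 at the bottom tier, then AMD pulls ahead by $50 in the middle and by a full $500 at the top.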
Beyond the RX 6800, which will run about $80 more than the RTX 3070, NVIDIA’s pricing climbs steeply compared to the comparable AMD cards. I think this is the thing that will seal the deal for gamers at the budget and intermediate levels. I would consider my rig to be an upper-middle-level rig at present. When I built it, I did what many do and struck a balance between ultra performance without regard for cost and ultra budget with total disregard for performance; I built an upper-mid-level, or maybe low-cost high-end, gaming machine. To take on the 3090 or 3080 would require a new case (neither card would fit in my machine), a new MOBO (and also processor) to really take full advantage of either card, and a new power supply to avoid excessively draining my current 750 W PSU. Easily, I’m in for a new gaming rig. So the true cost of taking full advantage of the 3080 or 3090 is far beyond their $700 and $1,500 price tags. I can’t really take full advantage of the new RX 6800 XT or RX 6900 XT cards either, but they will still fit in my machine, and at worst I’d have to pop for a new PSU, as mine is below the RX 6900 XT’s recommended 850 watts.
The bottom line is this: though NVIDIA has had a rough start with its release, especially due to controversies over perceived supply throttling, AMD could easily be subject to the same sorts of pre-order and day-one supply challenges. That being said, AMD is bringing a product to market that performs, based on all early indications, as well as, if not better than, NVIDIA’s newest generation of video cards. For the majority of gamers, who are buying a card to upgrade an existing rig and don’t already have a top-of-the-line setup that they are just tweaking a bit further, AMD would seem to be the far superior choice, especially considering price-to-performance. If you are choosing between NVIDIA’s top-of-the-line offering and AMD’s, it would seem to really be a no-brainer. (As implied, it’s AMD.)