Nvidia Unveils Next Gen Turing RTX GPUs – Specs, Pricing, What It All Means

Let’s cut straight to the chase. Today, Nvidia unveiled three new cards in their next-generation GPU lineup, built on the Turing architecture. These cards are the RTX (not GTX) 2070, 2080, and yes, the 2080 Ti. The RTX 2080 and RTX 2080 Ti will release September 20th, with the RTX 2070 releasing in October, per Nvidia. Preorders are now live for the 2080 and 2080 Ti on Nvidia’s site.

Here are the specs and price breakdown, courtesy of a nice handy table from PCGamesN:

The pricing in the table above reflects the “stock” cards, not the Founders Edition you may see on Nvidia’s preorder page. As far as I can tell, the Founders Edition cards feature an overclock, per the screenshot example of the 2080 Ti below. Everything else (barring price) seems exactly the same:

In layman’s terms, you can now buy a graphics card more powerful than the Titan Xp for $500 in the RTX 2070, according to Nvidia. In practice? Well, this is where things become very interesting.

Each of these graphics cards features Tensor cores, ray tracing cores (RT cores), and traditional CUDA cores and shaders. The Tensor cores are there to enable deep learning and AI workloads. The RT cores are there to enable ray tracing. So then, what exactly is ray tracing?

To understand ray tracing, we must first understand how games today are normally lit. Typically, scenes in a game can have two or three light sources. This is relatively easy to do. But what happens when you have a single massive source of light that lights the entire scene, say, the sun? This becomes a problem because this source of light (an area light) is effectively an infinite number of light sources, affecting every single thing in the scene. This is not feasible with standard rendering techniques (called rasterization).
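To make that concrete, here’s a rough sketch of the per-light loop that rasterized shading typically boils down to. To be clear, this isn’t any real engine’s shader code; every name and number in it is made up purely to illustrate why a handful of discrete lights is cheap while “effectively infinite” lights is not.

```python
# A rough, made-up sketch (not any engine's real shader code) of the per-light
# loop that rasterized shading typically boils down to. All names and values
# here are hypothetical and exist only to illustrate the point.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def normalize(v):
    length = sum(x * x for x in v) ** 0.5
    return [x / length for x in v]

def shade_point(surface_pos, surface_normal, albedo, lights):
    """Accumulate direct lighting from each discrete light source."""
    brightness = 0.0
    for light in lights:
        to_light = normalize(sub(light["pos"], surface_pos))
        # Lambertian (diffuse) term: brighter when the surface faces the light.
        n_dot_l = max(0.0, dot(surface_normal, to_light))
        brightness += albedo * light["intensity"] * n_dot_l
    return brightness

# Two or three point lights: cheap, one loop iteration each. A sun bouncing
# light around an entire scene behaves like an effectively infinite number of
# such lights, which is why rasterization has to approximate it instead.
lights = [
    {"pos": [0.0, 5.0, 0.0], "intensity": 1.0},
    {"pos": [3.0, 2.0, 1.0], "intensity": 0.5},
]
print(shade_point([0.0, 0.0, 0.0], [0.0, 1.0, 0.0], 0.8, lights))
```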

Of course, we know that in real life, when I shine a flashlight into a pitch-black garage, the wall opposite me is lit because the light from my flashlight is directly hitting that wall. This is called direct light. However, the walls on either side of me, and indeed behind me, the walls not directly hit by my flashlight, are still visible. That’s because the light from my flashlight hits the wall directly opposite me (direct light) and then bounces around and hits the other objects in the room, making those other objects visible. This is called indirect light.

To create indirect light in games, we leverage something called global illumination. In short, we approximate the effect of this indirect light by creating ambient light sources to light up the otherwise pitch-black portions of the room. You may recognize this in the form of ambient occlusion settings in games, for example. The problem with this is that, while it can look convincing if done properly, it’s still not correct in the physical and mathematical sense. Lighting can look too even and just fake. That’s because, well, it’s faked.
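In the simplest possible terms, the fake looks something like this. This is a made-up, minimal sketch rather than any particular engine’s technique: direct light plus a flat ambient fill, scaled by an occlusion factor.

```python
# A deliberately minimal, made-up sketch of the approximation described above:
# instead of simulating bounced light, add a flat "ambient" fill term, scaled
# by an ambient occlusion factor that darkens crevices and corners.

def approximate_lighting(direct_light, ambient_intensity, ao_factor):
    """direct_light: light actually reaching the surface from light sources.
    ambient_intensity: a flat fill value standing in for all bounced light.
    ao_factor: 0..1, how exposed the point is (1 = fully open, 0 = occluded)."""
    fake_indirect = ambient_intensity * ao_factor
    return direct_light + fake_indirect

# A wall the flashlight never hits (direct_light = 0) is still visible, but
# only because of the flat ambient term, not because any light was actually
# traced from the flashlight to that wall.
print(approximate_lighting(direct_light=0.0, ambient_intensity=0.2, ao_factor=0.7))
```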

Enter ray tracing. Remember that flashlight example I gave above? In that example, light emitted from my flashlight hit the wall opposite me and bounced around the garage, lighting up the whole garage. Ray tracing is simply tracing the path of each of those light particles and calculating how they interact with the objects in your scene. Because ray tracing is mathematically and physically grounded, it naturally preserves the real-world properties of the objects it hits, the shadows it creates, and the reflections it creates. In short, shadows, reflections, and light all look and behave like they do in real life.
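If you want to see the shape of the idea in code, here’s a toy, self-contained sketch: trace a ray into a scene, gather the direct light at the hit point, then bounce and recurse for indirect light. To be absolutely clear, this is illustrative only; the scene (one sphere, one point light) and every number in it are mine, and it says nothing about how Nvidia’s RT cores actually do this under the hood.

```python
import math
import random

# Toy sketch of ray tracing / path tracing: follow a ray into a tiny scene
# (one diffuse sphere under one point light), add the light reaching the hit
# point directly, then bounce the ray and recurse to gather indirect light.
# Every constant here is arbitrary and purely illustrative.

LIGHT_POS = (0.0, 5.0, 0.0)
LIGHT_POWER = 20.0
SPHERE_CENTER = (0.0, 0.0, -3.0)
SPHERE_RADIUS = 1.0
ALBEDO = 0.7
MAX_BOUNCES = 2

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a): return scale(a, 1.0 / math.sqrt(dot(a, a)))

def intersect_sphere(origin, direction):
    """Distance along the (normalized) ray to the sphere, or None on a miss."""
    oc = sub(origin, SPHERE_CENTER)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - SPHERE_RADIUS ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

def random_hemisphere_dir(normal):
    """Pick a random bounce direction on the hemisphere above the surface."""
    while True:
        d = (random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
        if 0.0 < dot(d, d) <= 1.0 and dot(d, normal) > 0.0:
            return norm(d)

def trace(origin, direction, depth=0):
    t = intersect_sphere(origin, direction)
    if t is None or depth >= MAX_BOUNCES:
        return 0.0                                  # ray escaped; no light gathered
    hit = add(origin, scale(direction, t))
    normal = norm(sub(hit, SPHERE_CENTER))

    # Direct light: point light with a diffuse term and inverse-square falloff.
    to_light = sub(LIGHT_POS, hit)
    dist2 = dot(to_light, to_light)
    direct = LIGHT_POWER * max(0.0, dot(normal, norm(to_light))) / dist2

    # Indirect light: bounce the ray and recurse (the flashlight-in-a-garage effect).
    indirect = trace(hit, random_hemisphere_dir(normal), depth + 1)

    return ALBEDO * (direct + indirect)

# Fire one camera ray straight at the sphere and average a few samples,
# since the bounce direction is random.
samples = [trace((0.0, 0.0, 0.0), (0.0, 0.0, -1.0)) for _ in range(100)]
print(sum(samples) / len(samples))
```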

Jen-Hsun Huang showed off multiple examples of ray tracing in Shadow of the Tomb Raider, Metro Exodus, and most impressively, Battlefield V. I have to admit, my jaw was on the floor when I saw this technology shown off in Battlefield V in real-time. I simply did not think this was possible in games, but there we are. I legitimately could not believe it when I saw the blast from a tank being reflected in real-time in the eyes of a soldier. For me personally, it was a defining moment in how far we’ve come in PC graphics technology.

This isn’t to say I don’t have concerns. Some of the slides Nvidia showed off were misleading. For example, the slide proclaiming “From $499” while displaying the 2080 Ti as the graphic wrongly invites the consumer to believe that the 2080 Ti is $499. Of course, this is not the case.

Additionally, Nvidia was showing off a metric they’ve dubbed “RTX-OPS.” This is an aggregate of the weighted performance contributions of the Tensor cores, RT cores, and CUDA cores and shaders. How Nvidia is aggregating and weighting these various cores remains a mystery to me. But they then showed a slide comparing Pascal (their previous GPU architecture) to Turing on this metric. And, well, it’s again misleading.

I suspect Nvidia wants to market a big performance difference between Pascal and Turing. However, comparing Pascal to Turing in terms of RTX-OPS is misleading because the calculation of an RTX-OP involves Tensor cores and RT cores. These are things that Pascal simply doesn’t have. It’s like me saying that I have a larger garage than you when you don’t have a garage. Of course I have the larger garage. I have the larger garage by default because you simply don’t have one. Similarly, of course Turing’s RTX-OPS number will be higher than Pascal’s. Pascal doesn’t contain Tensor cores or RT cores.
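To illustrate the structural problem, here’s roughly what the comparison amounts to. Nvidia hasn’t published how RTX-OPS is actually weighted, so every weight and throughput figure below is invented; only the shape of the argument matters.

```python
# Nvidia hasn't disclosed how RTX-OPS is weighted, so the weights and figures
# below are entirely made up. The point is structural: a metric defined as a
# weighted sum over Tensor, RT, and shader contributions collapses to the
# shader term alone on a GPU that has no Tensor or RT cores, so comparing the
# two numbers mostly measures the presence of the new cores, not like-for-like
# performance.

HYPOTHETICAL_WEIGHTS = {"shader": 0.5, "rt": 0.3, "tensor": 0.2}

def aggregate_ops(contributions, weights=HYPOTHETICAL_WEIGHTS):
    """Weighted sum of per-core-type throughput numbers (all units arbitrary)."""
    return sum(weights[kind] * contributions.get(kind, 0.0) for kind in weights)

turing_like = {"shader": 14.0, "rt": 100.0, "tensor": 110.0}   # made-up figures
pascal_like = {"shader": 12.0}                                  # no RT or Tensor cores

print(aggregate_ops(turing_like))   # dominated by the cores Pascal lacks
print(aggregate_ops(pascal_like))   # reduces to the shader term alone
```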

So what are some implications I take away from this? On ray tracing, the main one is that this technology is brand new. It will take some time before we see broad PC adoption, especially since these are currently the only cards that can do it, meaning only a tiny subset of consumers has this capability. However, seeing a swathe of upcoming high-profile games feature this technology is heartening. Early adoption is good. Sustained adoption is better.

Regarding price, the x70 card is creeping up toward x80-level pricing compared to previous generations. Additionally, the Ti is basically at Titan-level pricing. Nvidia can do this because there’s no competition from AMD. The Vega 56 trades blows with the 1070 and 1070 Ti, while the Vega 64 cannot outperform the 1080 in most cases. All the while, AMD has nothing to compete with the 1080 Ti. AMD still cannot fully compete with Nvidia’s GPUs from 2016.

Now, AMD is even further behind because they have no high-end cards set to release this year. Nvidia is completely alone in this regard, so of course they can charge these prices. There simply isn’t any competition. It is in this vacuum that Nvidia can and will dictate terms. This frustrates me because AMD has seemingly ceded the high-end market to Nvidia. What the hell is AMD actually doing in the GPU space?

It’s one thing to rely on your strength (Ryzen CPUs), but to seemingly not compete in the GPU space, at least outwardly, raises the question: why is AMD still in it? Do they even want to be in it? I don’t ask these questions lightly, but as someone who cares about this industry, I cannot help but wonder. Repeat after me: we need competition. That being said, it certainly looks like this lack of competition hasn’t stopped Nvidia from pushing the envelope.

On the console front, I just don’t see consoles achieving this performance envelope next generation. I believe consoles simply won’t get ray tracing capabilities (I expand on this in just a bit). And by the time they launch in 2019 or 2020, they’re going to be so utterly far behind. Consoles used to be able to compete with PCs somewhat in the horsepower race. Those days are long gone.

Considering that AMD will again most likely be the GPU vendor of choice for Microsoft and Sony, and that I haven’t seen any ray tracing discussion on AMD’s roadmap, I fundamentally doubt we’ll see this in next-gen consoles…at least on the next-gen PlayStation. We know that Microsoft and Nvidia worked together on the DXR (DirectX Raytracing) foundation, so if anything, the next-gen Xbox might stand a chance of getting hardware that allows ray tracing. But again, harnessing this power at an affordable console price just doesn’t seem likely to me. I just don’t trust Microsoft or Sony to do the right thing with respect to advancing technology.

The card I’m most interested in is the 2080 Ti. I love these showcases of pure, unrelenting, unapologetic power from the PC. This is progress. If you had told me in January that by September we’d see real-time ray tracing in already envelope-pushing games like Battlefield, I would have laughed at you. But it’s here. It’s real.

We still need to wait for real-world performance testing to truly gauge the power of these cards. Only then will we know what is truth and what is hype. Personally, I can’t wait. The PC is the only place in this industry where innovation like this can happen. When you invent for an inherently boundless platform, great things will happen. I absolutely love PC gaming.
