Nvidia GeForce RTX 2080 Ti review: The future's not here yet
Excellent core speed, but the RTX is hindered by high prices and missing features
A new consumer graphics card doesn't usually warrant a review on IT Pro, but not every graphics card is as exciting as the Nvidia GeForce RTX 2080 Ti.
This flagship GPU from Nvidia doesn't just deliver the usual boost to performance in graphical applications and games; it also has enticing new features that could, potentially, make a huge difference to professional applications.
Nvidia GeForce RTX 2080 Ti: Turing Architecture
The GeForce RTX 2080 Ti, and the upcoming RTX 2080 and RTX 2070, use Nvidia's new Turing architecture, which was first introduced on Quadro professional GPUs in August.
The new design doesn't just beef up existing hardware. It introduces two new concepts that, on paper, promise great things for graphics in work as well as play.
The first, ray-tracing, will be familiar to anyone who works in the film, media or design industries. It's a technique that renders light, shadows and reflections with far more detail and accuracy than traditional rasterisation, which can only approximate those effects. The catch is that ray-tracing has historically been far too computationally expensive to run in real time.
Whether you're creating CGI, rendering CAD models or building video games, being able to do this in real-time is a big leap forward, and potentially a change that could speed up workflows with graphics cards that are far cheaper than conventional professional GPUs.
The RTX 2080 Ti deploys 68 dedicated RT cores to handle ray-tracing, and Nvidia also uses dedicated cores to handle its other big feature: AI-powered graphics.
It uses AI for a new technique called Deep Learning Super Sampling (DLSS), which uses machine learning to produce a better take on temporal anti-aliasing (TAA), a graphical technique that smooths out textures to make them cleaner and crisper, while also improving performance.
Traditionally, anti-aliasing is a computationally expensive process. With Turing, the card only does half of the work using conventional methods, rendering an image at a lower base resolution. The other half of the process is handled by Nvidia's dedicated Tensor Cores, which analyse the existing anti-aliased frames and use them to produce a high-resolution image with fewer of the undesirable effects, such as transparency artefacts and blurring, that TAA can cause.
For example, a Turing GPU can render an image at a 1440p base resolution, then DLSS uses machine learning to create a 4K image that looks as good as, or better than, a native 4K image rendered with TAA, but with less of an impact on performance.
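The render-low-then-upscale pipeline can be sketched in a few lines of Python. To be clear, DLSS's actual model is a proprietary neural network running on the Tensor Cores; the nearest-neighbour upscaler below is purely a stand-in to illustrate the shape of the process, and the tiny grid sizes stand in for real 1440p and 4K frames:

```python
# Crude sketch of the render-low/upscale-high idea behind DLSS.
# A real implementation runs a trained neural network on Tensor Cores;
# a nearest-neighbour upscale stands in for the learned model here.

def render(width, height):
    # Placeholder renderer: returns a width x height grid of "pixel" values.
    return [[(x + y) % 256 for x in range(width)] for y in range(height)]

def upscale_nearest(image, factor):
    # Stand-in for the learned upscaler: duplicate each pixel factor times
    # in both dimensions.
    return [[row[x // factor] for x in range(len(row) * factor)]
            for row in image for _ in range(factor)]

low = render(4, 2)               # tiny stand-in for a 2,560x1,440 frame
high = upscale_nearest(low, 2)   # stand-in for the 3,840x2,160 output
print(len(high[0]), "x", len(high))  # twice the base resolution each way
```

The point of the real technique is that the expensive step (rendering) happens at the lower resolution, while the upscale runs on otherwise-idle dedicated hardware.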
DLSS has a dual impact: it makes anti-aliasing easier and more effective, while also freeing up more of the card to handle other tasks. And, while it's been developed for gaming, this is another addition that could make the RTX 2080 Ti a great option for film, development or design.
On paper, DLSS and real-time ray tracing are great. In reality, the situation is a little more complex.
At launch, real-time ray-tracing doesn't work at all. Nvidia has built the technology on top of Microsoft's DirectX Raytracing API, and that won't be available until a Windows 10 update arrives, something rumoured for October. Nvidia also needs to convince developers to add ray-tracing to their applications.
Support is currently almost non-existent. Autodesk, SolidWorks, OPTIS and Isotropix are on board, but it'll still take time for their applications to fully support ray-tracing, and the new feature is no good if you don't use software from supportive companies. On the consumer side, only eleven games have so far committed to supporting ray-tracing.
DLSS requires input from both Nvidia and developers to get it working, too, because the AI needs to be trained on an application-by-application basis. Just like ray-tracing, it's currently only supported in a handful of games.
Away from these two misfiring additions, Nvidia has beefed up Turing in more conventional ways. There are improvements to integer and floating-point operations, better shaders, and improved memory and texture caching. Turing cards mark the debut of GDDR6 memory, which has 27% more bandwidth than the GDDR5X included in older cards.
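That 27% figure can be sanity-checked from the cards' memory specifications: theoretical bandwidth is simply bus width times effective data rate. A quick sketch, noting that the 14Gbps and 11Gbps effective rates come from the published specs for GDDR6 and GDDR5X on these cards rather than from this review:

```python
# Theoretical memory bandwidth = bus width (bits) / 8 * effective data rate (Gbps)
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Return theoretical memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

gddr6 = bandwidth_gbs(352, 14)   # RTX 2080 Ti: 352-bit bus, 14Gbps GDDR6
gddr5x = bandwidth_gbs(352, 11)  # GTX 1080 Ti: 352-bit bus, 11Gbps GDDR5X

print(f"GDDR6:  {gddr6:.0f} GB/s")           # 616 GB/s
print(f"GDDR5X: {gddr5x:.0f} GB/s")          # 484 GB/s
print(f"Uplift: {gddr6 / gddr5x - 1:.0%}")   # 27%
```

Both cards use the same 352-bit bus, so the entire uplift comes from the faster memory chips.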
Nvidia GeForce RTX 2080 Ti: Specs
The flagship RTX 2080 Ti uses the TU102 GPU and has 18.9 billion transistors, almost 7 billion more than the GTX 1080 Ti. It has 4,352 stream processors compared to the older GPU's 3,584, and they're divided into blocks half the size of those in the GTX 1080 Ti, which makes them more versatile.
The RTX 2080 Ti has a base clock of 1,350MHz and a standard boost speed of 1,545MHz. Those are solid figures, but the GTX 1080 Ti is clocked higher, at 1,480MHz and 1,582MHz. The new card has 11GB of memory, the same amount as the GTX 1080 Ti, albeit the newer, faster GDDR6.
However, remember that base clock is less important now that GPUs spend more time bouncing between different boost clocks. Nvidia's Turing software allows users to tweak boosting algorithms and overclock with more ease, and most users will buy cards from board partners with factory overclocks.
The theoretical numbers indicate that the RTX 2080 Ti delivers a performance boost: the new card has a single-precision output of 14.2 TFLOPS, around 2.9 TFLOPS more than the GTX 1080 Ti's 11.3 TFLOPS.
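These single-precision figures follow from a standard rule of thumb: each shader can retire one fused multiply-add (two floating-point operations) per clock, so peak TFLOPS is shaders x 2 x clock speed. Worth noting, as an inference on my part, is that the quoted 14.2 TFLOPS lines up with a factory boost clock of around 1,635MHz (as on the Founders Edition) rather than the 1,545MHz reference boost:

```python
# Peak FP32 throughput = shaders * 2 FLOPs (one fused multiply-add) per clock * clock
def peak_tflops(shaders, clock_mhz):
    """Theoretical single-precision TFLOPS, assuming one FMA per shader per clock."""
    return shaders * 2 * clock_mhz * 1e6 / 1e12

print(f"{peak_tflops(4352, 1545):.1f}")  # RTX 2080 Ti, reference boost: 13.4
print(f"{peak_tflops(4352, 1635):.1f}")  # RTX 2080 Ti, ~1,635MHz boost:  14.2
print(f"{peak_tflops(3584, 1582):.1f}")  # GTX 1080 Ti, reference boost: 11.3
```

Real-world throughput depends on how long the card can sustain its boost clock, which is why these numbers are best read as upper bounds.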
The RTX 2080 Ti has an impressive specification, but it also requires a bit more power than its predecessors.
The card has a thermal design power (TDP) of 260W, 10W higher than the GTX 1080 Ti's. It also requires two eight-pin power connectors rather than the single six-pin and eight-pin pairing of its predecessor, and RTX 2080 Ti cards are noticeably longer, too.
The new card supports the same APIs as its predecessors: DirectX 12, OpenGL 4.6, Vulkan 1.1 and CUDA compute capability 7.5. It also works with VirtualLink, which improves VR connectivity by letting newer headsets connect over a single USB Type-C port. The RTX 2080 Ti offers DisplayPort 1.4 and HDMI 2.0b connections and outputs at a maximum resolution of 7,680 x 4,320.
You'll have to pay a lot for this new technology, though. The Nvidia Founders Edition version of the card, which has a machined aluminium case and a small overclock, costs £916 exc VAT. Board partner cards, with a variety of cooling and overclocking options, range between £916 and £1,116 exc VAT.
We've used a Zotac AMP card in our review, which has three fans and a solid overclock; that model costs £1,020 exc VAT. The older GTX 1080 Ti, meanwhile, can still be found regularly for less than £600 exc VAT, and the RTX 2080 can be picked up for £624 exc VAT.
Nvidia GeForce RTX 2080 Ti: Performance
The RTX 2080 Ti delivers a solid performance boost, with big gains in many productivity tests, but it's not always able to outpace the GTX 1080 Ti.
SPECviewperf's suite of benchmarks is important for the RTX 2080 Ti, because it measures graphical performance in professional applications.
It has nine modules that run GPUs through tests that evaluate anti-aliasing, lighting, shading, depth of field and ambient occlusion, all techniques that are consistently used in graphical applications. The suite also tests models that require millions of vertices, voxels and polygons, and it evaluates GPU performance when handling geophysical surveys and medical models.
It is, in short, an in-depth test of a graphics card's professional abilities. And, here, the RTX 2080 Ti performed very well, outpacing the GTX 1080 Ti in every benchmark, although the gaps between the two cards weren't always particularly wide. Every score can be examined in the table below, but on average, the 2080 Ti performed 30% better in the SPECviewperf tests.
[Table: SPECviewperf viewset scores for the GTX 1080 Ti and RTX 2080 Ti]
The RTX 2080 Ti's SPECviewperf results are excellent, but the new GPU wasn't able to open big leads elsewhere. Its Cinebench GPU result of 196.91fps is barely ahead of the 192.68fps scored by the GTX 1080 Ti.
Luxmark is an OpenCL benchmark that evaluates GPU rendering performance and, in this test, the RTX 2080 Ti was impressive. Its GPU score of 8,492 was virtually double the pace of the GTX 1080 Ti.
In 3D Mark's VRMark tests the RTX 2080 Ti could only lead in two of the three runs: it faltered in the Orange Room test, which is optimised for the HTC Vive and Oculus Rift.
In Unigine Heaven's 4K Extreme benchmark the RTX 2080 Ti averaged 114fps, outpacing the 91.2fps scored by the older card. That solid gap was consistent with the RTX 2080 Ti's gaming results, where its averages were between 15% and 30% better than the GTX 1080 Ti's, and in most games it managed 60fps or higher at 4K with maximum quality settings.
These are good benchmarks, but right now they don't paint a complete picture. Real-time ray-tracing will have a heavy effect on the RTX 2080 Ti's performance in exchange for improved lighting quality, while DLSS promises boosted performance when rendering high-quality images at high resolutions.
However, it's impossible to say how much they will improve things and, sadly, it's hard to say at the moment if your preferred applications will benefit from Nvidia's new technology.
Nvidia GeForce RTX 2080 Ti: Verdict
That's the biggest issue with the RTX 2080 Ti and why we can't wholeheartedly recommend it right now. The more conventional upgrades to Turing work well, with solid performance gains in professional benchmarks and games, but it's impossible to say how much of an impact ray-tracing and super-sampling will have in work scenarios.
When a graphics card costs north of £1,000 and its main features don't yet work, it's worth waiting rather than buying on day one. The RTX 2080 Ti does offer speedy performance, but if you wait a couple of months, or even until early 2019, the new features will be better supported, more board partner cards will be available, and prices may even have dropped a little.
Nvidia's latest card has impressive core performance that outpaces the GTX 1080 Ti in almost all benchmarks, but its flagship features don't work yet and the card is extremely expensive, too. It may prove to be worthwhile, but we wouldn't recommend buying one just yet.
|Specification|Detail|
|---|---|
|Base clock|1,350MHz|
|Boost clock|1,545MHz|
|Memory|11GB 352-bit GDDR6, 7,000MHz|
|Connectivity|PCI Express 3.0|
|Display outputs|DisplayPort 1.4, HDMI 2.0b|
|Max resolution|7,680 x 4,320|
|Power connections|2 x 8-pin|
|Supported APIs|DirectX 12, OpenGL 4.6, OpenCL 2.2, Vulkan 1.1, CUDA compute capability 7.5|