NVIDIA GeForce GTX 980 Ti with 6 GB of memory put to the test

The pricing structure that NVIDIA establishes as of today

Intro

NVIDIA's GeForce GTX Titan X is currently the spearhead among graphics cards, but with a purchase price of well over 1,000 euros there is a huge gap to the next NVIDIA model, the GeForce GTX 980. The manufacturer closes this gap today with the presentation of the GeForce GTX 980 Ti and, in terms of price, arms itself for the upcoming AMD Radeon graphics card with the Fiji chip and HBM memory technology. Our test clarifies where the GTX 980 Ti stands.

In our review, we described the NVIDIA GeForce GTX Titan X as the first single-GPU graphics card that is generally able to run games on current 4K monitors without having to compromise on quality settings. But the prices for such graphics cards are enormous and, due to the poor euro exchange rate, sometimes sit around 1,200 euros.

The next smaller NVIDIA graphics card so far - the GeForce GTX 980 - is around 25 to 40 percent slower than the GTX Titan X and mostly unable to deliver 4K in modern games at full details. On the other hand, this performance is still reasonably affordable at around 500 euros. The price gap between the two models, however, is enormous - and it is being filled today.

The NVIDIA GeForce GTX 980 Ti is the latest addition to the GeForce series and goes on sale today. It is clearly cheaper than the Titan X, but it also delivers less performance. Today's article shows exactly where that performance lands - and, of course, what power consumption and noise levels the interested customer can expect.

But that is not the only goal NVIDIA is pursuing: in a few weeks, competitor AMD wants to present its new Fiji GPU, which is said to land in the price range around 700 euros. No assessment of its performance is possible yet, but speculation expects an attack on GTX Titan X performance. At least in the expected price range, NVIDIA now has a suitable answer in place.

One last word before we start: we took this test as an opportunity to overhaul our test setup. There were slight modifications to the CPU, memory and hard drive; we also brought the software up to date and radically revised the benchmarks. All of this means that today's results can no longer be compared with previous measurements. We ask you to take this into account. As usual, details can be found in the extensive test environment section.

Bookmark:

Test environment

Hardware: graphics cards

The test candidate

  • NVIDIA GeForce GTX 980 Ti

Firmware of the test sample
Reference graphics cards

Monitor resolutions and boost clock rates

Resolutions

We currently test at the resolutions 1,680 x 1,050, 1,920 x 1,080 and 2,560 x 1,440. While the first resolution is still the most widespread, 1,920 x 1,080 pixels is currently on course to replace it permanently. The highest resolution of 2,560 x 1,440 pixels is so far used only by enthusiasts, and corresponding monitors are still quite expensive. Screens with 4K resolution, on the other hand, are slowly becoming affordable, but they are not yet mainstream.

Resolution, however, makes heavy demands on graphics card performance: the higher the resolution, the fewer frames per second a card can deliver, and of course some of the cards above are simply unable to run games at the highest resolutions.
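As a rough illustration of why higher resolutions weigh so heavily, the raw pixel counts of the tested resolutions can be compared. This is only a first-order proxy for GPU load - shader cost and memory bandwidth do not scale perfectly linearly with pixels - but it shows the magnitudes involved:

```python
# Pixel counts of the tested resolutions, relative to 1,680 x 1,050.
# A first-order proxy only: real GPU load does not scale perfectly with pixels.
resolutions = {
    "1680x1050": (1680, 1050),
    "1920x1080": (1920, 1080),   # Full HD
    "2560x1440": (2560, 1440),
    "3840x2160": (3840, 2160),   # Ultra HD / "4K"
}

base = 1680 * 1050
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels:>9,} pixels, {pixels / base:.2f}x the base load")
```

Note that Ultra HD is exactly four times the pixel count of Full HD, which matches how dramatically frame rates drop at 4K in the benchmarks later on.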

We have therefore divided the test candidates into three groups:

  • Ultra High Quality (up to 3840 x 2160)
  • High Quality (up to 2560 x 1440)
  • Quality (up to 1920 x 1080)
  • Low Quality (up to 1680 x 1050)

Only in the Ultra and High Quality groups do we also allow runs with supersampling and/or 8x anti-aliasing in the quality settings; these are mostly absent in the smaller groups. There are still a few exceptions.

In the ultra-high group, however, there are only absolute high-end graphics cards. So far, this segment has primarily been reserved for dual GPU solutions.

4K resolution and anti-aliasing

At some point, someone in the press probably claimed that anti-aliasing is no longer necessary at the Ultra HD resolution of 3,840 x 2,160 pixels. This has established itself in many users' minds and become anchored as fact. As a generalization, it is absolutely incorrect.

The much higher pixel density at 4K does indeed deliver a clearly sharper image, but it only sometimes eliminates the unattractive stair-stepping at edges. In some games the resolution really does make conventional multi-sample anti-aliasing unnecessary - in others, unfortunately, not at all.

If stair-stepping or flickering edges remain, image-quality purists in particular will not want to live with them and will try every available knob in the game to eliminate them. This is exactly why, even at 4K, we keep this setting for applications we benchmark with multi-sample AA. Admittedly, the 8x MSAA benchmarks could arguably be dropped - but the results are there after the runs anyway and are given for the sake of completeness.

Tomb Raider at 4K resolution
4K resolution and monitor

So far we have spoken of Full HD, meaning a display resolution of 1,920 x 1,080 pixels. 4K or Ultra HD takes its name from the monitor's horizontal resolution of almost 4,000 pixels. Strictly speaking, an Ultra HD monitor displays 3,840 x 2,160 pixels - the 4,000 horizontal pixels are therefore slightly rounded up.

While the technology is still quite new and so far has mostly launched with IPS displays, a few manufacturers in the PC sector are now following with the cheaper TN panels, making the technology more affordable. However, some of the offers have their pitfalls! We discarded our Dell P2815Q* again very quickly, as it only allowed 30 Hz operation, which can very quickly lead to fatigue during daily work. It was followed by the Samsung U28D59P*, which guarantees 60 Hz operation via its DisplayPort connection.

In addition, all common smaller resolutions are supported, which made it ideal for our test purposes. Due to the panel used, however, this monitor (and others like it) is hardly suitable for professional users in the graphics sector: the viewing angles, but especially the color fidelity, leave a lot to be desired for that field.

In the TV sector there are some expensive 4K offerings, but so far there is no suitable source material on DVD or Blu-ray, let alone suitable players. Some boast upscaling features, but that is only a small consolation. In the PC area, things look a little different: 4K resolution delivers - provided the source material supports it - a significantly sharper image.

However, this resolution on the PC - at least in games - has the unpleasant side effect of requiring a really powerful graphics card. In our test runs we found that even high-end single-GPU cards like the Radeon R9 290X or GeForce GTX 780 Ti are in principle overwhelmed if you want to play top titles at maximum detail with anti-aliasing.

This is where the crux arises: either cut back on settings despite the expensive graphics card, or rely on a dual-GPU setup that can clear the hurdles. The status quo is that 4K monitors, like dual-GPU graphics cards, fall into the absolute high-end segment, where they have their place but also struggle with certain weaknesses.

GPU clock

We have already gone into the GPU boost mechanisms, which appear more and more often and distort benchmark results, often enough. We normally counteract them by intervening in the driver. With NVIDIA graphics cards we usually only show the performance at the typical boost clock as specified by the manufacturer - in some cases even that is too high, measured against a reference card. By now such window dressing can also be found on AMD graphics cards, which is why we have to intervene there too. We state the clock rates separately in the benchmark diagrams.

Hardware: test system

Closed housing

A closed computer case is not representative, and we will come back to this in the following lines. In some cases, however, it is essential for judging certain things - and those cases have been triggered almost exclusively by new technologies such as NVIDIA's Boost 2.0 or AMD's revised PowerTune.

That is why we carried out additional measurements in a closed case for this test. We decided on a gamer case from Cooler Master, the CM Storm Enforcer. In our test of the Enforcer, its noise level proved to be its biggest drawback. We therefore replaced the two stock fans with Silent Wings from be quiet! (one at the rear, one in the lid) and connected them, together with the 200 mm front fan, to a fan controller running at the lowest setting.

The case fans, including the CPU cooler, work whisper-quiet, and it is into this environment that we place our test candidates. One can object as much as one likes - in the end, perceived noise remains subjective - but the environment we have chosen can fairly be accepted as whisper-quiet.

In addition, we attached two fast-reacting temperature sensors. The first sits in front of the case at the height of the front fan and monitors the room temperature being drawn in; the second was attached directly below the graphics card fan and monitors the card's fan intake temperature.

The measurements in the case are made at the usual room temperature of 21 °C.

Typical test station

Here, too, we would like to add a few words to the following lists. On the Intel Core i7-4820K we deliberately fixed the turbo function and deactivated Hyper-Threading. This is impractical for everyday use, but it allows us to rule out possible sources of error in the tests. In our scenarios the CPU and its clock rate usually play a very subordinate role, since the selected game scenes are heavily GPU-bound and the processor is therefore rarely taxed. A smaller cooler model from Scythe is sufficient, as more is practically never required; in our tests its fan works practically inaudibly.

A word also on our open test bench. Since there is practically no PC case that could be representative of home users in any general way, we rely on an open test bench. Depending on the case used at home, this can be an advantage or a disadvantage: with well-designed case ventilation, some graphics card coolers should fare better acoustically; in average setups they will probably land at the level of the open bench; and in poorly ventilated cases they will be at a clear disadvantage. All of this depends on many factors, which is why we see our test bench as a sensible and reproducible approach. The aforementioned exception naturally applies in special cases, which we know how to weigh.

Test station:

Attentive observers will notice that there have been slight changes to the test setup. For one, we gave the system a CPU upgrade: the Intel Core i7-3820 had to give way to an i7-4820K. We have also overclocked the latter via the turbo level so that it always runs all four cores at 3.9 GHz.

Of course, we also had to update the BIOS of our ASUS motherboard to the latest version, and we said goodbye to 8 GB of main memory in favor of a 16 GB kit from Kingston's HyperX Beast series.

One last change to the test system concerns the hard drive: a 2 TB model from Western Digital's enterprise series is now used. All of these changes naturally mean that previous measurement results can no longer be compared with the current ones. We ask you to take this into account.

Other hardware:

Picture gallery Lian Li T60

Hardware: measuring devices

We like to use high-quality measuring devices in our tests: sound-level measuring stations, thermographic cameras, infrared thermometers, clamp ammeters and simple voltmeters all see use.

Depending on the area and purpose, we sometimes rely on well-known manufacturers such as Fluke or Tenma, in other cases on Conrad's own Voltcraft brand. When it comes to noise emissions, we use special equipment from ulteaudiotechnik, which enables us to carry out sone measurements in addition to dB(A). Further details on the measurement technology we use can be found here.

Software: driver

  • Windows 7 64 bit, including all updates up to April 2015
  • Intel chipset driver 10.0.27
  • DirectX 9.0c (June 2010 Update)
  • Intel LAN Driver V. 16.6.0.0
  • Audio driver: Realtek (Windows 7 integrated)
  • Marvell SATA 6 Gb/s V. 1.2.0.1014
  • ASMedia USB 3.0 V1.14.3.0
  • ASUS AI Center II driver for Marvell caching function

Graphics card driver

Driver under test

  • AMD Catalyst 15.5 Beta
  • NVIDIA GeForce Driver Version 352.90 Beta

Software: testing philosophy

Of course, we revise our test course here and there: new game titles are added and some benchmarks are dropped. Today's test has the special feature that, in the course of our validations, we have included a whole raft of new titles, and of course some older representatives had to go. We are still working on The Witcher 3: Wild Hunt; our evaluations are not finished yet, and rather than show something half-baked, we prefer to wait a little longer.

When selecting titles, it is one of our ambitions to offer a healthy mix of DirectX 9, DirectX 10 and DirectX 11 titles as well as OpenGL, covering different game genres and engines. However, the past 30 months have shown more than clearly that hardly any DirectX 9 titles are being released anymore, and the interesting new OpenGL titles can be counted on one hand. In addition, DirectX 12 is now knocking at the door. The most recent selection of new titles could currently consist exclusively of DirectX 11 titles.

What remains to be said: complain as you like, no benchmark course is entirely fair. There are far too many applications on the market that favor one side or the other. And if we followed AMD's or NVIDIA's recommendations in the selection, one or another product from the respective manufacturer would always win every test.

This means that the status quo remains that we derive our conclusions and findings from the applications that we consulted in these tests.

Software: the benchmarks

Game benchmarks

A look at the list of new benchmarks quickly shows that some things have changed and some have stayed the same - but only at first glance, because we also made substantial changes to some of the older titles.

An example: we simply did not want to part with The Elder Scrolls V: Skyrim. Why? It still has a large following and continues to be played - but with modifications. We therefore decided to install some mods and keep using TES V. The results shown consequently have little to do with the vanilla game.

We also did not want to part with Crysis 3 or Tomb Raider - two former top titles that players still like to dig out, and whose engines can still be described as demanding. For Crysis 3 we not only changed the in-game test scene as of today, we also preset the settings to ultra-high details. For Tomb Raider, only the test sequence changed.

The following games were brought up to date in May 2015!

This leaves us facing 18 gaming benchmarks to work through, and if nothing goes wrong, we need around five hours per high-end graphics card for this test course.

A word on the selected applications and the scenes used for them: we try to ensure that the chosen scene is representative of what the game entails. If we encounter worst-case scenarios, we prefer such a scene, because that is what can decisively harm the flow of the game.

Why is XYZ missing?

Why is Battlefield Hardline, the successor to Battlefield 4, missing? Why didn't Lords of the Fallen make the cut? Why isn't Middle-earth: Shadow of Mordor on the course?

There are certain factors that prevented this. Battlefield Hardline comes with a new protection mechanism: the graphics card may only be changed four times within a 24-hour period before the game threatens a ban. We had reported on this and were in contact with Electronic Arts, but we were simply referred to end-customer support - and we cannot work that way. Battlefield 4 remains on the benchmark course because of the Frostbite 3 engine, which Hardline also uses.

Lords of the Fallen - awarded several times as the best German game - ate our save games during testing, and frankly, it was not good enough that we wanted to play it all over again. Shadow of Mordor has the problem that every benchmark position has to be played manually, which would be much too tedious for us, but above all too imprecise for our measurements, because there is no way to reach exactly the same position again while maintaining the same viewing direction and so on.

Racing simulations: here we are currently very disappointed and surprised at the same time. We wanted to replace DiRT Showdown with a Codemasters successor, but the developers apparently did not put in much effort: there are hardly any visual improvements, and the hardware requirements remained the same. Assetto Corsa or Project CARS would be extremely interesting in terms of their requirements, but unfortunately offer no reproducible internal benchmark and thus no basis for comparability. Our secret favorite would actually have been Ubisoft's The Crew, as the title is not only fun but also offers plenty of graphics options. After months of testing, however, it was ruled out for similar reasons - above all because The Crew can only be played in a permanent online state. Since our test system has to remain frozen in the same state so that results stay comparable, and since for the same reason we must forgo anti-virus programs and similar protective measures, this title is unfortunately also excluded. So at this point we are waiting for a possible favorite that is not only fun and easy on the eye, but also delivers permanently reproducible results.

GPU Computing Benchmarks

It is a little sad to see that recent GPGPU implementations in modern applications do not fully exploit the performance of graphics cards. Applications like Adobe Photoshop or GIMP - very popular graphics programs - do rely on GPGPU acceleration, but it is not implemented consistently. Ultimately, this is why some more powerful graphics cards cannot stand out in such a comparison. At the same time, it is also the reason why there are so few, and often exotic, applications in this test area. The software industry has not yet recognized the potential of graphics cards as computing units - or the lobby from the CPU camp is too strong.

And once again all of this is the motivation for parting with another representative. After we had to part with oclHashcat, we are also saying goodbye (at least temporarily) to the CL benchmark, which can suddenly no longer be used in its previous version and refers to a newer variant whose results are absolutely not comparable with the old ones. Including the new version will probably be the next item on our to-do list; unfortunately, this was not possible for today's test.

Further software in the test:

  • Tom Clancy's HAWX (Power Consumption Games)
  • Furmark 1.6.5 (power consumption simulated full load)
  • PowerDVD 9 Ultra V. 9.0.4105.51 (power consumption Blu-ray playback)
  • MSI Afterburner

NVIDIA GeForce GTX 980 Ti

Technical consideration

Key data GeForce GTX Titan X GeForce GTX 980 Ti GeForce GTX 980 GeForce GTX 780 Ti GeForce GTX Titan
Codename GM200 (Maxwell) GM200 (Maxwell) GM204 (Maxwell) GK110 (Kepler) GK110 (Kepler)
Production process 28 nm 28 nm 28 nm 28 nm 28 nm
Transistors 8 billion 8 billion 5.2 billion 7.1 billion 7.1 billion
Chip clock rate (base) 1,000 MHz 1,000 MHz 1,126 MHz 875 MHz 837 MHz
Chip clock rate (averaged boost) 1,075 MHz 1,075 MHz 1,216 MHz 928 MHz 875 MHz
Memory clock rate (MHz) 1,752 MHz 1,752 MHz 1,752 MHz 1,752 MHz 1,502 MHz
Memory clock rate (Mbps) 7,000 Mbps 7,000 Mbps 7,000 Mbps 7,010 Mbps 6,008 Mbps
Memory type GDDR5 GDDR5 GDDR5 GDDR5 GDDR5
Typical memory size 12,288 MB 6,144 MB 4,096 MB 3,072 MB 6,144 MB
Memory interface 384 bit 384 bit 256 bit 384 bit 384 bit
Shader arithmetic units 3,072 2,816 2,048 2,880 2,688
Command architecture Scalar Scalar Scalar Scalar Scalar
Capabilities per shader unit MADD MADD MADD MADD MADD
Double-precision support Yes - 1/32 SP performance Yes - 1/32 SP performance Yes - 1/32 SP performance Yes - 1/24 SP performance Yes - 1/3 SP performance
Texture units (TMUs) 192 176 128 240 224
Raster operation units (ROPs) 96 96 64 48 48
Shader model version 5.0 5.0 5.0 5.0 5.0
DirectX version DirectX 11 DirectX 11 DirectX 11 DirectX 11 DirectX 11
Audio controller 7.1 (HD bitstream) 7.1 (HD bitstream) 7.1 (HD bitstream) 7.1 (HD bitstream) 7.1 (HD bitstream)
Video processor VP5 VP5 VP5 VP5 VP5
Typical power consumption (manufacturer information) ? ? 165 W ? ?
Maximum power consumption (manufacturer information) 250 W 250 W 180 W 250 W 250 W

The GeForce GTX 980 Ti is based on the GM200 chip, which comprises 8 billion transistors. NVIDIA first introduced the GM200 in April with the current graphics card flagship, the GeForce GTX Titan X.

Block diagram of the GeForce GTX 980 Ti with an exemplary marking of two deactivated SMMs

NVIDIA uses the GM200 chip for the GeForce GTX 980 Ti, but compared to the GeForce GTX Titan X some functional units have been deactivated. This allows NVIDIA to dose the performance downwards and at the same time bring partially defective GM200 chips onto the market. Only the streaming multiprocessors (SMMs) are affected by the deactivations on the GeForce GTX 980 Ti: while the GTX Titan X comes with 24 SMMs, the number on the GeForce GTX 980 Ti has been reduced to 22.

Each SMM has eight Vec16 arithmetic units and eight texture units (TMUs), which gives the GeForce GTX 980 Ti with its 22 SMMs a total of 176 TMUs and 176 Vec16 arithmetic units (corresponding to 2,816 CUDA cores [176 Vec16 units x 16 lanes]). The card is not cut down at the memory interface (384 bit) or the ROPs (96). However, the GeForce GTX 980 Ti does not use 4-Gbit GDDR5 chips but chips with a capacity of 2 Gbit, which reduces the memory capacity of the GTX 980 Ti to 6 GB, compared with the 12 GB of the GTX Titan X.
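The unit counts follow directly from the SMM configuration described above. As a quick sanity check of the figures (our own arithmetic, restating the per-SMM layout from the text):

```python
# Derive the GTX 980 Ti's unit counts from its 22 active SMMs,
# following the per-SMM layout described in the text above.
smm_total = 24           # full GM200 (GTX Titan X)
smm_active = 22          # GTX 980 Ti: two SMMs deactivated

vec16_per_smm = 8        # eight Vec16 arithmetic units per SMM
tmu_per_smm = 8          # eight texture units per SMM
lanes_per_vec16 = 16     # each Vec16 unit is 16 lanes ("CUDA cores")

cuda_cores = smm_active * vec16_per_smm * lanes_per_vec16
tmus = smm_active * tmu_per_smm
print(cuda_cores, tmus)  # 2816 176
```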

GeForce GTX Titan X GeForce GTX 980 Ti GeForce GTX 980 GeForce GTX 780 Ti GeForce GTX Titan
Computing power - SP (MADD) 6,144 GFLOPS 5,632 GFLOPS 4,612 GFLOPS 5,040 GFLOPS 4,500 GFLOPS
Computing power - DP (MADD) 192 GFLOPS 176 GFLOPS 144 GFLOPS 210 GFLOPS 1,500 GFLOPS
Texturing performance (INT8 bilinear) 192.0 GTex/s 176.0 GTex/s 144.1 GTex/s 210.0 GTex/s 187.5 GTex/s
Pixel fill rate 96.0 GPix/s 96.0 GPix/s 72.1 GPix/s 42.0 GPix/s 40.2 GPix/s
Memory bandwidth 336.0 GB/s 336.0 GB/s 224.0 GB/s 336.4 GB/s 288.4 GB/s

Since the GTX 980 Ti runs at the same clock rates as the GeForce GTX Titan X, there are no differences in the theoretical key data for memory bandwidth and pixel fill rate. Due to the two deactivated SMMs, the GTX 980 Ti sits around 8 percent behind the flagship GTX Titan X in computing and texturing performance.
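The theoretical figures in the table can be reproduced from the unit counts and the base clock (the table evidently uses the base clock for these numbers): SP throughput is shader units x 2 FLOPs per clock for a MADD, fill rates are units x clock, and bandwidth is interface width x effective data rate. A sketch of the arithmetic:

```python
# Reproduce the GTX 980 Ti's theoretical key data from its unit counts.
# The table above appears to use the base clock for these figures.
cores, tmus, rops = 2816, 176, 96
base_clock_ghz = 1.000
bus_width_bits = 384
data_rate_gbps = 7.0     # 7,000 Mbps effective GDDR5 rate

sp_gflops = cores * 2 * base_clock_ghz            # MADD = 2 FLOPs/clock
dp_gflops = sp_gflops / 32                        # GM200: DP at 1/32 SP rate
texel_rate = tmus * base_clock_ghz                # GTex/s (INT8 bilinear)
pixel_rate = rops * base_clock_ghz                # GPix/s
bandwidth = bus_width_bits / 8 * data_rate_gbps   # GB/s

deficit = 1 - 2816 / 3072                         # vs. GTX Titan X
print(sp_gflops, dp_gflops, texel_rate, pixel_rate, bandwidth)
print(f"compute/texturing deficit vs. Titan X: {deficit:.1%}")
```

The last line also confirms the roughly 8 percent deficit mentioned above (exactly 2/24 SMMs, i.e. about 8.3 percent).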

Clock rates and limitations

Boost and base clock

As is now common, NVIDIA specifies two clocks for the GPU: the base clock and the averaged boost clock. The base clock of the GeForce GTX 980 Ti is in the same range as the Titan X's, namely 1,000 MHz, and should not be undercut in any application. The boost clock specification is likewise identical to the Titan X at 1,075 MHz; our sample managed at best 1,215 MHz, and 1,240 MHz with additional voltage.

By the averaged boost clock, NVIDIA merely means a clock rate that all graphics cards of this series on the market can achieve - this figure is by no means a guarantee that the clock will always be applied. Only the base clock represents a guaranteed clock.

With the GTX 970 and GTX 980 we encountered for the first time cards failing to hold their clock rate; with the Titan X it happened again, but only under Furmark. In today's test of the GTX 980 Ti we could not reproduce this: even under Furmark the base clock was not undercut. NVIDIA therefore seems to have changed something in the firmware - possibly the temperature limit.

Limitations

As already known from NVIDIA's Boost 2.0, there are two factors that can limit the speed of the GPU: on the one hand the set temperature limit of 83 °C, on the other the power consumption of 250 watts. NVIDIA monitors both factors via chips on the board, and if the limits are reached, the driver intervenes and throttles GPU clock and voltage.

Given the relatively high TDP of 250 watts, the power limit seldom intervenes in practice. Such peaks usually arise at the beginning of a benchmark that is demanding in terms of power consumption (e.g. Anno 2070); in that case the maximum clock of 1,215 MHz is reduced immediately. If the temperature limit of 83 °C is reached and the fan cannot hold that temperature within the noise levels NVIDIA considers acceptable, the GPU clocks down further until this requirement is also met.
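The interplay of the two limits can be sketched as a deliberately simplified control loop. This is purely our illustration of the behavior described above, not NVIDIA's actual algorithm; the step size (one 13 MHz boost bin) and the recovery behavior are assumptions:

```python
# Deliberately simplified model of the Boost 2.0 limits described above:
# temperature limit 83 °C, power limit 250 W. The driver steps the clock
# down when either limit is hit, never below the guaranteed base clock.
# Step size and recovery are our assumptions, not NVIDIA's control loop.
BASE_MHZ, MAX_BOOST_MHZ, STEP_MHZ = 1000, 1215, 13
TEMP_LIMIT_C, POWER_LIMIT_W = 83, 250

def next_clock(current_mhz: int, temp_c: float, power_w: float) -> int:
    """Return the clock for the next interval given current sensor readings."""
    if temp_c >= TEMP_LIMIT_C or power_w >= POWER_LIMIT_W:
        return max(BASE_MHZ, current_mhz - STEP_MHZ)   # throttle one bin
    return min(MAX_BOOST_MHZ, current_mhz + STEP_MHZ)  # recover one bin

# Made-up sensor samples: a power spike, two hot intervals, then relief.
clock = MAX_BOOST_MHZ
for temp, power in [(70, 260), (84, 240), (84, 240), (75, 200)]:
    clock = next_clock(clock, temp, power)
    print(clock)  # 1202, 1189, 1176, 1189
```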

Unusual: Furmark quickly drove the card to 87 °C; given time to work, the fan then brought it down to 85 °C. Although we did not intervene in any options, the temperature never dropped to 83 °C, and the clock rate stayed at 999 MHz. But let's leave Furmark aside, because worst-case behavior also exists in games.

The recordings above serve as examples of 30 minutes of gameplay in the named titles (in a closed case - see test environment).

With Crysis 3 we saw typical behavior. After just five minutes, the clock settled at 1,075 MHz, dropping to 1,050 MHz in heated battles depending on the scene. Nothing about that changed after 20 minutes.

Dying Light is obviously an absolute challenge for NVIDIA's technology. Immediately after loading the level and entering the game, the boost collapsed completely. It initially settled at the base clock of 999 MHz, then worked at 1,075 MHz for a short time, only to drop off again immediately. In the selected level and scenes, we mostly saw clock rates around 1,025 MHz over the course of 30 minutes. Only in scenes without heavy action (on rooftops, for example) did the clock rates recover somewhat; even then, 1,100 MHz was a rarity.

Tomb Raider also quickly made the GTX 980 Ti sweat. After five minutes in the game at the latest, the fun was over and the clock bounced around in the range from 1,063 to 1,088 MHz. In the worst case it was 1,050 MHz, and when things went well, 1,100 MHz flashed up for a moment.

Manual options

Possible loosening of the limits using tools

And again, of course, the user has the option of loosening the limits set by NVIDIA to a certain extent using tools. The manufacturer approves this, and so far the board partners have approved it to the same extent.

As with the Titan X, the temperature limit can be raised from 83 to 91 °C, and the power limit can be increased by 10 percent, landing at 275 watts. On paper that sounds like a lot, but in practice it is not. For enthusiasts and tweakers willing to pay such a high price, these options - especially regarding power consumption - are a bad joke, because simple overclocking alone drives the GTX 980 Ti into regions of 275 watts without even turning the voltage screw.

Turning the voltage screw (limited, as usual, to a maximum of +87 mV) is not sanctioned by NVIDIA or the board partners and is done at your own risk.

Memory usage in games

Memory capacity is currently a big marketing topic and is often mentioned in the context of 4K resolutions. Of course, higher resolutions also need more memory. But the question is always how the game developer compensates when memory runs short.

Game Resolution Memory allocation [MB]
CE 3,840 x 2,160 1,000
Assassin's Creed Unity 3,840 x 2,160 3,750
Assassin's Creed IV: Black Flag 3,840 x 2,160 1,800
Battlefield 4 3,840 x 2,160 2,500
Brink 3,840 x 2,160 900
Call of Duty: Advanced Warfare 3,840 x 2,160 6,100
Call of Duty: Ghosts 3,840 x 2,160 5,400
Crysis 3 3,840 x 2,160 2,900
Dying Light 3,840 x 2,160 3,700
Far Cry 4 3,840 x 2,160 4,850
Hitman: Absolution 3,840 x 2,160 3,500
Lords of the Fallen 3,840 x 2,160 6,100
Metro Last Light 3,840 x 2,160 2,100
Middle-earth: Shadow of Mordor 3,840 x 2,160 3,900
Ryse: Son of Rome 3,840 x 2,160 3,000
TES V: Skyrim 3,840 x 2,160 2,600
Thief 3,840 x 2,160 4,000
Tomb Raider 3,840 x 2,160 2,700

We did not survey every game on today's benchmark course, but the 18 selected titles form a good basis for an overall impression. Outliers that clearly occupy more than 4 GB in the respective scenes are rare.
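Tallying the table confirms this. The snippet below simply restates the table's values in MB (the first row, whose title did not survive translation, is omitted) and counts how many scenes cross common capacity thresholds:

```python
# VRAM allocation at 3,840 x 2,160 from the table above, in MB.
# The table's first (garbled) entry is omitted here.
allocations = {
    "Assassin's Creed Unity": 3750, "Assassin's Creed IV": 1800,
    "Battlefield 4": 2500, "Brink": 900,
    "CoD: Advanced Warfare": 6100, "CoD: Ghosts": 5400,
    "Crysis 3": 2900, "Dying Light": 3700, "Far Cry 4": 4850,
    "Hitman: Absolution": 3500, "Lords of the Fallen": 6100,
    "Metro Last Light": 2100, "Shadow of Mordor": 3900,
    "Ryse: Son of Rome": 3000, "TES V: Skyrim": 2600,
    "Thief": 4000, "Tomb Raider": 2700,
}
for limit_gb in (3, 4, 6):
    over = sorted(g for g, mb in allocations.items() if mb > limit_gb * 1024)
    print(f"> {limit_gb} GB: {len(over)} titles: {', '.join(over)}")
```

Only four of the listed scenes exceed 4 GB, and none exceeds the 6 GB of the GTX 980 Ti - which supports the conclusion drawn below.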

Even in these games you can still play without problems on a GeForce GTX 780 Ti with only 3 GB of memory; the game then simply uses the available memory differently. Playing at 4K resolution depends more on how powerful the graphics card itself is, and here almost all single-GPU variants quickly reach their limits at the highest detail levels. If you turn down the level of detail, the game's memory requirements usually drop very quickly.

In the end, the GeForce GTX 980 Ti is really well positioned with its 6 GB of GDDR5 memory. The 12 GB of the GTX Titan X is currently to be seen as a marketing move and benefits the end customer only in very few cases.

The test candidate at a glance

Key data and scope of delivery

Key data / scope of delivery NVIDIA GeForce GTX Titan X (reference) NVIDIA GeForce GTX 980 Ti (reference) NVIDIA GeForce GTX 980 (reference)
Chipset GM200 GM200 GM204
GPU clock rate (base) 1,000 MHz 1,000 MHz 1,126 MHz
GPU clock rate (boost) 1,075 MHz 1,075 MHz 1,216 MHz
Memory clock rate 1,750 MHz 1,750 MHz 1,750 MHz
Main memory 12 GB GDDR5 6 GB GDDR5 4 GB GDDR5
Monitor outputs 1 x DVI 1 x DVI 1 x DVI
3 x DisplayPort 3 x DisplayPort 3 x DisplayPort
1 x HDMI (2.0) 1 x HDMI (2.0) 1 x HDMI (2.0)
Features - - -
Measurements and weight:
Weight 915 grams 905 grams 1,030 grams
Length of PCB (including slot plate) 26.8 cm 26.8 cm 26.8 cm
Length of PCB (including cooler) 26.8 cm 26.8 cm 26.8 cm
PCB height (from slot plate) 12.6 cm 12.6 cm 12.6 cm
PCB height (incl. cooler) 12.6 cm 12.6 cm 12.6 cm
- - - -
Scope of delivery hardware - - -
Scope of delivery software - - -
NVIDIA list price (as of June 01, 2015) 999 US Dollars 649 US Dollars 499 US Dollars

The Titan X and 980 Ti weigh a little less than the GeForce GTX 980, which is due solely to the aluminum backplate omitted here - one that was fairly pointless anyway. It had no contact with any components, so it was never to be understood as a cooling plate, merely a visual gimmick; if anything, it led to an increase in temperature.

There is not much to say about the scope of delivery, as we are dealing with reference models. The game bundle with The Witcher 3: Wild Hunt and the upcoming Batman: Arkham Knight expires on June 1, 2015, and it is unclear whether NVIDIA will extend it. That would of course be a nice additional incentive to buy, but here we have to wait and see whether NVIDIA follows up.

The I/O shield typically carries one DVI and one HDMI connection as well as three DisplayPort connections. A maximum of four monitors can be driven at the same time, of which only three are intended for gaming - the fourth can be used for chat windows or the like.

With the arrival of the GTX 980 Ti, prices change only slightly. The list price of the new model is 649 US dollars, the Titan X stays at 999 US dollars, and the GTX 980 at 499 US dollars. Including taxes, the GTX 980 Ti should cost around 749 euros in this country. High-end graphics cards have become expensive again - not only because of the currently poor exchange rate, but also because of NVIDIA's pricing policy of the past few years and, not least, because of end customers who evidently accept these prices. The 500-dollar mark is a thing of the past, and AMD's Fiji is also expected to land around 700 euros.

Impressions

We keep this chapter short because there is nothing new to report: NVIDIA continues to use its standard reference cooler, which has accompanied us since the first GTX Titan - here again in silver rather than black.

We can only partially share the widespread praise for this cooler. There is no question that NVIDIA has probably created the first radial reference cooler that does not come with quite as much noise as usual. But anyone who perceives more than 30 dB (A) as quiet is by no means sensitive to background noise.

In addition, the design brings another problem: the encapsulated cooler is supposed to expel its heated air through the openings in the I/O shield, where the air outlet slots sit. Because of the wide range of monitor connectors, parts of these openings are covered, so the warm air escapes more slowly and the fan has to spin faster.

Nonetheless, NVIDIA remains true to its concept here. As soon as the board partners are allowed to use a different cooler (which is not always the case), separate designs from them appear immediately. They are not always better, but in some cases they are. It is to be expected that NVIDIA will also allow custom designs of the GTX 980 Ti - after all, the first alternative coolers for the Titan X have now appeared. However, no Titan models sold with such coolers have been announced; NVIDIA apparently continues to prohibit this.

In terms of external power supply, the GTX 980 Ti is of course identical to the Titan X - after all, both have a TDP of 250 watts. Theoretically, both models could draw 300 watts through their connectors. That could certainly happen, but NVIDIA cleverly prevents it through its temperature and power targets.

There are no changes to the voltage-converter layout either - we are dealing with a 1:1 copy of the Titan X design. The only change concerns the main memory, which is now only 6 GB, which is why there are no memory chips on the back of the board.

Practical experience

Voltages and clock rates

As is well known, an essential part of our articles is the use of special measuring equipment from different areas. Especially with voltages, the past has taught us that monitoring tools can provide clues, but their readouts often do not correspond to reality. So we verify this at this point, using different devices depending on the area of application.

For this test area we primarily rely on our Voltcraft MS-9160 measuring station and the Fluke 345 clamp meter. The Voltcraft station was calibrated against the six-digit voltmeter slot of a calibrated Hewlett-Packard HP5328B and a calibrated BBC-MA5D voltmeter - the readings of our devices then matched those of the references to two decimal places. With the appropriate software, we can of course also record the measurements.

When idling we saw the usual values for modern NVIDIA graphics cards: the GPU of the GTX 980 Ti clocks at 135 MHz and the memory at 202 MHz. Under load, our sample clocks up to a maximum of 1.215 MHz, with a memory clock of 1.750 MHz.

We have determined the other clock stages and the voltages applied as follows (real measured values, no tool readout):

Clock rates / voltages NVIDIA GeForce GTX 980 Ti GPU clock rate (MHz) Memory clock (MHz) GPU voltage (volts) Memory voltage (volts)
Load-free operation 135 202 0,890 1,355
Blu-ray playback 135 202 0,878 1,355
Multi-monitor operation (2 devices) 135 202 0,878 1,355
Multi-monitor operation (3 devices) 810 1.752 1,020 1,583
ATiTool 1.215 1.752 1,194 1,585
Furmark load 999 1.752 1,017 1,589

There is hardly any new insight to be gained in this chapter: clock rates and voltages are very similar to those of the previous Maxwell representatives. Interestingly, under Furmark load the clock does not fall below the base clock, as we experienced with the Titan X.

References

Clock rates / voltages NVIDIA GeForce GTX Titan X GPU clock rate (MHz) Memory clock (MHz) GPU voltage (volts) Memory voltage (volts)
Load-free operation 135 202 0,889 1,356
Blu-ray playback 135 202 0,870 1,356
Multi-monitor operation (2 devices) 135 202 0,864 1,356
Multi-monitor operation (3 devices) 810 1.752 1,025 1,585
ATiTool 1.190 1.752 1,168 1,585
Furmark load 937 1.752 1,011 1,585
Clock rates / voltages NVIDIA GeForce GTX 980 GPU clock rate (MHz) Memory clock (MHz) GPU voltage (volts) Memory voltage (volts)
Load-free operation 135 162 0,856
Blu-ray playback 135 162 0,856
Multi-monitor operation (2 devices) 135 162 0,856
Multi-monitor operation (3 devices) 911 1.752 1,025
ATiTool 1.240 1.752 1,206
Furmark load up to 1.037 1.752 1,025

Note: We cannot provide the memory voltages, as the backplate prevented us from reaching a measuring point.

Clock rates / voltages ASUS GTX 980 STRIX GPU clock rate (MHz) Memory clock (MHz) GPU voltage (volts) Memory voltage (volts)
Load-free operation 135 162 0,851 1,318
Blu-ray playback 135 162 0,851 1,318
Multi-monitor operation (2 devices) 135 162 0,851 1,318
Multi-monitor operation (3 devices) 949 1.752 1,002 1,546
ATiTool 1.316 1.752 1,222 1,531
Furmark load up to 1.139 1.752 1,045 1,517
Clock rates / voltages EVGA GTX 980 SC ACX 2.0 GPU clock rate (MHz) Memory clock (MHz) GPU voltage (volts) Memory voltage (volts)
Load-free operation 135 162 0,857 1,393
Blu-ray playback 135 162 0,833 1,393
Multi-monitor operation (2 devices) 135 162 0,833 1,393
Multi-monitor operation (3 devices) 1.013 1.752 0,995 1,574
ATiTool 1.418 1.752 1,208 1,581
Furmark load up to 1.088 1.752 1,050 1,580
Clock rates / voltages Inno3D iChill GTX 970 Herculez X2 GPU clock rate (MHz) Memory clock (MHz) GPU voltage (volts) Memory voltage (volts)
Load-free operation 135 162 0,877 1,381
Blu-ray playback 135 162 0,865 1,381
Multi-monitor operation (2 devices) 135 162 0,878 1,381
Multi-monitor operation (3 devices) 873 1.752 1,034 1,554
ATiTool 1.291 1.752 1,267 1,554
Furmark load up to 967 1.752 1,085 1,551
Clock rates / voltages NVIDIA GeForce GTX 780 Ti GPU clock rate (MHz) Memory clock (MHz) GPU voltage (volts) Memory voltage (volts)
Load-free operation 324 162 0,878 1,356
Blu-ray playback 324 162 0,878 1,356
Multi-monitor operation (2 devices) 324 162 0,878 1,356
Multi-monitor operation (3 devices) 705 1.750 0,939 1,634
ATiTool 1.020 1.750 1,176 1,634
Furmark load 875 1.750 1,021 1,634
Clock rates / voltages NVIDIA GeForce GTX 780 GPU clock rate (MHz) Memory clock (MHz) GPU voltage (volts) Memory voltage (volts)
Load-free operation 324 162 0,875 1,375
Blu-ray playback 324 162 0,875 1,378
Multi-monitor operation (2 devices) 324 162 0,875 1,378
Multi-monitor operation (3 devices) 692 1.502 0,924 1,557
ATiTool up to 993 1.502 1,147 1,561
Furmark load up to 863 1.502 1,021 1,564

Temperature behavior

Temperatures are recorded here using monitoring tools such as MSI Afterburner or GPU-Z. The idle values are taken after a defined load and cool-down phase, which can result in measurement tolerances.

We emulate 3D gaming load using Tom Clancy's HAWX, which behaves similarly to Aliens vs. Predator or The Witcher 2. We regard this measurement as a worst-case scenario for games, although our test scene from Anno 2070 currently puts even more load on the graphics cards.

Finally, in this chapter it should be pointed out that, at the request of many readers, we have thinned out the comparison tables in order to provide a better overview. More comprehensive comparisons can be found in the appendix of the article.

Idle desktop

Temperatures

Idle

Palit GTX 970 Jetstream

47,00
ASUS GTX 980 Strix

43,00
ASUS GTX 970 Strix

41,00
EVGA GTX 980 SC ACX 2.0
[Max 1418MHz]

40,00
MSI GTX 970 Gaming 4G

40,00
AMD Radeon R9 290

40,00
AMD Radeon R9 290X
[PerformanceBIOS]

39,00
NVIDIA GeForce GTX 780

37,00
Sapphire Tri-X R9 290X OC

36,00
MSI R9 290X Gaming 4G

35,00
Sapphire R9 290X Tri-X 8GB

35,00
NVIDIA GeForce GTX 980 Ti

35,00
NVIDIA GeForce GTX 980
[Default]

34,00
NVIDIA GeForce GTX 980
[baseclock]

34,00
Inno3D GeForce GTX 970 Herculez X2

32,00
NVIDIA GeForce GTX Titan X

32,00
NVIDIA GeForce GTX 780 Ti

31,00
NVIDIA GeForce GTX Titan
[875MHz]

31,00
AMD Radeon R9 295X2

31,00
°C

There is not much to discuss in the first test of this chapter. The cards operate at uncritical temperature levels, and it should be emphasized that the background noise of the GeForce GTX 980 Ti is practically imperceptible at idle. Our test sample shows a slightly higher idle temperature than the Titan X, but in these regions that is irrelevant.

Games (HAWX)

Temperatures

Load (games)

AMD Radeon R9 290

94,00
AMD Radeon R9 290X
[PerformanceBIOS]

93,00
MSI R9 290X Gaming 4G

83,00
NVIDIA GeForce GTX Titan X

83,00
NVIDIA GeForce GTX 980 Ti

83,00
NVIDIA GeForce GTX 780 Ti

82,00
NVIDIA GeForce GTX 980
[Default]

81,00
NVIDIA GeForce GTX Titan
[875MHz]

80,00
NVIDIA GeForce GTX 780

80,00
NVIDIA GeForce GTX 980
[baseclock]

80,00
Inno3D GeForce GTX 970 Herculez X2

80,00
Sapphire Tri-X R9 290X OC

79,00
EVGA GTX 980 SC ACX 2.0
[Max 1418MHz]

75,00
ASUS GTX 970 Strix

75,00
Palit GTX 970 Jetstream

75,00
Sapphire R9 290X Tri-X 8GB

74,00
ASUS GTX 980 Strix

73,00
MSI GTX 970 Gaming 4G

72,00
AMD Radeon R9 295X2

68,00
°C

We did not expect any surprises in the load test either - how could there be, when a temperature limit applies here? It sits at 83 °C, so a brief jump to 84 °C is the most one sees, provided no tools are used to raise the limits. As expected, the fan of the GTX 980 Ti is clearly audible under this load.

Surprisingly, we saw different behavior under Furmark than usual. In contrast to the Titan X, the clock was kept at the base rate of 1.000 MHz, but the temperature peaked at 87 °C for a certain period. The fan then managed to bring the temperature down to 85 °C within about five minutes - and there it stayed. It appears that NVIDIA has made a change here, either in the driver or in the card's firmware.

Converter temperatures

We use a thermal imaging camera to identify potentially critical areas on the PCB. With it we scan the back of the circuit board and take a closer look at possible hotspots, which usually occur primarily around the power supply components. Experience from comparisons with internal temperature diodes, where such comparisons are possible, shows measurement differences in the range of 5 to 10 °C - even less in particularly hot situations. This procedure also gives us an insight into the overall heat distribution, especially across the surrounding component groups, which reading internal diodes or using laser thermometers cannot provide.

Recorded by a thermographic camera on the back of the board

Temperatures

Converter temperatures

AMD Radeon R9 295X2

106,40
MSI GTX 970 Gaming 4G

100,50
Inno3D GeForce GTX 970 Herculez X2

98,50
EVGA GTX 980 SC ACX 2.0
[Max 1418MHz]

96,30
ASUS GTX 980 Strix

93,10
Sapphire Tri-X R9 290X OC

91,30
Palit GTX 970 Jetstream

90,50
MSI R9 290X Gaming 4G

89,20
NVIDIA GeForce GTX Titan X

85,70
NVIDIA GeForce GTX 780

84,80
NVIDIA GeForce GTX Titan
[875MHz]

82,30
NVIDIA GeForce GTX 780 Ti

81,30
NVIDIA GeForce GTX 980 Ti

80,50
ASUS GTX 970 Strix

79,60
Sapphire R9 290X Tri-X 8GB

78,90
AMD Radeon R9 290X
[PerformanceBIOS]

73,50
AMD Radeon R9 290

73,10
°C

The temperature development in this area of the GTX 980 Ti is quite positive. The recorded value of around 81 °C is not only uncritical but downright cool for a high-end solution. Component selection and careful layout certainly contribute, but so does, not least, the fact that this high-end model "only" has to handle 250 watts.

Background noise

Loudness measurement - how HT4U.net measures

Anyone who has read our articles for a while knows that we do not take the issue of noise lightly, but investigate this area very intensively. We recently expanded our existing test station with another up-to-date device from ulteaudiotechnik, the new DAAS USB, which was additionally extended with a subsonic function to suit our needs.

The calibrated device allows us to measure in both dB (A) and sone, and as usual we state the results normalized to a distance of 1 meter. The spectral analyses also give an impression of the fan behavior of the individual candidates.
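The 1-meter normalization mentioned above follows the free-field inverse-distance law (roughly 6 dB per doubling of distance). A minimal sketch, assuming ideal free-field conditions and a hypothetical measuring distance:

```python
import math

def normalize_spl(measured_db: float, distance_m: float, ref_distance_m: float = 1.0) -> float:
    """Convert a sound pressure level measured at distance_m to the
    equivalent level at ref_distance_m, assuming free-field
    inverse-distance decay (about 6 dB per doubling of distance)."""
    return measured_db + 20.0 * math.log10(distance_m / ref_distance_m)

# A hypothetical reading of 25.0 dB(A) taken at 50 cm corresponds to
# roughly 19.0 dB(A) at the standard 1 m distance.
print(round(normalize_spl(25.0, 0.5), 1))  # 19.0
```

In practice the measurement software applies this correction internally; the sketch only illustrates why readings quoted at different distances are not directly comparable.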

Having just looked at the temperature behavior, in the next step we naturally want to take a closer look at the background noise - after all, the two go hand in hand.

We encountered no surprises at idle. NVIDIA's well-known cooler plays a quiet game on our model - even quieter than on the GTX 980. At only 11,9 dB (A), the behavior can be described as whisper-quiet; this background noise can definitely no longer be perceived from a closed case.

Of course, the fun ends under maximum 3D load. The 250 watts of power consumption of the GTX 980 Ti have to be cooled, and the cooling structure remains the familiar one. While it was 26 to 30 dB (A) on the GTX 980, we are now at 31 dB (A). We would not call this loud, but it is a noise that can always be clearly identified from a closed case.

In hot summer months, under sustained full load, or with manual intervention in the limits, a worst-case scenario can of course arise - we simulated this using Furmark, and the volume rises to 37,6 dB (A) or 4,9 sone. This is a little higher than the Titan X and definitely nothing for spoiled ears and silence enthusiasts. Others may not yet be bothered - we would not yet call this behavior loud - but for us it is definitely too loud.

Brief comparison [dB (A)]

Since we have recently received repeated comments about the length of our comparison diagrams, we have moved the complete comparison, including older graphics cards, to the appendix at the end of the article and show "thinned-out" comparisons below.

Volume measurements: sound pressure [dB (A)]

Idle

EVGA GeForce GTX 670 SC

109%
Palit GeForce GTX 670

100%
EVGA GeForce GTX 680

87%
MSI GTX 770 Lightning

86%
NVIDIA GeForce GTX 760
[1033MHz]

86%
ASUS GeForce GTX 670 DCU II TOP

85%
Club3D Radeon R9 285 CoolStream

78%
XFX Radeon R9 285 Black OC Edition

72%
NVIDIA GeForce GTX 690

66%
AMD Radeon R9 290X
[Quiet BIOS after 15 min]

66%
AMD Radeon R9 290X
[PerformanceBIOS]

66%
AMD Radeon R9 290
[Pattern 1 & old driver]

66%
AMD Radeon R9 290
[Pattern 2]

66%
Sparkle Caliber X680 Captain

62%
Inno3D GeForce GTX 970 Herculez X2

61%
Sparkle Caliber X670 Captain

61%
EVGA GeForce GTX 680

60%
NVIDIA GeForce GTX 980
[Max 1240MHz]

59%
NVIDIA GeForce GTX 980
[1126MHz]

59%
Sapphire Tri-X R9 290X OC

59%
Sapphire Radeon R9 280X Vapor-X

58%
Sapphire Radeon R9 280X Toxic

57%
MSI GTX 680 OC TwinFrozr III

57%
Gainward GeForce GTX 670 Phantom

56%
Zotac GeForce GTX 680

54%
MSI R9 290X Gaming 4G

54%
Sapphire Radeon R9 280 Dual X

52%
Sapphire R9 285 ITX Compact

51%
MSI R9 280X OC

51%
NVIDIA GeForce GTX 770
[1084MHz]

48%
NVIDIA GeForce GTX 780

48%
NVIDIA GeForce GTX Titan
[875MHz]

48%
NVIDIA GeForce GTX 780 Ti

48%
Sapphire R9 290X Tri-X 8GB

48%
XFX R9 280X Black DD OC

47%
NVIDIA GeForce GTX 980 Ti

46%
Gigabyte GeForce GTX 670 Windforce

44%
NVIDIA GeForce GTX Titan X

44%
NVIDIA GeForce GTX 750 Ti

44%
MSI GTX 970 gaming

0%
EVGA GTX 980 SC ACX 2.0
[Max 1418MHz]

0%
ASUS GTX 980 Strix

0%
ASUS GTX 970 Strix

0%
Palit GTX 970 Jetstream

0%
MSI GTX 960 Gaming 2G

0%
dB (A)
Volume measurements: sound pressure [dB (A)]

Load (games)

AMD Radeon R9 290X
[PerformanceBIOS]

46,0
AMD Radeon R9 290
[Pattern 2]

41,4
Palit GeForce GTX 670

36,3
XFX Radeon R9 285 Black OC Edition

36,1
NVIDIA GeForce GTX 760
[1033MHz]

35,4
AMD Radeon R9 290X
[Quiet BIOS after 15 min]

35,0
AMD Radeon R9 290
[Pattern 1 & old driver]

35,0
EVGA GeForce GTX 680

34,9
Club3D Radeon R9 285 CoolStream

34,3
EVGA GeForce GTX 670 SC

34,0
MSI R9 290X Gaming 4G

33,5
Sapphire Radeon R9 280X Toxic

33,3
NVIDIA GeForce GTX 690

32,9
Sapphire Tri-X R9 290X OC

32,9
EVGA GeForce GTX 680

32,2
Sapphire R9 290X Tri-X 8GB

32,1
NVIDIA GeForce GTX Titan X

32,1
XFX R9 280X Black DD OC

31,9
MSI GTX 680 OC TwinFrozr III

31,9
Gainward GeForce GTX 670 Phantom

31,1
ASUS GTX 980 Strix

31,1
Zotac GeForce GTX 680

31,0
NVIDIA GeForce GTX 980 Ti

31,0
NVIDIA GeForce GTX Titan
[875MHz]

30,9
NVIDIA GeForce GTX 780 Ti

30,9
Inno3D GeForce GTX 970 Herculez X2

30,8
NVIDIA GeForce GTX 980
[Max 1240MHz]

30,4
ASUS GTX 970 Strix

30,4
Sapphire Radeon R9 280 Dual X

29,7
NVIDIA GeForce GTX 780

28,7
MSI GTX 970 gaming

26,9
MSI GTX 770 Lightning

26,0
NVIDIA GeForce GTX 980
[1126MHz]

25,9
Palit GTX 970 Jetstream

25,0
Sapphire Radeon R9 280X Vapor-X

24,7
Gigabyte GeForce GTX 670 Windforce

24,6
Sapphire R9 285 ITX Compact

24,5
NVIDIA GeForce GTX 770
[1084MHz]

24,5
Sparkle Caliber X670 Captain

24,1
Sparkle Caliber X680 Captain

24,1
EVGA GTX 980 SC ACX 2.0
[Max 1418MHz]

23,9
ASUS GeForce GTX 670 DCU II TOP

22,2
NVIDIA GeForce GTX 750 Ti

17,7
MSI R9 280X OC

16,0
MSI GTX 960 Gaming 2G

15,9
dB (A)

Brief comparison [sone]

Volume measurements: Loudness (sone)

Idle

Palit GeForce GTX 670

102%
MSI GTX 770 Lightning

100%
Club3D Radeon R9 285 CoolStream

88%
EVGA GeForce GTX 670 SC

84%
NVIDIA GeForce GTX 760
[1033MHz]

80%
EVGA GeForce GTX 680

77%
NVIDIA GeForce GTX 690

68%
AMD Radeon R9 290X
[Quiet BIOS after 15 min]

63%
AMD Radeon R9 290X
[PerformanceBIOS]

63%
AMD Radeon R9 290
[Pattern 1 & old driver]

63%
AMD Radeon R9 290
[Pattern 2]

63%
XFX Radeon R9 285 Black OC Edition

61%
Sparkle Caliber X680 Captain

61%
Sparkle Caliber X670 Captain

59%
Sapphire Radeon R9 280X Vapor-X

55%
Inno3D GeForce GTX 970 Herculez X2

55%
EVGA GeForce GTX 680

55%
Sapphire Tri-X R9 290X OC

55%
NVIDIA GeForce GTX 980
[Max 1240MHz]

53%
NVIDIA GeForce GTX 980
[1126MHz]

53%
MSI N680 GTX OC TwinFrozr III

53%
MSI R9 290X Gaming 4G

53%
Sapphire Radeon R9 280X Toxic

52%
Sapphire R9 285 ITX Compact

50%
Zotac GeForce GTX 680

50%
Sapphire Radeon R9 280 Dual X

48%
Gainward GeForce GTX 670 Phantom

48%
MSI R9 280X OC

46%
Sapphire R9 290X Tri-X 8GB

46%
NVIDIA GeForce GTX 770
[1084MHz]

44%
NVIDIA GeForce GTX 780

44%
NVIDIA GeForce GTX Titan
[875MHz]

44%
NVIDIA GeForce GTX 780 Ti

44%
XFX R9 280X Black DD OC

43%
NVIDIA GeForce GTX 980 Ti

43%
Gigabyte GeForce GTX 670 Windforce

41%
NVIDIA GeForce GTX Titan X

41%
NVIDIA GeForce GTX 750 Ti

39%
ASUS GeForce GTX 670 DCU II TOP

38%
MSI GTX 970 gaming

0%
EVGA GTX 980 SC ACX 2.0
[Max 1418MHz]

0%
ASUS GTX 980 Strix

0%
ASUS GTX 970 Strix

0%
Palit GTX 970 Jetstream

0%
MSI GTX 960 Gaming 2G

0%
sone
Volume measurements: Loudness (sone)

Load (games)

AMD Radeon R9 290X
[PerformanceBIOS]

7,40
AMD Radeon R9 290
[Pattern 2]

5,22
XFX Radeon R9 285 Black OC Edition

4,34
Club3D Radeon R9 285 CoolStream

4,03
AMD Radeon R9 290X
[Quiet BIOS after 15 min]

4,00
AMD Radeon R9 290
[Pattern 1 & old driver]

4,00
NVIDIA GeForce GTX 690

3,82
Sapphire Radeon R9 280X Toxic

3,74
MSI R9 290X Gaming 4G

3,68
Palit GeForce GTX 670

3,65
Sapphire Tri-X R9 290X OC

3,55
Sapphire R9 290X Tri-X 8GB

3,49
NVIDIA GeForce GTX 760
[1033MHz]

3,37
NVIDIA GeForce GTX Titan X

3,33
MSI N680 GTX OC TwinFrozr III

3,25
XFX R9 280X Black DD OC

3,15
EVGA GeForce GTX 680

3,11
NVIDIA GeForce GTX 980 Ti

3,11
Inno3D GeForce GTX 970 Herculez X2

3,08
ASUS GTX 970 Strix

3,08
NVIDIA GeForce GTX 980
[Max 1240MHz]

3,05
EVGA GeForce GTX 680

3,05
ASUS GTX 980 Strix

3,05
Gainward GeForce GTX 670 Phantom

3,02
NVIDIA GeForce GTX Titan
[875MHz]

2,96
NVIDIA GeForce GTX 780 Ti

2,96
EVGA GeForce GTX 670 SC

2,92
Sapphire Radeon R9 280 Dual X

2,80
Zotac GeForce GTX 680

2,80
NVIDIA GeForce GTX 780

2,48
MSI GTX 970 gaming

2,44
MSI GTX 770 Lightning

2,26
NVIDIA GeForce GTX 980
[1126MHz]

2,17
Palit GTX 970 Jetstream

2,17
Sapphire R9 285 ITX Compact

2,11
EVGA GTX 980 SC ACX 2.0
[Max 1418MHz]

2,11
Sapphire Radeon R9 280X Vapor-X

2,04
Gigabyte GeForce GTX 670 Windforce

2,04
Sparkle Caliber X670 Captain

2,01
Sparkle Caliber X680 Captain

2,01
NVIDIA GeForce GTX 770
[1084MHz]

1,89
ASUS GeForce GTX 670 DCU II TOP

1,60
NVIDIA GeForce GTX 750 Ti

1,19
MSI GTX 960 Gaming 2G

1,07
MSI R9 280X OC

1,03
sone

Power and temperature limits

Power and temp target

The linchpin of NVIDIA's GPU Boost 1.0 technology was the power target - the maximum permitted power consumption. Since the GeForce GTX Titan there is GPU Boost 2.0, which adds the temperature target. Every NVIDIA graphics card with this technology ships with a maximum clock rate (GPU boost). Such a card only runs at this high rate under load as long as neither of the two limits is reached. Once the set maximum power consumption or temperature is hit, the clock rates and voltages of the GPU are reduced until the card finds a clock level at which the limits are no longer exceeded.
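The control loop described above can be sketched roughly as follows - a simplified illustration only, since NVIDIA's actual firmware logic, step size, and hysteresis are not public; the limit values are those of the GTX 980 Ti:

```python
def boost_clock_step(clock_mhz, power_w, temp_c,
                     base_mhz=1000, boost_max_mhz=1215,
                     power_target_w=250, temp_target_c=83,
                     step_mhz=13):
    """One control step of a GPU-Boost-2.0-style governor (sketch).
    The clock rises toward the boost maximum while both the power
    target and the temperature target have headroom, and falls
    otherwise - but never below the base clock."""
    if power_w > power_target_w or temp_c > temp_target_c:
        return max(base_mhz, clock_mhz - step_mhz)
    return min(boost_max_mhz, clock_mhz + step_mhz)

# Headroom on both limits: the clock climbs toward the maximum.
print(boost_clock_step(1100, 220, 75))   # 1113
# Temperature target exceeded: the clock is stepped back down.
print(boost_clock_step(1100, 220, 84))   # 1087
```

This is why the measured clocks in our tests settle wherever the card first stops violating a limit, rather than at a fixed frequency.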

As usual, these limits can be relaxed on the GeForce GTX 980 Ti using tools, allowing 91 °C for the temperature limit (instead of 83 °C) and 110 percent for the power limit.

Maximum loosening with GTX 980 Ti

Power consumption and boost

We have already covered this topic in great detail and will therefore keep it brief here. The power limit is set to 250 watts and is reached by some titles right at the start, so the maximum clock of our sample, 1.215 MHz, is curtailed relatively quickly - initially, however, only to regions around 1.180 MHz.

After that, the GTX 980 Ti is limited by temperature after a short time, because the cooling solution and NVIDIA's noise specifications do not allow such a high clock rate in the long run. In many of our demanding titles the clock falls relatively quickly below the average boost clock of 1.075 MHz; in Dying Light we even saw clock speeds in the range of the base clock once we reached the appropriate spots in the action phases.

Due to the limitations mentioned, not only the clock rate drops but also the power consumption. At 1.075 MHz the GTX 980 Ti drew only 217 watts instead of 250 watts, which is ultimately also down to the lower voltages.
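The drop from 250 to 217 watts at lower clock and voltage is in line with the textbook dynamic-power relation P ≈ C · f · V². A purely illustrative sketch - the voltage at the 1.075-MHz step is an assumption here, and real cards add static leakage on top, so the numbers are not meant to reproduce the measurement exactly:

```python
def scaled_power(p_ref_w, f_ref_mhz, v_ref, f_mhz, v):
    """Scale dynamic power with the textbook relation P ~ f * V^2.
    Illustrative only: static leakage is ignored, and the exact
    voltage at a given boost step varies from sample to sample."""
    return p_ref_w * (f_mhz / f_ref_mhz) * (v / v_ref) ** 2

# Hypothetical example: dropping from 1215 MHz at 1.194 V to
# 1075 MHz at an assumed 1.10 V already cuts power markedly.
print(round(scaled_power(250.0, 1215.0, 1.194, 1075.0, 1.10), 1))
```

The point is simply that the voltage term enters squared, which is why a modest clock reduction yields a disproportionately large power saving.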

If you want more, you can relax the limits and then usually reach a maximum of 275 watts and temperatures around 84 to 87 °C. The latter, however, also makes the background noise increase massively, putting you in the regions we described as the worst case in the "Background noise" chapter.

Power consumption: idle - games - full load

Graphics card power consumption - how HT4U.net measures

We determine the power consumption of the graphics card using a PCI Express adapter modified for this purpose in our laboratory. The values determined therefore correspond to the consumption of the graphics card alone, not the power consumption of the overall system. The power draw via the PCI Express slot and via the 12-volt supply cables is measured simultaneously with a clamp ammeter; the (constant) consumption of the 3,3-volt rail is determined separately and is included in the overall result shown. Further details and background on the measurements can be found in our introductory article on the power consumption of graphics cards.
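Combining the per-rail readings into a card total is simple bookkeeping; the following sketch uses hypothetical readings and a nominal 12-volt rail, whereas the lab setup logs the actual rail voltages:

```python
def card_power_w(i_slot_12v_a, i_cable_12v_a, p_3v3_w, v_12=12.0):
    """Total board power from the individually measured rails:
    current through the PCIe slot's 12 V pins, current through the
    external 12 V cables (clamp meter), plus the separately
    determined, essentially constant 3.3 V rail contribution."""
    return v_12 * (i_slot_12v_a + i_cable_12v_a) + p_3v3_w

# Hypothetical readings: 4.2 A via the slot, 16.1 A via the cables,
# 2.5 W on the 3.3 V rail.
print(round(card_power_w(4.2, 16.1, 2.5), 1))  # 246.1
```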

Power consumption - graphics card

Idle

MSI N580GTX Twin Frozr II OC

32,45
AMD Radeon R9 295X2

31,17
NVIDIA GeForce GTX 580

31,12
AMD Radeon HD 7990

29,60
MSI N580 GTX Lightning

29,19
MSI R7970 Lightning

25,34
Sapphire Tri-X R9 290X OC

21,31
ASUS ROG Matrix GTX 580 Platinum

20,28
AMD Radeon R9 290X
[PerformanceBIOS]

20,04
AMD Radeon R9 290X
[Quiet BIOS after 15 min]

20,02
AMD Radeon R9 290
[Pattern 1 & old driver]

19,71
AMD Radeon R9 290
[Pattern 2]

18,55
MSI R9 290X Gaming 4G

16,83
MSI R9 280X OC

16,25
ASUS GTX 980 Strix

15,33
Sapphire R9 290X Tri-X 8GB

14,81
MSI GTX 970 Gaming 4G

13,96
ASUS GTX 970 Strix

13,19
Inno3D GeForce GTX 970 Herculez X2

13,06
Sapphire Radeon R9 280X Vapor-X

12,98
NVIDIA GeForce GTX 780 Ti

12,92
EVGA GTX 980 SC ACX 2.0
[Max 1418MHz]

12,84
NVIDIA GeForce GTX 980 Ti

12,77
NVIDIA GeForce GTX 780

11,96
Palit GTX 970 Jetstream

11,94
NVIDIA GeForce GTX Titan
[875MHz]

11,92
NVIDIA GeForce GTX Titan X

11,83
NVIDIA GeForce GTX 980

11,16
NVIDIA GeForce GTX 980
[baseclock]

11,16
Watt

We were a bit surprised in idle mode. The 15,59 watts measured are not a bad value in themselves, but in direct comparison with the results of the GTX 980 or the Titan X, our sample of the 980 Ti sits relatively high. This is definitely not due to the voltages. At this point we cannot say with certainty whether poorer chip quality is responsible - it would be possible.

Update 01.06.15:
We made a mistake with this measurement. As we have just noticed, the idle power consumption was measured on a different monitor with a higher resolution, which could explain the higher values. We will supply the corrected values in the course of the day!

Update 2, 01.06.15:
Our assumption has been confirmed. The new measurements, recorded at the correct resolution, show our sample of the GTX 980 Ti at 12,77 watts - roughly on the same level as the Titan X or GTX 980, and a clearly better value.

Power consumption - graphics card

Load (games)

AMD Radeon R9 295X2

565,00
AMD Radeon HD 7990

375,00
AMD Radeon R9 290X
[Quiet BIOS after 15 min]

306,88
MSI R9 290X Gaming 4G

298,14
AMD Radeon R9 290
[Pattern 2]

289,39
Sapphire Tri-X R9 290X OC

270,42
NVIDIA GeForce GTX 780 Ti

260,00
Sapphire R9 290X Tri-X 8GB

251,18
NVIDIA GeForce GTX Titan X

250,00
NVIDIA GeForce GTX 980 Ti

250,00
NVIDIA GeForce GTX 580

246,85
AMD Radeon R9 290
[Pattern 1 & old driver]

238,72
MSI N580GTX Twin Frozr II OC

238,24
MSI N580 GTX Lightning

237,62
ASUS ROG Matrix GTX 580 Platinum

236,81
AMD Radeon R9 290X
[PerformanceBIOS]

231,05
MSI R9 280X OC

228,55
MSI R7970 Lightning

224,97
Sapphire Radeon R9 280X Vapor-X

211,28
NVIDIA GeForce GTX 780

198,92
NVIDIA GeForce GTX Titan
[875MHz]

198,10
MSI GTX 970 Gaming 4G

198,00
ASUS GTX 980 Strix

195,00
NVIDIA GeForce GTX 980

180,00
EVGA GTX 980 SC ACX 2.0
[Max 1418MHz]

180,00
Palit GTX 970 Jetstream

180,00
NVIDIA GeForce GTX 980
[baseclock]

160,00
Inno3D GeForce GTX 970 Herculez X2

160,00
ASUS GTX 970 Strix

159,30
Watt

Power consumption in demanding games sits at 250 watts right from the start - the maximum NVIDIA allows the model. Due to the two limits (power and temperature), the card's clock is throttled after a short time, so in most cases we saw clock rates around the average boost clock and thus a power consumption between 217 and 225 watts.

If you loosen the limits by hand and even adjust the clock rates yourself, you will mostly operate at the permitted 275 watts with the GTX 980 Ti.

Power consumption: Blu-ray playback - multi-monitor operation

Blu-ray playback

For these measurements we use the Blu-ray "Die Hard 4.0" from Twentieth Century Fox Home Entertainment. The Blu-ray uses the H.264 codec, also known as MPEG4-AVC, which is now used in most films. Cyberlink's PowerDVD serves as the playback software; for version details, please refer to the article's test environment.

Power consumption - graphics card

Blu-ray playback

Sapphire Tri-X R9 290X OC

100%
AMD Radeon R9 295X2

99%
AMD Radeon R9 290
[Pattern 1 & old driver]

95%
Sapphire R9 290X Tri-X 8GB

93%
AMD Radeon R9 290
[Pattern 2]

84%
AMD Radeon R9 290X
[Quiet BIOS after 15 min]

83%
AMD Radeon R9 290X
[PerformanceBIOS]

83%
MSI R9 290X Gaming 4G

82%
AMD Radeon HD 7990

70%
MSI R7970 Lightning

66%
MSI R9 280X OC

58%
Sapphire Radeon R9 280X Vapor-X

54%
NVIDIA GeForce GTX Titan
[875MHz]

25%
NVIDIA GeForce GTX 780

23%
NVIDIA GeForce GTX 980 Ti

21%
ASUS GTX 980 Strix

21%
NVIDIA GeForce GTX Titan X

21%
MSI GTX 970 Gaming 4G

19%
NVIDIA GeForce GTX 780 Ti

18%
ASUS GTX 970 Strix

17%
Inno3D GeForce GTX 970 Herculez X2

17%
EVGA GTX 980 SC ACX 2.0
[Max 1418MHz]

17%
Palit GTX 970 Jetstream

16%
NVIDIA GeForce GTX 980

15%
NVIDIA GeForce GTX 980
[baseclock]

15%
Watt

As usual, NVIDIA shines in these comparisons: the clock rates of GPU and memory can remain at idle levels, so no higher voltages are applied and, as a result, power consumption during Blu-ray (HD) playback stays more or less at the level of idle consumption.

Multi-monitor operation

While GPU manufacturers now take great care to reduce idle power consumption as much as possible, operation with multiple screens is often left out of these optimizations. According to the manufacturers, lowering the memory clock in particular can lead to image flickering, which is why the drop is often omitted there and a separate power level with different voltages and clock rates is used instead.

We noticed at least one small change with NVIDIA's GTX 600 family: if only two monitors are operated (even at different resolutions), the card stays on the idle power level; only with three monitors does it switch to a multi-monitor power level. With three monitors, NVIDIA's power consumption is then very similar to that of the AMD models.
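The observed policy can be summarized in a few lines - a sketch based purely on the measured behavior, with the memory clocks taken from the GTX 980 Ti tables above:

```python
def memory_clock_mhz(active_monitors: int) -> int:
    """Sketch of the power-state policy observed on GTX-600-and-later
    NVIDIA cards: up to two monitors run on the idle power level,
    a third forces the separate multi-monitor level with full memory
    clock. Values are the GTX 980 Ti measurements from this article."""
    return 202 if active_monitors <= 2 else 1752

print(memory_clock_mhz(2))  # 202
print(memory_clock_mhz(3))  # 1752
```

This also explains the jump in multi-monitor power draw: the full memory clock brings the higher memory voltage with it.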

Power consumption graphics card multi-monitor operation

Idle (2 devices)

AMD Radeon HD 7990

61,65
ASUS Matrix HD 7970 Platinum

60,64
AMD Radeon HD 7870

48,90
XFX Radeon HD 7870 Black Edition

43,87
AMD Radeon HD 7870 Tahiti LE
[VTX3D Radeon HD 7870 Black]

37,54
AMD Radeon R9 295X2

37,30
Sapphire Radeon HD 7850 Dual-X 1GB

33,40
AMD Radeon R9 270X

32,94
PowerColor HD 7850 PCS +

32,20
Sapphire Radeon HD 7870 XT with Boost

32,06
PowerColor Radeon HD 7870 PCS +

31,49
XFX Radeon HD 7850 Black Edition

30,86
AMD Radeon HD 7850

26,17
Sapphire Radeon R9 280X Toxic

25,19
Sapphire HD 7790 Dual-X OC

24,87
Sapphire Radeon R9 270X Toxic

24,19
Sapphire Tri-X R9 290X OC

22,97
Sapphire HD 7790 Dual-X OC

22,73
AMD Radeon R7 260X

22,49
MSI R7790 OC Edition

22,44
MSI R9 290X Gaming 4G

21,94
EVGA GeForce GTX 680

21,60
MSI R7790 OC Edition

21,27
Zotac GeForce GTX 680

21,10
AMD Radeon R9 290
[Pattern 2]

21,06
MSI GTX 770 Lightning

20,31
XFX Radeon HD 7770 Black Edition

19,97
ASUS R9 270 DCU II OC

19,85
Sapphire Radeon HD 7770 Vapor-X

19,68
NVIDIA GeForce GTX 980 Ti

19,28
ASUS GTX 980 Strix

18,86
Sapphire R9 290X Tri-X 8GB

18,83
MSI GTX 970 gaming

16,96
Sapphire Radeon R9 280X Vapor-X

16,83
Sparkle Caliber X680 Captain

16,77
Sapphire Radeon R9 280 Dual X

16,36
NVIDIA GeForce GTX 780 Ti

16,17
XFX R9 270X Black Edition DD

15,93
EVGA GTX 980 SC ACX 2.0
[Max 1418MHz]

15,83
ASUS GTX 970 Strix

15,79
Palit GTX 970 Jetstream

15,61
NVIDIA GeForce GTX Titan Black

15,48
Inno3D GeForce GTX 970 Herculez X2

15,32
ASUS Radeon R7 250X

15,14
NVIDIA GeForce GTX Titan X

14,98
NVIDIA GeForce GTX 780

14,69
NVIDIA GeForce GTX Titan
[875MHz]

14,62
Club3D Radeon R9 285 CoolStream

14,51
XFX Radeon R9 285 Black OC Edition

13,53
MSI GTX 960 Gaming 2G

12,65
NVIDIA GeForce GTX 770
[1084MHz]

12,23
NVIDIA GeForce GTX 760
[1033MHz]

12,05
MSI GTX 650 Ti Boost TwinFrozr OC

11,85
NVIDIA GeForce GTX 980

11,70
Sapphire Radeon R7 265 Dual X

11,28
Sapphire R9 285 ITX Compact

10,73
NVIDIA GeForce GTX 650 Ti Boost

10,15
AMD Radeon R7 260

8,42
NVIDIA GeForce GTX 750 Ti

8,13
Watt

Again, very good values can be seen in this comparison.

Power consumption graphics card multi-monitor operation

Idle (3 devices)

NVIDIA GeForce GTX Titan Black

100%
AMD Radeon R9 295X2

100%
NVIDIA GeForce GTX 780 Ti

97%
NVIDIA GeForce GTX Titan X

88%
NVIDIA GeForce GTX 980 Ti

87%
Sapphire R9 290X Tri-X 8GB

82%
AMD Radeon HD 7990

80%
ASUS Matrix HD 7970 Platinum

78%
MSI R9 290X Gaming 4G

75%
Sapphire Tri-X R9 290X OC

75%
AMD Radeon R9 290
[Pattern 2]

70%
NVIDIA GeForce GTX Titan
[875MHz]

69%
NVIDIA GeForce GTX 780

65%
Sapphire Radeon R9 280X Toxic

65%
MSI GTX 970 gaming

63%
Zotac GeForce GTX 680

60%
EVGA GeForce GTX 680

59%
XFX R9 270X Black Edition DD

59%
ASUS GTX 970 Strix

58%
ASUS GTX 980 Strix

58%
Sapphire Radeon R9 280 Dual X

56%
Sapphire Radeon R9 280X Vapor-X

56%
MSI GTX 770 Lightning

56%
Sapphire Radeon R9 270X Toxic

56%
Palit GTX 970 Jetstream

56%
Inno3D GeForce GTX 970 Herculez X2

56%
NVIDIA GeForce GTX 770
[1084MHz]

55%
EVGA GTX 980 SC ACX 2.0
[Max 1418MHz]

53%
Sparkle Caliber X680 Captain

52%
Club3D Radeon R9 285 CoolStream

52%
ASUS R9 270 DCU II OC

52%
NVIDIA GeForce GTX 980

51%
NVIDIA GeForce GTX 760
[1033MHz]

50%
XFX Radeon R9 285 Black OC Edition

48%
Sapphire R9 285 ITX Compact

46%
MSI GTX 650 Ti Boost TwinFrozr OC

43%
AMD Radeon R9 270X

43%
NVIDIA GeForce GTX 650 Ti Boost

41%
Sapphire Radeon R7 265 Dual X

38%
MSI GTX 960 Gaming 2G

35%
Sapphire HD 7790 Dual-X OC

33%
AMD Radeon R7 260X

29%
MSI R7790 OC Edition

29%
AMD Radeon R7 260

29%
NVIDIA GeForce GTX 750 Ti

21%
ASUS Radeon R7 250X

20%
Watt

Special features must be clearly emphasized here; we also described them in the chapter on the technical innovations of the monitor outputs. If we drive three devices on the GTX 980 and GTX 970 via DVI, HDMI and DisplayPort, the cards switch to a different power state as usual. In this case the clock rates are even slightly higher than on the previous models, which may simply be because the base clocks are also set higher.

However, if we connected the monitors to the GTX 970 via 2 x DVI and 1 x DisplayPort, the graphics card stayed at idle clocks and we measured a maximum of 15 watts of power consumption! NVIDIA has not yet provided any details on this.

This means that NVIDIA - depending on the connection configuration - can now drive three different monitors at idle clocks. It would be helpful if such changes were also communicated. So far, however, according to our inquiries, the manufacturer itself does not yet seem to know exactly what we are talking about.

For the GTX 980, GTX 980 Ti and Titan X, however, this is not relevant: they offer only the connection options mentioned, and with three devices attached the intermediate power state is applied. The result is a power consumption of around 70 watts - not a good value at all, which places these cards in the upper third of our comparison.

Overclocking

Overclocking does not depend on the cooling solution alone. Keep in mind that the overclockability of a graphics card - be it GPU or memory - depends on many factors and on the individual sample. And of course, manual intervention in the clock rates can immediately lead to a loss of warranty.

We can describe the result as positive: we were able to push our GTX 980 Ti sample to a maximum GPU clock of 1.380 MHz and the memory to a real clock of 1.950 MHz.

Of course, we raised the temperature and power limits to the maximum permitted beforehand, and of course our overclocking attempts were still throttled once those limits were exceeded. In most cases the power limit of 275 watts kicked in and throttled the GPU clock down to 1.329 MHz in our benchmarks. Nevertheless, our intervention shows that the GeForce GTX 980 Ti still has reserves and scales wonderfully.

In the applications shown below, performance increased by 17 to 21 percent. The power consumption has to be viewed in relation to the automatic throttling: with a typical boost of 1.075 MHz the GTX 980 Ti usually draws only around 220 to 225 watts, so overclocking increased the power consumption accordingly.
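The stated gains can be checked directly against the FPS values in the charts below. A minimal sketch, using the Far Cry 4, Metro: Last Light Redux and Tomb Raider results from this test (overclocked run vs. stock "max boost" run):

```python
# Relative gain of the overclocked GTX 980 Ti (GPU 1380 MHz / RAM 1952 MHz)
# over its stock "max boost" result, using FPS values from the charts below.
oc_vs_stock_fps = {
    "Far Cry 4": (54.92, 46.97),
    "Metro: Last Light Redux": (54.19, 44.92),
    "Tomb Raider": (45.73, 39.13),
}

for game, (oc_fps, stock_fps) in oc_vs_stock_fps.items():
    gain_percent = (oc_fps / stock_fps - 1.0) * 100.0
    print(f"{game}: +{gain_percent:.1f} %")
```

The three titles land at roughly +16.9, +20.6 and +16.9 percent, which matches the 17-to-21-percent range quoted above.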

OC benchmarks 2560 × 1440 (with anti-aliasing)

Crysis 3

NVIDIA GeForce GTX 980 Ti
[GPU 1380MHz / RAM 1952MHz]

100%
NVIDIA GeForce GTX 980 Ti
[max boost]

85%
NVIDIA GeForce GTX Titan X

84%
FPS
OC benchmarks 2560 × 1440 (with anti-aliasing)

Far Cry 4

NVIDIA GeForce GTX 980 Ti
[GPU 1380MHz / RAM 1952MHz]

54,92
NVIDIA GeForce GTX 980 Ti
[max boost]

46,97
NVIDIA GeForce GTX Titan X

46,83
FPS
OC benchmarks 2560 × 1440 (with anti-aliasing)

Bioshock: Infinite

NVIDIA GeForce GTX 980 Ti
[GPU 1380MHz / RAM 1952MHz]

112%
NVIDIA GeForce GTX Titan X

100%
NVIDIA GeForce GTX 980 Ti
[max boost]

94%
FPS
OC benchmarks 2560 × 1440 (with anti-aliasing)

Metro: Last Light Redux

NVIDIA GeForce GTX 980 Ti
[GPU 1380MHz / RAM 1952MHz]

54,19
NVIDIA GeForce GTX Titan X

46,85
NVIDIA GeForce GTX 980 Ti
[max boost]

44,92
FPS
OC benchmarks 2560 × 1440 (with anti-aliasing)

Tomb Raider

NVIDIA GeForce GTX 980 Ti
[GPU 1380MHz / RAM 1952MHz]

45,73
NVIDIA GeForce GTX 980 Ti
[max boost]

39,13
NVIDIA GeForce GTX Titan X

39,12
FPS

Game benchmarks (OpenGL)

Game BRINK
Developer Splash Damage
Publisher Bethesda Softworks
publication 13 May 2011
Genre First-person shooter
Graphics engine modified idTech 4
DirectX path / API OpenGL
Age rating USK 16 years
Benchmark measurement Fraps / savegame
Test area Hostage rescue
Runtime benchmark 10 seconds
Benchmark settings Highest levels of detail
Order from Amazon
Brink

1920 x 1080 [No AA / 16xAF]

NVIDIA GeForce GTX 980 Ti
[Max]

241,51
NVIDIA GeForce GTX Titan X
[typical boost clock]

237,29
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

229,51
NVIDIA GeForce GTX 980
[typical boost clock]

192,08
NVIDIA GeForce GTX 780 Ti

174,95
AMD Radeon R9 290X

153,86
FPS
Brink

2560 x 1440 [No AA / 16xAF]

NVIDIA GeForce GTX 980 Ti
[Max]

100%
NVIDIA GeForce GTX Titan X
[typical boost clock]

99%
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

94%
NVIDIA GeForce GTX 980
[typical boost clock]

76%
NVIDIA GeForce GTX 780 Ti

66%
AMD Radeon R9 290X

57%
FPS
Brink

3840 x 2160 [No AA / 16xAF]

NVIDIA GeForce GTX Titan X
[typical boost clock]

56,30
NVIDIA GeForce GTX 980 Ti
[Max]

54,14
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

52,18
NVIDIA GeForce GTX 980
[typical boost clock]

41,58
NVIDIA GeForce GTX 780 Ti

35,94
AMD Radeon R9 290X

32,38
FPS

Wolfenstein: The New Order

Game Wolfenstein: The New Order
Developer Machine Games
Publisher Bethesda
publication May 2014
Genre First-person shooter
Age rating 18 years
Graphics engine id Tech 5
DirectX path OpenGL
Benchmark measurement Fraps / savegame
Test area Chapter 9 intro
Runtime benchmark 10 seconds
Benchmark settings Highest levels of detail
HT4U-Test
Find on Amazon*

Test scene of the game

Wolfenstein: The New Order

1920 x 1080 [No AA / 16xAF]

NVIDIA GeForce GTX 780 Ti

59,98
NVIDIA GeForce GTX 980
[typical boost clock]

59,98
NVIDIA GeForce GTX Titan X
[typical boost clock]

59,97
NVIDIA GeForce GTX 980 Ti
[Max]

59,96
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

59,90
AMD Radeon R9 290X

42,19
FPS
Wolfenstein: The New Order

2560 x 1440 [No AA / 16xAF]

NVIDIA GeForce GTX 980 Ti
[Max]

56,09
NVIDIA GeForce GTX Titan X
[typical boost clock]

55,88
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

52,57
NVIDIA GeForce GTX 780 Ti

48,24
NVIDIA GeForce GTX 980
[typical boost clock]

46,13
AMD Radeon R9 290X

30,68
FPS
Wolfenstein: The New Order

3840 x 2160 [No AA / 16xAF]

NVIDIA GeForce GTX 980 Ti
[Max]

32,01
NVIDIA GeForce GTX Titan X
[typical boost clock]

31,88
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

30,49
NVIDIA GeForce GTX 980
[typical boost clock]

26,08
NVIDIA GeForce GTX 780 Ti

25,07
AMD Radeon R9 290X

19,33
FPS

Game benchmarks (DirectX 9)

The Elder Scrolls V: Skyrim [Modded]

Game The Elder Scrolls: Skyrim (Modded)
Developer Bethesda Game Studios
Publisher Bethesda Softworks
publication (March 2012)
Genre Role-playing game
Age rating 16 years
Graphics engine Creation Engine
DirectX path DirectX 9
Benchmark measurement Fraps / savegame
Test area River forest
Runtime benchmark 10 seconds
Benchmark settings Highest levels of detail, FXAA, High Resolution Texture Pack
Installed mods Realistic Water Two, Tree HD Variation, Verdant Grass Plugin, Wet & Cold, Vivid Landscapes Dungeons & Ruins
Order from Amazon*

 

Benchmark scene in the test

In our approach to modding Skyrim we unfortunately made the mistake of not verifying the results on cards from both graphics chip manufacturers at the same time. One of the installed mods causes AMD cards to struggle badly with these settings, and of course that is not a fair test approach, because AMD will never (be willing or able to) optimize for a mod that grew out of a hobby project. So we will have to revisit the Skyrim-and-mods construction site. The results of this test are therefore not included in the performance index.

The Elder Scrolls V: Skyrim (Modded)

1920 x 1080 [4xAA / 16xAF]

NVIDIA GeForce GTX 980 Ti
[Max]

66,83
NVIDIA GeForce GTX Titan X
[typical boost clock]

64,89
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

62,66
NVIDIA GeForce GTX 780 Ti

60,70
NVIDIA GeForce GTX 980
[typical boost clock]

58,27
AMD Radeon R9 290X

24,48
FPS
The Elder Scrolls V: Skyrim (Modded)

2560 x 1440 [4xAA / 16xAF]

NVIDIA GeForce GTX 980 Ti
[Max]

57,11
NVIDIA GeForce GTX Titan X
[typical boost clock]

56,22
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

53,97
NVIDIA GeForce GTX 780 Ti

50,37
NVIDIA GeForce GTX 980
[typical boost clock]

49,18
AMD Radeon R9 290X

21,20
FPS
The Elder Scrolls V: Skyrim (Modded)

3840 x 2160 [4xAA / 16xAF]

NVIDIA GeForce GTX 980 Ti
[Max]

41,26
NVIDIA GeForce GTX Titan X
[typical boost clock]

39,75
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

38,16
NVIDIA GeForce GTX 780 Ti

34,27
NVIDIA GeForce GTX 980
[typical boost clock]

32,94
AMD Radeon R9 290X

17,72
FPS

The Witcher 2 - Assassins of Kings

 

Game The Witcher 2 - Assassins of Kings
Developer CD Projekt RED
Publisher CD Projekt, Atari
publication 17 May 2011
Genre RPG, fantasy
Graphics engine RED engine
DirectX path DirectX 9
Age rating USK 16 years
Benchmark measurement Fraps / savegame
Test area barricade
Runtime benchmark 10 seconds
Benchmark settings Highest levels of detail
The Witcher 2 - Assassins of Kings

1920 x 1080 [No AA / 16xAF]

NVIDIA GeForce GTX 980 Ti
[Max]

100%
NVIDIA GeForce GTX Titan X
[typical boost clock]

100%
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

97%
NVIDIA GeForce GTX 980
[typical boost clock]

87%
NVIDIA GeForce GTX 780 Ti

83%
AMD Radeon R9 290X

82%
FPS
The Witcher 2 - Assassins of Kings

2560 x 1440 [No AA / 16xAF]

NVIDIA GeForce GTX 980 Ti
[Max]

118,64
NVIDIA GeForce GTX Titan X
[typical boost clock]

117,83
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

113,23
NVIDIA GeForce GTX 980
[typical boost clock]

89,80
AMD Radeon R9 290X

86,18
NVIDIA GeForce GTX 780 Ti

85,49
FPS
The Witcher 2 - Assassins of Kings

3840 x 2160 [No AA / 16xAF]

NVIDIA GeForce GTX Titan X
[typical boost clock]

54,83
NVIDIA GeForce GTX 980 Ti
[Max]

51,84
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

50,28
NVIDIA GeForce GTX 980
[typical boost clock]

41,75
AMD Radeon R9 290X

41,09
NVIDIA GeForce GTX 780 Ti

40,68
FPS
The Witcher 2 - Assassins of Kings

1920 x 1080 [4xSSAA / 16xAF]

NVIDIA GeForce GTX 980 Ti
[Max]

80,60
NVIDIA GeForce GTX Titan X
[typical boost clock]

79,64
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

78,38
AMD Radeon R9 290X

63,01
NVIDIA GeForce GTX 980
[typical boost clock]

61,03
NVIDIA GeForce GTX 780 Ti

60,94
FPS
The Witcher 2 - Assassins of Kings

2560 x 1440 [4xSSAA / 16xAF]

NVIDIA GeForce GTX Titan X
[typical boost clock]

49,08
NVIDIA GeForce GTX 980 Ti
[Max]

48,73
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

47,36
AMD Radeon R9 290X

39,39
NVIDIA GeForce GTX 780 Ti

37,29
NVIDIA GeForce GTX 980
[typical boost clock]

37,10
FPS
The Witcher 2 - Assassins of Kings

3840 x 2160 [4xSSAA / 16xAF]

NVIDIA GeForce GTX Titan X
[typical boost clock]

23,48
NVIDIA GeForce GTX 980 Ti
[Max]

22,50
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

21,25
AMD Radeon R9 290X

19,04
NVIDIA GeForce GTX 780 Ti

17,94
NVIDIA GeForce GTX 980
[typical boost clock]

17,30
FPS

Game benchmarks (DirectX 11)

Alien: Isolation

Game Alien: Isolation
Developer Creative Assembly
Publisher SEGA
publication 07 October 2014
Genre Survival horror
Graphics engine CA engine
DirectX path / API DirectX 11
Age rating USK 16 years
Benchmark measurement Fraps / savegame
Test area Level 9 signals
Runtime benchmark 10 seconds
Benchmark settings maximum levels of detail
HT4U-Test Order from Amazon

In-game test scene

Alien Isolation

1920 x 1080 [No AA / 16xAF]

NVIDIA GeForce GTX Titan X
[typical boost clock]

102%
NVIDIA GeForce GTX 980 Ti
[Max]

100%
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

95%
NVIDIA GeForce GTX 980
[typical boost clock]

80%
NVIDIA GeForce GTX 780 Ti

72%
AMD Radeon R9 290X

69%
FPS
Alien Isolation

2560 x 1440 [No AA / 16xAF]

NVIDIA GeForce GTX Titan X
[typical boost clock]

90,77
NVIDIA GeForce GTX 980 Ti
[Max]

85,98
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

82,91
NVIDIA GeForce GTX 980
[typical boost clock]

71,29
AMD Radeon R9 290X

65,56
NVIDIA GeForce GTX 780 Ti

64,05
FPS
Alien Isolation

3840 x 2160 [No AA / 16xAF]

NVIDIA GeForce GTX Titan X
[typical boost clock]

41,81
NVIDIA GeForce GTX 980 Ti
[Max]

39,86
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

38,34
NVIDIA GeForce GTX 980
[typical boost clock]

33,15
AMD Radeon R9 290X

31,47
NVIDIA GeForce GTX 780 Ti

29,41
FPS

CE

Game CE
Developer Related Designs / Ubisoft Blue Byte
Publisher Ubisoft
publication 17 November 2011
Genre strategy game
Age rating 6 years
Graphics engine InitEngine
DirectX path DirectX 9 / DirectX 11
Benchmark measurement Fraps / savegame
Test area On the trail of the truth
Runtime benchmark 10 seconds
Benchmark settings Highest levels of detail
Order from Amazon

In-game test scene

CE

1920 x 1080 [No AA / 16xAF]

NVIDIA GeForce GTX Titan X
[typical boost clock]

106,30
NVIDIA GeForce GTX 980 Ti
[Max]

101,52
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

101,24
NVIDIA GeForce GTX 980
[typical boost clock]

100,28
AMD Radeon R9 290X

99,05
NVIDIA GeForce GTX 780 Ti

93,04
FPS
CE

2560 x 1440 [No AA / 16xAF]

NVIDIA GeForce GTX Titan X
[typical boost clock]

100%
NVIDIA GeForce GTX 980 Ti
[Max]

95%
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

91%
AMD Radeon R9 290X

83%
NVIDIA GeForce GTX 980
[typical boost clock]

77%
NVIDIA GeForce GTX 780 Ti

71%
FPS
CE

3840 x 2160 [No AA / 16xAF]

NVIDIA GeForce GTX Titan X
[typical boost clock]

36,08
NVIDIA GeForce GTX 980 Ti
[Max]

35,14
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

34,62
AMD Radeon R9 290X

31,47
NVIDIA GeForce GTX 980
[typical boost clock]

28,54
NVIDIA GeForce GTX 780 Ti

26,35
FPS

Assassin's Creed Unity

Game Assassin's Creed Unity
Developer Ubisoft Montreal
Publisher Ubisoft
publication 13 November 2014
Genre Action-Adventure
Graphics engine AnvilNext engine
DirectX path / API DirectX 11
Age rating USK 16 years
Benchmark measurement Fraps / savegame
Test area Sequence 7.2 - A meeting with Mirabeau
Runtime benchmark 25 seconds
Benchmark settings maximum levels of detail
HT4U-Test Order from Amazon*
Assassins Creed: Unity

1920 x 1080 [No AA / 16xAF]

NVIDIA GeForce GTX 980 Ti
[Max]

56,98
NVIDIA GeForce GTX Titan X
[typical boost clock]

56,93
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

53,66
NVIDIA GeForce GTX 980
[typical boost clock]

46,19
NVIDIA GeForce GTX 780 Ti

41,16
AMD Radeon R9 290X

36,25
FPS
Assassins Creed: Unity

2560 x 1440 [No AA / 16xAF]

NVIDIA GeForce GTX Titan X
[typical boost clock]

39,90
NVIDIA GeForce GTX 980 Ti
[Max]

39,79
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

37,53
NVIDIA GeForce GTX 980
[typical boost clock]

31,41
NVIDIA GeForce GTX 780 Ti

28,01
AMD Radeon R9 290X

26,33
FPS
Assassins Creed: Unity

3840 x 2160 [No AA / 16xAF]

NVIDIA GeForce GTX Titan X
[typical boost clock]

21,60
NVIDIA GeForce GTX 980 Ti
[Max]

21,02
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

19,98
NVIDIA GeForce GTX 980
[typical boost clock]

16,27
AMD Radeon R9 290X

13,80
NVIDIA GeForce GTX 780 Ti

13,08
FPS

Battlefield 4

Game Battlefield 4
Developer EA Digital Illusions CE
Publisher Electronic Arts
publication December 2013
Genre First-person shooter
Age rating USK: 18 years
Graphics engine Frostbite 3
DirectX path DirectX 10 / DirectX 11 / Mantle
Benchmark measurement Fraps / savegame
Test area Level 6: Tashgar - Checkpoint 5
Runtime benchmark 10 seconds
Benchmark settings Highest level of detail, DX 11
HT4U-Test Order from Amazon

In-game test scene

Battlefield 4

1920 x 1080 [No AA / 16xAF]

NVIDIA GeForce GTX 980 Ti
[Max]

100%
NVIDIA GeForce GTX Titan X
[typical boost clock]

97%
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

95%
NVIDIA GeForce GTX 980
[typical boost clock]

79%
NVIDIA GeForce GTX 780 Ti

71%
AMD Radeon R9 290X

66%
FPS
Battlefield 4

2560 x 1440 [No AA / 16xAF]

NVIDIA GeForce GTX 980 Ti
[Max]

82,9
NVIDIA GeForce GTX Titan X
[typical boost clock]

82,3
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

76,6
NVIDIA GeForce GTX 980
[typical boost clock]

64,0
NVIDIA GeForce GTX 780 Ti

58,6
AMD Radeon R9 290X

55,9
FPS
Battlefield 4

3840 x 2160 [No AA / 16xAF]

NVIDIA GeForce GTX Titan X
[typical boost clock]

100%
NVIDIA GeForce GTX 980 Ti
[Max]

97%
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

90%
AMD Radeon R9 290X

82%
NVIDIA GeForce GTX 980
[typical boost clock]

77%
NVIDIA GeForce GTX 780 Ti

69%
FPS
Battlefield 4

1920 x 1080 [4xAA / 16xAF]

NVIDIA GeForce GTX 980 Ti
[Max]

108,0
NVIDIA GeForce GTX Titan X
[typical boost clock]

105,0
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

101,9
NVIDIA GeForce GTX 980
[typical boost clock]

80,8
NVIDIA GeForce GTX 780 Ti

73,8
AMD Radeon R9 290X

67,9
FPS
Battlefield 4

2560 x 1440 [4xAA / 16xAF]

NVIDIA GeForce GTX 980 Ti
[Max]

101%
NVIDIA GeForce GTX Titan X
[typical boost clock]

100%
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

97%
NVIDIA GeForce GTX 980
[typical boost clock]

76%
NVIDIA GeForce GTX 780 Ti

69%
AMD Radeon R9 290X

66%
FPS
Battlefield 4

3840 x 2160 [4xAA / 16xAF]

NVIDIA GeForce GTX Titan X
[typical boost clock]

27,4
NVIDIA GeForce GTX 980 Ti
[Max]

26,1
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

24,7
NVIDIA GeForce GTX 980
[typical boost clock]

21,4
AMD Radeon R9 290X

20,5
NVIDIA GeForce GTX 780 Ti

18,3
FPS

Bioshock: Infinite

Game BioShock: Infinite
Developer Irrational Games, 2K Marin, Human Head Studios
Publisher 2K Games
publication March 26, 2013
Genre First-person shooter with fantasy elements
Graphics engine U
DirectX path DirectX 10 and 11
Age rating USK 18 years
Benchmark measurement Fraps / savegame
Test area Finkton Proper
Runtime benchmark 10 seconds
Benchmark settings System settings Maximum & FXAA
HT4U-Test
Order from Amazon*

Benchmark scene in the test

We are deliberately repeating this here in the benchmark section, because it appears to be a widespread misunderstanding: we do not use the built-in BioShock benchmark (useless because its runs do not evaluate real scenes and do not even begin to represent a worst-case scenario). Instead, we use a savegame that represents a worst-case scenario of the kind you frequently encounter in BioShock! This has repeatedly led to discussions and queries, which is why we want to make it clear once again. And since some readers still skim past it, we even set this paragraph in bold type.

Bioshock: Infinite

1920 x 1080 [No AA / 16xAF]

NVIDIA GeForce GTX Titan X
[typical boost clock]

116,16
NVIDIA GeForce GTX 980 Ti
[Max]

112,95
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

103,19
AMD Radeon R9 290X

92,03
NVIDIA GeForce GTX 980
[typical boost clock]

89,82
NVIDIA GeForce GTX 780 Ti

82,63
FPS
Bioshock: Infinite

2560 x 1440 [No AA / 16xAF]

NVIDIA GeForce GTX Titan X
[typical boost clock]

73,17
NVIDIA GeForce GTX 980 Ti
[Max]

68,70
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

66,05
AMD Radeon R9 290X

58,17
NVIDIA GeForce GTX 980
[typical boost clock]

54,69
NVIDIA GeForce GTX 780 Ti

49,66
FPS
Bioshock: Infinite

3840 x 2160 [No AA / 16xAF]

NVIDIA GeForce GTX Titan X
[typical boost clock]

35,37
NVIDIA GeForce GTX 980 Ti
[Max]

33,51
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

32,58
AMD Radeon R9 290X

28,18
NVIDIA GeForce GTX 980
[typical boost clock]

25,87
NVIDIA GeForce GTX 780 Ti

22,35
FPS

Call of Duty: Advanced Warfare

Game Call of Duty: Advanced Warfare
Developer Sledgehammer Games
Publisher Activision
publication 04 November 2014
Genre First-person shooter
Graphics engine Infinity Ward engine, modified
DirectX path / API DirectX 11
Age rating USK 18 years
Benchmark measurement Fraps / savegame
Test area Level 10 bio-laboratory - sixth save point
Runtime benchmark 10 seconds
Benchmark settings maximum levels of detail
HT4U-Test Order from Amazon

In-game test scene

Call of Duty: Advanced Warfare

1920 x 1080 [No AA / 16xAF]

NVIDIA GeForce GTX 980 Ti
[Max]

128,86
NVIDIA GeForce GTX Titan X
[typical boost clock]

126,67
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

121,13
NVIDIA GeForce GTX 980
[typical boost clock]

102,34
NVIDIA GeForce GTX 780 Ti

89,82
AMD Radeon R9 290X

69,39
FPS
Call of Duty: Advanced Warfare

2560 x 1440 [No AA / 16xAF]

NVIDIA GeForce GTX 980 Ti
[Max]

99,07
NVIDIA GeForce GTX Titan X
[typical boost clock]

97,18
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

92,82
NVIDIA GeForce GTX 980
[typical boost clock]

77,11
NVIDIA GeForce GTX 780 Ti

69,41
AMD Radeon R9 290X

52,76
FPS
Call of Duty: Advanced Warfare

3840 x 2160 [No AA / 16xAF]

NVIDIA GeForce GTX Titan X
[typical boost clock]

60,11
NVIDIA GeForce GTX 980 Ti
[Max]

60,01
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

56,84
NVIDIA GeForce GTX 980
[typical boost clock]

46,42
NVIDIA GeForce GTX 780 Ti

42,20
AMD Radeon R9 290X

31,50
FPS
Call of Duty: Advanced Warfare

1920 x 1080 [2xSSAA / 16xAF]

NVIDIA GeForce GTX 980 Ti
[Max]

129,20
NVIDIA GeForce GTX Titan X
[typical boost clock]

125,84
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

120,44
NVIDIA GeForce GTX 980
[typical boost clock]

101,71
NVIDIA GeForce GTX 780 Ti

89,39
AMD Radeon R9 290X

69,42
FPS
Call of Duty: Advanced Warfare

2560 x 1440 [2xSSAA / 16xAF]

NVIDIA GeForce GTX 980 Ti
[Max]

98,37
NVIDIA GeForce GTX Titan X
[typical boost clock]

96,60
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

93,36
NVIDIA GeForce GTX 980
[typical boost clock]

77,21
NVIDIA GeForce GTX 780 Ti

69,10
AMD Radeon R9 290X

52,78
FPS
Call of Duty: Advanced Warfare

3840 x 2160 [2xSSAA / 16xAF]

NVIDIA GeForce GTX 980 Ti
[Max]

60,44
NVIDIA GeForce GTX Titan X
[typical boost clock]

59,25
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

57,50
NVIDIA GeForce GTX 980
[typical boost clock]

45,95
NVIDIA GeForce GTX 780 Ti

42,13
AMD Radeon R9 290X

31,38
FPS

Crysis 3

Game Crysis 3
Developer Crytek
Publisher Electronic Arts
publication 21 February 2013
Genre First-person shooter
Graphics engine CryENGINE 3
DirectX path DirectX 9 and 11
Age rating USK 18 years
Benchmark measurement Fraps / savegame
Test area Mission 4 - Swamp
Runtime benchmark 10 seconds
Benchmark settings Default system and textures: maximum
Order from Amazon*

In-game test scene

In the following diagrams, 1xAA stands for deactivated anti-aliasing plus the FXAA post-processing filter. 2xAA stands for the special "4x SMAA" level, in which the game combines double regular anti-aliasing (MSAA) with additional filters. The designation 4xAA corresponds to conventional quadruple anti-aliasing (MSAA).
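For reference, the chart-label convention just described can be written out as a small lookup table; this is purely an illustrative helper for reading the diagrams, not part of our test tooling:

```python
# Mapping of the AA labels used in the Crysis 3 diagrams to the game's
# actual settings, as described in the paragraph above.
CRYSIS3_AA_LABELS = {
    "1xAA": "anti-aliasing off, FXAA post-processing filter only",
    "2xAA": "the special '4x SMAA' level (2x MSAA plus extra filters)",
    "4xAA": "conventional 4x MSAA",
}

def describe_aa(label: str) -> str:
    """Return the human-readable meaning of a chart AA label."""
    return CRYSIS3_AA_LABELS.get(label, "unknown label")

print(describe_aa("2xAA"))
```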

Crysis 3

1920 x 1080 [No AA / 16xAF]

NVIDIA GeForce GTX Titan X
[typical boost clock]

97,47
NVIDIA GeForce GTX 980 Ti
[Max]

97,35
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

94,92
NVIDIA GeForce GTX 980
[typical boost clock]

84,92
NVIDIA GeForce GTX 780 Ti

83,35
AMD Radeon R9 290X

73,93
FPS
Crysis 3

2560 x 1440 [No AA / 16xAF]

NVIDIA GeForce GTX Titan X
[typical boost clock]

70,67
NVIDIA GeForce GTX 980 Ti
[Max]

70,40
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

67,78
NVIDIA GeForce GTX 980
[typical boost clock]

56,24
NVIDIA GeForce GTX 780 Ti

54,93
AMD Radeon R9 290X

50,74
FPS
Crysis 3

3840 x 2160 [No AA / 16xAF]

NVIDIA GeForce GTX Titan X
[typical boost clock]

34,75
NVIDIA GeForce GTX 980 Ti
[Max]

34,50
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

33,02
NVIDIA GeForce GTX 980
[typical boost clock]

27,65
NVIDIA GeForce GTX 780 Ti

26,74
AMD Radeon R9 290X

25,81
FPS
Crysis 3

1920 x 1080 [2xAA / 16xAF]

NVIDIA GeForce GTX 980 Ti
[Max]

100%
NVIDIA GeForce GTX Titan X
[typical boost clock]

98%
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

96%
NVIDIA GeForce GTX 980
[typical boost clock]

81%
NVIDIA GeForce GTX 780 Ti

81%
AMD Radeon R9 290X

72%
FPS
Crysis 3

2560 x 1440 [2xAA / 16xAF]

NVIDIA GeForce GTX 980 Ti
[Max]

52,36
NVIDIA GeForce GTX Titan X
[typical boost clock]

51,81
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

50,11
NVIDIA GeForce GTX 980
[typical boost clock]

41,59
NVIDIA GeForce GTX 780 Ti

40,94
AMD Radeon R9 290X

38,58
FPS
Crysis 3

3840 x 2160 [2xAA / 16xAF]

NVIDIA GeForce GTX Titan X
[typical boost clock]

25,88
NVIDIA GeForce GTX 980 Ti
[Max]

25,18
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

24,56
NVIDIA GeForce GTX 980
[typical boost clock]

20,36
NVIDIA GeForce GTX 780 Ti

19,79
AMD Radeon R9 290X

19,37
FPS

Dying Light

Game Dying Light
Developer Techland
Publisher Warner Bros.
publication 27 January 2015
Genre Survival horror
Graphics engine Chrome 6 engine
DirectX path / API DirectX 11
Age rating USK 18 years
Benchmark measurement Fraps / savegame
Test area Level 1 Headquarters - The Tower
Runtime benchmark 10 seconds
Benchmark settings maximum levels of detail
HT4U-Test Order from Amazon

In-game test scene

Dying Light

1920x1080 [No AA / 16xAF]

NVIDIA GeForce GTX 980 Ti
[Max]

100%
NVIDIA GeForce GTX Titan X
[typical boost clock]

99%
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

95%
NVIDIA GeForce GTX 980
[typical boost clock]

76%
NVIDIA GeForce GTX 780 Ti

65%
AMD Radeon R9 290X

64%
FPS
Dying Light

2560 x 1440 [no AA / 16xAF]

NVIDIA GeForce GTX 980 Ti
[Max]

70.06
NVIDIA GeForce GTX Titan X
[typical boost clock]

69.37
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

66.85
NVIDIA GeForce GTX 980
[typical boost clock]

52.76
AMD Radeon R9 290X

46.71
NVIDIA GeForce GTX 780 Ti

46.19
FPS
Dying Light

3840x2160 [No AA / 16xAF]

NVIDIA GeForce GTX 980 Ti
[Max]

33.18
NVIDIA GeForce GTX Titan X
[typical boost clock]

32.94
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

31.73
NVIDIA GeForce GTX 980
[typical boost clock]

25.04
AMD Radeon R9 290X

22.61
NVIDIA GeForce GTX 780 Ti

21.69
FPS

Grand Theft Auto V (GTA V)

Game Grand Theft Auto V
Developer Rockstar North
Publisher Rockstar Games
publication 14 April 2015
Genre Action
Age rating USK: 18 years
Graphics engine RAGE engine
DirectX path DirectX 10/11
Benchmark measurement Fraps / savegame
Test area Mountain areas of Los Santos
Runtime benchmark 10 seconds
Benchmark settings Highest level of detail, DX 11
HT4U-Test Order from Amazon*

In-game test scene

Grand Theft Auto V (GTA 5)

1920x1080 [4xAA / 16xAF]

NVIDIA GeForce GTX 980 Ti
[Max]

100%
NVIDIA GeForce GTX Titan X
[typical boost clock]

96%
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

96%
NVIDIA GeForce GTX 980
[typical boost clock]

85%
NVIDIA GeForce GTX 780 Ti

74%
AMD Radeon R9 290X

72%
FPS
Grand Theft Auto V (GTA 5)

2560x1440 [4xAA / 16xAF]

NVIDIA GeForce GTX 980 Ti
[Max]

100%
NVIDIA GeForce GTX Titan X
[typical boost clock]

97%
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

95%
NVIDIA GeForce GTX 980
[typical boost clock]

83%
NVIDIA GeForce GTX 780 Ti

78%
AMD Radeon R9 290X

70%
FPS
Grand Theft Auto V (GTA 5)

3840x2160 [4xAA / 16xAF]

NVIDIA GeForce GTX Titan X
[typical boost clock]

25.56
NVIDIA GeForce GTX 980 Ti
[Max]

25.07
NVIDIA GeForce GTX 980 Ti
[typical boost clock]

23.95
NVIDIA GeForce GTX 980
[typical boost clock]

20.11
NVIDIA GeForce GTX 780 Ti

19.44
AMD Radeon R9 290X

17.61
FPS

Far Cry 4

Game Far Cry 4
Developer Ubisoft Montreal
Publisher Ubisoft
publication 18 November 2014
Genre First-person shooter
Graphics engine Dunia 2 engine
DirectX path / API DirectX 11
Age rating USK 18 years
Benchmark measurement Fraps / savegame
Test area Kyrat International Airport
Runtime benchmark 10 seconds
Benchmark settings maximum levels of detail
HT4U-Test Order from Amazon

In-game test scene

Far Cry 4 - 1920x1080 [No AA / 16xAF]

NVIDIA GeForce GTX 980 Ti [Max]                    109.35 FPS
NVIDIA GeForce GTX Titan X [typical boost clock]   108.84 FPS
NVIDIA GeForce GTX 980 Ti [typical boost clock]    104.53 FPS
NVIDIA GeForce GTX 980 [typical boost clock]        84.60 FPS
AMD Radeon R9 290X                                  79.98 FPS
NVIDIA GeForce GTX 780 Ti                           76.50 FPS

Far Cry 4 - 2560x1440 [No AA / 16xAF]

NVIDIA GeForce GTX 980 Ti [Max]                     74.24 FPS
NVIDIA GeForce GTX Titan X [typical boost clock]    73.78 FPS
NVIDIA GeForce GTX 980 Ti [typical boost clock]     71.68 FPS
NVIDIA GeForce GTX 980 [typical boost clock]        56.96 FPS
AMD Radeon R9 290X                                  56.54 FPS
NVIDIA GeForce GTX 780 Ti                           52.38 FPS

Far Cry 4 - 3840x2160 [No AA / 16xAF]

NVIDIA GeForce GTX Titan X [typical boost clock]    38.88 FPS
NVIDIA GeForce GTX 980 Ti [Max]                     38.66 FPS
NVIDIA GeForce GTX 980 Ti [typical boost clock]     36.17 FPS
AMD Radeon R9 290X                                  31.27 FPS
NVIDIA GeForce GTX 980 [typical boost clock]        29.79 FPS
NVIDIA GeForce GTX 780 Ti                           27.22 FPS

Far Cry 4 - 1920x1080 [4xMSAA / 16xAF]

NVIDIA GeForce GTX 980 Ti [Max]                     71.86 FPS
NVIDIA GeForce GTX Titan X [typical boost clock]    70.76 FPS
NVIDIA GeForce GTX 980 Ti [typical boost clock]     68.04 FPS
NVIDIA GeForce GTX 980 [typical boost clock]        53.81 FPS
AMD Radeon R9 290X                                  51.42 FPS
NVIDIA GeForce GTX 780 Ti                           44.84 FPS

Far Cry 4 - 2560x1440 [4xMSAA / 16xAF]

NVIDIA GeForce GTX 980 Ti [Max]                     46.98 FPS
NVIDIA GeForce GTX Titan X [typical boost clock]    46.84 FPS
NVIDIA GeForce GTX 980 Ti [typical boost clock]     44.38 FPS
NVIDIA GeForce GTX 980 [typical boost clock]        35.21 FPS
AMD Radeon R9 290X                                  34.65 FPS
NVIDIA GeForce GTX 780 Ti                           31.10 FPS

Far Cry 4 - 3840x2160 [4xMSAA / 16xAF]

NVIDIA GeForce GTX Titan X [typical boost clock]    23.92 FPS
NVIDIA GeForce GTX 980 Ti [Max]                     23.85 FPS
NVIDIA GeForce GTX 980 Ti [typical boost clock]     22.84 FPS
AMD Radeon R9 290X                                  17.74 FPS
NVIDIA GeForce GTX 980 [typical boost clock]        17.74 FPS
NVIDIA GeForce GTX 780 Ti                           13.48 FPS

Metro: Last Light Redux

Game: Metro: Last Light Redux
Developer: 4A Games
Publisher: Deep Silver
Release date: August 29, 2014
Genre: First-person shooter
Graphics engine: 4A Engine
DirectX path: DirectX 10 and 11
Age rating: USK 18
Benchmark measurement: Fraps / savegame
Test scene: Chapter "Train to the Past"
Benchmark runtime: 10 seconds
Benchmark settings: System settings: Very high / Tessellation: High

In-game test scene

Metro: Last Light Redux - 1920x1080 [No AA / 16xAF]

NVIDIA GeForce GTX Titan X [typical boost clock]   124.53 FPS
NVIDIA GeForce GTX 980 Ti [Max]                    121.69 FPS
NVIDIA GeForce GTX 980 Ti [typical boost clock]    111.79 FPS
NVIDIA GeForce GTX 980 [typical boost clock]       101.47 FPS
NVIDIA GeForce GTX 780 Ti                           90.49 FPS
AMD Radeon R9 290X                                  61.89 FPS

Metro: Last Light Redux - 2560x1440 [No AA / 16xAF]

NVIDIA GeForce GTX Titan X [typical boost clock]    83.56 FPS
NVIDIA GeForce GTX 980 Ti [Max]                     79.64 FPS
NVIDIA GeForce GTX 980 Ti [typical boost clock]     76.73 FPS
NVIDIA GeForce GTX 980 [typical boost clock]        65.17 FPS
NVIDIA GeForce GTX 780 Ti                           60.81 FPS
AMD Radeon R9 290X                                  44.04 FPS

Metro: Last Light Redux - 3840x2160 [No AA / 16xAF]

NVIDIA GeForce GTX Titan X [typical boost clock]    41.09 FPS
NVIDIA GeForce GTX 980 Ti [Max]                     39.41 FPS
NVIDIA GeForce GTX 980 Ti [typical boost clock]     35.92 FPS
NVIDIA GeForce GTX 980 [typical boost clock]        30.59 FPS
NVIDIA GeForce GTX 780 Ti                           30.44 FPS
AMD Radeon R9 290X                                  23.64 FPS

Metro: Last Light Redux - 1920x1080 [2xSSAA / 16xAF]

NVIDIA GeForce GTX Titan X [typical boost clock]    76.30 FPS
NVIDIA GeForce GTX 980 Ti [Max]                     74.23 FPS
NVIDIA GeForce GTX 980 Ti [typical boost clock]     70.75 FPS
NVIDIA GeForce GTX 980 [typical boost clock]        57.68 FPS
NVIDIA GeForce GTX 780 Ti                           56.14 FPS
AMD Radeon R9 290X                                  41.69 FPS

Metro: Last Light Redux - 2560x1440 [2xSSAA / 16xAF]
(relative performance, GTX Titan X = 100%)

NVIDIA GeForce GTX Titan X [typical boost clock]     100%
NVIDIA GeForce GTX 980 Ti [Max]                       96%
NVIDIA GeForce GTX 980 Ti [typical boost clock]       91%
NVIDIA GeForce GTX 980 [typical boost clock]          74%
NVIDIA GeForce GTX 780 Ti                             73%
AMD Radeon R9 290X                                    57%

Metro: Last Light Redux - 3840x2160 [2xSSAA / 16xAF]

NVIDIA GeForce GTX Titan X [typical boost clock]    20.72 FPS
NVIDIA GeForce GTX 980 Ti [Max]                     18.86 FPS
NVIDIA GeForce GTX 780 Ti                           15.86 FPS
NVIDIA GeForce GTX 980 Ti [typical boost clock]     15.45 FPS
NVIDIA GeForce GTX 980 [typical boost clock]        15.02 FPS
AMD Radeon R9 290X                                  12.11 FPS

Ryse: Son of Rome

Game: Ryse: Son of Rome
Developer: Crytek
Publisher: Deep Silver
Release date: October 10, 2014
Genre: Action-adventure
Graphics engine: CryENGINE 3
DirectX path / API: DirectX 11
Age rating: USK 18
Benchmark measurement: Fraps / savegame
Test scene: Chapter 4
Benchmark runtime: 10 seconds
Benchmark settings: Default setting: High

In-game test scene

Ryse: Son of Rome - 1920x1080 [No AA / 16xAF]
(relative performance, GTX Titan X = 100%)

NVIDIA GeForce GTX Titan X [typical boost clock]     100%
NVIDIA GeForce GTX 980 Ti [Max]                      100%
NVIDIA GeForce GTX 980 Ti [typical boost clock]       97%
NVIDIA GeForce GTX 980 [typical boost clock]          84%
NVIDIA GeForce GTX 780 Ti                             75%
AMD Radeon R9 290X                                    61%

Ryse: Son of Rome - 2560x1440 [No AA / 16xAF]

NVIDIA GeForce GTX 980 Ti [Max]                     73.81 FPS
NVIDIA GeForce GTX Titan X [typical boost clock]    73.63 FPS
NVIDIA GeForce GTX 980 Ti [typical boost clock]     65.75 FPS
NVIDIA GeForce GTX 980 [typical boost clock]        54.94 FPS
AMD Radeon R9 290X                                  52.35 FPS
NVIDIA GeForce GTX 780 Ti                           49.90 FPS

Ryse: Son of Rome - 3840x2160 [No AA / 16xAF]

NVIDIA GeForce GTX Titan X [typical boost clock]    35.13 FPS
NVIDIA GeForce GTX 980 Ti [Max]                     33.33 FPS
NVIDIA GeForce GTX 980 Ti [typical boost clock]     30.74 FPS
NVIDIA GeForce GTX 980 [typical boost clock]        26.58 FPS
AMD Radeon R9 290X                                  25.50 FPS
NVIDIA GeForce GTX 780 Ti                           24.26 FPS

Thief (2014)

Game: Thief (2014)
Developer: Eidos
Publisher: Square Enix
Release date: February 2014
Genre: Action-adventure / stealth
Age rating: 16 years
Graphics engine: Unreal Engine 3
DirectX path: DirectX 9 / DirectX 11
Benchmark measurement: Fraps / savegame
Test scene: Stone Market
Benchmark runtime: 10 seconds
Benchmark settings: Highest levels of detail

In-game test scene

Thief