The pricing structure that NVIDIA creates from today

Intro
NVIDIA's GeForce GTX Titan X is currently the spearhead among graphics cards, but with a purchase price of well over 1.000 euros there is a huge gap down to the next NVIDIA model, the GeForce GTX 980. The manufacturer closes this gap today with the presentation of the GeForce GTX 980 Ti and, in terms of price, positions itself for the upcoming new AMD Radeon graphics card with Fiji chip and HBM memory technology. Our test clarifies where the GTX 980 Ti stands.
In our review, we described the NVIDIA GeForce GTX Titan X as the first single-GPU graphics card that is generally able to drive current 4K monitors in games without having to accept compromises in the quality settings. But the prices for such graphics cards are enormous and, due to the poor euro exchange rate, sometimes sit around 1.200 euros.
The next smaller NVIDIA graphics card so far - the GeForce GTX 980 - is around 25 to 40 percent slower than the GTX Titan X and mostly not able to deliver 4K resolutions at full details in modern games. On the other hand, this performance class is still reasonably affordable at around 500 euros. The price gap between the two models is enormous, however, and is to be filled today.
The NVIDIA GeForce GTX 980 Ti is the latest addition to the GeForce series and goes on sale today. In terms of price it is clearly cheaper than the Titan X, but it also comes with less power. Today's article shows exactly where its performance levels off - and of course also what power consumption and what background noise the interested customer can expect.
But that is not the only goal NVIDIA is pursuing, because in a few weeks competitor AMD wants to present its new Fiji GPU, and it is said to land in the price range around 700 euros. An assessment of its performance is not yet possible, but speculation points to an attack on GTX Titan X performance. At least in the expected price range, NVIDIA now has to offer a suitable counter.
One last word before we start: we took this test as an opportunity to revise our test station. There were slight modifications to the CPU, memory and hard drive. Of course, we also brought the software up to date and radically revised the benchmarks. All of this means that today's results can no longer be compared with previous measurements. We ask you to take this into account. As usual, details can be found in the extensive test environment.
Bookmark:
- NVIDIA GeForce GTX Titan X
- NVIDIA GeForce GTX 970 and 980
- NVIDIA GeForce GTX 960
- ASUS GTX 980 STRIX
- EVGA GeForce GTX 980 Superclocked ACX 2.0
- ASUS GeForce GTX 970 Strix
- Inno3D GTX 970 iChill Herculez X2
- MSI GeForce GTX 970 Gaming 4G
- NVIDIA GPU Boost 2.0
- NVIDIA GeForce GTX Titan Black
- NVIDIA GeForce GTX 780 Ti
- NVIDIA GeForce GTX Titan
- NVIDIA GeForce GTX 780
- NVIDIA GeForce GTX 770
- NVIDIA GeForce GTX 760
- NVIDIA GeForce GTX 750 Ti (Maxwell architecture)
- Little Kepler: EVGA GeForce GTX 650 SC and GeForce GT 640
- NVIDIA GeForce GTX 650 Ti
- NVIDIA GeForce GTX 650 Ti Boost
- NVIDIA GeForce GTX 660 - EVGA and MSI in the test
- NVIDIA GeForce GTX 660 Ti technology
- ASUS GeForce GTX 660 Ti DirectCU II TOP
- EVGA GTX 660 Ti with 2 and 3 GB memory (reference view)
- MSI N660 Ti Power Edition
- NVIDIA GeForce GTX 600 technology
- NVIDIA GPU Boost disenchanted
- NVIDIA GeForce GTX 680
- NVIDIA GeForce GTX 670
- NVIDIA GeForce GTX 690 (Dual GPU)
Test environment
Hardware: graphics cards
The test candidate
- NVIDIA GeForce GTX 980 Ti
- AMD Radeon R9 295X2 (HT4U-Test / Amazon offers)
- AMD Radeon R9 290X (HT4U-Test / Amazon offers)
- NVIDIA GeForce GTX Titan X (HT4U-Test / Amazon offers)
- NVIDIA GeForce GTX 980 (HT4U-Test / Amazon offers)
- Inno3D GeForce GTX 970 Herculez X2 (HT4U-Test / Amazon offers)
- NVIDIA GeForce GTX 780 Ti (HT4U-Test / Amazon offers)
- NVIDIA GeForce GTX Titan (HT4U-Test / Amazon offers)
Monitor resolutions and boost clock rates
Resolutions
We currently test at the resolutions 1.680 x 1.050, 1.920 x 1.080, 2.560 x 1.440 and - for the top group - 3.840 x 2.160. While the first of these is still very widespread, 1.920 x 1.080 pixels is currently on the way to permanently replacing the lower resolution. The resolution of 2.560 x 1.440 pixels is so far used mainly by enthusiasts, as corresponding monitors are still quite expensive. Screens with 4K resolution, on the other hand, are slowly becoming affordable, but they are still far from mainstream.
The resolution has a direct effect on performance: the higher the resolution, the fewer frames per second the graphics cards can deliver, and of course some of the cards listed above are simply not able to run games smoothly at the highest resolutions. The small calculation below illustrates how quickly the pixel count per frame grows.
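As a rough illustration (our own calculation, not a benchmark), the pixel counts of the tested resolutions relative to 1.680 x 1.050:

```python
# Pixels per frame of the tested resolutions, relative to 1.680 x 1.050.
# Purely illustrative arithmetic; frame rates do not scale exactly linearly
# with pixel count, but the trend explains the grouping below.
resolutions = {
    "1.680 x 1.050": (1680, 1050),
    "1.920 x 1.080": (1920, 1080),
    "2.560 x 1.440": (2560, 1440),
    "3.840 x 2.160": (3840, 2160),
}

base = 1680 * 1050
for name, (w, h) in resolutions.items():
    print(f"{name}: {w * h:>9} pixels ({w * h / base:.1f}x)")
# 3.840 x 2.160 pushes roughly 4,7 times as many pixels per frame as 1.680 x 1.050
```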
We have therefore divided the test candidates into the following groups:
- Ultra High Quality (up to 3840 x 2160)
- High Quality (up to 2560 x 1440)
- Quality (up to 1920 x 1080)
- Low Quality (up to 1680 x 1050)
Only in the Ultra High Quality and High Quality groups do we also allow runs with supersampling and/or eightfold anti-aliasing in the quality settings; these are mostly omitted in the smaller groups, with a few exceptions.
The Ultra High Quality group, however, contains only absolute high-end graphics cards. So far, this segment has primarily been reserved for dual-GPU solutions.
4K resolution and anti-aliasing
At some point, someone in the press probably claimed that at an Ultra HD resolution of 3.840 x 2.160 pixels anti-aliasing is no longer necessary. This has since established itself in the minds of many users as fact. As a generalization, it is simply incorrect.
The much higher pixel density of 4K does indeed produce a noticeably sharper image, but it only eliminates unattractive staircase artifacts at edges in some cases. In some games the resolution really does make normal multi-sample anti-aliasing unnecessary - in others, unfortunately, not at all.
If stair-stepping or flickering edges remain, image purists in particular will not want to live with that and will try every adjustment the game offers to eliminate it. This is exactly why we keep anti-aliasing enabled at 4K resolution for the applications that we benchmark with multi-sampling AA. Admittedly, the 8x MSAA benchmarks could arguably be dropped - but the results are there after the runs anyway, so we simply report them for the sake of completeness.
So far we have always spoken of Full HD, which means a display resolution of 1.920 x 1.080 pixels. 4K or Ultra HD gets its name from the monitor's horizontal resolution of almost 4.000 pixels. An Ultra HD monitor actually displays 3.840 x 2.160 pixels - the 4.000 horizontal pixels are therefore rounded up slightly.
While the technology is still quite new and has so far usually been launched with IPS displays, a few manufacturers in the PC sector are now following with the cheaper TN panels, making this technology more affordable. However, some of the offers have their pitfalls. We very quickly discarded our Dell P2815Q* again, as it only allowed 30 Hz operation, which can quickly lead to fatigue in daily work. It was followed by the Samsung U28D59P*, which guarantees 60 Hz operation via a DisplayPort connection.
In addition, all common smaller resolutions are supported, which seemed ideal for our test purposes. Due to the panel used, this monitor (and others) can hardly be used by professional users in the graphics sector. The viewing angle, but especially the color fidelity, leaves a lot to be desired for this area.
In the TV sector there are some expensive offers that rely on 4K, but so far there has been no suitable image material on DVD or Blu-ray disc, let alone suitable devices among the players. Some boast upscale features, but that's just a consolation. In the PC area, the whole thing looks a little different. The 4K resolution brings - if the image material supports it - a significantly sharper image.
However, this resolution on the PC - at least in games - has the unpleasant side effect that a really powerful graphics card is required. In our test runs we found that even high-end single-GPU graphics cards like the Radeon R9 290X or GeForce GTX 780 Ti are, in principle, overwhelmed if you want to play top titles at maximum detail level with anti-aliasing.
And that is where the crux lies: either cut corners despite the expensive graphics card, or rely on a dual-GPU team that can clear the hurdles. The current status quo is that 4K monitors, like dual-GPU graphics cards, fall into the absolute high-end segment, where they have their place but also struggle with certain weaknesses.
GPU clock
We have already gone into the GPU boost gimmicks, which appear more and more often and distort benchmark results, often enough. We normally counteract this by intervening in the driver. With NVIDIA graphics cards we usually show only the performance at the typical boost clock as specified by the manufacturer. In some cases even that is too high - measured against a reference graphics card. But such window dressing can now also be found on AMD graphics cards, which is why we have to intervene there too. We note the clock rates separately in the benchmark diagrams.
Hardware: test system
Closed housing
A closed computer case is not representative, and we will go into this again in the following lines. In some cases, however, it is essential to be able to judge certain things. And these cases were almost exclusively triggered by new technologies such as Boost 2.0 from NVIDIA or AMD's new edition of PowerTune.
That is why we carried out additional measurements in a closed case for this test. We decided on a gaming case from Cooler Master, namely the CM Storm Enforcer. In our test, the Enforcer's biggest drawback was its noise. We therefore replaced the two rear fans (one at the back, one in the lid) with Silent Wings fans from be quiet! and connected them, together with the 200 mm fan in the front, to a fan controller running at the lowest setting.
The case fans, including the CPU cooler, work whisper-quiet, and it is into this environment that we place our test candidates. One can object to this as much as one likes; in the end, perceived noise remains subjective. The environment we have chosen can be accepted as whisper-quiet.
In addition, we attached two fast-reacting temperature sensors. The first sensor sits in front of the case at the height of the front fan and monitors the room temperature being drawn in. The second sensor was attached directly below the graphics card fan to monitor the fan's intake temperature.
The measurements in the housing are made at the usual 21 ° C room temperature.
Typical test station
Here, too, we would like to add a few words to the following lists. On the Intel Core i7-4820K processor we deliberately deactivated the dynamic turbo function as well as Hyper-Threading. This is hardly realistic for home use, but it allows us to rule out possible sources of error in the tests. In our cases, the CPU and its clock rate usually play only a very subordinate role, since the selected game scenes are heavily GPU-limited and the processor is therefore rarely fully loaded. A smaller cooler model from Scythe is therefore sufficient, as more is practically never required. In our tests, the processor's fan works practically inaudibly.
A word also on our open test bench. Since there is practically no PC case that could be representative of the home user in any way, we rely on an open test bench. Depending on the case used at home, this can be an advantage or a disadvantage. With well-thought-out case ventilation, some graphics card coolers should fare better in terms of noise; in average setups they will probably be on the level of the open test bench, and in poorly ventilated cases at a clear disadvantage. All of that, in turn, depends on many factors, which is why we see our test bench as a sensible and reproducible approach. The aforementioned exception naturally applies in the special cases that we know how to weigh.
Test station:
Attentive observers will notice that there have been slight changes to the test station. On the one hand, we gave the system a CPU upgrade: the Intel Core i7-3820 had to give way to an i7-4820K. We have also fixed its clock via the turbo multiplier so that all four cores always run at 3,9 GHz.
- CPU:Intel Core i7-4820K @ 4 x 3,9 GHz (Turbo / HT: off) [Find it on Amazon]
- Motherboard: ASUS P9X79 Pro (X79 chipset) - BIOS: 4801 07-2014 [Find it on Amazon]
- Memory:16 GByte (4 x 4 GB) Kingston HyperX-Beast - SPD operation: DDR3-1600 9-9-9-24-1T at 1,5 volts [Find it on Amazon]
- Cooler: Scythe Samurai ZZ Rev B LGA2011 [Find it on Amazon]
Of course, we also had to update the BIOS of our ASUS mainboard to the latest version, and we said goodbye to 8 GB of main memory and now rely on a 16 GB kit from Kingston's HyperX Beast series.
One last change in the test system took place on the hard drive. A 2 TB model from Western Digital from the Enterprise series is currently used here. And of course all of these changes also have the result that previous measurement results can no longer be compared with the current results. We ask you to take this into account.
Other hardware:
- Western Digital WD2003FYYS RE4 2TB [Find it on Amazon]
- LG GSW H20L (Blu-ray / DVD burner) [Find it on Amazon]
- be quiet! Dark Power Pro 950 Watts [Find it on Amazon]
- G.Skill 100 GB SSD as cache drive [Find it on Amazon]
- Teac Floppy Drive / USB Floppy Drive [Find it on Amazon]
- Dell 27 inch monitor [Find it on Amazon]
- Samsung U28590P 4K Monitor [Find it on Amazon*]
- Lian Li T60 (open test stand) [Find it on Amazon]
Hardware: measuring devices
We like to use high-quality measuring devices in our tests. Volume measuring stations, thermographic cameras, infrared thermometers, clamp ammeters or simply voltage measuring devices (voltmeters) are used.
Depending on the area and purpose, we sometimes rely on well-known manufacturers such as Fluke or Tenma, in other cases on Conrad's own Voltcraft brand. When it comes to noise emissions, we use special equipment from ulteaudiotechnik, which enables us to carry out sone measurements in addition to dB(A). Further details on the measurement technology we use can be found here.
- DAAS USB
- Tenma 72-2065A (temperature meter)
- Voltcraft DT2L / K (tachometer)
- Voltcraft MS-9160 measuring station
- Tenma 72-6185 (clamp ammeter)
- Thermal imaging camera PCE-TC 3
- HT4U-GPU Power Estimator (in-house development to measure the power consumption of graphics cards)
Software: driver
- Windows 7 64 bit, including all updates up to April 2015
- Intel chipset driver 10.0.27
- DirectX 9.0c (June 2010 Update)
- Intel LAN Driver V. 16.6.0.0
- Audio driver: Realtek (Windows 7 integrated)
- Marvell SATA 6GB / s V. 1.2.0.1014
- ASMedia USB 3.0 V1.14.3.0
- ASUS AI Center II driver for Marvell caching function
Graphics card driver
- AMD Catalyst 15.5 Beta
- NVIDIA GeForce Driver Version 352.90 Beta
Software: testing philosophy
Of course we revise our test course here and there: new game titles are added and some benchmarks are dropped. Today's test has the special feature that we have included a whole batch of new titles in the course of our validations, and of course older representatives had to go. We are still working on The Witcher 3: Wild Hunt - our evaluation is not yet complete, and rather than show something half-baked, we prefer to wait a little longer.
When selecting titles, it is one of our ambitions to offer a healthy mix of DirectX 9, DirectX 10 and DirectX 11 titles as well as OpenGL, covering different game genres and engines. However, the past 30 months have shown us more than clearly that hardly any DirectX 9 titles are being released any more, and the interesting new OpenGL titles can be counted on one hand. In addition, DirectX 12 is now knocking at the door. As a result, the most recent selection of new titles consists exclusively of DirectX 11 titles.
What remains to be said is that one can argue as much as one likes: no benchmark course is consistently fair. There are far too many applications on the market that favor one side or the other. And if we followed AMD's or NVIDIA's recommendations in the selection, one or the other product from the respective manufacturer would always win every test.
This means that the status quo remains that we derive our conclusions and findings from the applications that we consulted in these tests.
Software: the benchmarks
Game benchmarks
A look at the list of new benchmarks quickly shows that some things have changed and some have stayed the same - but only at first glance, because we have also made significant changes to some of the older titles.
One example: we simply did not want to part with The Elder Scrolls V: Skyrim. Why? It continues to have a large following and continues to be played, but with modifications. We therefore decided to install a number of modifications and keep using TES V. The results shown thus no longer have anything to do with the unmodified game.
We also did not want to part with Crysis 3 or Tomb Raider - two former top titles that people still like to dig out and play, and whose engines can still be described as demanding. For Crysis 3, as of today we have not only changed the in-game test scene, we have also set the details to ultra-high. For Tomb Raider, only the test sequence changed.
The following games were brought up to date in May 2015!
- CE (DX 11 - savegame)
- Alien: isolation (DX 11 savegame)
- Assassin's Creed Unity (DX 11 - Savegame)
- Battlefield 4 (DX 11 - savegame)
- BioShock Infinite (DX 11 - savegame)
- BRINK V. 1.023692.48133 (OpenGL - Savegame)
- Call of Duty: Advanced Warfare (DX11 - Savegame)
- Crysis 3 (DX 11 - savegame)
- Dying Light (DX 11 - savegame)
- Far Cry 4 (DX 11 - savegame)
- Grand Theft Auto V - GTA V (DX 11 - savegame)
- Metro: Last Light Redux (DX 11 - savegame)
- Ryse: Son of Rome (DX 11 - savegame)
- The Elder Scrolls V: Skyrim including various mods (DX 9 - savegame)
- Thief (DX 11 - savegame)
- The Witcher 2: The Assassins of Kings V. 1.35 (DX 9 - savegame)
- Tomb Raider (DX 11 - savegame)
- Wolfenstein: The New Order (OpenGL - savegame)
This means we are faced with 18 gaming benchmarks again that we have to master, and if nothing goes wrong, we need around five hours for a high-end graphics card for this test course.
Our averages are formed only over the selected applications and the scenes used in them. We try to ensure that the chosen scene reflects what the game entails. If we encounter worst-case scenarios, we prefer to pick such a scene, because these are what can decisively disrupt the flow of a game.
Why is XYZ missing?
Why is Battlefield Hardline missing as the successor to Battlefield 4? Why didn't Lords of the Fallen make it into the test? Why is Middle-earth: Shadow of Mordor not on the course?
There are certain factors that prevented this. Battlefield Hardline comes with a new protection mechanism: the graphics card can only be changed four times within a 24-hour period before the game threatens a ban. We reported on this and were in contact with Electronic Arts, but we were simply referred to end-customer support - and we cannot work that way. Battlefield 4 therefore remains on the benchmark course as a representative of the Frostbite 3 engine, which Hardline also uses.
Lords of the Fallen - honored several times as the best German game - ate our savegames during testing, and frankly: it was not good enough that we wanted to play through it again. Shadow of Mordor has the problem that every benchmark position has to be played manually each time, which would not only be far too tedious for us but also too imprecise for our measurements, because there is no way to return to exactly the same position with the same viewing direction and so on.
Racing simulations: here we are currently very disappointed and surprised at the same time. We wanted to replace DiRT Showdown with a Codemasters successor, but the manufacturer apparently did not want to exert itself and presents hardly any visual improvements - the hardware requirements remained the same. Assetto Corsa or Project CARS would be extremely interesting in terms of their requirements, but unfortunately offer no options for comparability and no reproducible internal benchmark. Our secret favorite would actually have been Ubisoft's The Crew, as the title is not only fun but also offers plenty of graphics options. After months of testing, however, it was ruled out for similar reasons, above all because The Crew can only be played in a permanent online state. Since our test station has to remain at the same patch level so that results stay comparable, and since we have to forgo anti-virus programs and similar protective measures for the same reason, this title is unfortunately also out. So at this point we are waiting for a possible favorite that is not only fun and easy on the eye, but also allows permanently reproducible results.
GPU Computing Benchmarks
It is a little sad to see that recent GPGPU implementations in modern applications do not fully exploit the performance of graphics cards. Applications like Adobe Photoshop or GIMP - very popular graphics programs - do rely on GPGPU acceleration, but it is not implemented consistently. Ultimately, this is also why some more powerful graphics cards cannot stand out in such a comparison. At the same time, it is the reason why there are so few applications in this test area, and often rather exotic ones. The software industry has not yet recognized the potential of graphics cards as computing units - or the lobby from the CPU camp is too strong.
And once again, all of this is the motivation for parting with another representative. After we had to drop OCL-Hashcat as a hobby project, we are now also saying goodbye (at least temporarily) to the CL benchmark, which suddenly can no longer be used in its previous version and points to a newer variant. The results can no longer be compared at all. The CL benchmark will probably be the next construction site of our work, to be included again in a new version. Unfortunately, this was not possible for today's test.
Further software in the test:
- Tom Clancy's HAWX (power consumption: games)
- Furmark 1.6.5 (power consumption: simulated full load)
- PowerDVD 9 Ultra V. 9.0.4105.51 (power consumption: Blu-ray playback)
- MSI Afterburner
NVIDIA GeForce GTX 980 Ti
Technical consideration
Key data | GeForce GTX Titan X | GeForce GTX 980 Ti | GeForce GTX 980 | GeForce GTX 780 Ti | GeForce GTX Titan |
Codename | GM200 (Maxwell) | GM200 (Maxwell) | GM204 (Maxwell) | GK110 (Kepler) | GK110 (Kepler) |
Production | 28 nm | 28 nm | 28 nm | 28 nm | 28 nm |
Transistors | 8 Billion | 8 Billion | 5,2 Billion | 7,1 Billion | 7,1 Billion |
Chip clock rate (base) | 1.000 MHz | 1.000 MHz | 1.126 MHz | 875 MHz | 837 MHz |
Chip clock rate (averaged boost) | 1.075 MHz | 1.075 MHz | 1.216 MHz | 928 MHz | 875 MHz |
Memory clock rate (MHz) | 1.752 MHz | 1.752 MHz | 1.752 MHz | 1.752 MHz | 1.502 MHz |
Memory clock rate (Mbps) | 7.000 Mbps | 7.000 Mbps | 7.000 Mbps | 7.010 Mbps | 6.008 Mbps |
Storage type | GDDR5 | GDDR5 | GDDR5 | GDDR5 | GDDR5 |
Typical memory size | 12.288 MB | 6.144 MB | 4.096 MB | 3.072 MB | 6.144 MB |
memory interface | 384 bit | 384 bit | 256 bit | 384 bit | 384 bit |
Shader arithmetic units | 3.072 | 2.816 | 2.048 | 2.880 | 2.688 |
Command architecture | Scalar | Scalar | Scalar | Scalar | Scalar |
Skills per shader unit | MADD | MADD | MADD | MADD | MADD |
Double Precision Support | Yes - 1/32 SP performance | Yes - 1/32 SP performance | Yes - 1/32 SP performance | Yes - 1/24 SP performance | Yes - 1/3 SP performance |
Texture Units (TMUs) | 192 | 176 | 128 | 240 | 224 |
Raster Operation Units (ROP) | 96 | 96 | 64 | 48 | 48 |
Shader model version | 5.0 | 5.0 | 5.0 | 5.0 | 5.0 |
DirectX version | DirectX 11 | DirectX 11 | DirectX 11 | DirectX 11 | DirectX 11 |
Audio controller | 7.1 (HD bitstream) | 7.1 (HD bitstream) | 7.1 (HD bitstream) | 7.1 (HD bitstream) | 7.1 (HD bitstream) |
Video processor | VP5 | VP5 | VP5 | VP5 | VP5 |
Typical power consumption (manufacturer information) | ? | ? | 165 W | ? | ? |
Maximum power consumption (manufacturer information) | 250 W | 250 W | 180 W | 250 W | 250 W |
The GeForce GTX 980 Ti is based on the GM200 chip with its 8 billion transistors. NVIDIA first introduced the GM200 in April with the current graphics card flagship, the GeForce GTX Titan X.
NVIDIA uses the GM200 chip for the GeForce GTX 980 Ti as well, but compared to the GeForce GTX Titan X some functional units have been deactivated. This allows NVIDIA to grade the performance downwards and at the same time bring partially defective GM200 chips onto the market. Only the streaming multiprocessors (SMM) are affected by the deactivations on the GeForce GTX 980 Ti: while the GTX Titan X comes with 24 SMMs, the number of SMMs on the GeForce GTX 980 Ti has been reduced to 22.
Each SMM contains eight Vec16 arithmetic units and eight texturing units (TMU), which means the GeForce GTX 980 Ti with its 22 SMMs has a total of 176 TMUs and 176 Vec16 arithmetic units (corresponding to 2.816 CUDA cores [176 Vec16 units x 16 ALUs]). The memory interface (384 bit) and the ROPs (96) are not cut down. However, the GeForce GTX 980 Ti only carries half as many GDDR5 chips as the Titan X - the back of the PCB remains unpopulated - which reduces the memory capacity of the GTX 980 Ti to 6 GB, compared to the 12 GB of the GTX Titan X.
GeForce GTX Titan X | GeForce GTX 980 Ti | GeForce GTX 980 | GeForce GTX 780 Ti | GeForce GTX Titan | |
Computing Power - SP (MADD) | 6.144 GFLOPs | 5.632 GFLOPs | 4.612 GFLOPs | 5.040 GFLOPs | 4.500 GFLOPs |
Computing power - DP (MADD) | 192 GFLOPs | 176 GFLOPs | 144 GFLOPs | 210 GFLOPs | 1.500 GFLOPs |
Texturing performance (INT8 bilinear) | 192,0 GTex / s | 176,0 GTex / s | 144,1 GTex / s | 210,0 GTex / s | 187,5 GTex / s |
Pixel fill rate | 96,0 GPix / s | 96,0 GPix / s | 72,1 GPix / s | 42,0 GPix / s | 40,2 GPix / s |
Memory Bandwidth | 336,0 GB / s | 336,0 GB / s | 224,0 GB / s | 336,4 GB / s | 288,4 GB / s |
Since the GTX 980 Ti starts with the same clock rates as the GeForce GTX Titan X, there are no differences in the theoretical key data in the memory bandwidth and fill rate. Due to the two deactivated SMMs, the GTX 980 Ti is around 8 percent behind the flagship GTX Titan X in terms of computing and texturing performance.
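The figures in the table can be re-derived from the unit counts and clock rates with the usual formulas. A minimal sketch (our own illustration using the values above, not additional measurements):

```python
# Theoretical key data of the GTX 980 Ti, derived from the unit counts above.
# Standard textbook formulas, not measurements.

def sp_gflops(shader_units, clock_mhz):
    # one MADD per CUDA core and clock = 2 floating-point operations
    return shader_units * 2 * clock_mhz / 1000.0

def mem_bandwidth_gbs(bus_width_bits, data_rate_mbps):
    return bus_width_bits / 8 * data_rate_mbps / 1000.0

print(22 * 8 * 16)                    # 2816 CUDA cores from 22 SMMs
print(sp_gflops(2816, 1000))          # 5632.0 GFLOPs at the 1.000 MHz base clock
print(mem_bandwidth_gbs(384, 7000))   # 336.0 GB/s
print(round(1 - 2816 / 3072, 3))      # 0.083 -> roughly 8 percent behind the Titan X
```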
Clock rates and limitations
Boost and base clock
As is now common, NVIDIA specifies two clocks for the GPU: the base clock and the averaged boost clock. The base clock of the GeForce GTX 980 Ti is in the same range as that of the Titan X, namely 1.000 MHz, and should not be undercut in any application. The boost clock specification of 1.075 MHz is also identical to the Titan X; our sample managed at best 1.215 MHz, and 1.240 MHz with additional voltage.
NVIDIA understands the averaged boost clock merely as a clock rate that all graphics cards of this series on the market should typically reach - it is by no means a guarantee that this clock will always be applied. Only the base clock represents a guaranteed clock.
With the GTX 970 and GTX 980 we encountered for the first time cases in which this clock was not maintained; with the Titan X we saw it again, but only under Furmark. We could not reproduce this in today's test of the GTX 980 Ti: even under Furmark the base clock was not undercut. NVIDIA therefore seems to have changed something in the firmware - possibly the temperature limit.
Limitations
As previously known from Boost 2.0 at NVIDIA, there are two factors that can limit the speed of the GPU. This is on the one hand the set temperature limit of 83 ° C and on the other hand the power consumption of 250 watts. NVIDIA monitors the factors via chips on the board, and if the limits are reached, the driver intervenes and throttles the GPU clock and voltage.
Given the relatively high TDP of 250 watts, the power limit seldom intervenes in practice. Such peaks usually occur at the beginning of a benchmark that is demanding in terms of power consumption (e.g. Anno 2070); in that case the maximum clock of 1.215 MHz is reduced immediately. If the temperature limit of 83 °C is reached and the fan cannot hold the temperature within the noise levels NVIDIA considers acceptable, the GPU clocks down further until this requirement is also met.
Unusual: Furmark quickly pushed the card to 87 °C; given time to work, the fan managed to bring it down to 85 °C. Although we did not touch any options, the temperature did not drop to 83 °C and the clock rate stayed at 999 MHz. But let us leave Furmark aside, because worst-case behavior also exists in games.
The above recordings serve as an example of a 30-minute game in the named titles (in a closed case - see test environment).
With Crysis 3 we saw the usual behavior. After just five minutes the clock rates settled at 1.075 MHz and fell, in heated battles and depending on the scene, to 1.050 MHz. Nothing about that changed after 20 minutes.
Dying Light is obviously an absolute challenge for NVIDIA's technology. Immediately after loading the level and entering the game, the boost collapsed completely. It initially settled at the base clock of 999 MHz, then worked at 1.075 MHz for a short time, only to drop off again immediately. In the selected level and scenes we mostly saw clock rates of around 1.025 MHz over the course of 30 minutes. Only when there were no action-heavy scenes (on rooftops, for example) did the clock rates recover somewhat - and even then 1.100 MHz was a rarity.
Playing Tomb Raider also quickly made the GTX 980 Ti sweat. After five minutes in the game at the latest, the fun was over and the clock bounced around in the range of 1.063 to 1.088 MHz. In the worst case it was 1.050 MHz, and when things went well a clock of 1.100 MHz flashed up for a moment.
Manual options
And again, of course, the user has the option of using tools to loosen the limitations set by NVIDIA to a certain extent. The manufacturer approves this, and the board partners have so far also approved it to the same extent.
As with the Titan X, the temperature limit can be raised from 83 to 91 °C, and the power consumption limit can be increased by 10 percent, landing at 275 watts. That may sound like a lot, but in practice it is not. For enthusiasts and tweakers willing to pay such a high price, these options - especially the power limit - are a bad joke, because simple overclocking alone can push the GTX 980 Ti into regions of 275 watts without even touching the voltage.
Turning the voltage screw (limited, as usual, to a maximum of +87 mV) is not warranted by NVIDIA or the board partners and is done at the user's own risk.
Memory usage in games
Memory capacity is currently a big marketing topic and is often brought up in the context of 4K resolutions. And of course, higher resolutions do need more memory. But the question is always how the game developer compensates when memory runs short.
Game | Resolution | Memory allocation [MByte] |
CE | 3.840 x 2.160 | 1.000 |
Assassin's Creed Unity | 3.840 x 2.160 | 3.750 |
Assassin's Creed IV: Black Flag | 3.840 x 2.160 | 1.800 |
Battlefield 4 | 3.840 x 2.160 | 2.500 |
Brink | 3.840 x 2.160 | 900 |
Call of Duty: Advanced Warfare | 3.840 x 2.160 | 6.100 |
Call of Duty: Ghosts | 3.840 x 2.160 | 5.400 |
Crysis 3 | 3.840 x 2.160 | 2.900 |
Dying Light | 3.840 x 2.160 | 3.700 |
Far Cry 4 | 3.840 x 2.160 | 4.850 |
Hitman: Absolution | 3.840 x 2.160 | 3.500 |
Lords of the Fallen | 3.840 x 2.160 | 6.100
Metro Last Light | 3.840 x 2.160 | 2.100 |
Middle-earth: Shadow of Mordor | 3.840 x 2.160 | 3.900 |
Ryse: Son of Rome | 3.840 x 2.160 | 3.000 |
TES V: Skyrim | 3.840 x 2.160 | 2.600 |
Thief | 3.840 x 2.160 | 4.000 |
Tomb Raider | 3.840 x 2.160 | 2.700
We did not consult all of the games on today's benchmark course, but the selected 18 titles form a good basis for an overall impression. And outliers, which clearly take up more than 4 gigabytes in the respective scenes, are rare.
But even in these games you can still play without problems on a GeForce GTX 780 Ti with only 3 GB of memory - the game then simply manages with the available memory in a different way. Playing at 4K resolution depends far more on the raw performance of the graphics card itself, and here almost all single-GPU variants quickly reach their limits at the highest detail levels. If you turn the detail level down, the game's memory requirements usually drop very quickly as well.
In the end, the GeForce GTX 980 Ti is really well positioned with its 6 gigabytes of GDDR5 main memory. The 12 GByte of the GTX Titan X are currently to be seen as a marketing move and only bring anything to the end customer in very few cases.
The test candidate at a glance
Key data and scope of delivery
Key data / scope of delivery | NVIDIA GeForce GTX Titan X (reference) | NVIDIA GeForce GTX 980 Ti (Reference) | NVIDIA GeForce GTX 980 (reference) |
chipset | GM200 | GM200 | GM204 |
GPU clock rate (base) | 1.000 MHz | 1.000 MHz | 1.126 MHz |
GPU clock rate (Boost) | 1.075 MHz | 1.075 MHz | 1.216 MHz |
Clock rate memory | 1.750 MHz | 1.750 MHz | 1.750 MHz |
main memory | 12 GB GDDR5 | 6 GB GDDR5 | 4 GB GDDR5 |
Monitor outputs | 1 x DVI | 1 x DVI | 1 x DVI |
3 x DisplayPort | 3 x DisplayPort | 3 x DisplayPort | |
1 X HDMI (2.0) | 1 X HDMI (2.0) | 1 X HDMI (2.0) | |
Features | - | - | - |
Measurements and weight: | |||
Weight | 915 grams | 905 grams | 1.030 grams |
Length of PCB (including slot plate) | 26,8 cm | 26,8 cm | 26,8 cm |
Length of PCB (including cooler) | 26,8 cm | 26,8 cm | 26,8 cm |
PCB height (from slot plate) | 12,6 cm | 12,6 cm | 12,6 cm |
PCB height (incl. Cooler) | 12,6 cm | 12,6 cm | 12,6 cm |
- | - | - | - |
Scope of delivery hardware | - | - | - |
Scope of delivery software | - | - | - |
NVIDIA list price (as of June 01, 2015) | 999 US Dollars | 649 US Dollars | 499 US Dollars |
The Titan X and the 980 Ti weigh a little less than the GeForce GTX 980. This is solely due to the missing aluminum backplate, which was fairly pointless anyway: it had no contact with any components, so it was never meant as a cooling plate but merely as a visual gimmick - if anything, it would only have led to an increase in temperatures.
There is not much to say about the scope of delivery, as we are dealing with reference models. The game bundle with The Witcher 3: Wild Hunt and the upcoming Batman: Arkham Knight expires on June 1, 2015, and it is unclear whether NVIDIA will extend it. That would of course be a nice additional purchase incentive, but here we have to wait and see whether NVIDIA follows up.
With the arrival of the GTX 980 Ti, prices change only slightly. The list price of the new model is 649 US dollars, the Titan X stays at 999 US dollars, and the GTX 980 sits at 499 US dollars. With taxes, the GTX 980 Ti should cost around 749 euros in this country - high-end graphics cards have become expensive again, and this is not only due to the currently poor exchange rate, but also to NVIDIA's pricing policy of the past few years, and not least to end customers, who accept these prices. The 500-dollar mark is a thing of the past, and AMD's Fiji is also expected to land in the price range around 700 euros.
Impressions
We are keeping this chapter short because there is nothing new to report. NVIDIA continues to use its standard reference cooler, which has accompanied us since the first GTX Titan - here again in silver rather than black.
We can only partially share the high praise for this cooler. There is no question that NVIDIA probably created the first radial (blower-style) reference cooler that is not accompanied by quite so much noise. But anyone who perceives more than 30 dB(A) as quiet is simply not sensitive to background noise.
In addition, the design brings another problem: the encapsulated cooler is supposed to expel its heated air through the openings in the I/O shield, where the air outlet slots sit. Because of the wide range of monitor connectors, however, part of these openings is taken up, which means the warm air escapes more slowly and the fan has to spin faster.
Nonetheless, NVIDIA remains true to its concept here. As soon as the partners are allowed to use a different cooler (which is not always the case), separate designs from the board partners appear immediately. They are not always better, but in some cases they are. It is to be expected that NVIDIA will also allow custom designs for the GTX 980 Ti; after all, the first alternative coolers for the Titan X have now appeared. However, no Titan models sold with such coolers have been announced - NVIDIA apparently continues to issue bans here.
In terms of external power supply, the GTX 980 Ti is of course identical to the Titan X - after all, both have a TDP of 250 watts. Theoretically, both models could draw up to 300 watts through their connectors. That would certainly happen, too, but NVIDIA cleverly prevents it through its temperature and power target limits.
There are no changes to the voltage regulation either - we are dealing with a 1:1 copy of the Titan X design. The only change concerns the main memory, which now totals only 6 GB, which is why there are no memory chips on the back of the board.
Practical experience
Voltages and clock rates
As is well known, an essential part of our articles is the use of special measuring equipment from different fields. Especially when it comes to voltages, the past has taught us that monitoring tools can provide clues, but their readouts often do not correspond to reality. So at this point we verify it ourselves, using different devices depending on the application.
For this test area we primarily rely on our MS-9160 measuring station or the Fluke 345 clamp meter. The Voltcraft measuring station was calibrated against the six-digit voltmeter module of a calibrated Hewlett-Packard HP5328B and a calibrated BBC-MA5D voltmeter - the readings of our devices were then identical to those of the references to two decimal places. With the appropriate software, we are of course also able to record the measurements.
We saw the usual values for modern NVIDIA graphics cards when idling. The GPU of the GTX 980 Ti clocks with 135 MHz and the memory with 202 MHz. Our sample clocks up to a maximum of 1.215 MHz under load. The memory clock is 1.750 MHz under load.
We have determined the other clock stages and the voltages applied as follows (real measured values, no tool readout):
Clock rates / voltages NVIDIA GeForce GTX 980 Ti | GPU clock rate (MHz) | Memory clock rate (MHz) | GPU voltage (volts) | Memory voltage (volts) |
Load-free operation | 135 | 202 | 0,890 | 1,355 |
Blu-ray playback | 135 | 202 | 0,878 | 1,355 |
Multi-monitor operation (2 devices) | 135 | 202 | 0,878 | 1,355 |
Multi-monitor operation (3 devices) | 810 | 1.752 | 1,020 | 1,583 |
ATiTool | 1.215 | 1.752 | 1,194 | 1,585 |
Furmark load | 999 | 1.752 | 1,017 | 1,589 |
There is hardly any new knowledge to be gained in this chapter. Clock rates and voltages are very similar to those of the previous Maxwell GPU representatives. Interestingly, under Furmark load the clock does not fall below the base clock, as it did on the Titan X.
References
Clock rates / voltages NVIDIA GeForce GTX Titan X | GPU clock rate (MHz) | Memory clock rate (MHz) | GPU voltage (volts) | Memory voltage (volts) |
Load-free operation | 135 | 202 | 0,889 | 1,356 |
Blu-ray playback | 135 | 202 | 0,870 | 1,356 |
Multi-monitor operation (2 devices) | 135 | 202 | 0,864 | 1,356 |
Multi-monitor operation (3 devices) | 810 | 1.752 | 1,025 | 1,585 |
ATiTool | 1.190 | 1.752 | 1,168 | 1,585 |
Furmark load | 937 | 1.752 | 1,011 | 1,585 |
Clock rates / voltages NVIDIA GeForce GTX 980 | GPU clock rate (MHz) | Memory clock rate (MHz) | GPU voltage (volts) | Memory voltage (volts) |
Load-free operation | 135 | 162 | 0,856 | |
Blu-ray playback | 135 | 162 | 0,856 | |
Multi-monitor operation (2 devices) | 135 | 162 | 0,856 | |
Multi-monitor operation (3 devices) | 911 | 1.752 | 1,025 | |
ATiTool | 1.240 | 1.752 | 1,206 | |
Furmark load | to 1.037 | 1.752 | 1,025 |
Clock rates / voltages ASUS GTX 980 STRIX | GPU clock rate (MHz) | Memory clock rate (MHz) | GPU voltage (volts) | Memory voltage (volts) |
Load-free operation | 135 | 162 | 0,851 | 1,318 |
Blu-ray playback | 135 | 162 | 0,851 | 1,318 |
Multi-monitor operation (2 devices) | 135 | 162 | 0,851 | 1,318 |
Multi-monitor operation (3 devices) | 949 | 1.752 | 1,002 | 1,546 |
ATiTool | 1.316 | 1.752 | 1,222 | 1,531 |
Furmark load | to 1.139 | 1.752 | 1,045 | 1,517 |
Clock rates / voltages EVGA GTX 980 SC ACX 2.0 | GPU clock rate (MHz) | Memory clock rate (MHz) | GPU voltage (volts) | Memory voltage (volts) |
Load-free operation | 135 | 162 | 0,857 | 1,393 |
Blu-ray playback | 135 | 162 | 0,833 | 1,393 |
Multi-monitor operation (2 devices) | 135 | 162 | 0,833 | 1,393 |
Multi-monitor operation (3 devices) | 1.013 | 1.752 | 0,995 | 1,574 |
ATiTool | 1.418 | 1.752 | 1,208 | 1,581 |
Furmark load | to 1.088 | 1.752 | 1,05 | 1,580 |
Clock rates / voltages Inno3D iChill GTX 970 Herculez X2 | GPU clock rate (MHz) | Memory clock rate (MHz) | GPU voltage (volts) | Memory voltage (volts) |
Load-free operation | 135 | 162 | 0,877 | 1,381 |
Blu-ray playback | 135 | 162 | 0,865 | 1,381 |
Multi-monitor operation (2 devices) | 135 | 162 | 0,878 | 1,381 |
Multi-monitor operation (3 devices) | 873 | 1.752 | 1,034 | 1,554 |
ATiTool | 1.291 | 1.752 | 1,267 | 1,554 |
Furmark load | to 967 | 1.752 | 1,085 | 1,551 |
Clock rates / voltages NVIDIA GeForce GTX 780 Ti | GPU clock rate (MHz) | Memory clock rate (MHz) | GPU voltage (volts) | Memory voltage (volts) |
Load-free operation | 324 | 162 | 0,878 | 1,356 |
Blu-ray playback | 324 | 162 | 0,878 | 1,356 |
Multi-monitor operation (2 devices) | 324 | 162 | 0,878 | 1,356 |
Multi-monitor operation (3 devices) | 705 | 1.750 | 0,939 | 1,634 |
ATiTool | 1.020 | 1.750 | 1,176 | 1,634 |
Furmark load | 875 MHz | 1.750 | 1,021 | 1,634 |
Clock rates / voltages NVIDIA GeForce GTX 780 | GPU clock rate (MHz) | Memory clock rate (MHz) | GPU voltage (volts) | Memory voltage (volts) |
Load-free operation | 324 | 162 | 0,875 | 1,375 |
Blu-ray playback | 324 | 162 | 0,875 | 1,378 |
Multi-monitor operation (2 devices) | 324 | 162 | 0,875 | 1,378 |
Multi-monitor operation (3 devices) | 692 | 1.502 | 0,924 | 1,557 |
ATiTool | to 993 | 1.502 | 1,147 | 1,561 |
Furmark load | to 863 | 1.502 | 1,021 | 1,564 |
Temperature behavior
The inventory is taken here using monitoring tools such as the MSI Afterburner or GPU-Z. The idle values are recorded after a certain load and cooling phase, which can result in measurement tolerances.
We emulate 3D gaming load using Tom Clancy's HAWX, which behaves similarly to Aliens vs. Predator or The Witcher 2. We regard this measurement as a worst-case scenario for games, although our test scene from Anno 2070 currently puts even more load on the graphics cards.
Finally, in this chapter it should be pointed out that, at the request of many readers, we have thinned out the comparison tables in order to provide a better overview. More comprehensive comparisons can be found in the appendix of the article.
Idle desktop
[Chart: Temperatures at idle (desktop) in °C for all test candidates]
There isn't much to talk about in the first test of the chapter. The cards move at uncritical temperature levels, and it should be emphasized that the background noise of the GeForce GTX 980 Ti is practically imperceptible. Our test sample shows a slightly higher idle temperature compared to the Titan X, but this is not relevant in these regions.
Games (HAWX)
[Chart: Temperatures under gaming load (HAWX) in °C for all test candidates]
We did not expect any temperature surprises in the load test either - how could there be, when a temperature limit is set? That limit is 83 °C, so a brief jump to 84 °C is the most that can be observed, provided you do not use tools to lift the limits. As expected, the fan of the GTX 980 Ti is then clearly audible.
Under Furmark, surprisingly, we saw different behavior than usual. In contrast to the Titan X, the base clock was held at 1.000 MHz, but the temperature sat at a peak of 87 °C for a certain period. The fan worked and managed to bring the temperature down to 85 °C within about five minutes - but there it stayed. It would appear that NVIDIA has made a change here, either in the driver or in the card's firmware.
Converter temperatures
We use a thermal imaging camera to determine the possible critical areas on the PCB. We use it to scan the back of the circuit board and take a closer look at possible hotspots, which usually occur primarily in the area of the power supply components. Previous empirical values for comparisons with internal temperature diodes, which are possible in some cases, show measurement differences in the range of 5 to 10 ° C - even less in particularly hot situations. However, this procedure also gives us an insight into the entire heat distribution, especially on the surrounding component groups, which is not possible by reading internal diodes or laser thermometers.
[Chart: Converter (VRM) temperatures under load in °C for all test candidates]
The temperature development in this area of the GTX 980 Ti is quite positive. The recorded value of around 81 °C is not only uncritical but downright cool for a high-end solution. Component selection and build quality certainly contribute here, but not least also the fact that this high-end model “only” has to handle 250 watts.
Background noise
Loudness measurement - how HT4U.net measures
Anyone who has read our articles for a while knows that we do not take the issue of noise lightly, but investigate this area very intensively. We recently expanded our existing measuring station with another up-to-date device from ulteaudiotechnik in the form of the new DAAS USB, which has also been extended with a subsonic function to suit our needs.
The calibrated equipment allows us to take measurements in dB(A) as well as in sone, and as usual we state the results normalized to a distance of 1 meter. The spectral analyses also give an impression of the fan behavior of the individual test candidates.
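For readers who want to relate our 1-meter figures to other distances: for an approximately point-shaped source, the sound pressure level changes with 20·log10 of the distance ratio. A minimal sketch of this standard conversion (our own illustration; it is not HT4U's calibration procedure, and real fan noise is not a perfect point source):

```python
import math

def spl_at_reference(db_measured, distance_m, reference_m=1.0):
    """Convert a sound pressure level measured at distance_m to the
    1-meter reference distance using the point-source 20*log10 rule."""
    return db_measured + 20 * math.log10(distance_m / reference_m)

# e.g. 25,0 dB(A) picked up at 0,5 m corresponds to roughly 19,0 dB(A) at 1 m
print(round(spl_at_reference(25.0, 0.5), 1))
```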
After we have just looked at the temperature behavior, in the next step we of course want to take a closer look at the background noise, because after all, both go hand in hand in behavior.
We encountered no surprises at idle. On our sample, NVIDIA's well-known cooler plays its quiet game - even quieter than on the GTX 980. At only 11,9 dB(A) the behavior can be described as whisper-quiet, and this background noise can definitely no longer be perceived from a closed case.
Under maximum 3D load, of course, the fun is over. The 250 watts the GTX 980 Ti consumes have to be cooled, and the cooling structure remains the familiar one. While the GTX 980 sat at 26 to 30 dB(A), we are now at 31 dB(A). We are not talking about racket at this point, but about a sound that can always be clearly picked out from a closed case.
In hot summer months, with sustained full load or with manual intervention in the limits, a worst-case scenario can of course also arise - we simulated this using Furmark, and the volume then rises to 37,6 dB(A) or 4,9 sone. That is a little higher than the Titan X and definitely nothing for spoiled ears and silence enthusiasts. Others may not yet be bothered, and we would not yet call this behavior outright noise - but for us it is clearly too loud.
Brief comparison [dB (A)]
Since we have recently received repeated comments about the length of our comparison diagrams, we have now put the complete comparison, also with older graphics cards, at the end of the article in the appendix and show "thinned out" comparisons below.
[Chart: Volume measurements - sound pressure at idle in dB(A) for all test candidates]
[Chart: Volume measurements - sound pressure under load (games) in dB(A) for all test candidates]
Brief comparison [sone]
[Chart: Volume measurements - loudness at idle in sone for all test candidates]
[Chart: Volume measurements - loudness under load (games) in sone for all test candidates]
Power and temperature limits
Power and temp target
The linchpin of NVIDIA's GPU Boost 1.0 technology was the power target - the maximum permitted power consumption. Since the GeForce GTX Titan there has been GPU Boost 2.0, and with it also a temperature target. Every NVIDIA graphics card with this technology comes with a maximum clock rate (GPU boost). Such a card only runs at this high clock under load as long as neither of the two limits mentioned is reached. Once the set maximum power consumption or temperature is reached, the GPU's clock rates and voltages are reduced until the graphics card finds a clock level at which these limits are no longer exceeded.
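In principle, this control behavior can be thought of as a simple feedback loop. The following sketch is our own illustration of the idea, not NVIDIA's actual firmware logic; the step size and the sample readings are invented:

```python
# Simplified model of a power/temperature-capped boost loop (illustrative only).
POWER_TARGET_W = 250.0   # power target of the GTX 980 Ti
TEMP_TARGET_C = 83.0     # temperature target
BASE_CLOCK_MHZ = 1000    # guaranteed base clock
MAX_BOOST_MHZ = 1215     # highest boost clock of our sample
STEP_MHZ = 13            # assumed size of one boost step (voltage moves with it)

def next_clock(current_mhz, power_w, temp_c):
    """Choose the clock for the next interval from the current sensor readings."""
    if power_w > POWER_TARGET_W or temp_c > TEMP_TARGET_C:
        # a limit is exceeded: step down, but never below the base clock
        return max(BASE_CLOCK_MHZ, current_mhz - STEP_MHZ)
    # headroom available: climb back toward the maximum boost step
    return min(MAX_BOOST_MHZ, current_mhz + STEP_MHZ)

# Example: a demanding scene first hits the power target, then the temperature
# target, and the clock steps down from its maximum boost.
clock = MAX_BOOST_MHZ
for power_w, temp_c in [(255, 80), (252, 82), (245, 84), (238, 84), (230, 84)]:
    clock = next_clock(clock, power_w, temp_c)
    print(clock)
```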
Power consumption and boost
We have already covered this topic in great detail and will therefore keep it brief here. The power limit is set to 250 watts and is reached by some titles right at the start, so the maximum clock of our sample (1,215 MHz) is cut back relatively quickly - at first only into regions around 1,180 MHz.
After that, the GTX 980 Ti runs into its temperature limit within a short time, because the cooling solution and NVIDIA's noise specifications do not allow such a high clock rate over the long term. In many of our demanding titles the clock falls fairly quickly below the averaged boost clock of 1,075 MHz; in Dying Light we even saw clock speeds in the range of the base clock once we reached the appropriate spots in the action phases.
Because of these limits, not only the clock rate drops but also the power consumption. At 1,075 MHz the GTX 980 Ti drew only 217 watts instead of 250 watts, which is ultimately also due to the lower voltages.
If you want more, you can relax the limits and then usually reach a maximum of 275 watts and temperatures around 84 to 87 °C. The latter, however, also makes the background noise increase massively, and you then enter the regions we described as the worst case in the "Background noise" chapter.
Power consumption: idle - games - full load
Graphics card power consumption - how HT4U.net measures
We determine the power consumption of the graphics card using a PCI Express adapter modified for this purpose in our laboratory. The values determined therefore correspond only to the consumption of the graphics card itself, not to the power consumption of the overall system. The power draw via the PCI Express slot and via the 12-volt power supply cables is measured simultaneously with a clamp ammeter. The (constant) power consumption of the 3.3-volt rail is determined separately and is included in the overall result shown. Further details and background information on the measurements can be found in our introductory article on the power consumption of graphics cards.
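Expressed as arithmetic, the total is simply the sum of the individual rails. The following Python sketch shows that calculation with invented example readings; the current values and the 3.3-volt share are placeholders, not our measured data.

```python
# Sketch of how the individual rail measurements add up to the card's total
# power draw. All readings below are invented placeholders, not measured data.

RAIL_VOLTAGE_12V = 12.0

def board_power(slot_12v_amps, cable_12v_amps, rail_3v3_watts):
    """Total card power: 12 V via the PCIe slot + 12 V via the external
    power cables (both measured with a clamp ammeter) + the roughly
    constant 3.3 V share, which is determined separately."""
    slot_w = slot_12v_amps * RAIL_VOLTAGE_12V
    cables_w = sum(cable_12v_amps) * RAIL_VOLTAGE_12V
    return slot_w + cables_w + rail_3v3_watts

# Example: 4.2 A through the slot, 7.5 A + 6.8 A through the two cables,
# plus an assumed 3 W constant share on the 3.3 V rail.
print(round(board_power(4.2, [7.5, 6.8], 3.0), 1))  # 225.0 W
```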
[Chart: Power consumption - graphics card, idle, in watts]
We were a bit surprised in idle mode. The 15.59 watts shown are not a bad value in themselves, but in direct comparison to the results of the GTX 980 or the Titan X, our sample of the 980 Ti is relatively high. This is definitely not due to the voltages. At this point we cannot say with certainty whether poor chip quality is the reason - it would be possible.
Update 01.06.15:
We made a mistake with this measurement. As we just noticed, the idle power consumption was measured on a different monitor with a higher resolution, which could explain the higher values. We will submit the correct values during the day!
Update 2, 01.06.15:
Our assumption has been confirmed. The new measurements, recorded at the correct resolution, show our sample of the GTX 980 Ti at 12.77 watts, roughly on the same level as the Titan X or GTX 980 - a clearly better value.
[Chart: Power consumption - graphics card, load (games), in watts]
The power consumption in demanding games sits at 250 watts right from the start - the maximum that NVIDIA allows the model to draw. Because of the two limits (power and temperature), the card's clock is throttled after a short time, so that in most cases we were looking at clock rates around the averaged boost clock and thus at a power consumption between 217 and 225 watts.
If you loosen the limits by hand and also push the clock rates yourself, you will mostly operate at the permitted 275 watts with the GTX 980 Ti.
Power consumption: Blu-ray playback - multi-monitor operation
Blu-ray playback
For these measurements we use the Blu-ray “Die Hard 4.0” from Twentieth Century Fox Home Entertainment. The Blu-ray uses the H.264 codec, also known as MPEG4-AVC, which is now used in most films. PowerDVD from Cyberlink is used as the software; for version details, please refer to the article's test environment.
[Chart: Power consumption - graphics card, Blu-ray playback, in watts]
As usual, NVIDIA shines in these comparisons, because the clock rates of the GPU and the memory can remain at idle levels. No higher voltages are applied, and as a result the power consumption when playing back Blu-ray (HD) material stays more or less at the level of the idle power consumption.
Multi-monitor operation
While the GPU manufacturers now take great care to reduce power consumption in idle mode as much as possible, the operation of multiple screens is often left out of these optimizations. According to the manufacturers, dropping the memory clock in particular can lead to picture flickering, which is why the clock drop is often omitted there and a separate power level with different voltages and clock rates is used instead.
We noticed at least one small change with NVIDIA's GTX 600 family: if only two monitors are operated (even at different resolutions), the card works at the idle power level, and only with three monitors does it switch to a multi-monitor power level. With three monitors, NVIDIA's power consumption is then very similar to that of the AMD models.
[Chart: Power consumption - multi-monitor operation, idle with 2 devices, in watts]
[Chart: Power consumption - multi-monitor operation, idle with 3 devices, in watts]
A few peculiarities have to be pointed out here; we also described them in the chapter on the technical innovations around the monitor connections. If we operate three devices on the GTX 980 and GTX 970 via DVI, HDMI and DisplayPort, the cards switch to a different power level as usual. In this case the clock rates are even slightly higher than on the previous models, which may simply be because the base clocks are also higher.
However, if we connected the monitors to the GTX 970 via 2 x DVI and 1 x DP, the graphics card stayed at idle clocks and we saw a maximum of 15 watts of power consumption! NVIDIA has not yet announced any details on this.
This means that NVIDIA - depending on the connection configuration - would now be able to run three different monitors at idle clocks. It would be helpful if such changes were also communicated. So far, however, the manufacturer itself - judging by the feedback to our inquiries - does not seem to know what exactly we are talking about.
For the GTX 980, GTX 980 Ti and Titan X, however, this is not relevant. They only offer the connection options mentioned, and when three devices are connected, the intermediate power level applies. That means a power consumption of around 70 watts - not a good value at all, placing these cards in the upper third of our comparison.
Overclocking
Overclocking does not depend on the cooling solution alone. One has to keep in mind that the overclockability of graphics cards - whether GPU or memory - depends on many factors and on the individual components. On top of that, manual intervention in the clock rates can immediately lead to a loss of warranty.
We can describe the result as positive: we were able to push our GTX 980 Ti sample to a maximum GPU clock of 1,380 MHz and the memory to a real clock of 1,950 MHz.
Of course we raised the temperature and power limits to the permitted maximum beforehand, and of course our overclocking attempts were still throttled once those raised limits were exceeded. In most cases the power limit of 275 watts kicked in and pulled the GPU clock down to 1,329 MHz in our benchmarks. Nevertheless, our interventions show that the GeForce GTX 980 Ti still has reserves and scales nicely.
In the applications shown below, performance increased by 17 to 21 percent. The power consumption has to be seen in relation to the automatic throttling: at a boost of 1,075 MHz the GTX 980 Ti usually needs only around 220 to 225 watts, so with the overclock the power consumption rose accordingly.
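As a rough plausibility check, the clock uplift can be set against the measured frame-rate gain. The following Python sketch only reuses the clock figures quoted above; the comparison is our own back-of-the-envelope calculation, not an official scaling formula.

```python
# Quick arithmetic on the overclocking result, using the clock figures from
# the text above. It shows that the measured 17-21 % frame-rate gain stays
# below the raw GPU clock uplift, i.e. the scaling is not perfectly linear.

typical_boost_mhz = 1075   # averaged boost clock at stock settings
oc_sustained_mhz = 1329    # sustained clock with raised limits (power-limited)
oc_peak_mhz = 1380         # maximum stable clock of our sample

def uplift(new, old):
    return (new / old - 1) * 100

print(f"sustained clock uplift: {uplift(oc_sustained_mhz, typical_boost_mhz):.1f} %")  # ~23.6 %
print(f"peak clock uplift:      {uplift(oc_peak_mhz, typical_boost_mhz):.1f} %")       # ~28.4 %
# Measured performance gain in our benchmarks: 17 to 21 %.
```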
[Chart: OC benchmarks 2560 × 1440 (with anti-aliasing), Crysis 3, FPS]
[Chart: OC benchmarks 2560 × 1440 (with anti-aliasing), Far Cry 4, FPS]
[Chart: OC benchmarks 2560 × 1440 (with anti-aliasing), Bioshock: Infinite, FPS]
[Chart: OC benchmarks 2560 × 1440 (with anti-aliasing), Metro: Last Light Redux, FPS]
[Chart: OC benchmarks 2560 × 1440 (with anti-aliasing), Tomb Raider, FPS]
Game benchmarks (OpenGL)
Game | BRINK |
Developer | Splash Damage |
Publisher | Bethesda Softworks |
publication | 13 May 2011 |
Genre | First-person shooter |
Graphics engine | modified idTech 4 |
DirectX path / API | OpenGL |
Age rating USK | 16 years |
Benchmark measurement | Fraps / savegame |
Test area | Hostage rescue |
Runtime benchmark | 10 seconds |
Benchmark settings | Highest levels of detail |
Order from Amazon |
[Chart: Brink, 1920 x 1080 (No AA / 16xAF), FPS]
[Chart: Brink, 2560 x 1440 (No AA / 16xAF), FPS]
[Chart: Brink, 3840 x 2160 (No AA / 16xAF), FPS]
Wolfenstein: The New Order
Game | Wolfenstein: The New Order |
Developer | Machine Games |
Publisher | Bethesda |
publication | May 2014 |
Genre | First-person shooter |
Age rating | 18 years |
Graphics engine | id Tech 5 |
DirectX path | OpenGL |
Benchmark measurement | Fraps / savegame |
Test area | Chapter 9 intro |
Runtime benchmark | 10 seconds |
Benchmark settings | Highest levels of detail |
HT4U-Test | |
Find on Amazon* |
[Chart: Wolfenstein: The New Order, 1920 x 1080 (No AA / 16xAF), FPS]
[Chart: Wolfenstein: The New Order, 2560 x 1440 (No AA / 16xAF), FPS]
[Chart: Wolfenstein: The New Order, 3840 x 2160 (No AA / 16xAF), FPS]
Game benchmarks (DirectX 9)
The Elder Scrolls V: Skyrim [Modded]
Game | The Elder Scrolls V: Skyrim (Modded) |
Developer | Bethesda Game Studios |
Publisher | Bethesda Softworks |
publication | (March 2012) |
Genre | Role-playing game |
Age rating | 16 years |
Graphics engine | Creation Engine |
DirectX path | DirectX 9 |
Benchmark measurement | Fraps / savegame |
Test area | Riverwood |
Runtime benchmark | 10 seconds |
Benchmark settings | Highest levels of detail, FXAA, High Resolution Texture Pack |
Installed mods | Realistic Water Two, Tree HD Variation, Verdant Grass Plugin, Wet & Cold, Vivid Landscapes Dungeons & Ruins |
Order from Amazon* |
In our approach to modding Skyrim, we unfortunately made the mistake of not checking the results on representatives of both graphics card manufacturers at the same time. One of the installed mods makes AMD cards barely able to cope with these settings, and that cannot be a fair basis for comparison, because AMD will never (and cannot be expected to) optimize for a mod that grew out of a hobby project. So we will have to revisit the Skyrim-and-modding construction site. The results of this test are therefore not included in the performance index.
[Chart: The Elder Scrolls V: Skyrim (Modded), 1920 x 1080 (4xAA / 16xAF), FPS]
[Chart: The Elder Scrolls V: Skyrim (Modded), 2560 x 1440 (4xAA / 16xAF), FPS]
[Chart: The Elder Scrolls V: Skyrim (Modded), 3840 x 2160 (4xAA / 16xAF), FPS]
The Witcher 2 - Assassins of Kings
Game | The Witcher 2 - Assassins of Kings |
Developer | CD Projekt RED |
Publisher | CD Projekt, Atari |
publication | 17 May 2011 |
Genre | RPG, fantasy |
Graphics engine | RED engine |
DirectX path | DirectX 9 |
Age rating USK | 16 years |
Benchmark measurement | Fraps / savegame |
Test area | barricade |
Runtime benchmark | 10 seconds |
Benchmark settings | Highest levels of detail |
[Chart: The Witcher 2 - Assassins of Kings, 1920 x 1080 (No AA / 16xAF), FPS]
[Chart: The Witcher 2 - Assassins of Kings, 2560 x 1440 (No AA / 16xAF), FPS]
[Chart: The Witcher 2 - Assassins of Kings, 3840 x 2160 (No AA / 16xAF), FPS]
[Chart: The Witcher 2 - Assassins of Kings, 1920 x 1080 (4xSSAA / 16xAF), FPS]
[Chart: The Witcher 2 - Assassins of Kings, 2560 x 1440 (4xSSAA / 16xAF), FPS]
[Chart: The Witcher 2 - Assassins of Kings, 3840 x 2160 (4xSSAA / 16xAF), FPS]
Game benchmarks (DirectX 11)
Alien: Isolation
Game | Alien: Isolation |
Developer | Creative Assembly |
Publisher | SEGA |
publication | 07 October 2014 |
Genre | Survival horror |
Graphics engine | CA engine |
DirectX path / API | DirectX 11 |
Age rating USK | 16 years |
Benchmark measurement | Fraps / savegame |
Test area | Level 9 signals |
Runtime benchmark | 10 seconds |
Benchmark settings | maximum levels of detail |
HT4U-Test | Order from Amazon |
[Chart: Alien: Isolation, 1920 x 1080 (No AA / 16xAF), FPS]
[Chart: Alien: Isolation, 2560 x 1440 (No AA / 16xAF), FPS]
[Chart: Alien: Isolation, 3840 x 2160 (No AA / 16xAF), FPS]
Anno 2070
Game | Anno 2070 |
Developer | Related Designs / Ubisoft Blue Byte |
Publisher | Ubisoft |
publication | 17 November 2011 |
Genre | strategy game |
Age rating | 6 years |
Graphics engine | InitEngine |
DirectX path | DirectX 9 / DirectX 11 |
Benchmark measurement | Fraps / savegame |
Test area | On the trail of the truth |
Runtime benchmark | 10 seconds |
Benchmark settings | Highest levels of detail |
Order from Amazon |
[Chart: Anno 2070, 1920 x 1080 (No AA / 16xAF), FPS]
[Chart: Anno 2070, 2560 x 1440 (No AA / 16xAF), FPS]
[Chart: Anno 2070, 3840 x 2160 (No AA / 16xAF), FPS]
Assassin's Creed Unity
Game | Assassin's Creed Unity |
Developer | Ubisoft Montreal |
Publisher | Ubisoft |
publication | 13 November 2014 |
Genre | Action-Adventure |
Graphics engine | AnvilNext engine |
DirectX path / API | DirectX 11 |
Age rating USK | 16 years |
Benchmark measurement | Fraps / savegame |
Test area | Sequence 7.2 - A meeting with Mirabeau |
Runtime benchmark | 25 seconds |
Benchmark settings | maximum levels of detail |
HT4U-Test | Order from Amazon* |
[Chart: Assassin's Creed Unity, 1920 x 1080 (No AA / 16xAF), FPS]
[Chart: Assassin's Creed Unity, 2560 x 1440 (No AA / 16xAF), FPS]
[Chart: Assassin's Creed Unity, 3840 x 2160 (No AA / 16xAF), FPS]
Battlefield 4
Game | Battlefield 4 |
Developer | EA Digital Illusions CE |
Publisher | Electronic Arts |
publication | December 2013 |
Genre | First-person shooter |
Age rating | USK: 18 years |
Graphics engine | Frostbite 3 |
DirectX path | DirectX 10 / DirectX 11 / Mantle |
Benchmark measurement | Fraps / savegame |
Test area | Level 6: Tashgar - Checkpoint 5 |
Runtime benchmark | 10 seconds |
Benchmark settings | Highest level of detail, DX 11 |
HT4U-Test | Order from Amazon |
[Chart: Battlefield 4, 1920 x 1080 (No AA / 16xAF), FPS]
[Chart: Battlefield 4, 2560 x 1440 (No AA / 16xAF), FPS]
[Chart: Battlefield 4, 3840 x 2160 (No AA / 16xAF), FPS]
[Chart: Battlefield 4, 1920 x 1080 (4xAA / 16xAF), FPS]
[Chart: Battlefield 4, 2560 x 1440 (4xAA / 16xAF), FPS]
[Chart: Battlefield 4, 3840 x 2160 (4xAA / 16xAF), FPS]
Bioshock: Infinite
Game | BioShock: Infinite |
Developer | Irrational Games, 2K Marin, Human Head Studios |
Publisher | 2K Games |
publication | March 26, 2013 |
Genre | First person shooter with fantasy elements |
Graphics engine | Unreal Engine 3 |
DirectX path | DirectX 10 and 11 |
Age rating USK | 18 years |
Benchmark measurement | Fraps / savegame |
Test area | Finkton Proper |
Runtime benchmark | 10 seconds |
Benchmark settings | System settings Maximum & FXAA |
HT4U-Test | |
Order from Amazon* |
We are deliberately writing this again here in the benchmark section, because it appears to be a widespread misunderstanding: we do not use the built-in BioShock benchmark (useless, because its runs do not evaluate real gameplay scenes and do not even begin to show a worst-case scenario). Instead we use a savegame that represents a worst-case scenario of the kind you frequently encounter in BioShock. This has repeatedly led to discussions and queries, which is why we want to make it clear once more - and since there are still readers who skip over it, we even set this paragraph in bold.
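For readers unfamiliar with the method: the values in the diagrams are average frame rates over the 10-second Fraps run on the savegame. The following Python sketch merely illustrates that calculation; the frame-time log in it is hypothetical.

```python
# Minimal sketch of how an average FPS value is derived from a frame-time log
# such as the one Fraps records (cumulative timestamps in milliseconds).
# The data below is hypothetical and only serves as an example.

def average_fps(frame_times_ms):
    """Average FPS over the run: number of frame intervals divided by elapsed time."""
    elapsed_s = (frame_times_ms[-1] - frame_times_ms[0]) / 1000.0
    return (len(frame_times_ms) - 1) / elapsed_s

# Hypothetical 10-second run: frames arriving every ~16.7 ms (~60 FPS).
timestamps = [i * 16.7 for i in range(600)]
print(round(average_fps(timestamps), 1))  # ~59.9
```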
[Chart: Bioshock: Infinite, 1920 x 1080 (No AA / 16xAF), FPS]
[Chart: Bioshock: Infinite, 2560 x 1440 (No AA / 16xAF), FPS]
[Chart: Bioshock: Infinite, 3840 x 2160 (No AA / 16xAF), FPS]
Call of Duty: Advanced Warfare
Game | Call of Duty: Advanced Warfare |
Developer | Sledgehammer Games |
Publisher | Activision |
publication | 04 November 2014 |
Genre | First-person shooter |
Graphics engine | Infinity Ward engine, modified |
DirectX path / API | DirectX 11 |
Age rating USK | 18 years |
Benchmark measurement | Fraps / savegame |
Test area | Level 10 bio-laboratory - sixth save point |
Runtime benchmark | 10 seconds |
Benchmark settings | maximum levels of detail |
HT4U-Test | Order from Amazon |
[Chart: Call of Duty: Advanced Warfare, 1920 x 1080 (No AA / 16xAF), FPS]
[Chart: Call of Duty: Advanced Warfare, 2560 x 1440 (No AA / 16xAF), FPS]
[Chart: Call of Duty: Advanced Warfare, 3840 x 2160 (No AA / 16xAF), FPS]
[Chart: Call of Duty: Advanced Warfare, 1920 x 1080 (2xSSAA / 16xAF), FPS]
[Chart: Call of Duty: Advanced Warfare, 2560 x 1440 (2xSSAA / 16xAF), FPS]
[Chart: Call of Duty: Advanced Warfare, 3840 x 2160 (2xSSAA / 16xAF), FPS]
Crysis 3
Game | Crysis 3 |
Developer | Crytek |
Publisher | Electronic Arts |
publication | 21 February 2013 |
Genre | First-person shooter |
Graphics engine | CryENGINE 3 |
DirectX path | DirectX 9 and 11 |
Age rating USK | 18 years |
Benchmark measurement | Fraps / savegame |
Test area | Mission 4 - Swamp |
Runtime benchmark | 10 seconds |
Benchmark settings | Default system and textures: maximum |
Order from Amazon* |
In the following diagrams, 1 x AA stands for deactivated anti-aliasing plus the post-processing filter FXAA. 2 x AA stands for the special 4 x SMAA level, in which the game combines regular double anti-aliasing (MSAA) with additional filters. The designation 4 x AA corresponds to the usual quadruple anti-aliasing (MSAA).
[Chart: Crysis 3, 1920 x 1080 (No AA / 16xAF), FPS]
[Chart: Crysis 3, 2560 x 1440 (No AA / 16xAF), FPS]
[Chart: Crysis 3, 3840 x 2160 (No AA / 16xAF), FPS]
[Chart: Crysis 3, 1920 x 1080 (2xAA / 16xAF), FPS]
[Chart: Crysis 3, 2560 x 1440 (2xAA / 16xAF), FPS]
[Chart: Crysis 3, 3840 x 2160 (2xAA / 16xAF), FPS]
Dying Light
Game | Dying Light |
Developer | Techland |
Publisher | Warner Bros. |
publication | 27 January 2015 |
Genre | Survival horror |
Graphics engine | Chrome 6 engine |
DirectX path / API | DirectX 11 |
Age rating USK | 18 years |
Benchmark measurement | Fraps / savegame |
Test area | Level 1 Headquarters - The Tower |
Runtime benchmark | 10 seconds |
Benchmark settings | maximum levels of detail |
HT4U-Test | Order from Amazon |
[Chart: Dying Light, 1920 x 1080 (No AA / 16xAF), FPS]
[Chart: Dying Light, 2560 x 1440 (No AA / 16xAF), FPS]
[Chart: Dying Light, 3840 x 2160 (No AA / 16xAF), FPS]
Grand Theft Auto V (GTA V)
Game | Grand Theft Auto V |
Developer | Rockstar North |
Publisher | Rockstar Games |
publication | 14 April 2015 |
Genre | Action |
Age rating | USK: 18 years |
Graphics engine | RAGE engine |
DirectX path | DirectX 10/11 |
Benchmark measurement | Fraps / savegame |
Test area | Mountain areas of Los Santos |
Runtime benchmark | 10 seconds |
Benchmark settings | Highest level of detail, DX 11 |
HT4U-Test | Order from Amazon* |
[Chart: Grand Theft Auto V (GTA 5), 1920 x 1080 (4xAA / 16xAF), FPS]
[Chart: Grand Theft Auto V (GTA 5), 2560 x 1440 (4xAA / 16xAF), FPS]
[Chart: Grand Theft Auto V (GTA 5), 3840 x 2160 (4xAA / 16xAF), FPS]
Far Cry 4
Game | Far Cry 4 |
Developer | Ubisoft Montreal |
Publisher | Ubisoft |
publication | 18 November 2014 |
Genre | First-person shooter |
Graphics engine | Dunia 2 engine |
DirectX path / API | DirectX 11 |
Age rating USK | 18 years |
Benchmark measurement | Fraps / savegame |
Test area | Kyrat International Airport |
Runtime benchmark | 10 seconds |
Benchmark settings | maximum levels of detail |
HT4U-Test | Order from Amazon |
[Chart: Far Cry 4, 1920 x 1080 (No AA / 16xAF), FPS]
[Chart: Far Cry 4, 2560 x 1440 (No AA / 16xAF), FPS]
[Chart: Far Cry 4, 3840 x 2160 (No AA / 16xAF), FPS]
[Chart: Far Cry 4, 1920 x 1080 (4xMSAA / 16xAF), FPS]
[Chart: Far Cry 4, 2560 x 1440 (4xMSAA / 16xAF), FPS]
[Chart: Far Cry 4, 3840 x 2160 (4xMSAA / 16xAF), FPS]
Metro: Last Light Redux
Game | Metro: Last Light Redux |
Developer | 4A Games |
Publisher | Deep Silver |
publication | August 29, 2014 |
Genre | First-person shooter |
Graphics engine | 4A engine |
DirectX path | DirectX 10 and 11 |
Age rating USK | 18 years |
Benchmark measurement | Fraps / savegame |
Test area | Chapter Train to the Past |
Runtime benchmark | 10 seconds |
Benchmark settings | System settings: Very high - Tess: High |
Find it on Amazon* |
[Chart: Metro: Last Light Redux, 1920 x 1080 (No AA / 16xAF), FPS]
[Chart: Metro: Last Light Redux, 2560 x 1440 (No AA / 16xAF), FPS]
[Chart: Metro: Last Light Redux, 3840 x 2160 (No AA / 16xAF), FPS]
[Chart: Metro: Last Light Redux, 1920 x 1080 (2xSSAA / 16xAF), FPS]
[Chart: Metro: Last Light Redux, 2560 x 1440 (2xSSAA / 16xAF), FPS]
[Chart: Metro: Last Light Redux, 3840 x 2160 (2xSSAA / 16xAF), FPS]
Ryse: Son of Rome
Game | Ryse: Son of Rome |
Developer | Crytek |
Publisher | Deep Silver |
publication | 10 October 2014 |
Genre | Action-Adventure |
Graphics engine | CryENGINE 3 |
DirectX path / API | DirectX 11 |
Age rating USK | 18 years |
Benchmark measurement | Fraps / savegame |
Test area | Chapter 4 |
Runtime benchmark | 10 seconds |
Benchmark settings | Default setting: high |
HT4U-Test | Order from Amazon* |
[Chart: Ryse: Son of Rome, 1920 x 1080 (No AA / 16xAF), FPS]
[Chart: Ryse: Son of Rome, 2560 x 1440 (No AA / 16xAF), FPS]
[Chart: Ryse: Son of Rome, 3840 x 2160 (No AA / 16xAF), FPS]
Thief (2014)
Game | Thief (2014) |
Developer | Eidos |
Publisher | Square Enix |
publication | February 2014 |
Genre | Action adventure / stealth game |
Age rating | 16 years |
Graphics engine | Unreal 3 |
DirectX path | DirectX 9 / DirectX 11 |
Benchmark measurement | Fraps / savegame |
Test area | Stone Market |
Runtime benchmark | 10 seconds |
Benchmark settings | Highest levels of detail |
HT4U-Test | |
Find it on Amazon* |
[Chart: Thief, FPS]