NVIDIA hosted a "Gaming Celebration" live event during GDC 2017 to talk PC gaming and possibly launch new hardware (if rumors are true!). During the event, NVIDIA CEO Jen-Hsun Huang made a major announcement: the company's top-end GTX 1080 graphics card is dropping to $499 effective immediately.

The NVIDIA GTX 1080 is a Pascal-based graphics card with 2560 CUDA cores paired with 8GB of GDDR5X memory. Graphics cards based on this GP104 GPU currently sell for around $580 to $700 (most around $650, give or take), with the "Founders Edition" carrying an MSRP of $699. The $499 price teased at the live stream represents a significant drop from what the cards are going for now. NVIDIA did not specify whether the new $499 MSRP is the new Founders Edition price or an average price that includes partner cards as well, but even if it only applied to the reference cards, partners would have to adjust their prices downward accordingly to compete.

I suspect NVIDIA is making such a bold move both to make room in its lineup for a new product (the long-rumored 1080 Ti, perhaps?) and as a pre-emptive strike against AMD and its Radeon RX Vega products. This may also be good news for GTX 1070 pricing, as those cards may see price drops of their own to make room for cheaper GTX 1080 partner cards that come in below the $499 price point.

If you have been considering buying a new graphics card, NVIDIA has sweetened the pot a bit, especially if you had already been eyeing a GTX 1080. (Note that while the price drop is said to be effective immediately, at the time of writing Amazon was still showing typical prices for the cards. Enthusiasts might have to wait a few hours or days for retailers to catch up and update their sites.)

This makes me a bit more excited to see what AMD will have to offer with Vega as well as the likelihood of a GTX 1080 Ti launch happening sooner rather than later!

The new EVGA GTX 1080 FTW2 with iCX Technology

Back in November of 2016, EVGA had a problem on its hands. The company had a batch of GTX 10-series graphics cards using the new ACX 3.0 cooler solution leave the warehouse missing thermal pads required to keep the power management hardware on its cards within reasonable temperature margins. To its credit, the company took the oversight seriously and instituted a set of solutions for consumers to select from: an RMA, a new VBIOS to increase fan speeds, or thermal pads to install manually. Still, as is the case with any product quality lapse like that, there were (and are) lingering questions about EVGA’s ability to deliver reliable products with features and new options that don’t compromise the basics.

Internally, the drive to correct these lapses was…strong. From the very top of the food chain on down, it was hammered home that something like this simply couldn’t occur again; more than that, EVGA was to develop and showcase a new feature set and product lineup demonstrating its ability to innovate. Thus was born, and accelerated, the EVGA iCX Technology infrastructure. While something like it had been in the pipeline for some time already, it was moved up to counter any negative bias that might have formed against EVGA’s graphics cards over the last several months. The goal was simple: prove that EVGA was the leader in graphics card design and that it had learned from previous mistakes.

EVGA iCX Technology

Previous issues aside, the creation of iCX Technology is built around one simple question: is one GPU temperature sensor enough? For nearly all of today’s graphics cards, cooling is based on the temperature of the GPU silicon itself, as measured by the sensor NVIDIA integrates into the GPU (the case for all of EVGA’s cards). This is how fan curves are built, how GPU clock speeds are handled with GPU Boost, how noise profiles are created, and more. But as process technology has improved, and GPU design has shifted towards power efficiency, the GPU itself is often no longer the thermally limiting factor.

Converting 12V from the power supply to the ~1V the GPU needs is a simple process, but one that creates a lot of excess heat. The thermal images above clearly demonstrate that, and EVGA isn’t the only card vendor to take notice. As it turns out, EVGA’s product issue from last year was related to exactly this: the fans were only spinning fast enough to keep the GPU cool and did not take into account the temperature of the memory or power delivery.
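As a rough back-of-the-envelope illustration (the wattage and efficiency figures here are hypothetical, not EVGA's), the heat a VRM sheds scales directly with how much power it converts:

```python
# Rough illustration (hypothetical numbers): heat dissipated by a VRM
# converting 12 V input to ~1 V for the GPU at a given efficiency.

def vrm_heat_watts(gpu_power_w: float, efficiency: float) -> float:
    """Power lost as heat in the voltage regulators.

    The VRM must deliver gpu_power_w to the GPU; at the given
    conversion efficiency, the remainder is dissipated as heat.
    """
    return gpu_power_w * (1.0 / efficiency - 1.0)

# A 180 W GPU fed by 90%-efficient regulators sheds ~20 W into the VRM area.
print(round(vrm_heat_watts(180, 0.90), 1))  # 20.0
```

Twenty-odd watts concentrated in a few small MOSFETs is exactly the kind of heat a GPU-only fan curve never sees.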

The fix from EVGA is to ratchet up the number of sensors on the card's PCB and wrap them with intelligence in the form of MCUs, updated Precision XOC software, and user-viewable LEDs on the card itself.

EVGA graphics cards with iCX Technology will include 9 total thermal sensors on the board, independent of the GPU temperature sensor directly integrated by NVIDIA. There are three sensors for memory, five for power delivery, and an additional sensor for the GPU temperature. Some, including the secondary GPU sensor, are located on the back of the PCB to avoid any conflicts with trace routing between critical components.

NVIDIA P100 comes to Quadro

At the start of the SOLIDWORKS World conference this week, NVIDIA took the cover off of a handful of new Quadro cards targeting professional graphics workloads. Though the bulk of NVIDIA’s discussion covered lower cost options like the Quadro P4000, P2000, and below, the most interesting product sits at the high end, the Quadro GP100.

As you might guess from the name alone, the Quadro GP100 is based on the GP100 GPU, the same silicon used in the Tesla P100 announced back in April of 2016. At the time, the GP100 GPU was specifically billed as an HPC accelerator for servers, with a unique form factor and a passive cooler that required additional chassis fans. Just a couple of months later, a PCIe version was released under the Tesla P100 brand with slightly lower clock speeds.

Today that GPU hardware gets a third iteration as the Quadro GP100. Let’s take a look at the Quadro GP100 specifications and how it compares to some recent Quadro offerings.

|                       | Quadro GP100           | Quadro P6000          | Quadro M6000           | Full GP100     |
|-----------------------|------------------------|-----------------------|------------------------|----------------|
| GPU                   | GP100                  | GP102                 | GM200                  | GP100 (Pascal) |
| SMs                   | 56                     | 60                    | 48                     | 60             |
| TPCs                  | 28                     | 30                    | 24                     | (30?)          |
| FP32 CUDA Cores / SM  | 64                     | 64                    | 64                     | 64             |
| FP32 CUDA Cores / GPU | 3584                   | 3840                  | 3072                   | 3840           |
| FP64 CUDA Cores / SM  | 32                     | 2                     | 2                      | 32             |
| FP64 CUDA Cores / GPU | 1792                   | 120                   | 96                     | 1920           |
| Base Clock            | 1303 MHz               | 1417 MHz              | 1026 MHz               | TBD            |
| GPU Boost Clock       | 1442 MHz               | 1530 MHz              | 1152 MHz               | TBD            |
| FP32 TFLOPS (SP)      | 10.3                   | 12.0                  | 7.0                    | TBD            |
| FP64 TFLOPS (DP)      | 5.15                   | 0.375                 | 0.221                  | TBD            |
| Texture Units         | 224                    | 240                   | 192                    | 240            |
| ROPs                  | 128?                   | 96                    | 96                     | 128?           |
| Memory Interface      | 1.4 Gbps 4096-bit HBM2 | 9 Gbps 384-bit GDDR5X | 6.6 Gbps 384-bit GDDR5 | 4096-bit HBM2  |
| Memory Bandwidth      | 716 GB/s               | 432 GB/s              | 316.8 GB/s             | ?              |
| Memory Size           | 16GB                   | 24GB                  | 12GB                   | 16GB           |
| TDP                   | 235 W                  | 250 W                 | 250 W                  | TBD            |
| Transistors           | 15.3 billion           | 12 billion            | 8 billion              | 15.3 billion   |
| GPU Die Size          | 610 mm²                | 471 mm²               | 601 mm²                | 610 mm²        |
| Manufacturing Process | 16nm                   | 16nm                  | 28nm                   | 16nm           |

There are some interesting stats here that may not be obvious at first glance. Most interesting is that, despite the pricing and segmentation, the GP100 is not the de facto fastest Quadro card from NVIDIA; it depends on your workload. With 3584 CUDA cores running at around 1400 MHz at Boost, the single-precision (32-bit) rating for the GP100 is 10.3 TFLOPS, less than the recently released P6000. Based on GP102, the P6000 has 3840 CUDA cores running at around 1500 MHz for a total of 12 TFLOPS.
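Those single-precision figures can be sanity-checked with the usual peak-FLOPS formula: each CUDA core retires one fused multiply-add (two FLOPs) per clock.

```python
# Peak FP32 TFLOPS = CUDA cores x 2 FLOPs/cycle (FMA) x boost clock.

def fp32_tflops(cuda_cores: int, boost_clock_mhz: float) -> float:
    return cuda_cores * 2 * boost_clock_mhz * 1e6 / 1e12

print(round(fp32_tflops(3584, 1442), 1))  # Quadro GP100 -> 10.3
print(round(fp32_tflops(3840, 1530), 1))  # Quadro P6000 -> 11.8 (quoted as 12)
```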

GP100 (full) Block Diagram

Clearly the placement for the Quadro GP100 is based around its 64-bit, double-precision performance and its ability to offer real-time simulations on more complex workloads than other Pascal-based Quadro cards can handle. The Quadro GP100 offers a 1/2 DP compute rate, totaling 5.15 TFLOPS. The P6000, on the other hand, is only capable of 0.375 TFLOPS at the standard, consumer-level 1/32 DP rate. The inclusion of ECC memory support on the GP100 is also something no other recent Quadro card offers.
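The double-precision ratings fall out of the same math once you apply each chip's DP rate:

```python
# FP64 TFLOPS = FP32 TFLOPS x DP rate (1/2 on GP100, 1/32 on GP102).

def fp64_tflops(sp_tflops: float, dp_rate: float) -> float:
    return sp_tflops * dp_rate

print(round(fp64_tflops(10.3, 1 / 2), 2))   # Quadro GP100 -> 5.15
print(round(fp64_tflops(12.0, 1 / 32), 3))  # Quadro P6000 -> 0.375
```

That 14x gap in DP throughput is the whole story of the GP100's positioning.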

Raw graphics performance and throughput remain an open question until someone does some testing, but it seems likely that the Quadro P6000 will stay the best solution there by at least a slim margin. With a higher CUDA core count, higher clock speeds, and an equivalent architecture, the P6000 should run games, graphics rendering, and design applications very well.

There are other important differences offered by the GP100. The memory system is built around a 16GB HBM2 implementation, which means more total memory bandwidth but a lower capacity than the 24GB Quadro P6000. Offering 66% more memory bandwidth does give the GP100 an advantage in applications that are pixel-throughput bound, as long as the compute capability on the back end keeps up.
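The bandwidth figures in the table above follow directly from bus width and per-pin data rate, which is where that 66% advantage comes from:

```python
# Peak memory bandwidth = bus width (bits) x data rate (Gbps) / 8 bits-per-byte.

def mem_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits * data_rate_gbps / 8

print(round(mem_bandwidth_gbs(4096, 1.4), 1))  # GP100 HBM2   -> 716.8 GB/s
print(round(mem_bandwidth_gbs(384, 9.0), 1))   # P6000 GDDR5X -> 432.0 GB/s
```

HBM2's very wide, relatively slow bus versus GDDR5X's narrow, fast one: 716.8 / 432 ≈ 1.66.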

Around back, both the Gigabyte GTX 1050 OC Low Profile 2G and GTX 1050 Ti OC Low Profile 4G offer four display outputs in the form of two HDMI 2.0b, one DisplayPort 1.4, and one dual-link DVI-D. It appears that Gigabyte is using the same cooler for both cards; there is not much information on it, but it utilizes an aluminum heatsink and what looks like a ~50mm fan. Note that while the cards are half-height, they use a dual-slot design, which may limit the cases they can fit in.

The pint-sized graphics cards would certainly allow for gaming on an SFF home theater or other small desktop PC, as well as being an easy way to make a tiny OEM PC gaming capable (think of those thin towers HP, Lenovo, and Dell like to use).

Of course, Gigabyte is not yet talking pricing and availability has only been narrowed down to a general Q1 2017 time frame. I would expect the cards to hit retailers within a month or so and be somewhere around $135 for their half height GTX 1050 OC LP 2G and approximately $155 for the faster GTX 1050 Ti variant. That is to say that the low profile cards should be available at a slight premium over the company's larger GTX 1050 and GTX 1050 Ti graphics cards.

One interesting development from Gigabyte at this year’s CES was the expansion of its Aorus branding and the transition from Xtreme Gaming. Initially used on its RGB LED equipped motherboards, the company is rolling out the brand to its other higher end products including laptops and graphics cards. While it appears that Xtreme Gaming is not going away entirely, Aorus is taking the spotlight with the introduction of the first Aorus branded graphics card: the GTX 1080.

Featuring a triple 100mm fan cooler similar to that of the GTX 1080 Xtreme Gaming 8G, the Aorus GTX 1080 comes with X-patterned LED lighting as well as a backlit Aorus logo on the side and a backlit eagle on the backplate. The cooler is comprised of three 100mm double-stacked fans (the center fan is recessed and spins in the opposite direction of the side fans) over a shrouded, angled aluminum fin stack that connects to the GPU via five large copper heatpipes.

The graphics card is powered by two 8-pin PCI-E power connectors.

In an interesting twist, the card has two HDMI ports on the back that are intended to be used to hook up front-panel HDMI outputs for things like VR headsets. Another differentiator between the upcoming card and the Xtreme Gaming 8G is the backplate, which has a large copper plate secured over the underside of the GPU. Several sites are reporting that this area can be used for watercooling, but I am skeptical: if you are going to go out and buy a waterblock for your graphics card, you might as well buy a block that sits on top of the GPU, not on the area of the PCB opposite it. As is, the copper plate on the backplate certainly won’t hurt cooling, and it looks cool, but that’s all I suspect it is.

Naturally, Gigabyte is not talking clock speeds on this new card, but I expect it to hit at least the same clocks as its Xtreme Gaming 8G predecessor, which was clocked at 1759 MHz base and 1848 MHz boost out of the box, and 1784 MHz base and 1936 MHz boost in OC Mode. Gigabyte also overclocked the memory on that card, up to 10400 MHz in OC Mode.

Gigabyte also had new SLI HB bridges on display bearing the Aorus logo to match the Aorus GPU. The company had Xtreme Gaming SLI HB bridges on hand as well, though, which further suggests that it is not completely retiring that branding (at least not yet).

Pricing has not been announced, but the card will be available in February.

Gigabyte has yet to release official photos of the card or a product page, but it should show up on their website shortly. In the meantime, Paul's Hardware and Think Computers shot some video of the card on the show floor which I have linked above if you are interested in the card. Looking on Amazon, the Xtreme Gaming 1080 8GB is approximately $690 before rebate so I would guess that the Aorus card would come out at a slight premium over that if only for the fact that it is a newer release, has a more expensive backplate and additional RGB LED backlighting.

Having already looked at AMD's performance with two RX 480s in a system, the recent patch enabling support for multiple NVIDIA GPUs has dragged [H]ard|OCP back into the game. Lacking a pair of Titan X cards, they tested the performance of pairs of GTX 1080s and 1070s; the GTX 1060 will not be receiving support from Croteam. It would seem that adding a second Pascal card to your system will benefit you, though the scaling they saw was nowhere near as impressive as with the AMD RX 480s, which saw a 36% boost. Check out the full results here, and yes, in this case the m in mGPU indicates multiple GPUs, not mobile.

"Serious Sam VR was the first commercial enthusiast gaming title to include multi-GPU support with AMD's RX 480 GPU. Now the folks at Croteam have added mGPU support for NVIDIA cards as well. We take a look at how well NVIDIA's VRSLI technology fares in this VR shooter title."

A Holiday Project

A couple of years ago, I performed an experiment around the GeForce GTX 750 Ti graphics card to see if we could upgrade basic OEM, off-the-shelf computers to become competent gaming PCs. The key to this potential upgrade was that the GTX 750 Ti offered a great amount of GPU horsepower (at the time) without the need for an external power connector. Lower power requirements on the GPU meant that even the most basic of OEM power supplies should be able to do the job.

That story was a success, both in terms of the result in gaming performance and the positive feedback it received. Today, I am attempting to do that same thing but with a new class of GPU and a new class of PC games.

The goal for today’s experiment remains pretty much the same: can a low-cost, low-power GeForce GTX 1050 Ti graphics card that also does not require any external power connector offer enough gaming horsepower to upgrade current shipping OEM PCs to "gaming PC" status?

Our target PCs for today come from Dell and ASUS. I went into my local Best Buy just before the Thanksgiving holiday and looked for two machines that varied in price and relative performance.

The specifications of these two machines are relatively modern for OEM computers. The Dell Inspiron 3650 uses a modest dual-core Core i3-6100 processor with a fixed clock speed of 3.7 GHz, a 1TB standard hard drive, and a 240 watt power supply. The ASUS M32CD-B09 has a quad-core HyperThreaded processor with a 4.0 GHz maximum Turbo clock, a 1TB hybrid hard drive, and a 350 watt power supply. Both CPUs share the same Intel integrated graphics, the HD Graphics 530. You’ll see in our testing that not only is this integrated GPU unqualified for modern PC gaming, but it also performs quite differently based on the CPU it is paired with.

We have a lot of gaming notebooks

Back in April I did a video with MSI that looked at all of the gaming notebook lines it built around the GTX 900-series of GPUs. Today we have stepped it up a notch, and again are giving you an overview of MSI's gaming notebook lines that now feature the ultra-powerful GTX 10-series using NVIDIA's Pascal architecture. That includes the GTX 1060, GTX 1070 and GTX 1080.

What differentiates the various series of notebooks from MSI? The GE series is for entry-level notebook gaming, the GS series offers slim options, while the GT series is the ultimate mobile PC gaming platform.

If you own one of EVGA's ACX 3.0-cooled GTX 10-series cards, you have reason to be concerned, and EVGA offers its apologies and, more importantly, a fix. EVGA's tests, which emulate the ones performed at Tom's Hardware, show that the temperature of the PWM components and memory was just marginally within spec. That is a fancy way of saying that in certain circumstances the PWM was running just short of causing a critical thermal incident, also known as catching on fire and letting out the magic smoke. They claim this happened because their testing focused on GPU temperature and the lowest acoustic levels possible and did not involve measuring the heat produced by the memory or the VRM, which is, as they say, a problem.

You have several choices of remedy from EVGA; please remember that you should reach out directly to EVGA's support, not NVIDIA's. You can try requesting a refund from the store you purchased the card at, but your best bet is EVGA.

The first option is a cross-ship RMA. Contact EVGA as a guest or with your account to set up an RMA, and they will ship you a replacement card with a new VBIOS that does not have this issue; you won't need to send yours back until the replacement arrives.

Alternatively, you can flash the new VBIOS yourself, which adjusts the fan-speed curve so that the fans never run below 30% and provide sufficient cooling to additional portions of the card. Your card will be louder, but it will also be less likely to commit suicide in dramatic fashion.
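EVGA has not published the exact curve, but as a minimal sketch (the curve endpoints below are hypothetical; only the 30% floor comes from EVGA's description), the VBIOS change amounts to clamping the fan's duty cycle:

```python
# Sketch of a fan curve with a minimum-duty floor (hypothetical endpoints):
# the fan never drops below 30%, keeping airflow over the VRM and memory
# even when the GPU core itself is cool.

def fan_duty(gpu_temp_c: float, min_duty: float = 30.0) -> float:
    """Map GPU temperature to fan duty cycle (%), with a floor and ceiling."""
    # Illustrative linear curve: 20% at 30C rising to 100% at 85C.
    curve = 20.0 + (gpu_temp_c - 30.0) * (100.0 - 20.0) / (85.0 - 30.0)
    return max(min_duty, min(100.0, curve))

print(fan_duty(30))            # 30.0 -- the old curve would have idled at 20%
print(round(fan_duty(70), 1))  # 78.2 -- above the floor, the curve is unchanged
```

The trade-off is exactly as described: more idle noise in exchange for cooler power circuitry.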

Lastly, you can request a thermal pad kit, which EVGA suggests is unnecessary but certainly sounds like a good idea, especially as it is free, although it requires you to sign up for an EVGA account. Hopefully, in the spare seconds currently available to the team, we can get our hands on an ACX 3.0-cooled Pascal card with the VBIOS update and thermal pads so we can verify this for you.

This issue should not have happened and does reflect badly on certain aspects of EVGA's testing. On the other hand, their response has been very appropriate: if you are affected, you can get a replacement card with no issues or fix the issue yourself. Any cards shipped, though not necessarily purchased, after Nov. 1st will have the new VBIOS, so be careful if you are buying a new EVGA Pascal card.

The Guru of 3D tested MSI's GeForce GTX 1050 and 1050 Ti, with MSRPs of $109 and $139 respectively. The non-Ti version has the lowest count of texture mapping units of this generation but a higher GPU frequency than the Ti model; it also has the smallest amount of memory at 2GB, though at least the memory is clocked the same in both models. DirectX 12 testing offers variable results: in many games the two are bookends to the RX 460, with the GTX 1050 a bit slower and the 1050 Ti a bit faster, but this does not hold true in all games. DirectX 11 results were more favourable for this architecture, with the two cards climbing in the rankings and the 1050 Ti offering acceptable performance. Check out their full review here.

"Last week Nvidia announced the GeForce GTX 1050 series, with two primary models. In this article we'll review the MSI GeForce GTX 1050 and 1050 Ti Gaming X, two graphics cards aimed at the budget minded consumer. We say budget minded as these cards are very affordable and positioned in an attractive 109 and 139 dollar (US) segment."