Just two days ago, NVIDIA published a job posting for a software engineer to “implement and extend 3D graphics and Metal”. Given that they specify the Metal API, and that they want applicants who are “Experienced with OSX and/or Linux operating systems”, it seems clear that this job would involve macOS and/or iOS.

Second, and this is where it gets potentially newsworthy: NVIDIA hasn’t really done a whole lot on Apple platforms for a while. The most recent NVIDIA GPU to see macOS is the GeForce GTX 680. It’s entirely possible that NVIDIA needs someone to fill in and maintain those old components. If that’s the case? Business as usual. Nothing to see here.

The other possibility is that NVIDIA might be expecting a design win with Apple. What would that win be? Who knows. It could be something as simple as Apple’s external GPU architecture allowing the user to select their own add-in board. Alternatively, Apple could have selected an NVIDIA GPU for one or more product lines, which it has not done since 2013 (as far as I can tell).

Apple typically makes big announcements at WWDC, which is expected in early June, or around the back-to-school season in September. I’m guessing we’ll know by then at the latest if something is in the works.

AMD is quoting frame rate increases in the range of ~3-4% with this driver, although PUBG can see up to 7% if you compare it with 17.12.1. They don’t seem to list any fixes, although there are a handful of known issues, like FreeSync coming online during Google Chrome video playback, refreshing incorrectly and causing flicker. There’s also a system hang that could occur when twelve GPUs are performing a compute task. I WONDER WHAT CONDITIONS WOULD CAUSE THAT.

Valve has announced support for AMD's TrueAudio Next technology in its Steam Audio SDK for developers. The partnership will allow game and VR application developers to reserve a portion of a GCN-based GPU's compute units for audio processing, increasing the quality and quantity of audio sources as a result. AMD's OpenCL-based TrueAudio Next can run on CPUs as well, but its strength is the ability to run on a dedicated portion of the GPU. Because audio threads are not competing with rendering for the same GPU resources during complex scenes, frame times improve, and the GPU can process complex audio scenes and convolutions much more efficiently than a CPU (especially as the number of sources and impulse responses increases).

Steam Audio's TrueAudio Next integration is being positioned as an option for developers and the answer to increasing the level of immersion in virtual reality games and applications. While TrueAudio Next is not using ray tracing for audio, it is physics-based and can be used to great effect to create realistic scenes with large numbers of direct and indirect audio sources, ambisonics, increased impulse response lengths, echoes, reflections, reverb, frequency equalization, and HRTF (Head Related Transfer Function) 3D audio. According to Valve, indirect audio from multiple sources with convolution reverb is one of the most computationally intensive parts of Steam Audio, and TAN is able to handle it much more efficiently and accurately without affecting GPU frame times, while freeing the CPU up for additional physics and AI tasks that it is much better suited for anyway. Convolution is a way of modeling and filtering audio to create effects such as echoes and reverb. In the case of indirect audio, Steam Audio uses ray tracing to generate an impulse response (it measures the distance and path audio would travel from source to listener), and then convolution is used to generate a reverb effect which, while very accurate, can be quite computationally intensive since it requires hundreds of thousands of sound samples. Ambisonics further represent the directional nature of indirect sound, which helps to improve positional audio and the immersion factor as sounds are modeled closer to the real world.
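To make that convolution step concrete, here is a minimal sketch in Python (using NumPy/SciPy rather than the actual OpenCL-based TAN kernels) of applying an impulse response to a dry signal to produce reverb. The tone and the decaying-noise impulse response are synthetic stand-ins for what Steam Audio would generate via ray tracing:

```python
import numpy as np
from scipy.signal import fftconvolve

sample_rate = 48000

# Synthetic stand-ins: a 1-second dry signal and a 0.5-second
# exponentially decaying noise burst acting as a crude "room" impulse response.
t = np.arange(sample_rate) / sample_rate
dry = np.sin(2 * np.pi * 440.0 * t)  # 440 Hz tone
ir = np.random.randn(sample_rate // 2) * np.exp(-6 * np.linspace(0, 1, sample_rate // 2))

# Convolving the dry signal with the impulse response yields the reverberant
# ("wet") signal. Done naively this costs O(N*M) multiply-adds per source,
# which is why TAN offloads FFT-based convolution to the GPU.
wet = fftconvolve(dry, ir)
print(f"{dry.size} dry samples * {ir.size} IR samples -> {wet.size} wet samples")
```

Now multiply that cost by every sound source in a scene (and by longer impulse responses for more realistic spaces) and the appeal of dedicating idle compute units to the job becomes obvious.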

In addition to the ability to dedicate a portion (up to 20-25%) of a GPU's compute units to audio processing, developers can enable or disable TrueAudio processing, including the level of acoustic complexity and detail, on a scene-by-scene basis. Currently it appears that developers can hook into Steam Audio and the TrueAudio Next features via Unity, FMOD Studio, and the C API, but it remains up to developers to use the features and integrate them into their games.

Note that GPU-based TrueAudio Next requires a GCN-based graphics card of the RX 470, RX 480, RX 570, RX 580, R9 Fury, R9 Fury X, Radeon Pro Duo, RX Vega 56, or RX Vega 64 variety in order to work, so that is a limiting factor in adoption, much like the various hair and facial tech is for AMD and NVIDIA on the visual side of things: the question becomes whether the target market is large enough to encourage developers to put in the time and effort to enable an optional feature.

I do not pretend to be an audio engineer, nor do I play a GPU programmer on TV, but more options are always good, and I hope that developers take advantage of the resource reservation and GPU compute convolution algorithms of TrueAudio Next to further the immersion factor of audio as much as they have the visual side of things. As VR continues to become more relevant, I think developers will have to start putting more emphasis on accurate and detailed audio, and that's a good thing for an aspect of gaming that has seemingly taken a backseat since Windows Vista.

What are your thoughts on the state of audio in gaming and Steam Audio's new TrueAudio Next integration?

MSI is updating its Radeon RX 580 Armor series with a new MK2 variant (in both standard and OC editions) that features an updated cooler with a red and black color scheme and a metal backplate, along with Torx 2.0 fans.

The graphics card is powered by a single 8-pin PCI-E power connection and has two DisplayPort, two HDMI, and one DVI display outputs. MSI claims the MK2 cards use its Military Class 4 hardware, including high end solid capacitors. The large heatsink features three copper heatpipes and a large aluminum fin stack. It appears that the cards are using the same PCB as the original Armor series, but it is not clear from MSI’s site if they have done anything different with the power delivery.

The RX 580 Polaris GPU runs at a slight factory overclock out of the box, with a boost clock of up to 1353 MHz (reference is 1340) for the standard edition and up to 1366 MHz for the RX 580 Armor MK2 OC Edition. The OC edition can further clock up to 1380 MHz when run in OC mode using the company’s software utility (enthusiasts can attempt to go beyond that, but MSI makes no guarantees). Both cards come with 8GB of GDDR5 memory clocked at the reference 8 GHz.

MSI did not release pricing or availability, but expect the cards to be difficult to find and priced well above MSRP when they are in stock. If you have a physical Microcenter near you, it might be worth watching for one of these cards there to have a chance of getting one closer to MSRP.

On the odd occasion it is in stock, the GIGABYTE AORUS GTX 1080 Ti Waterforce Xtreme will cost you $1300 or more, about twice its MSRP. The liquid cooled card does come overclocked: Gaming mode offers a 1607MHz base and 1721MHz boost clock, while OC mode runs a 1632MHz base and 1746MHz boost clock. [H]ard|OCP managed to hit an impressive 2038MHz base and 2050MHz boost with 11.6GHz VRAM. Check out the full review to see what that did for its performance.

"GIGABYTE has released a brand new All-In-One liquid cooled GeForce GTX 1080 Ti video card with the AORUS Waterforce Xtreme Edition video card. This video card gives the Corsair Hydro GFX liquid cooled video card some competition, with a higher out-of-box clock speed we’ll see how fast this video card is and if there is any room left for overclocking."

SK Hynix recently updated its product catalog and announced the availability of its eight gigabit (8 Gb) GDDR6 graphics memory. The new chips come in two SKUs and three speed grades: the H56C8H24MJR-S2C parts operate at 14 Gbps and 12 Gbps, while the H56C8H24MJR-S0C parts operate at 12 Gbps (but at a higher voltage than the -S2C SKU) and 10 Gbps. Voltages range from 1.25V for 10 Gbps, and either 1.25V or 1.35V for 12 Gbps, to 1.35V for 14 Gbps. Each 8 Gb GDDR6 memory chip holds 1 GB of memory and can provide up to 56 GB/s of per-chip bandwidth.

While SK Hynix has a long way to go before competing with Samsung’s 18 Gbps GDDR6, its new chips are significantly faster than even its latest GDDR5; the company is still working on bringing 9 Gbps and 10 Gbps GDDR5 to market. As a point of comparison, its fastest 10 Gbps GDDR5 would have a per-chip bandwidth of 40 GB/s versus 56 GB/s for its 14 Gbps GDDR6. A theoretical 8GB graphics card with eight 8 Gb chips running at 10 Gbps on a 256-bit memory bus would have a maximum bandwidth of 320 GB/s. Replacing the GDDR5 with 14 Gbps GDDR6 in the same eight chip, 256-bit bus configuration, the graphics card would hit 448 GB/s of bandwidth. In the Samsung story I noted that the Titan Xp runs twelve 8 Gb GDDR5X memory chips at 11.4 Gbps on a 384-bit bus for bandwidth of 547 GB/s. Replacing the G5X with GDDR6 would ramp up the bandwidth to 672 GB/s if running the chips at 14 Gbps.

Theoretical Memory Bandwidth

| Chip Pin Speed | Per Chip Bandwidth | 256-bit bus | 384-bit bus | 1024-bit (one package) | 4096-bit (4 packages) |
|----------------|--------------------|-------------|-------------|------------------------|-----------------------|
| 10 Gbps        | 40 GB/s            | 320 GB/s    | 480 GB/s    | n/a                    | n/a                   |
| 12 Gbps        | 48 GB/s            | 384 GB/s    | 576 GB/s    | n/a                    | n/a                   |
| 14 Gbps        | 56 GB/s            | 448 GB/s    | 672 GB/s    | n/a                    | n/a                   |
| 16 Gbps        | 64 GB/s            | 512 GB/s    | 768 GB/s    | n/a                    | n/a                   |
| 18 Gbps        | 72 GB/s            | 576 GB/s    | 864 GB/s    | n/a                    | n/a                   |
| HBM2 2 Gbps    | 256 GB/s           | n/a         | n/a         | 256 GB/s               | 1 TB/s                |
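All of the figures above fall out of the same arithmetic: effective pin speed multiplied by bus width, divided by eight to convert bits to bytes. A quick sketch in Python (assuming the usual 32-bit interface per GDDR chip and 1024-bit interface per HBM2 package, as in the configurations above):

```python
def bandwidth_gb_s(pin_speed_gbps: float, bus_width_bits: int) -> float:
    """Theoretical bandwidth in GB/s: pin speed (Gbps) times bus width, over 8 bits/byte."""
    return pin_speed_gbps * bus_width_bits / 8

print(bandwidth_gb_s(14, 32))    # 56.0  GB/s per 14 Gbps GDDR6 chip (32-bit interface)
print(bandwidth_gb_s(14, 256))   # 448.0 GB/s on a 256-bit bus
print(bandwidth_gb_s(18, 384))   # 864.0 GB/s on a 384-bit bus
print(bandwidth_gb_s(2, 4096))   # 1024.0 GB/s (1 TB/s) for four 2 Gbps HBM2 packages
```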

GDDR6 is still a far cry from High Bandwidth Memory levels of performance, but it is much cheaper and easier to produce. With SK Hynix ramping up production and Samsung besting the fastest 16 Gbps G5X, it is likely that the G5X stop-gap will be wholly replaced with GDDR6 while things like the upgraded 10 Gbps GDDR5 from SK Hynix pick up the low end. As more competition enters the GDDR6 space, prices should continue to come down and adoption should ramp up, with next generation GPUs, game consoles, network devices, etc. using GDDR6 for all but the highest tier prosumer and enterprise HPC markets.

AMD announced today that it has hired two new executives to run its graphics division after the departure of Radeon Technologies Group’s previous lead. Raja Koduri left AMD in November to join Intel and launch its new Core and Visual Computing group, creating a hole in the leadership of this critical division at AMD. CEO Lisa Su filled in during Koduri’s sabbatical and subsequent exit, but the company had been searching for the right replacements since late last year.

Appointed as the senior vice president and GM of the Radeon Technologies Group, Mike Rayfield comes to AMD from previous stints at both Micron and NVIDIA. Rayfield will cover all aspects of the business management of AMD’s graphics division, including consumer, professional, game consoles, and the semi-custom division that recently announced a partnership with Intel. At Micron he served as the senior vice president of the Mobile Business Unit, responsible for the company’s direction in working with wireless technology providers (smartphones, tablets, etc.) across various memory categories. While at NVIDIA, Rayfield was the general manager of the Mobile Business Unit, helping to create the Tegra brand and products. Though in a different division at the time, Rayfield’s knowledge and experience in the NVIDIA organization may help AMD better address the graphics markets.

David Wang is now the senior vice president of engineering for the AMD Radeon Technologies Group and is responsible for the development of new graphics architectures, the hardware and software that integrate them, and the future strategy of where AMD will invest in graphics R&D. Wang is an alumnus of AMD, having worked as corporate vice president for graphics IP and chip development before leaving in 2012 for Synaptics. David has more than 25 years of graphics and silicon experience, starting at LSI Logic and moving through ArtX and then ATI, which was acquired by AMD.

The hires come at a critical time for AMD. Though the processor division responsible for the Zen architecture and Ryzen/EPYC processors continues to make strong movement against the Intel-dominated space, NVIDIA’s stranglehold on the graphics markets for gaming, machine learning, and autonomous driving is expanding the gap between the graphics chip vendors. The Vega architecture was meant to close it (at least somewhat), but NVIDIA remains the leader in the space by a not insignificant margin. Changing that is, and should be, AMD’s primary goal for the next few years.

AMD is hoping that by creating this two-headed spear of leadership for its Radeon graphics division it can get the group back on track. Rayfield will be taking over all business aspects of the graphics portion of AMD, and that includes the addition of the semi-custom segment, previously a part of the EESC (Enterprise, Embedded, and Semi-Custom) group under senior vice president Forrest Norrod. AMD believes that with the growth and expansion of the enterprise segment around its EPYC processor family, and because the semi-custom group's emphasis continues to be the advantage AMD holds in its graphics portfolio, the long-term strategy can be better executed with that group under the Radeon Technologies umbrella.

The return of Wang as the technical lead for the graphics division could bring significant positive momentum to a group that struggled in the run-up to the release of its Vega architecture; the product family based on that tech underwhelmed and was dogged by concerns over availability, pricing, and timing. Wang has a strong history in the graphics field, with experience going as far back as any high-level graphics executive in the business. While at ATI and AMD, Wang worked on architectures from 2002 through 2012, with several periods of graphics leadership under his belt. Competing against the giant that NVIDIA has become will be a challenge that requires significant technical knowledge and risk-taking, and Wang has the acumen to get it done.

AMD CEO Lisa Su expressed excitement and trust in the new graphics executives. “Mike and David are industry leaders who bring proven track records of delivering profitable business growth and leadership product roadmaps,” she says. “We enter 2018 with incredible momentum for our graphics business based on the full set of GPU products we introduced last year for the consumer, professional, and machine learning markets. Under Mike and David’s leadership, I am confident we will continue to grow the footprint of Radeon across the gaming, immersive, and GPU compute markets.”

Samsung is now mass producing new higher density GDDR6 memory built on its 10nm-class process technology that it claims offers twice the speed and density of its previous 20nm GDDR5. Samsung's new GDDR6 memory uses 16 Gb dies (2 GB) featuring pin speeds of 18 Gbps (gigabits-per-second) and is able to hit data transfer speeds of up to 72 GB/s per chip.

According to Samsung, its new GDDR6 uses a new circuit design which allows it to run on a mere 1.35 volts. Also good news for Samsung and for memory supply (and thus the pricing and availability of products) is that the company is seeing a 30% gain in manufacturing productivity cranking out its 16Gb GDDR6 versus its 20nm GDDR5.

Running at 18 Gbps, the new GDDR6 offers up quite a bit of bandwidth and will allow for graphics cards with much higher amounts of VRAM. Per package, Samsung's 16Gb GDDR6 offers 72 GB/s, which is twice the density, pin speed, and bandwidth of its 8Gb GDDR5 running at 8 Gbps and 1.5V with data transfers of 32 GB/s. (Note that SK Hynix has announced plans to produce 9 Gbps and 10 Gbps dies which max out at 40 GB/s.) GDDR5X gets closer to this mark, and in theory is able to hit up to 16 Gbps per pin and 64 GB/s per die, but so far the G5X used in real world products has been much slower (the Titan Xp runs at 11.4 Gbps, for example). The Titan Xp runs twelve 8Gb (1GB) dies at 11.4 Gbps on a 384-bit memory bus for maximum memory bandwidth of 547 GB/s. Moving to GDDR6 would enable that same graphics card to have 24 GB of memory (with the same number of dies) with up to 864 GB/s of bandwidth, which is approaching High Bandwidth Memory levels of performance (though it still falls short of newer HBM2, and in practice the graphics card would likely be more conservative on memory speeds). Still, it's an impressive jump in memory performance that widens the gap between GDDR6 and GDDR5X. I am curious how the GPU memory market will shake out in 2018 and 2019 with GDDR5, GDDR5X, GDDR6, HBM, HBM2, and HBM3 all readily available for use in graphics cards, and where each memory type will land, especially on mid-range and high-end consumer cards (HBM2/3 still holds the performance crown and is ideal for the HPC market).

Samsung is aiming its new 18Gbps 16Gb memory at high performance graphics cards, game consoles, vehicles, and networking devices. Stay tuned for more information on GDDR6 as it develops!

Samsung recently announced that it has begun mass production of its second generation HBM2 memory which it is calling “Aquabolt”. Samsung has refined the design of its 8GB HBM2 packages allowing them to achieve an impressive 2.4 Gbps per pin data transfer rates without needing more power than its first generation 1.2V HBM2.

Reportedly, Samsung is using new TSV (through-silicon via) design techniques and adding additional thermal bumps between dies to improve clocks and thermal control. Each 8GB HBM2 “Aquabolt” package is comprised of eight 8Gb dies, each of which is vertically interconnected using 5,000 TSVs, which is a huge number especially considering how small and tightly packed these dies are. Further, Samsung has added a new protective layer at the bottom of the stack to reinforce the package’s physical strength. While the press release did not go into detail, it does mention that Samsung had to overcome challenges relating to “collateral clock skewing” as a result of the sheer number of TSVs.

On the performance front, Samsung claims that Aquabolt offers a 50% increase in per-package performance versus its first generation “Flarebolt” memory, which ran at 1.6 Gbps per pin and 1.2V. Interestingly, Aquabolt is also faster than Samsung’s 2.0 Gbps per pin HBM2 product (which needed 1.35V) without requiring additional power. Samsung also compares Aquabolt to GDDR5, stating that it offers 9.6 times the bandwidth, with a single package of HBM2 at 307 GB/s versus a GDDR5 chip at 32 GB/s. Thanks to the 2.4 Gbps per pin speed, Aquabolt offers 307 GB/s of bandwidth per package, and with four packages, products such as graphics cards can take advantage of 1.2 TB/s of bandwidth.
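Those claims line up with the usual pin-speed arithmetic; since each HBM2 package presents a 1024-bit interface, a quick check looks like this:

```python
hbm2_package = 2.4 * 1024 / 8     # 307.2 GB/s per Aquabolt package
gddr5_chip = 8.0 * 32 / 8         # 32.0 GB/s per 8 Gbps GDDR5 chip (32-bit interface)

print(hbm2_package / gddr5_chip)  # ~9.6x, matching Samsung's comparison
print(4 * hbm2_package / 1000)    # ~1.2 TB/s with four packages on one card
```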

This second generation HBM2 memory is a decent step up in performance (first generation HBM hit 128 GB/s per package and first generation HBM2 hit 256 GB/s, or 512 GB/s and 1 TB/s respectively with four packages), but the interesting bit is that it is faster without needing more power. The increased bandwidth and data transfer speeds will be a boon to the HPC and supercomputing market and useful for working with massive databases, simulations, neural networks and AI training, and other “big data” tasks.

Aquabolt also looks particularly promising for the mobile market, with future products succeeding the current mobile Vega GPU in Kaby Lake-G processors, Ryzen Mobile APUs, and eventually discrete Vega mobile graphics cards getting a nice performance boost (it’s likely too late for AMD to go with this new HBM2 on these specific products, but future refreshes or generations may be able to take advantage of it). I’m sure it will see usage in the SoCs used in Intel’s and NVIDIA’s driverless car projects as well.

NVIDIA is opening up its GeForce NOW cloud gaming service to PC gamers, who will join Mac users (who got access last year) in the free beta. The service uses GeForce GTX graphics cards and high-powered servers to store and run games at high settings and stream the output over the internet back to gamers on any desktop or laptop, old or new (so long as you have at least a 25Mbps internet connection and can meet the basic requirements to run the GeForce NOW application, of course - see below). Currently, NVIDIA supports over 160 games that can be installed on its virtual GeForce NOW gaming PCs, and a select number of optimized titles can even be played at 120 FPS for a smoother gaming experience that is closer to playing locally (allegedly).

GeForce NOW is a bring-your-own-games service in the sense that you install the GeForce NOW app on your local machine and validate the games you have purchased and have the rights to play on Steam and Ubisoft's Uplay PC stores. You are then able to install those games on the cloud-based GeForce NOW machines. Game installations reportedly take around 30 seconds, with game patching, configurations, and driver updates being handled by NVIDIA's GeForce NOW platform. Gamers will be glad to know that the infrastructure further supports syncing with the games' respective stores, and save games, achievements, and settings are synced, allowing potentially seamless transitions between local and remote play sessions.

While many of the titles may need to be tweaked to get the best performance, some games have been certified and optimized by NVIDIA to come pre-configured with the best graphics settings for optimum performance including running them at maximum settings at 1920 x 1080 and 120 Hz.

If you are interested in the cloud-based game streaming service, you can sign up for the GeForce NOW beta here and join the waiting list! According to AnandTech, users will need a Windows 7 (or OS X equivalent) PC with at least a Core i3 clocked at 3.1 GHz, 4GB of RAM, and a DirectX 9 GPU (AMD HD 3000 series / NVIDIA 600 series / Intel HD 2000 series) or better. Beta users are limited to 4 hours per gaming session. There is no word on when the paid GeForce NOW tiers will resume or what the pricing for the rented virtual gaming desktops will be.

I signed up (not sure I'll get in though, maybe they need someone to test with old hardware hah) and am interested to try it as their past streaming attempts (e.g. to the Shield Portable) seemed to work pretty well for what it was (something streamed over the internet).

Hopefully they have managed to make it better and quicker to respond to inputs. Have you managed to get access, and if so, what are your thoughts? Is GeForce NOW the way it's meant to be played? It would be cool to see them add Space Engineers and Sins of a Solar Empire: Rebellion, as while my brother and I have fun playing them, they are quite demanding resource-wise, especially Space Engineers after the planets update!

Just like their teaser indicated, one of the major features of this new Vive hardware is an increased display resolution. The Vive Pro's resolution is 2880×1600 (combined), a 78% increase in pixel count over the standard 2160×1200 resolution shared by the original Vive and the Oculus Rift.
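That 78% figure refers to total pixels rather than per-axis resolution, which works out as follows:

```python
vive_pro = 2880 * 1600   # combined pixels across both Vive Pro displays
original = 2160 * 1200   # combined pixels on the original Vive / Rift
print(f"{vive_pro / original - 1:.0%}")  # 78% more pixels
```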

In addition to the display improvements, there are also some design changes in the Vive Pro that aim to let users quickly put on the headset and adjust it for maximum comfort. The Vive Pro now features a dial on the back of the head strap to adjust the headset rather than having to adjust velcro straps. This setup is very reminiscent of the PSVR headset, which is widely regarded as one of the most comfortable VR headsets currently on the market.

While we've already seen some of these design changes, like integrated headphones, in the currently shipping Deluxe Audio Strap for the Vive, the Vive Pro is built from the ground up with this new strap instead of it being a replacement option.

HTC was very quiet about the change from a single front-facing camera on the standard Vive to dual front cameras on the Vive Pro. Having stereo cameras on the device has the potential to provide a lot of utility, ranging from a stereo view of your surroundings when you are nearing the chaperone boundaries to potential AR applications.

The Vive Pro will work with the current 1.0 base stations for positional tracking, as well as Valve's previously announced but unreleased 2.0 base stations. When using SteamVR 2.0 tracking, the Vive Pro supports up to 4 base stations, allowing for a significantly larger play area of up to 10m x 10m.

Initially, the Vive Pro is slated to ship this quarter as a headset-only upgrade for customers who already have the original Vive with its 1.0 base stations. The full Vive Pro kit with 2.0 tracking is said to ship in the summer time frame. Pricing for both configurations has yet to be announced.

In addition to new headset hardware, HTC also announced their first official solution for wireless VR connectivity.

Built in partnership with Intel, the Vive Wireless Adapter will use 60 GHz WiGig technology to provide a low latency experience for wirelessly streaming video to the HMD. Both the original Vive and the Vive Pro will support the adapter, which is set to be available this summer. We also have no indication of pricing on the Vive Wireless Adapter.

HTC's announcements today are impressive and should help push PC VR forward. We have yet to get hands-on experience with either the Vive Pro or the Vive Wireless adapter, but we have a demo appointment tomorrow, so keep checking PC Perspective for our updated impressions of the next generation of VR!

Although their keynote presentation tonight at CES is all about automotive technology, that hasn't stopped NVIDIA from providing us with a few gaming-related announcements this week. The most interesting of these is what NVIDIA is calling "Big Format Gaming Displays" or BFGDs (get it?!).

Along with partners ASUS, Acer, and HP, NVIDIA has developed what seems to be the ultimate living room display solution for gamers.

Based on an HDR-enabled 65" 4K 120Hz panel, these displays integrate both NVIDIA G-SYNC variable refresh rate technology for smooth gameplay and a built-in NVIDIA SHIELD TV set-top box.

In addition to G-SYNC technology, these displays will also feature a full direct-array backlight capable of a peak luminance of 1000 nits and conform to the DCI-P3 color gamut, both necessary features for a quality HDR experience. These specifications put the BFGDs in line with the current 4K HDR TVs on the market.

Unlike traditional televisions, these BFGDs are expected to have very low input latencies, a significant advantage for both PC and console gamers.

Integration of the SHIELD TV means that these displays will be more than just an extremely large PC monitor; they are capable of replacing the TV in your living room. The Android TV operating system means you will get access to most of the popular streaming video applications, as well as features like Google Assistant and NVIDIA GameStream.

Personally, I am excited at the idea of what is essentially a 65" TV optimized for things like low input latency. The current crop of high-end TVs on the market caters very little to gamers, with game modes that don't turn off all of the image processing effects and still have significant latency.

It's also interesting to see companies like ASUS, Acer, and HP who are well known in the PC display market essentially entering the TV market with these BFGD products.

Stay tuned for eyes-on impressions of the BFGD displays as part of our CES 2018 coverage!

Update: ASUS has officially announced their BFGD offering, the aptly named PG65 (pictured below). We have a meeting with ASUS this week, and we hope to get a look at this upcoming product!

Though just the most basic of teases, AMD confirmed at CES that it will have a 7nm based Vega product sampling sometime in 2018. No mention of shipping timeline, performance, or consumer variants was to be found.

This product will target the machine learning market, with hardware and platform optimizations key to that segment. AMD mentions “new DL Ops”, or deep learning operations, but the company didn’t expand on that. It could mean AMD will integrate Tensor Core style compute units (as NVIDIA did with the Volta architecture), or it may be something more unique. AMD will also integrate new IO, likely to compete with NVLink, and MxGPU support for dividing resources efficiently for virtualization.

AMD did present a GPU “roadmap” at the tech day as well. I put that word in quotes because it is incredibly, and intentionally, vague. You might assume that Navi is being placed into the 2019 window, but it's possible it might show in late 2018. AMD was also unable to confirm if a 7nm Vega variant would arrive for gaming and consumer markets in 2018.

If you were wondering if NVIDIA products are vulnerable to some of the latest security threats, the answer is yes. Your Shield device or GPU is not vulnerable to CVE-2017-5754, aka Meltdown; however, the two variants of Spectre could theoretically be used against you.

Variant 1 (CVE-2017-5753): Mitigations are provided with the security update included in this bulletin. NVIDIA expects to work together with its ecosystem partners on future updates to further strengthen mitigations.

Variant 2 (CVE-2017-5715): Mitigations are provided with the security update included in this bulletin. NVIDIA expects to work together with its ecosystem partners on future updates to further strengthen mitigations.

Variant 3 (CVE-2017-5754): At this time, NVIDIA has no reason to believe that Shield TV/tablet is vulnerable to this variant.

The Android-based Shield tablet should be updated to Shield Experience 5.4, which should arrive before the end of the month. Your Shield TV, should you actually still have a working one, will receive Shield Experience 6.3 in the same time frame.

The GPU is a little more complex as there are several product lines and OSes which need to be dealt with. There should be a new GeForce driver appearing early next week for gaming GPUs, with HPC cards receiving updates on the dates you can see below.

There is no reason to expect Radeon and Vega GPUs to suffer from these issues at this time. Intel could learn a bit from NVIDIA's response, which has been very quick and includes their older hardware.

ASUS today announced the XG Station Pro, a Thunderbolt 3-based external GPU enclosure tailored for both gamers and professionals. The XG Station Pro can accommodate full-size GPUs up to 2.5 slots wide, including large cards such as the ROG Strix 1080 Ti and Radeon RX Vega 64.

Featuring a "contemporary design with clean lines and subtle styling," the XG Station Pro has a footprint of 4.3-inches x 14.8-inches, thanks to ASUS's decision to use an external power supply. In order to provide enough juice for high-end graphics cards, ASUS is borrowing the power supply design from its GX800 gaming laptop, which puts out up to 330 watts.

The XG Station Pro's chassis, designed by case maker In Win, has a smooth dark gray finish with a black PCB and sleeved PCIe power cables. It features a soft white internal glow that can be controlled by ASUS's Aura software, including Aura Sync to synchronize lighting with your compatible ASUS and ROG graphics cards and laptops.

Inside the XG Station Pro, dual 120mm PWM fans provide exhaust out of the right side of the chassis. The fans automatically ramp down and even shut off below certain temperatures, but users can also manually control the fans with the ASUS GPU Tweak II application.

Around back, users will find an extra USB Type-C 3.1 Gen 2 port, which can supply up to 15 watts of power to compatible devices such as smartphones and external storage. Finally, ASUS notes that it includes the required Thunderbolt 3 cable in the box, something that many Thunderbolt-based devices seem to lack.

The ASUS XG Station Pro will launch later this month for $329 with support for both AMD and NVIDIA GPUs in Windows 10, and just AMD Vega-based GPUs in macOS Sierra and newer.

Have a laptop with Thunderbolt 3 and a mobile GPU that just doesn't cut it anymore? Gigabyte now offers an incredibly easy way to upgrade your laptop, with no screwdriver required! The Aorus GTX 1070 Gaming Box contains an external desktop class GTX 1070 and separate PSU, giving you a dock with some serious gaming prowess. The Tech Report's benchmarks compare this external GPU against the GTX 1060 installed in their Alienware gaming laptop and Alienware's own external GPU enclosure, on both the internal display and an external monitor. The results are somewhat mixed and worth reading through fully, however if you are on an integrated GPU then this solution is an incredible upgrade.

"Gigabyte's Aorus GTX 1070 Gaming Box offers us a look into a future where a big shot of graphics performance is just a single cable away for ultraportable notebook PCs. We plugged the Gaming Box into a test notebook and gave it a spin to see just how bright that future looks."

The clock rate is where it gets interesting. The NITRO+ RX Vega 64 will have a boost clock of 1611 MHz out of the box. This is above the RX Vega 64 Air’s boost clock (1546 MHz) but below the RX Vega 64 Liquid’s boost clock (1677 MHz). The liquid-cooled Radeon RX Vega 64 still has the highest clocks, but this product sits almost exactly halfway between the liquid-cooled and air-cooled RX Vega 64.

As for enthusiast features, this card has quite a few ways to keep cool. First, it will operate fanless until 56C. Second, the card accepts a 4-pin fan connector, which allows it to adjust the speed of two case fans based on the temperature readings from the card. I am a bit curious whether it’s better to let the GPU control the fans, or whether having them all attached to the same controller allows them to work together more effectively. Either way, if you’ve run out of motherboard fan headers, then I’m guessing this feature will be good for you anyway.
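Sapphire hasn't published the exact curve the card applies to those case fans, but conceptually it is just mapping the GPU's own temperature reading to a fan duty cycle. A hypothetical sketch (the 56C threshold comes from the spec above; the 85C full-speed point is an assumption for illustration only):

```python
def case_fan_duty(gpu_temp_c: float) -> float:
    """Hypothetical fan curve: off below the card's 56C fanless
    threshold, then ramping linearly to full speed at an assumed 85C."""
    if gpu_temp_c < 56:
        return 0.0                        # fanless mode
    if gpu_temp_c >= 85:
        return 1.0                        # full speed
    return (gpu_temp_c - 56) / (85 - 56)  # linear ramp in between

for temp in (50, 56, 70, 85):
    print(f"{temp}C -> {case_fan_duty(temp):.0%} fan duty")
```

The appeal of the card doing this instead of the motherboard is that the fans react to the component actually producing the heat.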

Phanteks announced their G1080 water block for the GTX 1080 Ti a while back, but we hadn't seen it in action until now. [H]ard|OCP installed the cooler on a Founders Edition card and created a video of the process. Not only do they show how to properly install the water block, they also cover a few of the possible issues you might encounter while doing so. They also made a video showing how the coolant flows through the water block, which is not only pretty but can help you determine where to place the GPU in your watercooling loop.

"Phanteks recently sent us its Glacier series water block for our Founders Edition GTX 1080 Ti. We take you through the full process of getting it installed. We check out the mating surfaces of the GPU, capacitors, and MOSFETs and show you just how well it all fits together. Then finally we show exactly how the coolant flows in 4K!"

NVIDIA launched the new Titan V graphics card last week, a $2,999 part targeted not at gamers (thankfully) but instead at developers of machine learning applications. Based on the GV100 GPU with 12GB of HBM2 memory, the Titan V is an incredibly powerful graphics card. We have every intention of looking at the gaming performance of this card as a "preview" of potential consumer Volta cards that may come out next year. (This is identical to our stance on testing the Vega Frontier Edition cards.)

But for now, enjoy this unboxing and teardown video that takes apart the card to get a good glimpse of that GV100 GPU.

A couple of quick interesting notes:

This implementation has 25% of the memory and ROPs disabled, giving us 12GB of HBM2, a 3072-bit bus, and 96 ROPs (a quick check of those numbers follows this list).

Clock speeds in our testing look to be much higher than the base AND boost ratings.

So far, even though the price takes this out of the gaming segment completely, we are impressed with some of the gaming results we have found.

The cooler might LOOK the same, but it is definitely heavier than the cooler and build of the Titan Xp.

Champagne. It's champagne colored.

Double precision performance is insanely good, spanking the Titan Xp and Vega so far in many tests.
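On that first note, the numbers are easy to sanity check, assuming the full GV100 carries four 4GB HBM2 stacks on a 4096-bit bus with 128 ROPs (disabling one of the four stacks is the 25%):

```python
# Assumed full GV100 configuration: four 4GB HBM2 stacks, 4096-bit bus, 128 ROPs.
full = {"hbm2_gb": 16, "bus_bits": 4096, "rops": 128}

# Disabling 25% (one of the four stacks and a quarter of the ROPs):
titan_v = {key: int(value * 0.75) for key, value in full.items()}
print(titan_v)  # {'hbm2_gb': 12, 'bus_bits': 3072, 'rops': 96}
```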

NVIDIA made a surprising move late Thursday with the simultaneous announcement and launch of the Titan V, the first consumer/prosumer graphics card based on the Volta architecture.

Like recent flagship Titan-branded cards, the Titan V will be available exclusively from NVIDIA for $2,999. Labeled "the most powerful graphics card ever created for the PC," Titan V sports 12GB of HBM2 memory, 5120 CUDA cores, and a 1455MHz boost clock, giving the card 110 teraflops of maximum compute performance. Check out the full specs below:

The NVIDIA Titan V's 110 teraflops of compute performance compares to a maximum of about 12 teraflops on the Titan Xp, a greater than 9X increase in a single generation. Note that this is a very specific claim, though: it references the AI compute capability of the Tensor cores rather than what we traditionally measure for GPUs (single precision FLOPS). By that metric, the Titan V only truly offers a jump to 14 TFLOPS. The addition of expensive HBM2 memory also adds to the high price compared to its predecessor.
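The single precision figure follows from the standard FLOPS arithmetic, counting a fused multiply-add as two operations per CUDA core per clock:

```python
cuda_cores = 5120
boost_ghz = 1.455

# FMA = 2 floating point operations per core per clock.
fp32_tflops = cuda_cores * 2 * boost_ghz / 1000
print(f"{fp32_tflops:.1f} TFLOPS")  # ~14.9 at boost, the ~14 TFLOPS class quoted above
```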

The Titan V is available now from NVIDIA.com for $2,999, with a limit of 2 per customer. And hey, there's free shipping too.