"This update extends the per pin bandwidth to 2.4 Gbps, adds a new footprint option to accommodate the 16 Gb-layer and 12-high configurations for higher density components, and updates the MISR polynomial options for these new configurations."

The revised spec brings the JEDEC standard up to the level we saw with Samsung's "Aquabolt" HBM2 and its 307.2 GB/s per-stack bandwidth, but with 12-high TSV stacks (up from 8), which raises memory capacity from 8 GB to a whopping 24 GB per stack.

ARLINGTON, Va., USA – DECEMBER 17, 2018 – JEDEC Solid State Technology Association, the global leader in the development of standards for the microelectronics industry, today announced the publication of an update to JESD235 High Bandwidth Memory (HBM) DRAM standard. HBM DRAM is used in Graphics, High Performance Computing, Server, Networking and Client applications where peak bandwidth, bandwidth per watt, and capacity per area are valued metrics to a solution’s success in the market. The standard was developed and updated with support from leading GPU and CPU developers to extend the system bandwidth growth curve beyond levels supported by traditional discrete packaged memory. JESD235B is available for download from the JEDEC website.

JEDEC standard JESD235B for HBM leverages Wide I/O and TSV technologies to support densities up to 24 GB per device at speeds up to 307 GB/s. This bandwidth is delivered across a 1024-bit wide device interface that is divided into 8 independent channels on each DRAM stack. The standard can support 2-high, 4-high, 8-high, and 12-high TSV stacks of DRAM at full bandwidth, allowing systems flexibility on capacity requirements from 1 GB to 24 GB per stack.

This update extends the per-pin bandwidth to 2.4 Gbps, adds a new footprint option to accommodate 16 Gb layers and 12-high configurations for higher density components, and updates the MISR polynomial options for these new configurations. Additional clarifications are provided throughout the document to address test features and compatibility across generations of HBM components.
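The headline figures follow directly from the interface width, per-pin rate, and stack geometry. A quick sanity check of the arithmetic:

```python
# Per-stack bandwidth: 1024-bit interface at 2.4 Gb/s per pin
pins = 1024                       # data bits per stack interface (8 channels x 128 bits)
rate_gbps = 2.4                   # per-pin data rate, Gb/s
bandwidth = pins * rate_gbps / 8  # bits -> bytes
print(f"{bandwidth:.1f} GB/s per stack")  # 307.2 GB/s per stack

# Per-stack capacity: 16 Gb layers in a 12-high TSV stack
capacity = 16 * 12 / 8            # Gb -> GB
print(f"{capacity:.0f} GB per stack")     # 24 GB per stack
```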

We've been hearing rumors about a GeForce RTX 2060 since at least August, with screen captures of a reported mid-range Pascal card (then assumed to be "GTX" 2060) - seemingly with GTX 1080 levels of performance - surfacing at that time.

Then in November there was the reported Final Fantasy XV benchmark leak, showing performance a little below a GTX 1070 with the game running at 3840x2160 (high quality preset) - but this was possibly the mobile SKU according to leaker APISAK on Twitter.

It seems fair to assume that a launch is imminent, with reports of a potential announcement the second week of January which may or may not coincide with CES 2019. As to final specs and pricing? Let the speculation commence!

AMD today released the latest major update to its Radeon software and driver suite. Building on the groundwork laid last year, AMD Radeon Software Adrenalin 2019 Edition brings a number of new features and performance improvements.

With this year’s software update, AMD continues to make significant gains in game performance compared to last year’s driver release, with performance gains averaging up to 15 percent across a range of popular titles. Examples include Assassin’s Creed Odyssey (11%), Battlefield V (39%), and Shadow of the Tomb Raider (15%).

New Features

Beyond performance, Adrenalin 2019 Edition introduces a number of new and improved features. Highlights include:

Game Streaming: Radeon gamers can now stream any game or application from their PCs to their mobile devices via the AMD Link app at up to 4K 60fps. The feature supports both on-screen controls as well as Bluetooth controllers. ReLive streaming is also expanding to VR, with users able to stream games and videos from their PCs to standalone VR headsets via new AMD VR store apps. This includes Steam VR titles, allowing users to play high-quality PC-based VR games on select standalone headsets. AMD claims that its streaming technology offers “up to 44% faster responsiveness” than other game streaming solutions.

ReLive Streaming and Sharing: Gamers more interested in streaming their games to other people will find several new features in AMD’s ReLive feature, including adjustable picture-in-picture instant replays from 5 to 30 seconds, automatic GIF creation, and a new scene editor with more stream overlay options and hotkey-based scene transition control.

Radeon Game Advisor: A new overlay available in-game that helps users designate their target experience (performance vs. quality) and then recommends game-specific settings to achieve that target. Since the tool is running live alongside the game, it can respond to changes as they occur and dynamically recommend updated settings and options.

Radeon Settings Advisor: A new tool in the Radeon Software interface that scans system configuration and settings and recommends changes (e.g., enabling or disabling Radeon Chill, changing the display refresh rate, enabling HDR) to achieve an optimal gaming experience.

Display Improvements: FreeSync 2 can now tone-map HDR content to look better on displays that don’t support the full color and contrast of the HDR spec, and AMD’s Virtual Super Resolution feature is now supported on ultra-wide displays.

Radeon Overlay: AMD’s Overlay feature, which allows gamers to access certain Radeon features without leaving their game, has been updated to display system performance metrics, WattMan configuration options, Radeon Enhanced Sync controls, and the aforementioned Game Advisor.

Rumors have appeared online that suggest NVIDIA may be launching mobile versions of its RTX 2070 and RTX 2060 GPUs based on its new Turing architecture. The new RTX 2070 and RTX 2060 with Max-Q designs were leaked by Twitter user TUM_APISAK who posted cropped screenshots of Geekbench 4.3.1 and 3DMark 11 Performance results.

Allegedly handling graphics duties in a Lenovo 81HE, the GeForce RTX 2070 with Max-Q Design (8 GB VRAM), combined with a Core i7-8750H Coffee Lake six-core CPU and 32 GB of system memory, managed a Geekbench 4.3.1 score of 223,753. The GPU supposedly has 36 Compute Units (CUs) and a core clock speed of 1,300 MHz. The desktop RTX 2070, which is already available, also has 36 CUs with 2,304 CUDA cores, 144 texture units, 64 ROPs, 288 Tensor cores, and 36 RT (ray tracing) cores. The desktop GPU has a 175 W reference (non-FE) TDP and clocks of 1410 MHz base and 1680 MHz boost (1710 MHz for the Founders Edition). Assuming that 36 CU number is accurate, the mobile RTX 2070 may well have the same core counts, just running at lower clocks, which would be nice to see but would require a beefy mobile cooling solution.
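Those counts line up with Turing's published per-SM resources (Geekbench reports SMs as "Compute Units"), so scaling from the leaked 36-SM figure reproduces the desktop RTX 2070's specs:

```python
# Published per-SM resources for NVIDIA's Turing architecture
CUDA_PER_SM   = 64
TENSOR_PER_SM = 8
RT_PER_SM     = 1
TMU_PER_SM    = 4

sms = 36  # the "Compute Units" Geekbench reports for the RTX 2070 Max-Q
print(sms * CUDA_PER_SM)    # 2304 CUDA cores
print(sms * TENSOR_PER_SM)  # 288 Tensor cores
print(sms * RT_PER_SM)      # 36 RT cores
print(sms * TMU_PER_SM)     # 144 texture units
```

The same scaling applied to a 30-SM part gives 1,920 CUDA cores, 240 Tensor cores, and 30 RT cores, matching the rumored RTX 2060 figures.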

As for the RTX 2060 with Max-Q Design, fewer specifications leaked: the leak was limited to two screenshots, allegedly from Final Fantasy XV's benchmark results page, comparing a desktop RTX 2060 with a Max-Q RTX 2060. The screenshots do not reveal the number of CUs (or other figures like CUDA/Tensor/RT cores, TMUs, and ROPs), for example. The comparison does lend further credence to rumors of the RTX 2060 utilizing 6 GB of GDDR6 memory, though. Tom's Hardware does have a screenshot showing the RTX 2060 with 30 CUs, which suggests 1,920 CUDA cores, 240 Tensor cores, and 30 RT cores, with clocks up to 1.2 GHz (which does mesh well with previous rumors of the desktop part).

| | Desktop RTX 2060 | RTX 2060 with Max-Q Design |
| --- | --- | --- |
| Graphics Card | Generic VGA | Generic VGA |
| Memory | 6144 MB | 6144 MB |
| Core clock | 960 MHz | 975 MHz |
| Memory Clock | 1750 MHz | 1500 MHz |
| Driver name | NVIDIA GeForce RTX 2060 | NVIDIA GeForce RTX 2060 with Max-Q Design |
| Driver version | 25.21.14.1690 | 25.21.14.1693 |

Also, the TU106-based RTX 2060 with Max-Q Design reportedly has a 975 MHz core clock and a 1500 MHz (6 GHz effective) memory clock. Note that the desktop card's 960 MHz core clock and 1750 MHz (7 GHz effective) memory clock don't match previous RTX 2060 rumors, which suggested higher GPU clocks in particular (up to 1.2 GHz). To be fair, the software could simply be reporting incorrect numbers since the GPUs are not yet official. One final bit of leaked information is a note about 3DMark 11 performance, with the RTX 2060 Max-Q Design GPU hitting at least 19,000 in the benchmark's Performance preset, which allegedly puts it between the scores of the mobile GTX 1070 and the mobile GTX 1070 Max-Q. (A graphics score between nineteen and twenty thousand would put it a bit above a desktop GTX 1060 but well below the desktop GTX 1070.)
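The parenthetical "effective" figures reflect the quad data rate of GDDR6: the tools report the base memory clock, and the per-pin data rate is four times that. Assuming the rumored (not confirmed) 192-bit bus that a 6 GB configuration implies, the bandwidth gap between the two leaked configurations works out as:

```python
def gddr6_bandwidth(base_clock_mhz, bus_width_bits=192):
    """Memory bandwidth in GB/s for quad-data-rate memory.

    bus_width_bits=192 is an assumption based on the rumored
    6 GB configuration, not a confirmed spec.
    """
    effective_gbps = base_clock_mhz * 4 / 1000  # e.g. 1750 MHz -> 7 Gb/s per pin
    return effective_gbps * bus_width_bits / 8  # bits -> bytes

print(gddr6_bandwidth(1750))  # 168.0 GB/s (desktop RTX 2060 as leaked)
print(gddr6_bandwidth(1500))  # 144.0 GB/s (Max-Q RTX 2060 as leaked)
```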

As usual, take these rumors and leaked screenshots with a healthy heaping of salt, but they are interesting nonetheless. Combined with the news about NVIDIA possibly announcing new mid-range GPUs at CES 2019, we may well see new laptops and other mobile graphics solutions shown off at CES and available within the first half of 2019 which would be quite the coup.

What are your thoughts on the rumored RTX 2060 for desktops and its mobile RTX 2060 and RTX 2070 Max-Q siblings?

Vega meets Radeon Pro

Professional graphics cards are a segment of the industry that can look strange to gamers and PC enthusiasts. From the outside, it appears that businesses are paying more for almost identical hardware when compared to their gaming counterparts from both NVIDIA and AMD.

However, a lot goes into a professional-level graphics card that makes all the difference to the consumers they are targeting. From the addition of ECC memory to protect against data corruption, all the way to a completely different driver stack with specific optimizations for professional applications, there's a lot of work put into these particular products.

The professional graphics market has gotten particularly interesting in the last few years with the rise of the NVIDIA TITAN-level GPUs and "Frontier Edition" graphics cards from AMD. While lacking ECC memory, these new GPUs have brought over some of the application level optimizations, while providing a lower price for more hobbyist level consumers.

However, if you're a professional that depends on a graphics card for mission-critical work, these options are no replacement for the real thing.

The logo, with the familiar "V" joined by a couple of new stripes on the right side, could mean a couple of things: the added strokes might reference Vega II (2), or perhaps the full mark reads as VII, the Roman numeral 7, for 7nm instead? VideoCardz.com thinks the latter may be the case:

"AMD has registered a new trademark just 2 weeks ago. Despite many rumors floating around about Navi architecture and its possible early reveal or announcement in January, it seems that AMD is not yet done with Vega. The Radeon Vega logo, which features the distinctive V lettering, has now received 2 stripes, to indicate the 7nm die shrink."

Whatever the case may be, it's interesting to consider the possibility of a 7nm Vega GPU before we see Navi. We really don't know, though it does seem a bit presumptuous to expect a new product as early as CES, as TechRadar speculates:

"We know full well that the next generation of AMD graphics will be built upon a 7nm architecture going by the roadmaps the company released at CES 2018. At the same time, it seems to all sync up with AMD's plans to announce new 7nm GPUs at CES 2019, so it almost seems certain that we’ll see Vega II graphics cards soon."

The prospect of new graphics cards is always tantalizing, but we'll need more than a logo before things really get interesting.

After first announcing it last month, UL this weekend provided new information on its upcoming ray tracing-focused addition to the 3DMark benchmarking suite. Port Royal, what UL calls the "world's first dedicated real-time ray tracing benchmark for gamers," will launch Tuesday, January 8, 2019.

For those eager for a glimpse of the new ray-traced visual spectacle, or for the majority of gamers without a ray tracing-capable GPU, the company has released a video preview of the complete Port Royal demo scene.

Access to the new Port Royal benchmark will be limited to the Advanced and Professional editions of 3DMark. Existing 3DMark users can upgrade to the benchmark for $2.99, and it will become part of the base $29.99 Advanced Edition package for new purchasers starting January 8th.

Real-time ray tracing promises to bring new levels of realism to in-game graphics. Port Royal uses DirectX Raytracing to enhance reflections, shadows, and other effects that are difficult to achieve with traditional rendering techniques.

As well as benchmarking performance, 3DMark Port Royal is a realistic and practical example of what to expect from ray tracing in upcoming games: ray tracing effects running in real time at reasonable frame rates at 2560 × 1440 resolution.

3DMark Port Royal was developed with input from AMD, Intel, NVIDIA, and other leading technology companies. We worked especially closely with Microsoft to create a first-class implementation of the DirectX Raytracing API.

Port Royal will run on any graphics card with drivers that support DirectX Raytracing. As with any new technology, there are limited options for early adopters, but more cards are expected to get DirectX Raytracing support in 2019.

3DMark can be acquired via Steam or directly from UL's online store. The Advanced Edition, which includes access to all benchmarks, is priced at $29.99.

MSI is launching a refreshed GTX 1060 graphics card that uses GDDR5X rather than GDDR5 for its 6 GB of video memory. The aptly named GTX 1060 Armor 6GD5X OC shares many features with the existing Armor 6G OC (and OCV1) cards it refreshes, including the dual TORX fan Armor 2X cooler and a maximum of four display outputs among three DisplayPort 1.4, one HDMI 2.0b, and one DVI-D.

The new Pascal-based GPU in the upcoming graphics card is reportedly a cut-down variant of NVIDIA's larger GP104 chip rather than the GP106-400 used for previous GTX 1060s, but the core count and other compute resources remain the same: 1,280 CUDA cores, 80 TMUs, 48 ROPs, and a 192-bit memory bus. Clock speeds have been increased slightly versus reference specifications, however, at 1544 MHz base and up to 1759 MHz boost. The GPU is paired with 6 GB of GDDR5X that is, curiously, clocked at 8 GHz. The memory more than likely has quite a bit of overclocking headroom versus GTX 1060 6GB cards using GDDR5, but it appears MSI is leaving those pursuits for enthusiasts to explore on their own.
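The 8 GHz clock is "curious" because, on the same 192-bit bus, it yields exactly the bandwidth of the GDDR5 reference card out of the box; the potential gain only appears if the GDDR5X is pushed toward the higher data rates the memory type is rated for. A rough sketch (the 10 Gbps figure is a hypothetical overclock, not an MSI spec):

```python
def bandwidth_gbs(effective_gbps, bus_width_bits=192):
    # bandwidth = per-pin data rate x bus width, converted bits -> bytes
    return effective_gbps * bus_width_bits / 8

print(bandwidth_gbs(8))   # 192.0 GB/s as shipped (same as a GDDR5 reference GTX 1060)
print(bandwidth_gbs(10))  # 240.0 GB/s at a hypothetical 10 Gbps GDDR5X overclock
```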

MSI is equipping its GTX 1060 Armor 6GD5X OC graphics cards with an 8+6 pin PCI-E power connection setup, which should help overclockers push the cards as far as they can (previous GTX 1060 Armor OC cards had only a single 8-pin). According to the specification page, the new card measures 276mm x 140mm x 41mm, slightly shorter but with a thicker cooler than the GDDR5-based card. As part of the Armor series, the card has a white and black design like its predecessors.
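The extra 6-pin connector matters for overclocking because of the fixed power budgets in the PCI Express specification. A quick tally of the ceilings:

```python
# Power budgets defined by the PCI Express specification
SLOT_W      = 75   # PCIe x16 slot
SIX_PIN_W   = 75   # 6-pin auxiliary connector
EIGHT_PIN_W = 150  # 8-pin auxiliary connector

previous_armor = SLOT_W + EIGHT_PIN_W              # single 8-pin: 225 W ceiling
new_armor      = SLOT_W + EIGHT_PIN_W + SIX_PIN_W  # 8+6 pin: 300 W ceiling
print(previous_armor, new_armor)  # 225 300
```

Either ceiling is far above a GTX 1060's stock power draw, but the extra headroom gives overclockers more margin.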

MSI has not yet released pricing or availability information but with the GDDR5-based graphics cards priced at around $275 I would suspect the MSI GTX 1060 Armor 6GD5X OC to sit around $290 at launch.

I am curious how well new GTX 1060 graphics cards will perform when paired with faster GDDR5X memory and how the refreshed cards stack up against AMD's refreshed Polaris 30 based RX 590 graphics cards.

Imagination Technologies has just announced the Series3NX line of Neural Network Accelerator (NNA) architectures. These products are designs that can be licensed by system-on-a-chip (SoC) manufacturers to include in their designs. The previous design, Series2NX, has seen some design wins, which Imagination claims are “predominantly focused in the mobile and automotive markets”.

Actually, there are two announcements today: Series3NX and Series3NX-F.

The base NNA core is the Series3NX. Their press kit lists five SKUs: the AX3125 with 0.6 trillion operations per second (TOPS), the AX3145 with 1.2 TOPS, the AX3165 with 2.4 TOPS, the AX3185 with 5 TOPS, and the AX3195 with 10 TOPS. Multiple cores can be integrated at the same time, which allows products with over 160 TOPS of performance. These designs are available now for licensing.
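The "over 160 TOPS" figure is consistent with simply tiling the largest core; for example, a hypothetical 16-instance AX3195 configuration (the core count here is an illustrative assumption, not a disclosed product):

```python
AX3195_TOPS = 10  # largest single-core Series3NX SKU
cores = 16        # hypothetical multi-core configuration
print(cores * AX3195_TOPS)  # 160 TOPS
```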

This brings us to the Series3NX-F. This product combines a Series3NX core with a programmable, floating-point processor (based on the latest PowerVR Rogue architecture) and some RAM. This will be available to license in Q1 2019.

Imagination Technologies has just launched three new GPUs: the PowerVR 9XEP, the PowerVR 9XMP, and the PowerVR 9XTP. The 9XEP is designed for casual gaming and UI, the 9XMP is designed for mid-level mobile gaming, and the 9XTP is for high-end mobile-and-up.

The press release notes that, with the release of Fortnite and PUBG on mobile platforms, gaming is pushing devices toward larger GPUs. As a result, they have worked on gaming-centric features like anisotropic filtering to improve performance and image quality. They specifically mention a 2x performance boost in anisotropic filtering and a 4x increase in shadow sample performance on the 9XMP.

These designs cover a lot of segments; check out Imagination’s slides above.