DX11 could rival Mantle

The big story at GDC last week was Microsoft’s reveal of DirectX 12 and the future of the dominant API for PC gaming. There was plenty of build up to the announcement, with Microsoft’s DirectX team posting teasers and starting up a Twitter account for the occasion. I hosted a live blog from the event which included pictures of the slides. It was our most successful event of this type, with literally thousands of people joining in the conversation. Between the debates over the similarities to AMD’s Mantle API and the timeline for the DX12 release, there are plenty of stories to be told.

After the initial session, I wanted to set up meetings with both AMD and NVIDIA to discuss what had been shown and get some feedback on the GPU giants’ planned implementations. NVIDIA presented us with a very interesting set of data that focused not only on the future with DX12, but also on the present of DirectX 11.

The reason for the topic is easy to decipher: AMD has built up the image of Mantle as the future of PC gaming and, with a full 18 months before Microsoft’s DirectX 12 is released, how developers and gamers respond will have an important impact on the market. NVIDIA doesn’t like to talk about Mantle directly, but it obviously feels the need to address the questions in a roundabout fashion. During our time with NVIDIA’s Tony Tamasi at GDC, the discussion centered as much on OpenGL and DirectX 11 as anything else.

What are APIs and why do you care?

For those that might not really understand what DirectX and OpenGL are, a bit of background first. An API (application programming interface) provides an abstraction layer between hardware and software applications. An API can deliver consistent programming models (though the language can vary) and do so across various hardware vendors' products and even between hardware generations. APIs can expose hardware feature sets that range widely in complexity, allowing users to access the hardware without necessarily knowing it in great detail.

Over the years, APIs have developed and evolved but still retain backwards compatibility. Companies like NVIDIA and AMD can improve their DirectX implementations to increase performance or efficiency without (usually, at least) adversely affecting other games or applications. And because games use that same API for programming, changes to how NVIDIA and AMD handle the API integration don’t require game developer intervention.
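That separation between a stable interface and swappable vendor implementations can be sketched in a few lines of illustrative Python. To be clear, the class and method names here are invented for this example and belong to no real graphics API:

```python
from abc import ABC, abstractmethod

# The "API": a stable interface the game programs against.
class Device(ABC):
    @abstractmethod
    def draw_triangles(self, count: int) -> str: ...

# Vendor "drivers": each maps the same call to a different hardware path.
class VendorADevice(Device):
    def draw_triangles(self, count: int) -> str:
        return f"vendor-A command buffer: {count} triangles"

class VendorBDevice(Device):
    def draw_triangles(self, count: int) -> str:
        return f"vendor-B command buffer: {count} triangles"

# The game only ever sees the Device interface, so a vendor can rewrite
# its implementation without the game needing any changes.
def render_frame(device: Device) -> str:
    return device.draw_triangles(1000)

print(render_frame(VendorADevice()))  # vendor-A command buffer: 1000 triangles
```

The game code in `render_frame` is identical no matter which vendor's "driver" is plugged in, which is exactly why driver-side optimizations don't require developer intervention.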

With the release of AMD Mantle, the idea of a “low level” API has been placed in the minds of gamers and developers. The term “low level” can mean many things, but in general it is associated with an API that is more direct, has a thinner set of abstraction layers, and uses less translation from code to hardware. The goal is to reduce the amount of overhead (performance hit) that APIs naturally impose for these translations. With additional performance available, the CPU cycles can be used by the program (game) or slept to improve battery life. In certain cases, GPU throughput can increase where API overhead is impeding the video card's progress.
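A toy model helps show where that overhead goes. The numbers and function names below are made up purely for illustration, but the shape of the argument matches what low-level APIs claim: a traditional driver re-validates render state on every draw call, while a low-level API validates once when an immutable state object is created and then submits draws against it cheaply:

```python
# Toy model (not any real driver) of per-draw API overhead.
# Cost figures are arbitrary units chosen for illustration.
VALIDATION_COST = 50   # pretend CPU cost of validating render state
SUBMIT_COST = 1        # pretend CPU cost of recording one draw

def cost_per_call_validation(num_draws: int) -> int:
    # Traditional model: validate + submit on every single call.
    return num_draws * (VALIDATION_COST + SUBMIT_COST)

def cost_baked_state(num_draws: int) -> int:
    # Low-level model: validate once up front, then cheap submits.
    return VALIDATION_COST + num_draws * SUBMIT_COST

print(cost_per_call_validation(10_000))  # 510000
print(cost_baked_state(10_000))          # 10050
```

In this sketch the validation work is amortized across the whole frame instead of being paid per draw, which is the CPU time a game could reclaim or sleep away for battery life.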

Passing additional control to the game developers, away from the API or GPU driver developers, gives those coders additional power and improves the ability for some vendors to differentiate. Interestingly, not all developers want this kind of control: it requires more time and more development work, and small teams that depend on that abstraction to make coding easier will only see limited performance advantages.

This transition to a lower level API is being driven by the widening performance gap between CPUs and GPUs. NVIDIA provided the images below.

On the left we see performance scaling in terms of GFLOPS, and on the right the metric is memory bandwidth. Clearly the performance of NVIDIA's graphics chips (as have AMD’s) has far outpaced what the best Intel desktop processors have been able to deliver, and that gap means the industry needs to innovate to find ways to close it.

So, for all the discussion about DirectX 12, the three main desktop GPU vendors, NVIDIA, AMD, and Intel, want to tell OpenGL developers how to tune their applications. Using OpenGL 4.2 and a few cross-vendor extensions, because OpenGL is all about its extensions, a handful of known tricks can reduce driver overhead up to ten-fold and increase performance up to fifteen-fold. The talk is very graphics developer-centric, but it basically describes a series of tricks known to accomplish feats similar to what Mantle and DirectX 12 suggest.

The 130-slide presentation is broken into a few sections, each GPU vendor getting a decent chunk of time. On occasion, they would mention which implementation fares better with a given function call. The main point that they wanted to drive home (since they clearly repeated the slide three times with three different fonts) is that none of this requires a new API. Everything exists and can be implemented right now. The real trick is knowing how not to poke the graphics library in the wrong way.
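One of the long-known tricks in this family is simply sorting draws so that expensive driver state changes happen as rarely as possible. Here is a minimal sketch of the idea; the material names and the cost model are invented for the example, not taken from the talk:

```python
def count_state_changes(draws):
    """Each element is the render state a draw needs. Every time the
    state differs from the previous draw, the driver pays for a change."""
    changes, current = 0, None
    for state in draws:
        if state != current:
            changes += 1
            current = state
    return changes

# Alternating materials thrash the driver; sorting groups them together.
draws = ["metal", "glass", "metal", "glass", "metal", "glass"]
print(count_state_changes(draws))          # 6: worst case, one change per draw
print(count_state_changes(sorted(draws)))  # 2: one change per material
```

The draws are identical either way; only their submission order changes, which is why this class of optimization needs no new API at all.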

The page also hosts a keynote from the recent Steam Dev Days.

That said, an advantage that I expect from DirectX 12 and Mantle is reduced driver complexity. Since the processors have settled into standards, I expect that drivers will not need to do as much unless the library demands it for legacy reasons. I am not sure how extending OpenGL will affect that benefit, as opposed to just isolating the legacy and building on a solid foundation, but I wonder if these extensions could be just as easy to maintain and optimize. Maybe they are.

NVIDIA's Release 340.xx GPU drivers for Windows will be the last to contain "enhancements and optimizations" for users with video cards based on architectures before Fermi. While NVIDIA will provide some extended support for 340.xx (and earlier) drivers until April 1st, 2016, those users will not be able to install Release 343.xx (or later) drivers. Release 343 will only support Fermi, Kepler, and Maxwell-based GPUs.

The company has a large table on their CustHelp website filled with product models that are pining for the fjords. In short, if the model is 400-series or higher (except the GeForce 405) then it is still fully supported. If you do have the GeForce 405, or anything 300-series and prior, then GeForce Release 340.xx drivers will be the end of the line for you.

As for speculation, Fermi was the first modern GPU architecture for NVIDIA. It transitioned to standards-based (IEEE 754, etc.) data structures, introduced L1 and L2 cache, and so forth. From our DirectX 12 live blog, we also noticed that the new graphics API will, likewise, begin support at Fermi. It feels to me that NVIDIA, like Microsoft, wants to shed the transition period and work on developing a platform built around that baseline.

If you can afford to spend $1000 or more on a GPU, the ASUS ROG MARS GTX 760 is an interesting choice. The two GTX 760 cores on this card are not modified as we have seen on some other two-GPU cards; indeed, ASUS even overclocked them to a base of 1006MHz and a boost clock of 1072MHz. Ryan reviewed this card back in December, awarding it a Gold, and [H]ard|OCP is revisiting the card with a new driver and a different lineup of games. They also awarded this unique card from ASUS a Gold after it finished stomping on AMD and the GTX 780 Ti.

"The ASUS ROG MARS 760 is one of the most unique custom built video cards out on the market today. ASUS has designed a video card sporting dual NVIDIA GTX 760 GPUs on a single video card and given gamers something that didn't exist before in the market place. We will find out how it compares with the fastest video cards out there."

The room is much smaller than it should be. Line was way too long for a room like this.

Thursday March 20, 2014 10:00 Ryan Shrout

10:01

Josh Walrath:

that is a super small room for such an event. Especially considering the online demand for details!

Thursday March 20, 2014 10:01 Josh Walrath

10:02

Ryan Shrout

Qualcomm's Eric Demers, AMD's Raja Koduri, NVIDIA's Tony Tamasi.

Thursday March 20, 2014 10:02

10:03

Ryan Shrout:

And we are starting!

Thursday March 20, 2014 10:03 Ryan Shrout

10:03

Josh Walrath:

Have those boys gotten their knives out yet. Are the press circling them and snapping their fingers?

Thursday March 20, 2014 10:03 Josh Walrath

10:03

Ryan Shrout:

Going over a history of DX.

Thursday March 20, 2014 10:03 Ryan Shrout

10:03

Ryan Shrout

Thursday March 20, 2014 10:03

10:04

Ryan Shrout:

Talking about the development process.

Thursday March 20, 2014 10:04 Ryan Shrout

10:04

Ryan Shrout:

All partner base.

Thursday March 20, 2014 10:04 Ryan Shrout

10:04

Ryan Shrout

Thursday March 20, 2014 10:04

10:05

[Comment From GuestGuest: ]

why cant I comment ?

Thursday March 20, 2014 10:05 Guest

10:05

Ryan Shrout:

GPU performance is "embarrassingly parallel" statement here.

Thursday March 20, 2014 10:05 Ryan Shrout

10:05

Scott Michaud:

You can, we just need to publish them. And there's *a lot* of comments.

Thursday March 20, 2014 10:05 Scott Michaud

10:05

Ryan Shrout

Thursday March 20, 2014 10:05

10:05

Josh Walrath:

We see everything, Peter.

Thursday March 20, 2014 10:05 Josh Walrath

10:05

Ryan Shrout:

CPU performance has not improved at the same rate. This difference in rate of increase is a big challenge for DX.

Thursday March 20, 2014 10:05 Ryan Shrout

10:06

Ryan Shrout:

Third point has been a challenge, until now.

Thursday March 20, 2014 10:06 Ryan Shrout

10:07

Ryan Shrout:

What do developers want? List similar to what AMD presented with Mantle.

Thursday March 20, 2014 10:07 Ryan Shrout

10:07

Ryan Shrout:

DX12 "is no dot release"

Thursday March 20, 2014 10:07 Ryan Shrout

10:08

Ryan Shrout

Thursday March 20, 2014 10:08

10:08

Ryan Shrout:

It's faster, more direct. ha ha.

Thursday March 20, 2014 10:08 Ryan Shrout

10:08

Ryan Shrout

Thursday March 20, 2014 10:08

10:08

Ryan Shrout:

Xbox One games will see improved performance. Coming to all MS platforms. PC, mobile too.

Thursday March 20, 2014 10:08 Ryan Shrout

10:08

Josh Walrath:

Oh look, mobile!

Thursday March 20, 2014 10:08 Josh Walrath

10:09

Ryan Shrout

Thursday March 20, 2014 10:09

10:09

Ryan Shrout:

New tools are a requirement.

Thursday March 20, 2014 10:09 Ryan Shrout

10:09

Josh Walrath:

We finally have a MS answer to OpenGL ES.

Thursday March 20, 2014 10:09 Josh Walrath

10:09

Scott Michaud:

Hmm, none of the four pictures in the bottom is a desktop or Laptop.

Thursday March 20, 2014 10:09 Scott Michaud

10:09

Ryan Shrout:

D3D 12 is the first version to go much lower level.

Thursday March 20, 2014 10:09 Ryan Shrout

10:09

[Comment From GuestGuest: ]

The last one is a desktop...

Thursday March 20, 2014 10:09 Guest

10:10

Scott Michaud:

Huh, thought it was TV. My mistake.

Thursday March 20, 2014 10:10 Scott Michaud

10:10

Ryan Shrout

Thursday March 20, 2014 10:10

10:10

Ryan Shrout:

Yeah, desktop PC is definitely on the list here guys.

Thursday March 20, 2014 10:10 Ryan Shrout

10:11

Ryan Shrout:

Going to show us some prototypes.

Thursday March 20, 2014 10:11 Ryan Shrout

10:11

Ryan Shrout:

Ported latest 3DMark.

Thursday March 20, 2014 10:11 Ryan Shrout

10:12

Ryan Shrout:

In DX11, one core is doing most of the work.

Thursday March 20, 2014 10:12 Ryan Shrout

10:12

Ryan Shrout:

on d3d12, overall CPU utilization is down 50%

Thursday March 20, 2014 10:12 Ryan Shrout

10:13

Ryan Shrout:

Also, the workload is more spread out.

Thursday March 20, 2014 10:13 Ryan Shrout

10:13

Ryan Shrout

Thursday March 20, 2014 10:13

10:13

Ryan Shrout:

Interesting data for you all!!

Thursday March 20, 2014 10:13 Ryan Shrout

10:13

Ryan Shrout

Thursday March 20, 2014 10:13

10:14

Ryan Shrout:

Grouping entire pipeline state into state objects. These can be mapped very efficiently to GPU hardware.

Thursday March 20, 2014 10:14 Ryan Shrout

10:14

Ryan Shrout

Thursday March 20, 2014 10:14

10:15

Ryan Shrout:

"Solved" multi-threaded scalability.

Thursday March 20, 2014 10:15 Ryan Shrout

10:15

Scott Michaud:

Hmm, from ~8ms to ~4. That's an extra 4ms for the GPU to work. 20 GFLOPs for a GeForce Titan.

Thursday March 20, 2014 10:15 Scott Michaud

10:15

[Comment From JayJay: ]

Multicore Scalability.... Seems like a big deal when you have 6-8 cores!

Thursday March 20, 2014 10:15 Jay

10:16

Josh Walrath:

It is a big deal for the CPU guys.

Thursday March 20, 2014 10:16 Josh Walrath

10:16

Ryan Shrout:

D3D12 allows apps to control graphics memory better.

Thursday March 20, 2014 10:16 Ryan Shrout

10:16

Ryan Shrout

Thursday March 20, 2014 10:16

10:17

Ryan Shrout:

API is now much lower level. Application tracks pipeline status, not the API.

Thursday March 20, 2014 10:17 Ryan Shrout

10:17

[Comment From JimJim: ]

20 GFlops from a Titan? Stock Titan gets around 5 ATM.

Thursday March 20, 2014 10:17 Jim

10:17

Ryan Shrout

Thursday March 20, 2014 10:17

10:18

Ryan Shrout:

Less API and driver tracking universally. More predictability.

Thursday March 20, 2014 10:18 Ryan Shrout

10:18

Ryan Shrout:

This is targeted at the smartest developers, but gives you unprecedented performance.

Thursday March 20, 2014 10:18 Ryan Shrout

10:18

Ryan Shrout:

Also planning to advance state of rendering features. Feature level 12.

Thursday March 20, 2014 10:18 Ryan Shrout

10:19

Scott Michaud:

Titan gets around ~5 Teraflops, actually... if it is fully utilized. I'm saying that an extra 4ms is an extra 20 GFlops per frame.

Thursday March 20, 2014 10:19 Scott Michaud

10:19

Ryan Shrout

Thursday March 20, 2014 10:19

10:19

Josh Walrath:

Titan is around 5 TFlops total, that 20 GFLOPS is potential performance in the time gained by optimizations.

Thursday March 20, 2014 10:19 Josh Walrath

10:19

Ryan Shrout:

Better collision and culling

Thursday March 20, 2014 10:19 Ryan Shrout

10:19

Ryan Shrout:

Constantly working with GPU vendors to find new ways to render.

Thursday March 20, 2014 10:19 Ryan Shrout

10:20

Ryan Shrout:

Forza 5 on stage now. Strictly console developer.

Thursday March 20, 2014 10:20 Ryan Shrout

10:20

[Comment From Lewap PawelLewap Pawel: ]

So 20GFLOPS per frame is 20x60 = 1200GFLOPS/sec? 20% improvement?

Thursday March 20, 2014 10:20 Lewap Pawel

10:21

Ryan Shrout

Thursday March 20, 2014 10:21

10:21

Scott Michaud:

Not quite, because we don't know how many FPS we had originally.

Thursday March 20, 2014 10:21 Scott Michaud

10:21

Ryan Shrout:

Talking about porting the game to D3D12

Thursday March 20, 2014 10:21 Ryan Shrout

10:22

Ryan Shrout:

4 man-months effort to port core rendering engine.

Thursday March 20, 2014 10:22 Ryan Shrout

10:22

Ryan Shrout:

Demo time!

Thursday March 20, 2014 10:22 Ryan Shrout

10:22

Ryan Shrout

Thursday March 20, 2014 10:22

10:22

Ryan Shrout:

Rendering at static 60 FPS.

Thursday March 20, 2014 10:22 Ryan Shrout

10:23

Ryan Shrout

Thursday March 20, 2014 10:23

10:23

Ryan Shrout:

Bundles allow for instancing but with variance.

Thursday March 20, 2014 10:23 Ryan Shrout

10:24

Ryan Shrout:

Resource lifetime, track memory directly. No longer have D3D tracking that lifetime, much cheaper on resources.

Thursday March 20, 2014 10:24 Ryan Shrout

10:24

Ryan Shrout:

"It's all up to us, and that's how we like it."

Thursday March 20, 2014 10:24 Ryan Shrout

10:24

Ryan Shrout:

Does anyone else here worry that DX12 might leave out some smaller devs that can't go so low level?

Thursday March 20, 2014 10:24 Ryan Shrout

10:25

Josh Walrath:

I would say that depends on the quality of tools that MS provides, as well as IHV support.

Thursday March 20, 2014 10:25 Josh Walrath

10:25

Scott Michaud:

Not really, for me. The reason why they can go so much lower these days is because what is lower is more consistent.

Thursday March 20, 2014 10:25 Scott Michaud

10:26

Ryan Shrout:

And now back to info. Will you have to buy new hardware? I would say no since they just showed Xbox One... lol

Thursday March 20, 2014 10:26 Ryan Shrout

10:26

[Comment From killeakkilleak: ]

Small devs will use an Engine, not make their own.

Thursday March 20, 2014 10:26 killeak

10:26

Ryan Shrout

Thursday March 20, 2014 10:26

10:26

Ryan Shrout:

On stage now is Raja Koduri from AMD.

Thursday March 20, 2014 10:26 Ryan Shrout

10:27

Scott Michaud:

Not true at all, actually. Just look at Frictional (Amnesia). They made their own engine tailored for what their game needed.

Thursday March 20, 2014 10:27 Scott Michaud

10:27

Ryan Shrout:

AMD has been working very closely with DX12. Heh.

Thursday March 20, 2014 10:27 Ryan Shrout

10:27

Josh Walrath:

Shocking!

Thursday March 20, 2014 10:27 Josh Walrath

10:28

Ryan Shrout

Thursday March 20, 2014 10:28

10:28

Josh Walrath:

Strike a pose!

Thursday March 20, 2014 10:28 Josh Walrath

10:28

Ryan Shrout:

There is tension: AMD is trying to push hw forward, MS is trying to push their platform forward.

Thursday March 20, 2014 10:28 Ryan Shrout

10:28

Ryan Shrout:

Very honest assessment of the current setup between AMD, NVIDIA, MS.

Thursday March 20, 2014 10:28 Ryan Shrout

10:28

[Comment From GuestGuest: ]

Scott, with the recent changes with CryEngine, UE4 going subscription based more Indies might just go that route.

Thursday March 20, 2014 10:28 Guest

10:28

Ryan Shrout:

DX12 is an area where they had the least tension in Raja's history in this field.

Thursday March 20, 2014 10:28 Ryan Shrout

10:29

Scott Michaud:

Definitely. But that is not the same thing as saying that indies will not make their own engine.

Thursday March 20, 2014 10:29 Scott Michaud

10:29

Ryan Shrout

Thursday March 20, 2014 10:29

10:29

Ryan Shrout:

Key is that current users get benefit with this API on day 1.

Thursday March 20, 2014 10:29 Ryan Shrout

10:29

Ryan Shrout:

"Like getting 4 generations of hardware ahead."

Thursday March 20, 2014 10:29 Ryan Shrout

10:29

Ryan Shrout

Thursday March 20, 2014 10:29

10:31

Josh Walrath:

That answers a few of the burning questions!

Thursday March 20, 2014 10:31 Josh Walrath

10:31

Ryan Shrout:

Up now is Eric Mentzer from Intel.

Thursday March 20, 2014 10:31 Ryan Shrout

10:31

[Comment From KevKev: ]

Thank you! Great news guys!

Thursday March 20, 2014 10:31 Kev

10:31

Scott Michaud:

You're welcome! : D

Thursday March 20, 2014 10:31 Scott Michaud

10:32

[Comment From JimJim: ]

OH, intel and AMD in the same room....

Thursday March 20, 2014 10:32 Jim

10:32

Scott Michaud:

Intel, AMD, NVIDIA, and Qualcomm in the same room...

Thursday March 20, 2014 10:32 Scott Michaud

10:32

Ryan Shrout:

Intel has made big change in graphics; put a lot more focus on it with tech and process tech.

Thursday March 20, 2014 10:32 Ryan Shrout

10:32

Josh Walrath:

DX12 will enhance any modern graphics chip. Driver support from IHVs will be key to enable those features. This is a massive change in how DX addresses the GPU, rather than (so far) the GPU adding features.

Thursday March 20, 2014 10:32 Josh Walrath

10:32

[Comment From GuestGuest: ]

so this means xbox one will get a performance boost?

Thursday March 20, 2014 10:32 Guest

10:32

Scott Michaud:

Yes

Thursday March 20, 2014 10:32 Scott Michaud

10:33

Ryan Shrout

Thursday March 20, 2014 10:33

10:33

Scott Michaud:

According to "Benefits of Direct3D 12 will extend to Xbox One", at least.

Thursday March 20, 2014 10:33 Scott Michaud

10:33

Ryan Shrout:

Intel commits to having Haswell support DX12 at launch.

Thursday March 20, 2014 10:33 Ryan Shrout

10:34

Ryan Shrout:

BTW - thanks to everyone for stopping by the live blog!! :)

Thursday March 20, 2014 10:34 Ryan Shrout

10:34

Josh Walrath:

Just to reiterate... PS4 utilizes OpenGL, not DX. This change will not affect PS4. Changes to OpenGL will only improve PS4 performance.

NVIDIA has been working with MS since the inception of DX12. Still don't know when that is...

Thursday March 20, 2014 10:35 Ryan Shrout

10:35

[Comment From AlexAlex: ]

PS4 doesn't use OpenGL, but custom APIs instead...

Thursday March 20, 2014 10:35 Alex

10:35

Scott Michaud:

True, it's not actually OpenGL... but is heavily heavily based on OpenGL.

Thursday March 20, 2014 10:35 Scott Michaud

10:36

Ryan Shrout

Thursday March 20, 2014 10:36

10:36

Ryan Shrout:

They think it should be done with standards so there is no fragmentation.

Thursday March 20, 2014 10:36 Ryan Shrout

10:36

Ryan Shrout:

lulz.

Thursday March 20, 2014 10:36 Ryan Shrout

10:37

Scott Michaud:

Because everything that ends in "x" is all about no fragmentation :p

Thursday March 20, 2014 10:37 Scott Michaud

10:37

Ryan Shrout:

NVIDIA will support DX12 on Fermi, Kepler, Maxwell and forward!

Thursday March 20, 2014 10:37 Ryan Shrout

10:37

Ryan Shrout:

For developers that want to get down deep and manage all of this, DX12 is going to be really exciting.

Thursday March 20, 2014 10:37 Ryan Shrout

10:38

Ryan Shrout:

NVIDIA represents about 55% of the install base.

Thursday March 20, 2014 10:38 Ryan Shrout

10:38

Ryan Shrout

Thursday March 20, 2014 10:38

10:38

Ryan Shrout

Thursday March 20, 2014 10:38

10:39

Ryan Shrout:

Developers already have DX12 drivers. The Forza demo was running on NVIDIA!!!

Thursday March 20, 2014 10:39 Ryan Shrout

10:39

Ryan Shrout:

Holy crap, that wasn't on an Xbox One!!

Thursday March 20, 2014 10:39 Ryan Shrout

10:39

Scott Michaud:

Fermi and forward... aligning well with the start of their compute-based architectures... using IEEE standards (etc). Makes perfect sense. Also might help explain why pre-Fermi is deprecated after GeForce 340 drivers...

Thursday March 20, 2014 10:39 Scott Michaud

10:40

Ryan Shrout:

Support quote from Tim Sweeney.

Thursday March 20, 2014 10:40 Ryan Shrout

10:41

Ryan Shrout

Thursday March 20, 2014 10:41

10:41

[Comment From CrackolaCrackola: ]

Any current NVIDIA cards DX12 ready? Titan, etc?

Thursday March 20, 2014 10:41 Crackola

10:41

Ryan Shrout:

Up now is Eric Demers from Qualcomm.

Thursday March 20, 2014 10:41 Ryan Shrout

10:42

Scott Michaud:

NVIDIA said Fermi, Kepler, and Maxwell will be DX12-ready. So like... almost everything since GeForce 400... almost.

Thursday March 20, 2014 10:42 Scott Michaud

10:42

Ryan Shrout

Thursday March 20, 2014 10:42

10:42

Ryan Shrout:

Qualcomm has been working with MS on mobile graphics since there WAS mobile graphics.

Thursday March 20, 2014 10:42 Ryan Shrout

10:42

Ryan Shrout

Thursday March 20, 2014 10:42

10:42

Ryan Shrout:

Most Windows phones are powered by Snapdragon.

Thursday March 20, 2014 10:42 Ryan Shrout

10:42

Josh Walrath:

We currently don't know what changes in Direct3D will be brought to the table, all we are seeing here is how they are changing the software stack to more efficiently use modern GPUs. This does not mean that all current DX11 hardware will fully support the DX12 specification when it comes to D3D, Direct Compute, etc.

Thursday March 20, 2014 10:42 Josh Walrath

10:43

Ryan Shrout:

DX12 will improve power efficiency by reducing overhead.

Thursday March 20, 2014 10:43 Ryan Shrout

10:43

Ryan Shrout

Thursday March 20, 2014 10:43

10:44

Ryan Shrout:

Perf will improve on mobile devices as well, of course. But gaming for longer periods on battery is the biggest draw.

Thursday March 20, 2014 10:44 Ryan Shrout

10:45

Ryan Shrout:

Portability - bringing titles from the PC to Xbox to mobile platform will be much easier.

Thursday March 20, 2014 10:45 Ryan Shrout

10:45

[Comment From David UyDavid Uy: ]

I think all Geforce 400 series is Fermi. so - Geforce 400 and above.

Thursday March 20, 2014 10:45 David Uy

10:45

Scott Michaud:

I think the GeForce 405 is the only exception...

Thursday March 20, 2014 10:45 Scott Michaud

10:45

Ryan Shrout:

Off goes Eric.

Thursday March 20, 2014 10:45 Ryan Shrout

10:45

Ryan Shrout:

MS back on stage.

Thursday March 20, 2014 10:45 Ryan Shrout

10:46

Ryan Shrout:

And now a group picture lol.

Thursday March 20, 2014 10:46 Ryan Shrout

10:46

Ryan Shrout

Thursday March 20, 2014 10:46

10:47

Ryan Shrout:

By the time they ship, 50% of all PC gamers will be DX12 capable.

Thursday March 20, 2014 10:47 Ryan Shrout

10:47

Ryan Shrout:

Ouch, targeting Holiday 2015 games.

Thursday March 20, 2014 10:47 Ryan Shrout

10:48

Ryan Shrout:

Early access coming later this year.

Thursday March 20, 2014 10:48 Ryan Shrout

10:48

Ryan Shrout

Thursday March 20, 2014 10:48

10:48

Josh Walrath:

Yeah, this is a pretty big sea change.

Thursday March 20, 2014 10:48 Josh Walrath

10:48

Ryan Shrout

Thursday March 20, 2014 10:48

10:49

Ryan Shrout

Thursday March 20, 2014 10:49

10:49

Scott Michaud:

50% of PC Gamers sounds like they're projecting NOT Windows 7.

Thursday March 20, 2014 10:49 Scott Michaud

10:49

Ryan Shrout:

They are up for Q&A not sure how informative they will be...

Thursday March 20, 2014 10:49 Ryan Shrout

10:50

Josh Walrath:

OS support? Extension changes to D3D/Direct Compute?

Thursday March 20, 2014 10:50 Josh Walrath

10:50

Ryan Shrout:

Windows 7 support? Won't be announcing anything today but they understand the request.

Thursday March 20, 2014 10:50 Ryan Shrout

10:51

Ryan Shrout:

Q: What about support for multi-GPU? They will have a way to target specific GPUs in a system.

Thursday March 20, 2014 10:51 Ryan Shrout

10:51

Ryan Shrout:

This session is wrapping up for now!

Thursday March 20, 2014 10:51 Ryan Shrout

10:51

Ryan Shrout:

Looks like we are light on details but we'll be catching more sessions today so check back on http://www.pcper.com/

Thursday March 20, 2014 10:51 Ryan Shrout

10:52

Scott Michaud:

"a way to target specific GPUs in a system" this sounds like developers can program their own Crossfire/SLi methods, like OpenCL and Mantle.

Thanks everyone for joining us! We MIGHT live blog the other sessions today, so you can sign up for our mailing list to find out when we go live. http://www.pcper.com/subscribe

Thursday March 20, 2014 10:52 Ryan Shrout

10:57

Scott Michaud:

Apparently NVIDIA's blog says DX12 discussion began more than four years ago "with discussions about reducing resource overhead". They worked for a year to deliver "a working design and implementation of DX12 at GDC".

EVGA GTX 750 Ti ACX FTW

The NVIDIA GeForce GTX 750 Ti has been getting a lot of attention in hardware circles recently, and for good reason. It remains interesting from a technology standpoint as it is the first, and still the only, Maxwell-based GPU available for desktop users. It's a completely new architecture, built with power efficiency (and Tegra) in mind. With it, the GTX 750 Ti was able to push a lot of performance into a very small power envelope while still maintaining some very high clock speeds.

NVIDIA’s flagship mainstream part is also still the leader in performance per dollar in this segment (at least until AMD’s Radeon R7 265 becomes widely available). We have noticed a few cases where the long-standing shortages and price hikes from coin mining have dwindled, which is great news for gamers but may also be bad news for NVIDIA’s GPUs in some areas. Still, even if the R7 265 becomes available, the GTX 750 Ti remains the best card you can buy that doesn’t require a power connector. This puts it in a unique position for power-limited upgrades.

After our initial review of the reference card, and then an interesting look at how the card can be used to upgrade an older or underpowered PC, it is time to take a quick look at a set of three different retail cards that have made their way into the PC Perspective offices.

On the chopping block today we’ll look at the EVGA GeForce GTX 750 Ti ACX FTW, the Galaxy GTX 750 Ti GC and the PNY GTX 750 Ti XLR8 OC. All of them are non-reference, all of them are overclocked, but you’ll likely be surprised how they stack up.

NVIDIA recently announced the launch of two new game bundles for purchasers of certain GeForce GTX desktop cards or GeForce 700M and 800M mobile series graphics cards. The new bundles will offer a redeemable code for the Unreal Engine 4-powered survival horror game DAYLIGHT to buyers of new desktop cards, or a total of $150 of in-game currency across three free-to-play titles when buying a system with a new NVIDIA mobile GPU (or as an alternative to the DAYLIGHT bundle with desktop cards).

The DAYLIGHT game bundle is included with certain GeForce GTX 600 and 700-series desktop graphics cards. Users will get a redeemable code for a downloadable version of the game which can be activated on release day (April 8, 2014). Specifically, the eligible graphics cards for this bundle are as follows:

GTX TITAN

GTX 780 Ti

GTX 780

GTX 770

GTX 760

GTX 690

GTX 680

GTX 670

GTX 660 Ti

GTX 660

Alternatively, NVIDIA is offering $150 (total) of in-game currency for three free-to-play games to users who purchase a notebook with a 700M or 800M mobile GPU, or as an alternative to the Daylight game bundle when purchasing certain desktop GPUs. The bundle offers $50 of in-game currency for each of Heroes of Newerth, Path of Exile, and Warface. Users who purchase a mobile GPU (700M or 800M series) or a GTX 750 Ti, GTX 750, GTX 650 Ti, or GTX 650 from a participating e-tailer or system builder will be able to get this bundle.

According to NVIDIA, both of its new game bundles are available now with cards and pre-built systems from Newegg, Amazon, Tiger Direct, NCIX, et al, and nationwide system builders respectively. NVIDIA has put together a full list of participating partners along with further information on the following bundle information pages:

Maxwell and Kepler and...Fermi?

Covering the landscape of mobile GPUs can be a harrowing experience. Brands, specifications, performance, features and architectures can all vary from product to product, even inside the same family. Rebranding is rampant from both AMD and NVIDIA and, in general, we are met with one of the most confusing segments of the PC hardware market.

Today, with the release of the GeForce GTX 800M series from NVIDIA, we are getting all of the above in one form or another. We will also see performance improvements and the introduction of the new Maxwell architecture (in a few parts at least). Along with the GeForce GTX 800M parts, you will also find the GeForce 840M, 830M and 820M offerings at lower performance, wattage and price levels.

With some new hardware comes a collection of new software for mobile users, including the innovative Battery Boost that can increase unplugged gaming time by using frame rate limiting and other "magic" bits that NVIDIA isn't talking about yet. ShadowPlay and GameStream also find their way to mobile GeForce users as well.

Let's take a quick look at the new hardware specifications.

                   GTX 880M        GTX 780M        GTX 870M        GTX 770M
GPU Code name      Kepler          Kepler          Kepler          Kepler
GPU Cores          1536            1536            1344            960
Rated Clock        954 MHz         823 MHz         941 MHz         811 MHz
Memory             Up to 4GB       Up to 4GB       Up to 3GB       Up to 3GB
Memory Clock       5000 MHz        5000 MHz        5000 MHz        4000 MHz
Memory Interface   256-bit         256-bit         192-bit         192-bit
Features           Battery Boost   GameStream      Battery Boost   GameStream
                   GameStream      ShadowPlay      GameStream      ShadowPlay
                   ShadowPlay      GFE             ShadowPlay      GFE
                   GFE                             GFE

Both the GTX 880M and the GTX 870M are based on Kepler, keeping the same basic feature set and hardware specifications as their brethren in the GTX 700M line. However, while the GTX 880M has the same CUDA core count as the 780M, the same cannot be said of the GTX 870M. Moving from the GTX 770M to the 870M brings a significant 40% increase in core count as well as a jump in clock speed from 811 MHz (plus Boost) to 941 MHz.
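As a back-of-the-envelope check on that jump, multiplying cores by base clock gives a rough theoretical shader throughput comparison (this simple product ignores Boost clocks, memory bandwidth, and real-world scaling, so treat it only as a ceiling estimate):

```python
# Rough shader throughput proxy: CUDA cores x base clock (MHz),
# using the spec table figures above.
gtx_770m = 960 * 811    # 778,560
gtx_870m = 1344 * 941   # 1,264,704

ratio = gtx_870m / gtx_770m
print(f"GTX 870M vs GTX 770M theoretical throughput: {ratio:.2f}x")  # 1.62x
```

So the 40% core increase compounds with the clock bump to roughly a 1.6x theoretical gain, generation over generation, on paper.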