It's the first day of IDF, so it's only natural that we see a bunch of non-IDF news start pouring out :). I'll kick them off with a few announcements from HGST. First item up is their new SN100 line of PCIe SSDs:

These are NVMe-capable PCIe SSDs, available in capacities from 800GB to 3.2TB and in both 2.5" (PCIe-based, not SATA) and half-height PCIe card form factors.

Next up is an expansion of their HelioSeal (helium-filled) drive line:

Through the use of Shingled Magnetic Recording (SMR), HGST can make an even bigger improvement in storage density. This does not come completely free: due to the way SMR writes to the disk, it is primarily meant to be a sequential-write / random-read storage device. Picture roofing shingles, but for hard drives. The tracks slightly overlap as they are written to disk. This increases density greatly, but writing to the middle of a shingled band is not possible without potentially overwriting the overlapping tracks next to it, so rewrites must proceed sequentially from that point onward. Think of it as CD-RW writing, but for hard disks. This tech is primarily geared towards 'cold storage': data that is not actively being written. Think archival data. The ability to still read that data randomly and on demand makes these drives more appealing than retrieving the same data from tape-based archival methods.
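To make the write restriction concrete, here is a minimal sketch in C of how a shingled band behaves: reads can land anywhere that has been written, but writes only advance a per-band write pointer. The structure, sizes, and function names are all hypothetical illustrations (loosely modeled on how zoned drives expose this to software), not HGST's actual firmware or interface.

```c
#include <stdio.h>
#include <string.h>

#define BAND_SECTORS 256  /* one shingled band, in sectors (illustrative size) */

/* A shingled band: random reads anywhere, writes only at the write pointer. */
typedef struct {
    char data[BAND_SECTORS][512];
    int  write_ptr;  /* next sector that can be written */
} smr_band;

/* Reads are unrestricted, just like a conventional drive. */
int band_read(const smr_band *b, int lba, char *buf) {
    if (lba < 0 || lba >= b->write_ptr) return -1;  /* nothing written there yet */
    memcpy(buf, b->data[lba], 512);
    return 0;
}

/* Writes must be sequential: anything else would clobber the
 * overlapping (shingled) track that follows. */
int band_write(smr_band *b, int lba, const char *buf) {
    if (lba != b->write_ptr || lba >= BAND_SECTORS) return -1;
    memcpy(b->data[lba], buf, 512);
    b->write_ptr++;
    return 0;
}

int main(void) {
    smr_band b = { .write_ptr = 0 };
    char sector[512] = "archived record";
    char readback[512];

    band_write(&b, 0, sector);              /* ok: at the write pointer */
    band_read(&b, 0, readback);             /* random read: always fine */
    printf("read back: %s\n", readback);
    printf("rewrite sector 0: %s\n",
           band_write(&b, 0, sector) ? "rejected" : "ok");  /* rejected */
    return 0;
}
```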

Further details on the above releases are scarce at present, but we will keep you posted as more information develops.

Today is the beginning of the 2014 Intel Developer Forum in San Francisco! Join me at 9am PT for the first of our live blogs of the main Intel keynote where we will learn what direction Intel is taking on many fronts!

At 6PM PDT on September 18th, 2014, NVIDIA and partners will be hosting GAME24. The event will start at that time, all around the world, and finish 24 hours later. The three main event locations are Los Angeles, California, USA; London, England; and Shanghai, China. Four smaller events will be held in Chicago, Illinois, USA; Indianapolis, Indiana, USA; Mission Viejo, California, USA; and Stockholm, Sweden. It will also be live-streamed on the official website.

Registration and attendance are free. If you will be in the area and want to join, sign up. Registration closes an hour before the event, and entry is first-come, first-served. Good luck. Have fun. Good game.

Join AMD’s Chief Gaming Scientist, Richard Huddy, on Saturday, Aug. 23, 2014 at 10:00 AM EDT / 7:00 AM PDT to celebrate 30 Years of Graphics and Gaming. The event will feature interviews with Raja Koduri, AMD’s Corporate VP, Visual Computing; John Byrne, AMD’s Senior VP and General Manager, Computing and Graphics Business Group; and several special guests. You can also expect new product announcements, along with stories covering the history of AMD. You can watch the twitch.tv livestream below once the festivities kick off!

Let's be clear: there are two stories here. The first is the release of OpenGL 4.5, and the second is the announcement of the "Next Generation OpenGL Initiative". They both appear in the same press release, but they are two different announcements.

OpenGL 4.5 Released

OpenGL 4.5 expands the core specification with a few extensions. Compatible hardware, with OpenGL 4.5 drivers, is guaranteed to support these. This includes features like ARB_direct_state_access, which allows modifying objects without first binding them to the context, and support for OpenGL ES 3.1 features that are traditionally missing from OpenGL 4, which makes porting OpenGL ES 3.1 applications to OpenGL easier.
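To give a flavor of what direct state access buys you, here is a short sketch comparing the classic bind-to-edit pattern with the new GL 4.5 entry points. It assumes an OpenGL 4.5 context and a function loader (such as glad) are already set up; it is illustrative, not a complete program.

```c
/* Assumes an OpenGL 4.5 context and a GL function loader are in place. */
void texture_setup_comparison(void)
{
    /* The old bind-to-edit pattern: editing an object disturbs the
     * context's binding state. */
    GLuint old_style;
    glGenTextures(1, &old_style);
    glBindTexture(GL_TEXTURE_2D, old_style);  /* must bind before editing */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, 256, 256);

    /* Direct state access: the object is named explicitly, and the
     * context's bindings are left alone. */
    GLuint dsa_style;
    glCreateTextures(GL_TEXTURE_2D, 1, &dsa_style);
    glTextureParameteri(dsa_style, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTextureStorage2D(dsa_style, 1, GL_RGBA8, 256, 256);
    glBindTextureUnit(0, dsa_style);  /* bind only when it is time to draw */
}
```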

It also adds a few new extensions as an option:

ARB_pipeline_statistics_query lets a developer ask the GPU what it has been doing. This could be useful for "profiling" an application (listing completed work to identify optimization points).
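As a sketch of how such a query might be issued (assuming a GL 4.5 context exposing the extension, with a program and vertex array already bound; the draw call is just a stand-in):

```c
#include <stdio.h>
/* Assumes a GL loader header providing the 4.5 API and ARB tokens. */

/* Count fragment shader invocations for a single draw. */
void profile_one_draw(GLsizei vertex_count)
{
    GLuint query;
    GLuint64 invocations = 0;

    glGenQueries(1, &query);
    glBeginQuery(GL_FRAGMENT_SHADER_INVOCATIONS_ARB, query);
    glDrawArrays(GL_TRIANGLES, 0, vertex_count);  /* the measured work */
    glEndQuery(GL_FRAGMENT_SHADER_INVOCATIONS_ARB);

    /* Blocks until the GPU result is available. */
    glGetQueryObjectui64v(query, GL_QUERY_RESULT, &invocations);
    printf("fragment shader invocations: %llu\n",
           (unsigned long long)invocations);
    glDeleteQueries(1, &query);
}
```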

ARB_sparse_buffer allows developers to perform calculations on pieces of generic buffers without committing the whole thing to memory. This is similar to ARB_sparse_texture... except that extension is for textures. Buffers are useful for things like vertex data (and so forth).
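A rough sketch of how the extension is used, assuming a GL 4.5 context that exposes ARB_sparse_buffer (the sizes are arbitrary):

```c
/* Reserve a large virtual buffer, but commit physical memory for only
 * its first page. Assumes GL 4.5 with ARB_sparse_buffer loaded. */
void create_sparse_vertex_buffer(void)
{
    GLint page_size = 0;
    glGetIntegerv(GL_SPARSE_BUFFER_PAGE_SIZE_ARB, &page_size);

    GLuint buf;
    glCreateBuffers(1, &buf);
    /* One GB of address space, nothing physically backed yet. */
    glNamedBufferStorage(buf, 1 << 30, NULL, GL_SPARSE_STORAGE_BIT_ARB);

    /* Back just the first page so it can actually hold data. */
    glBindBuffer(GL_ARRAY_BUFFER, buf);
    glBufferPageCommitmentARB(GL_ARRAY_BUFFER, 0, page_size, GL_TRUE);
}
```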

ARB_transform_feedback_overflow_query is apparently designed to let developers choose whether or not to draw objects based on whether the transform feedback buffer overflowed. I might be wrong, but it seems like this would be useful for deciding whether or not to draw objects generated by geometry shaders.

KHR_blend_equation_advanced allows new blending equations between objects. If you use Photoshop, this would be "multiply", "screen", "darken", "lighten", "difference", and so forth. On NVIDIA's side, this will be directly supported on Maxwell and Tegra K1 (and later). Fermi and Kepler will support the functionality, but the driver will perform the calculations with shaders. AMD has yet to comment, as far as I can tell.
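In code, the new modes slot into the existing blend API. A sketch, assuming the extension is present (the extension also requires the fragment shader to declare which advanced modes it uses, which is omitted here; draw_layer() is a hypothetical stand-in for your own draw routine):

```c
extern void draw_layer(void);  /* hypothetical: draws one composited layer */

/* Composite a layer with Photoshop-style "multiply" blending. Assumes
 * KHR_blend_equation_advanced and a fragment shader declaring
 * layout(blend_support_multiply). */
void composite_multiply(void)
{
    glEnable(GL_BLEND);
    glBlendEquation(GL_MULTIPLY_KHR);  /* also: GL_SCREEN_KHR, GL_DARKEN_KHR... */
    draw_layer();
    glBlendBarrierKHR();               /* required between overlapping draws */
}
```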

Next Generation OpenGL Initiative Announced

The Khronos Group has also announced a "call for participation" to outline a new specification for graphics and compute. They want it to give developers explicit control over CPU and GPU tasks, be multithreaded, have minimal overhead, have a common shader language, and undergo "rigorous conformance testing". This sounds a lot like the design goals of Mantle (and what we know of DirectX 12).

And really, from what I hear and understand, that is what OpenGL needs at this point. Graphics cards look nothing like they did a decade ago (let alone over two decades ago, when OpenGL 1.0 launched). Modern GPUs expose very similar interfaces and data structures, even if their underlying architectures vary greatly. If we can draw a line in the sand, legacy APIs can be supported, but not heavily optimized, by the drivers. Before long, available performance would be so high that it wouldn't matter for legacy applications, as long as they continue to run.

On top of that, next-generation drivers should be significantly easier to develop, considering the reduced error checking (and other responsibilities). As I said in Intel's DirectX 12 story, it is still unclear whether this will bring a large enough performance increase to make most optimizations, such as those which increase workload or developer effort in exchange for queuing fewer GPU commands, unnecessary. We will need to wait for game developers to use it for a while before we know.

Along with GDC Europe and Gamescom, SIGGRAPH 2014 is going on in Vancouver, BC, where Intel had a DirectX 12 demo at their booth. The scene, containing 50,000 asteroids, each in its own draw call, was developed on both Direct3D 11 and Direct3D 12 code paths, and could apparently be switched between them while the demo runs. Intel claims to have measured both power and frame rate.

Variable power to hit a desired frame rate, DX11 and DX12.

The test system is a Surface Pro 3 with an Intel HD 4400 GPU. Doing a bit of digging, this would make it the Core i5-based Surface Pro 3. Removing another shovel-load of mystery, that would be the Intel Core i5-4300U: two cores, four threads, a 1.9 GHz base clock, up to a 2.9 GHz turbo clock, 3MB of cache, and (of course) the Haswell architecture.

While not top-of-the-line, it is also not bottom-of-the-barrel. It is a respectable CPU.

Intel's demo on this processor shows a significant power reduction in the CPU, and even a slight decrease in GPU power, for the same target frame rate. If power is not throttled, the demo jumps from 19 FPS all the way up to a playable 33 FPS.

Intel will discuss more during a video interview, tomorrow (Thursday) at 5pm EDT.

Maximum power in DirectX 11 mode.

For my contribution to the story, I would like to address the first comment on the MSDN article. It claims that this is just an "ideal scenario" of a scene that is bottlenecked by draw calls. The thing is: that is the point. Sure, a game developer could optimize the scene to (maybe) instance objects together, and so forth, but that is unnecessary work (see the sketch below). Why should programmers, or worse, artists, need to spend so much of their time structuring art so that it can be batched together into fewer, bigger commands? Would it not be much easier, and all-around better, if content could be developed as it most naturally comes together?
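To be concrete about the optimization in question: "batching" often means replacing thousands of individual submissions with a single instanced draw. The demo itself is Direct3D, so this OpenGL-flavored sketch is just an analogy, and the helper name is hypothetical:

```c
extern void set_asteroid_uniforms(int i);  /* hypothetical per-object setup */

/* Naive: one draw call per asteroid -- 50,000 submissions, each paying
 * CPU-side validation and state overhead. */
void draw_asteroids_naive(int count, GLsizei index_count)
{
    for (int i = 0; i < count; ++i) {
        set_asteroid_uniforms(i);
        glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_INT, 0);
    }
}

/* Batched: one draw call, and the GPU iterates the instances. Faster,
 * but it requires restructuring per-object data into instanced
 * attributes or a buffer indexed by gl_InstanceID -- the extra work
 * the comment thread was arguing about. */
void draw_asteroids_instanced(int count, GLsizei index_count)
{
    glDrawElementsInstanced(GL_TRIANGLES, index_count, GL_UNSIGNED_INT,
                            0, count);
}
```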

That, of course, depends on how much performance improvement we see from DirectX 12, compared to the theoretical maximum efficiency. If pushing two workloads through a DX12 GPU takes about the same time as pushing one double-sized workload, then developers can implement whatever solution is most direct.

Maximum power when switching to DirectX 12 mode.

If, on the other hand, pushing two workloads is 1000x slower than pushing a single, double-sized one, but DirectX 11 was 10,000x slower, then it matters less, because developers will still need their tricks in those situations. The closer the two get, the fewer occasions where strict optimization is necessary.

If there are any DirectX 11 game developers, artists, or producers out there, we would like to hear from you. How much would a (let's say) 90% reduction in draw call overhead (which is around what Mantle claims) buy you, in terms of fewer required optimizations? Can you afford to solve problems "the naive way" now? Some of the time? Most of the time? Would it still be worth it to do things like object instancing and fewer, larger materials and shaders? How often?

Silicon Motion has announced their SM2256 controller. We caught a glimpse of this new controller on the Flash Memory Summit show floor:

The big deal here is the fact that this controller is a complete drop-in solution that can drive multiple different types of flash, as seen below:

The SM2256 can drive all variants of TLC flash.

The controller itself looks to have decent specs, considering it is meant to drive 1xnm TLC flash: just under 100k random 4k IOPS. Writes are understandably below the maximum saturation of SATA 6Gb/sec, at 400MB/sec (writing to TLC is tricky!). There is also mention of Silicon Motion's NANDXtend Technology, which claims to add extra ECC and DSP capability towards the end of correcting bit errors in the flash (errors become more likely as you venture into three-bit-per-cell territory).

At the Flash Memory Summit, Phison has updated their SSD controller lineup with a new quad-core SSD controller.

The PS3110 is capable of handling TLC as well as MLC flash, and the added horsepower lets it push as high as 100k IOPS.

Also seen was an upcoming PS5007 controller, capable of pushing PCIe 3.0 x4 SSDs at 300k IOPS and close to 3GB/sec sequential throughput. While no devices built on this new controller were on display, we did spot the full specs:

According to an HGST press release, the company will bring an SSD based on phase change memory to the 2014 Flash Memory Summit in Santa Clara, California. They claim that it will actually be at their booth, on the show floor, for two days (August 6th and 7th).

The device, which is not branded, connects via PCIe 2.0 x4 and is designed for speed. It is allegedly capable of 3 million IOPS, with just 1.5 microseconds required for a single access. For comparison, the 800GB Intel SSD DC P3700, recently reviewed by Allyn, had a dominating lead over the competitors he tested, at just shy of 250 thousand IOPS. This PCM device is, supposedly, about twelve times faster.

While it is based on a different technology than NAND, and thus not directly comparable, the PCM chips are apparently manufactured at 45nm. Regardless, that is a significantly larger lithography than competing products use. Intel is manufacturing their flash at 20nm, while Samsung managed to use a 30nm-class process for their recent V-NAND launch.

What does concern me is the capacity per chip. According to the press release, it is 1Gb per chip, which is about two orders of magnitude smaller than current NAND dies. That is also the only reference to capacity in the entire press release. It makes me wonder how small the total drive capacity will be, especially compared to RAM drives.

Of course, because it does not seem to be a marketed product yet, nothing has been said about pricing or availability. It will almost definitely be aimed at the enterprise market, though (especially given HGST's track record).

*** Update from Allyn ***

I'm hijacking Scott's news post with photos of the actual PCM SSD, from the FMS show floor:

In case you all are wondering, yes, it does in fact work:

One of the advantages of PCM is that it can be addressed in smaller units than typical flash memory. This means you can see ~700k *single sector* random IOPS at QD=1. You can only pull off that sort of figure with extremely low IO latency. They only showed this output at their booth, but ramping up to QD > 1 should reasonably lead to the 3 million figure claimed in their release.
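Those two claims hang together: at queue depth 1, IOPS is simply the reciprocal of per-IO latency. A quick back-of-the-envelope check (my arithmetic, not HGST's):

```c
#include <stdio.h>

int main(void)
{
    const double latency_s    = 1.5e-6;  /* claimed ~1.5 us per access */
    const double qd1_iops     = 1.0 / latency_s;  /* ceiling at queue depth 1 */
    const double claimed_peak = 3.0e6;            /* claimed peak IOPS */

    printf("QD=1 ceiling: ~%.0f IOPS\n", qd1_iops);  /* ~666,667: right in line
                                                        with the ~700k shown */
    printf("queue depth needed for 3M IOPS: ~%.1f\n",
           claimed_peak / qd1_iops);                 /* ~4.5 in flight */
    return 0;
}
```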

Marvell is noteworthy for being the first to bring a 6Gb/sec SATA controller to market, and they continue to do very well in that area. Their very capable 88SS9189 controller powers the Crucial MX100 and M550, as well as the ADATA SP920.

Today they have announced a newer controller, the 88SS1093. Despite the confusing numbering, the 88SS1093 has a PCIe 3.0 x4 host interface and will support the full NVMe protocol. The provided specs are on the light side, as the performance of this controller will ultimately depend on the speed and parallelism of the attached flash, but it's sure to be a decent performer. I suspect it will behave like their SATA part, only no longer bottlenecked by SATA 6Gb/sec speeds.

More to follow, as I hope to see this controller in person in the exhibition hall (which opens to press in a few hours). Full press blast after the break.

*** Update ***

Apologies, as there was no photo to be taken - Marvell had no booth in the exhibition space at FMS.

Just minutes ago at the Flash Memory Summit, Samsung announced the production of 32-layer TLC VNAND:

This is the key to production of a soon-to-be-released 850 EVO, which should bring the excellent performance of the 850 Pro along with the reduced-cost benefit we saw with the previous-generation 840 EVO. Here's what the progression to 3D VNAND looks like:

3D TLC VNAND will look identical to the rightmost image in the above slide, the difference being that each cell must store more charge levels (eight, for TLC's three bits per cell). Given that Samsung's VNAND tech has more volume to store electrons when compared to competing 2D planar flash technology, it's a safe bet that this new TLC will come with higher endurance ratings than those other technologies. There is much more information on Samsung's VNAND technology on page 1 of our 850 Pro review. Be sure to check that out if you haven't already!

Another announcement was more of an initiative, but a very interesting one at that. SSDs are generally dumb when it comes to coordinating with the host - in that there is virtually no coordination. An SSD has no idea which pieces of which files were meant to be grouped together, and so on (top half of this slide):

Stuff comes into the SSD, and the drive puts it where it can, based on its best guess as to how to optimize those writes. What you'd ideally want is a more intelligent method of coordination between the host system and the SSD (more like the bottom half of the above slide). Samsung has been dabbling in the possibilities here and has seen some demonstrable gains. In a system where they made the host software aware of the SSD's flash layout, and vice versa, they were able to significantly reduce write latency during high-IOPS activity.

The key is that if the host software has more control over where and how data is stored on the SSD, the end result is a much more optimized write pattern, which ultimately boosts overall throughput and IOPS. Storage Intelligence is still in the experimentation stage, with more to follow as standards are developed and the industry pushes forward.

It might be a while before we see Storage Intelligence go mainstream, but I'm definitely eager to see 3D TLC VNAND hit the market, and now we know it's coming! More to follow in the coming days as we continue our live coverage of the Flash Memory Summit!

UPDATE: The event is over, but the video is embedded below if you want to see the presentations! Thanks again to everyone that attended and to all of our sponsors!

It is that time of year again: another installment of the PC Perspective Hardware Workshop! Once again we will be presenting on the main stage at Quakecon 2014, held in Dallas, TX, July 17th-20th.

Main Stage - Quakecon 2014

Saturday, July 19th, 12:00pm CT

Our thanks go out to the organizers of Quakecon for allowing us and our partners to put together a show that we are proud of every year. We love giving back to the community of enthusiasts and gamers that drive us to do what we do! Get ready for 2 hours of prizes, games, and raffles - the chances are pretty good that you'll take something out with you. Really, they are pretty good!

Our primary partners at the event are those that threw in for our ability to host the workshop at Quakecon and for the hundreds of shirts we have ready to toss out! Our thanks to NVIDIA, Seasonic and Logitech!!

Live Streaming

If you can't make it to the workshop - don't worry! You can still watch the workshop live on our live page as we stream it over one of several online services. Just remember this URL: http://pcper.com/live and you will find your way!

PC Perspective LIVE Podcast and Meetup

We are planning on hosting any fans that want to watch us record our weekly PC Perspective Podcast (http://pcper.com/podcast) on Wednesday or Thursday evening in our meeting room at the Hilton Anatole. I don't yet know exactly WHEN or WHERE it will be, but I will update this page accordingly on Wednesday, July 16th, when we get the details. You might also consider following me on Twitter for updates on that status as well.

After the recording, we'll hop over to the hotel bar for a couple of drinks and hang out. We have room for at least 50-60 people to join us in the room, but we'll still be recording even if just ONE of you shows up. :)

I find this more interesting because idTech 5 has not exactly seen much usage outside of RAGE. Wolfenstein: The New Order was also released on the technology, two months ago, and there is one other game planned -- but that is it. Sure, RAGE is almost three years old, and the engine was first revealed in 2007, making it basically seven-year-old technology. Still, that is a significant investment with basically no return, especially considering that RAGE's sales figures were not too impressive (Steam and other digital delivery services excluded).

Happy to announce i'll be helping the amazingly talented id Software team with Doom and idTech 6. Very excited :)

Are you interested in attending Quakecon 2014 next weekend in Dallas, TX but just can't swing the BYOC spot? Well, thanks to our friends at Quakecon and at PC Part Picker, we have two BYOC spots up for grabs for fans of PC Perspective!

While we are excited to be hosting our PC Perspective Hardware Workshop with thousands of dollars in giveaways to pass out on Saturday the 19th, I know that the big draw is the chance to spend Thursday, Friday and Saturday at North America's largest LAN Party.

The giveaway is simple.

Fill out the form below with your name and email address.

Make sure you are able and willing to attend Quakecon from July 17th - July 20th. There is no point in winning a free BYOC spot that you cannot use!

We'll pick a winner on Friday, July 11th so you'll have enough time to make plans.

Sure, this is a little late. Honestly, when I first heard the announcement, I did not see much news in it. The slide from the keynote (below) showed four points: Tessellation, Geometry Shaders, Computer [sic] Shaders, and ASTC Texture Compression. Honestly, I thought tessellation and geometry shaders were part of the OpenGL ES 3.1 spec, like compute shaders. This led to my immediate reaction: "Oh cool. They implemented OpenGL ES 3.1. Nice. Not worth a news post."

Apparently, they were not part of the ES 3.1 spec (although compute shaders are). My mistake. It turns out that Google is cooking up their own vendor-specific extensions. This is quite interesting, as it adds functionality to the API without the developer needing to target a specific GPU vendor (INTEL, NV, ATI, AMD), wait for approval from the Architecture Review Board (ARB), or use multi-vendor extensions (EXT). In other words, it sounds like developers can target Google's "vendor" without knowing the actual hardware.

Hiding the GPU vendor from the developer is not the only reason for Google to host their own vendor extension. The added features are mostly from full OpenGL. This makes sense, because the pack was announced with NVIDIA and their Tegra K1, a Kepler-based SoC. Full OpenGL compatibility was NVIDIA's selling point for the K1, due to its heritage as a desktop GPU. But instead of requiring apps to be programmed with full OpenGL in mind, Google's extension pushes those features down to OpenGL ES 3.1. If developers want to dip a toe into full OpenGL features, they can add a few Android Extension Pack features to their existing ES engine.
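For developers, the practical upshot is testing for one extension string rather than sniffing GPU vendors. A minimal sketch, assuming an ES 3.1 context is current (GL_ANDROID_extension_pack_es31a is the pack's registered extension name):

```c
#include <string.h>
#include <GLES3/gl31.h>

/* Returns 1 if the Android Extension Pack is exposed by the driver,
 * regardless of which vendor's GPU is underneath. */
int has_android_extension_pack(void)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i) {
        const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, (GLuint)i);
        if (ext && strcmp(ext, "GL_ANDROID_extension_pack_es31a") == 0)
            return 1;
    }
    return 0;
}
```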

Epic Games' Unreal Engine 4 "Rivalry" Demo from Google I/O 2014.

The last feature, ASTC Texture Compression, is an interesting one. Apparently the Khronos Group, stewards of OpenGL, were looking for a new generation of texture compression technology. NVIDIA suggested their ZIL technology, while ARM and AMD proposed "Adaptive Scalable Texture Compression". ARM and AMD won, although the Khronos Group stated that the collaboration between ARM and NVIDIA made both proposals better than either would have been in isolation.

The Android Extension Pack is set to launch with "Android L". The next release of Android is not currently associated with a snack food. If I were their marketer, I would block out the next three versions as 5.x and name them (L)emon, then (M)eringue, and finally (P)ie.

To be doubly clear, if the title was not explicit enough, this announcement was not made by Valve. The company is called the "SteamBoy Machine team". If not a hoax, this is one of the many Steam Machines expected to come out of the SteamOS initiative. Rather than taking the platform to a desktop or home theater PC (HTPC) form factor, this company wants to target the handheld PC gaming market.

If it comes out, that is a clever use of SteamOS. I can see Big Picture Mode being just as useful on a small screen as it is on a TV, especially with its large fonts and controller navigation. The teasers suggest that it will use the haptic-feedback touchpads that Valve is expected to base the Steam Controller on. It will also include a 5-inch touchscreen.

Even if this company does not make good on its promises, companies will now be considering portable SteamOS devices. This is the sort of outside-the-box thinking that Valve was hoping for when they set out to create an open platform. Each party will struggle to win at its own goals, yet can also rely on the crowd (other companies or individuals) to keep up in areas where it does not want an edge.

Philosophy aside, the company is targeting 2015 with a "Standard Edition" supporting WiFi and 3G. It would make sense to also offer a WiFi-only model, but who knows.

While "Steam Machines" are delayed, Alienware will still launch their console form-factor PC. The $550 price tag includes a black Xbox 360 wireless controller (with receiver) and Windows 8.1 64-bit. Alienware has also designed their own "Console-mode UI" for Windows 8.1, which can be navigated directly with a controller. It will ship Holiday 2014.

Apparently PC-based consoles equate to dubstep and parkour.

About the "Console-mode UI", it will apparently be what the user sees when the Alpha boots. The user can then select between Steam Big Picture, media, and programs. They also allow users to boot into the standard Windows 8.1 interface.

As for its specifications:

| Component | Base Model ($550) | Upgrade Options |
| --- | --- | --- |
| Processor | Haswell-based Intel Core i3 | Core i5, Core i7 (user accessible) |
| GPU | "Custom" Maxwell-based, 2GB GDDR5 (see next paragraph) | None (not user accessible, soldered on) |
| System Memory | 4GB at 1600 MHz | 8GB (user accessible) |
| HDD | 500GB SATA3 | 1TB or 2TB (user accessible) |
| Wireless | Dual-band 802.11ac | (user accessible) |
| I/O | HDMI out, HDMI in, Gigabit Ethernet, optical audio, 2x USB 3.0 (rear), 2x USB 2.0 (front) | |
| Included Accessories | Xbox 360 Wireless Controller, Xbox 360 Wireless Accessories USB Adapter | |

The GPU is not specified, nor even given a similar part to compare against. PC World claims that it will be comparable to the performance found in the two next-gen consoles. Since the 750 Ti has around 1.3 TeraFLOPs of performance, this GPU is probably near that, or slightly above it. PC Gamer says that it will be based on mobile Maxwell, so it might be similar to a current or upcoming laptop GPU.

One thing that has not been addressed is the HDMI-in port. We know that it supports passthrough for low latency, but we do not know what else it will do with the input video. Alienware has several of these units set up at their booth on the show floor, so we might hear more soon. While its specifications are a bit on the light side, particularly the default amount of RAM (although that is easily and cheaply upgraded), its $550 price, which includes a wireless controller and its adapter, is pretty good.

So my best guess is that Rockstar was waiting on the "next-gen" assets before they bothered releasing Grand Theft Auto V on the PC. The game will be released this fall, alongside the Xbox One and PlayStation 4 ports. They do not mention distribution platforms, but Steam is a fairly safe assumption, at least now that Games for Windows has been laid to rest.

Hopefully, this delay in releasing a PC version is a temporary hiccup caused by the overlapping console generations; with Grand Theft Auto IV, the same could not be said. The problem is, with how secretive Rockstar is, we cannot really tell whether that assumption is true, or whether they were simply non-committal to the PC platform until now. At any rate, until the PC version launches, Rockstar has not and will not get my money. Of course, there is always the danger that, by the time the game does launch, I will not be able to afford the time or the expense.

That's why you should always release the PC version as early as possible.

Good Old Games (GOG), sister company to CD Projekt RED, is releasing GOG Galaxy, an online gaming client similar to Steam and Origin. The difference is that everything about it is DRM-free and completely optional. Galaxy will manage game updates, provide achievements, and host communication between friends... if you want. If you don't? That's okay. Have fun.

Obviously, their most popular competitor is Valve. Steam has a history of being nice to its customers and erring on their side. GOG, historically, takes it to the consumer-friendly extreme, and if Galaxy lives up to their statements, it is no exception. The hope seems to be simply that people will remember GOG more often, and that GOG will end up with more happy customers.

Basically, most platforms are give-and-take. This one is take-what-you-want.

When will it launch? What will it look like? Who knows. More news is promised for later this year, which suggests that we will not get the software until at least next year. Hopefully they will take their time and get it right. I mean, it is not like they need to rush; it is not a mandatory DRM platform - it is not a DRM platform at all. I do expect they will try to target The Witcher 3's launch window (February 2015) for marketing purposes, though.

The Tech Report had their screenshot-fu tested today with the brief lifespan of NVIDIA's SHIELD Tablet product page. As you can see, it is fairly empty. We know that it will have at least one bullet point of "Features" and that its name will be "SHIELD Tablet".

Of course, being the first day of E3, it is easy to expect that such a device will be announced in the next couple of days. This is expected to be based on the Tegra K1 with 2GB of RAM and have a 2048x1536 touch display.

It does raise the question of what exactly makes a "SHIELD", however. Apart from being a first-party device, how would it be any different from other TegraZone devices? We know that Half-Life 2 and Portal have been ported to the SHIELD product line exclusively, and will not be available on other Tegra-powered devices. Now that the SHIELD line is extending to tablets, I wonder how NVIDIA will handle this seemingly two-tier class of products (SHIELD vs. Tegra OEM devices). It might even depend on how many design wins they achieve, along with their overall mobile market share.