Futuremark, the company behind the industry-leading 3DMark gaming benchmarks, has just released a new graphics benchmark for Android devices named Sling Shot, which tests your device against a range of OpenGL ES 3.1 and 3.0 API features.

The new Sling Shot benchmark is designed to put your smartphone through its graphics paces and consists of three different tests: two graphics scenarios and one physics test. The first graphics test puts heavy emphasis on geometry processing and only uses simple shaders, while the second reverses the situation to stress more mathematically complex shaders. The physics test’s math is computed on your phone’s CPU.

Inside the test, you will find examples of the latest graphical effects included as part of the OpenGL ES 3.0 and 3.1 APIs. These include surface, particle, and volumetric illumination; depth of field; and bloom post-processing. End results are displayed in a detailed graph, which contains information about frames per second, CPU clock speed, battery drain, and, perhaps most importantly these days, your handset’s temperature.

Most last-generation handsets can run the ES 3.0 test, which renders at 1080p before being scaled to your actual display resolution. Newer devices, such as the HTC One M9 or Galaxy S6, can run both the 3.0 and 3.1 options. Sling Shot using ES 3.1 bumps the rendering resolution up to full QHD (2560 x 1440) before downscaling to your display’s actual size. This mode also makes use of more advanced compute shaders rather than pixel shaders for the first graphics test. Both modes can also be run “off-screen” using the Unlimited mode.

As part of the update, the benchmark suite now also supports Android TV devices, so you can benchmark your set-top gaming performance. It also brings UI changes for selecting your desired benchmark, along with Russian localization.

The new 3DMark benchmark replaces the standalone Sling Shot app which was used for testing back in April and scores between the two versions are not comparable. You can download the 3DMark suite for free from the Google Play Store.

This year’s CES has seen Nvidia unveil the latest member of its Tegra SoC family. Formerly known as Project Logan and Tegra 5, Nvidia’s new Tegra K1 will unite the company’s struggling mobile processor business with its far more successful desktop graphics division.

It’s exciting stuff when Nvidia claims that it can bring “next-gen console gaming graphics” to mobile devices with a power budget of just two watts. But there are some caveats to know. Let’s take a closer look.

CUDA cores galore

The big talking point about the Tegra K1 is the seemingly huge number of graphics cores squeezed into the new GPU, dwarfing the 72 cores offered by the old Tegra 4. But GPU pedigree is even more important than core count, and the Tegra K1 does not disappoint. Unlike the Tegra 4, the K1’s design comes straight from Nvidia’s high-end Kepler architecture, the same technology that powers the mighty GTX 680, Titan, and 780 Ti desktop graphics cards.

Although Kepler makes the transition to mobile pretty much untouched, the comparison to these top-tier GPUs is a little unfair. The Tegra K1 features only a single SMX, containing a decent 192 CUDA cores, 8 texture units, and 4 ROPs, which is significantly cut down from Nvidia’s top-of-the-line cards. The GTX 680, for comparison, packs a far more impressive 1536 CUDA cores, 128 texture units, and 32 ROPs.

Nvidia hasn’t mentioned the core clock speeds or bandwidth for the K1 yet, but the company did list a peak shader performance figure of 365 GFLOPS during the CES presentation. It’s difficult to gauge the chip’s exact performance at this stage, but a ballpark comparison with the low-end OEM GT 630 (192 cores and 336 GFLOPS) might not be too far off.
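As a rough sanity check on Nvidia's number, peak shader throughput for a Kepler-style GPU is usually quoted as cores × 2 FLOPs per clock (counting a fused multiply-add as two operations) × clock speed. Working backwards from the 365 GFLOPS figure gives a ballpark core clock; this assumes the simple FMA-only formula and is purely illustrative:

```python
# Back-of-the-envelope only: peak FLOPS = cores * 2 (FMA) * clock.
cuda_cores = 192
peak_flops = 365e9  # Nvidia's quoted figure

# Solve for the clock speed the quoted peak implies.
implied_clock_mhz = peak_flops / (cuda_cores * 2) / 1e6
print(f"Implied GPU clock: ~{implied_clock_mhz:.0f} MHz")  # ~951 MHz
```

That lands in the same region as desktop Kepler clock speeds, which is consistent with the architecture moving over largely untouched.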

But enough of the desktop comparisons, how did they squeeze so much into a chip that draws just 2 watts?

There are noticeable efficiencies to be gained by moving from separate chips to a single SoC, and significantly cutting down the number of cores and ROPs puts the K1 well below Nvidia’s range of Kepler laptop chips, which already pull less than 20 watts. The larger 128KB L2 cache will also reduce the energy expended on off-chip memory access.

Particular attention has also been paid to low level optimizations to efficiently manage power consumption. Power and Clock Gating identifies GPU cores that are idle and lowers clock frequencies or completely gates these blocks to reduce power consumption on the fly. Support for ASTC texture compression should also help reduce the amount of work required for both UI and 3D rendering.

The K1 is a massive step up in terms of both graphical horsepower and energy efficiency, but not all of the improvements lie in the hardware department.

Next-Gen APIs

Perhaps the most significant new feature, in terms of offering a next-gen gaming experience on mobile devices, is support for the same graphics APIs as the K1’s bigger brothers. You may remember that the Tegra 4 lacked support for common OpenGL, CUDA, and DirectX 11 APIs, but was instead optimized for certain games, depending on the developer. The K1 departs from this less than ideal arrangement, offering full support for OpenGL 4.4, Microsoft’s DirectX 11.2, OpenGL ES 3.0, and Nvidia’s own CUDA 6.

With new APIs come new graphical improvements, such as support for FXAA and TXAA anti-aliasing to help eliminate jagged edges, realistic physics simulations courtesy of Nvidia’s PhysX, and compute shaders with a full range of high-end effects, such as ambient occlusion. The Tegra K1 will also be the first mobile GPU on the market to support hardware-based tessellation, although Qualcomm has its own Adreno 420 in the works, which will do much the same thing.

The biggest news here is that developers working on PC and console games could now scale their creations down to work on the K1. Interestingly, the Tegra K1 seems to have a fair bit more grunt than both the PlayStation 3 and the Xbox 360, so scaled-down multi-platform ports aren’t outside the realm of possibility. Nvidia has already shown off ports of Unreal Engine 4, Serious Sam 3, and Trine 2 running flawlessly on the Tegra K1.

Two CPU designs

The K1 SoC will be released in two CPU flavours with fully pin-compatible packages, meaning that manufacturers can easily swap between the two. The first is a familiar quad-core-plus-one Cortex A15 layout, almost exactly like the architecture found in the Tegra 4. The second will implement Nvidia’s own dual-core ARM-based CPU.

Just like the Tegra 4, the K1’s CPU comes with four fully clocked A15 cores designed to do the heavy lifting, plus a low-power A15 “companion core” to manage the little tasks. Each core can also be gated for reduced power consumption, with additional cores introduced only when needed. But there is a subtle difference between the K1 and the Tegra 4: the K1’s CPU is based on the new third revision (r3) of ARM’s Cortex A15.

The main improvement with r3 is increased power efficiency, due to improved clock gating. Additional power savings come courtesy of the move to 28nm HPM manufacturing, which Nvidia has chosen to spend on a roughly 20% clock speed boost, from 1.9GHz up to 2.3GHz.

The Tegra K1’s A15 processor will be slightly faster than the Tegra 4’s, but it’s not providing the same leap as the Kepler graphics chip. However, the tried-and-tested quad-core design means that Nvidia can start manufacturing the K1 quickly, with OEMs set to receive shipments this quarter.

It’s all change for Nvidia’s second CPU design, codenamed “Denver”, which drops the companion core and opts for a more traditional dual-core configuration. The cores will be based on the new ARMv8 architecture, which includes 64-bit as well as 32-bit support. Denver will clock in at a peak of 2.5GHz and has a larger 128KB L1 instruction cache and a 64KB L1 data cache. Unfortunately, little more is known about Denver at this point, but it’s interesting to see Nvidia drop the popular quad-core design in favour of just two cores.

Media features aplenty

Nvidia is also packing the Tegra K1 with plenty of extra features. The chip’s Image Signal Processor (ISP), which is in charge of various imaging tasks, has received an upgrade; in fact, Nvidia has stuck two of them on the SoC.

Each ISP is capable of processing 600 megapixels per second with a 14-bit input, up from the 400 MP/s at 10 bits available with the Tegra 4. There are general improvements to noise reduction, higher quality downscaling, and support for up to 100-megapixel image sensors. The inclusion of two ISPs also opens the door for the dual-camera tricks that we’ve seen on other devices. Just like the Tegra 4, 4K video output is also supported via HDMI, although it’s doubtful that the GPU will handle 4K 3D gaming.
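To put those throughput numbers in perspective, here are a couple of theoretical ceilings implied by the quoted 600 MP/s per-ISP figure (illustrative upper bounds only, not measured performance):

```python
# Theoretical ceilings from the quoted 600 MP/s per-ISP throughput.
isp_mp_per_s = 600e6
pixels_100mp_still = 100e6
pixels_1080p_frame = 1920 * 1080

# A single ISP could process at most six 100 MP stills per second...
stills_per_s = isp_mp_per_s / pixels_100mp_still
print(f"100 MP stills/s per ISP: {stills_per_s:.0f}")  # 6

# ...while 1080p video sits far below the ceiling.
fps_1080p = isp_mp_per_s / pixels_1080p_frame
print(f"Theoretical 1080p frames/s per ISP: {fps_1080p:.0f}")  # ~289
```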

Final thoughts

The Tegra K1 finally looks like Nvidia is playing to its strengths, and for that reason alone the K1 is a pretty exciting prospect. This could well be the chip that we see in the next Nvidia Shield.

The biggest obstacle still remaining for Nvidia is finding a large enough consumer base; the appetite for console-quality gaming on mobile devices just isn’t there yet. A few years down the line, initiatives like SteamOS may help push gaming down the Linux road, and a strong gaming chip on Android may be a much bigger deal then.

Technically the K1 is strong, but in a market orientated towards “casual gaming”, perhaps it’s not as groundbreaking as it first seems. We’ll have to wait and see what developers make of Kepler on Android, and whether Denver presents anything new.

The next generation processor war is definitely on. We’ve already had a look at the promising big.LITTLE A53/A57 combination, and now it’s Qualcomm’s turn to hint at what it has up its sleeve for the next batch of processors. This time it’s not about raw CPU power, but what the new Snapdragons plan to bring to graphics and gaming.

The new Snapdragon 600 and 800 chips, which were shown off at CES earlier this year, are the first processors to receive certification for OpenGL ES 3.0. The Adreno 300 series graphics processors used in these Snapdragon chips will support the new standard, which will bring a host of new features for developers and gamers alike.

Who needs a new API?

You might be wondering why a new API (application programming interface) is really that important. We are so used to hearing about clock speeds and core counts that APIs are often overlooked, but they are equally important when it comes to delivering high-quality graphics-orientated products.

Low-level APIs, like Microsoft’s DirectX and OpenGL, allow developers to utilize graphics hardware without having to write everything from scratch. In other words, graphics APIs work as libraries of pre-coded instructions for developers to use, drastically speeding up development time.

New hardware and new APIs tend to go hand in hand, allowing developers to easily access new hardware features. This is why OpenGL ES 3.0 certification is quite important: devices using Qualcomm’s Snapdragon 600 and 800 processors will be some of the first to have access to the latest graphical features.

OpenGL ES 3.0

OpenGL ES is a royalty-free API based on OpenGL, so it’s commonly used by smaller developers designing 2D/3D applications for small, portable electronic devices.

OpenGL ES 3.0 adds several optional features which will improve the performance and quality of smartphone games. Delving into the specifics, the introduction of occlusion queries will allow an application to better determine the visibility of objects, reducing the number of vertices rendered on screen when they can’t be seen. This should see games run a lot smoother.

The update also sees the introduction of instance rendering, where duplicate items can be rendered with slight alterations without the usual associated performance costs. Transform feedback for particles and support for four or more rendering targets will also assist developers in producing superior looking games.

More noticeably for end users, OpenGL ES 3.0 includes support for superior ETC2 / EAC texture compression, allowing for higher quality compression. This means that developers can squeeze slightly higher quality textures into the same file sizes. Not only will this free up GPU memory, games will also require less space on your SD card.
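As a rough illustration of what block compression buys you (the numbers below are the standard format sizes, not measurements from any particular game): ETC2 stores RGB data at a fixed 4 bits per pixel versus 24 bits uncompressed, so a single 1024×1024 texture shrinks from 3 MiB to 512 KiB.

```python
# Standard format sizes: uncompressed RGB8 is 24 bits/pixel,
# ETC2 RGB is a fixed 4 bits/pixel (a 6:1 ratio).
width = height = 1024
uncompressed_bytes = width * height * 3          # RGB8: 3 bytes/pixel
etc2_bytes = width * height * 4 // 8             # ETC2: 4 bits/pixel

print(f"Uncompressed: {uncompressed_bytes // 1024} KiB")         # 3072 KiB
print(f"ETC2:         {etc2_bytes // 1024} KiB")                 # 512 KiB
print(f"Ratio:        {uncompressed_bytes / etc2_bytes:.0f}:1")  # 6:1
```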

Perhaps most importantly of all, OpenGL ES 3.0 is fully backward compatible with OpenGL ES 2.0. So if you’re planning on buying a device powered by a next generation Snapdragon processor, you’ll still be able to play games using the older API.

Gaming in 2013

Whilst many of the technologies introduced in OpenGL ES 3.0 have been around in the 3D gaming space for a while on high-end hardware, their arrival goes to show that smartphone and tablet hardware manufacturers are taking gaming more and more seriously.

However, it’s unlikely that gaming on Android will be dominated by OpenGL ES 3.0 devices in 2013, as many Android gaming-orientated platforms launching this year, like OUYA, will still be using OpenGL ES 2.0. But that’s not to say that come late 2013 or 2014 we won’t be hearing more about support for newer APIs like OpenGL ES 3.0 or DirectX 11.

Qualcomm is certainly putting itself in a good position for the coming years by being the first to market with the latest graphics technologies, but it isn’t the only one looking to future proof its chips. ARM’s brand new Mali-T600 series should also be OpenGL ES 3.0 compliant, although it is still awaiting certification.

There isn’t any confirmation yet about which chips the new Mali GPUs will appear in, but if we see one thrown in alongside one of ARM’s high-performance big.LITTLE CPUs, then we could well be setting ourselves up for a showdown between Qualcomm’s Snapdragon 800 and ARM’s A53/A57 at some point in the not too distant future.

It seems Vivante may have just beaten Samsung to the punch in releasing a mobile GPU that supports the latest specifications, like OpenGL ES 3.0 and OpenCL 1.1. Vivante’s GPUs are usually found in SoC platforms from companies we don’t hear much about, like Freescale and Marvell (whose silicon powers the latest Vizio Co-Star Google TV).

As a promoter and contributor in the Khronos OpenGL ES Working Group, Freescale worked closely with Vivante and committee members to define the next generation graphics API, which is expected to offer highly advanced graphics innovations to current and upcoming mobile and consumer devices. – Rajeev Kumar, i.MX product line manager, Freescale Semiconductor

The new OpenGL ES 3.0 graphics specification will help developers create mobile games that look far more advanced, closer to what we see on consoles and PCs. Next-gen GPUs will even be able to surpass consoles in performance, but they also need the necessary graphics features to create amazing-looking games with impressive visuals.

Vivante’s GC core series, starting with the GC800 series and higher, will include OpenGL ES 3.0 features such as:

An updated shader and GPU pipeline, adding support for occlusion queries, transform feedback, instanced rendering, four or eight render targets, and OpenCL–ES 3.0 interoperability to simultaneously support multiple graphics and GPU compute contexts.

Support for the new texture compression (ETC2 / EAC) and pixel formats included in the specification.

Support for the latest GLSL ES shading language, including signed / unsigned 32-bit and 16-bit integer and floating point operations based on IEEE-754 precision requirements.

Vivante’s new GPUs will also be built on a unified shader architecture, which should help maintain a similar level of performance whether a game leans more heavily on pixel shaders or on vertex shaders, with no compromises.

The new Vivante GPUs also include a number of new hardware features.

While not as well known as Mali or Tegra GPUs, it seems Vivante has built some solid products here that are future-proof and look very competitive with anything else on the market, although we can’t be sure of that until we see some real benchmarks. Still, the GPUs look promising, and I’m glad companies are so aggressive about pushing OpenGL ES 3.0 chips into the market, because the more there are out there, the faster developers will start making games specifically for the new API.

Samsung has just released the whitepaper for their long-awaited Exynos 5250 SoC, now called the Exynos 5 Dual, and there’s a lot of interesting information to be discovered in it.

The Exynos 5 Dual will be the world’s first Cortex A15-based chip when it ships later this year, presumably inside Samsung’s upcoming 11.8-inch tablet. Samsung had to wait for an A15-based chip to support the very high WQXGA (2560×1600) resolution of the Galaxy Tab 11.6.

Some of the most important features of the newly unveiled Exynos 5250 are:

Dual-core 1.7 GHz Cortex A15 CPU

Mali T604 GPU

OpenGL ES 3.0

OpenCL 1.1 full profile

Support for WQXGA displays

Wi-Fi display support

12.8 GB/s memory bandwidth with 2-port 800 MHz LPDDR3 RAM support

1080p 60 FPS video performance and VP8 codec decoder

USB 3.0 support

Let us take you through the most important features of the new Exynos 5 Dual.

Cortex A15

Cortex A15 is the next generation of ARM CPUs, arriving to replace the Cortex A9 design at the high-end of the scale. In a way, the Exynos 5250 will compete with Qualcomm’s S4 chip, based on the Krait design, which is a next-gen design as well, although with slightly less performance per clock than Cortex A15. This follows the same pattern of the first Snapdragon chips (S1, S2, S3), based on the Scorpion design, which also had slightly lower performance than Cortex A9.

Qualcomm’s advantage was that, in both cases, then and now, they came first to market with the new designs. But when the Cortex chips came out, they were able to match and exceed the performance of Qualcomm’s custom design chips.

It seems that history will repeat itself, with the dual-core 1.7 GHz Exynos 5 Dual arriving on the market and competing with Qualcomm’s dual-core 1.7 GHz S4 chips. Qualcomm will also have a quad-core 1.5 GHz S4 Pro chip later this year, but it will be the same story as the earlier dual-core S4 vs quad-core Tegra 3 comparison. I expect the Exynos 5 Dual to beat the quad-core S4 Pro in performance for all single-threaded apps, with the S4 Pro gaining a slight advantage in multi-threaded apps.

Mali T604 GPU

What’s nice about Exynos 5 Dual is that it doesn’t come just with a next-gen CPU, but also a next-gen GPU. This is a fortunate match, as they are both designed by ARM itself, so they benefit from higher integration, and also because ARM changes its GPU architecture only once every 5 years. So Mali T604 is the very first GPU design based on the new Midgard architecture, with unified shaders, OpenGL ES 3.0 and OpenCL 1.1 full profile.

We’ve already discussed that the OpenGL ES 3.0 specification is meant to help developers create more visually impressive games on mobile devices, that should even surpass current-gen consoles soon. OpenCL, or the Open Compute Language, is meant to give developers a way to harness the power of the GPU to enhance what the CPU can do, while making the whole assembly more power efficient. It can be used for games, digital photography, and other things that can be done faster with parallel computing (usually graphics related).

WQXGA Displays and 12.8 GB/s Memory Bandwidth

To power a display with a 2560×1600 resolution, which is double the linear resolution (four times the pixels) of today’s Android tablets (1280×800), you need not only a powerful GPU but also high memory bandwidth, so you can feed all that high-resolution data to the screen. Fortunately, the Exynos 5 Dual supports 12.8 GB/s of memory bandwidth, via 2-port 800 MHz LPDDR3 RAM.

It takes about 1 GB/s of bandwidth to draw a 24-bit WQXGA screen at 60 FPS, but that’s just for the screen alone. Add the interface and all the icons, and the required bandwidth grows to around 8 GB/s. And that’s just the effective bandwidth; taking into account a realistic memory utilization of 80%, you reach 10 GB/s. The Exynos 5 Dual has been designed with support for 12.8 GB/s of memory bandwidth specifically to drive a 2560×1600 display.
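Those figures check out with some quick arithmetic (treating a 24-bit pixel as a 4-byte word in memory, which is how framebuffers are usually stored, and taking the 8 GB/s effective figure and 80% utilization as given; the 32-bit-per-port LPDDR3 width is an assumption consistent with the quoted 12.8 GB/s):

```python
# Scan-out bandwidth for one WQXGA frame buffer at 60 FPS.
# 24-bit color is typically stored as 4 bytes per pixel in memory.
width, height, bytes_per_pixel, fps = 2560, 1600, 4, 60
scanout_gb_s = width * height * bytes_per_pixel * fps / 1e9
print(f"Bare scan-out: {scanout_gb_s:.2f} GB/s")  # ~0.98, i.e. ~1 GB/s

# With UI composition the effective need is put at 8 GB/s;
# at 80% achievable utilization, that means more raw bandwidth:
required_raw = 8 / 0.8
print(f"Raw bandwidth needed: {required_raw} GB/s")  # 10.0 GB/s

# And the chip's quoted figure: two LPDDR3 ports, 800 MHz clock
# (1600 MT/s double data rate), assumed 32 bits (4 bytes) wide each:
chip_bw = 2 * 1600e6 * 4 / 1e9
print(f"Exynos 5 Dual peak: {chip_bw} GB/s")  # 12.8 GB/s
```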

Such high resolution displays also use a lot more power than lower-res displays. Manufacturers mitigate the increased power consumption with more efficient and more powerful GPUs, larger batteries, and more efficient displays. The Exynos 5 Dual supports a so-called PSR (Panel Self Refresh) mode, which enables the display to use 20x less power when displaying a static image, such as when reading an ebook or a web page, or viewing a picture. PSR mode should help significantly reduce the overall power consumption of the device.

1080p 60 FPS Video, Wi-Fi Display and VP8 Decoding

The 1080p 60 FPS video mode might not seem very useful unless you want to record super smooth videos, but it’s great for displaying stereoscopic 3D graphics. The Exynos 5 Dual is the first chip on the market to support full HD 60 FPS video decoding/encoding, and, if you have some 3D glasses lying around, you can use HDMI to stream video from your phone to the TV and watch it in 3D.

The Exynos 5 Dual also supports Wi-Fi Display technology, which means you can stream videos, and everything else on your phone, wirelessly to your TV. Wi-Fi Display requires a lot of memory bandwidth, as the chip has to decode and re-encode full HD video at the same time, but the Exynos 5 Dual manages that while maintaining a minimum of 30 FPS.

Google’s VP8 codec is also supported by the Exynos 5 Dual, and I think we’re going to see a lot more upcoming chips supporting hardware acceleration for VP8. One of the main arguments for the H.264 video codec over VP8 was that H.264 was already hardware accelerated by many devices on the market, while VP8 wasn’t. I think Google is trying to change that, and finally bring us a widely used royalty-free and open source video codec. The Exynos 5 Dual is a first step in that direction, being able to decode full HD video at 60 FPS using the VP8 codec.

USB 3.0 Support

It looks like the Exynos 5 Dual is bringing us another first in the mobile world: USB 3.0 support, a standard that can reach 5 Gbps transfers, roughly ten times faster than USB 2.0. With all the new laptops coming out these days supporting USB 3.0 and SATA 3 drives, mobile devices were becoming the bottleneck when transferring files to and from the PC. USB 3.0 support will help you transfer files in seconds rather than minutes.
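To make “seconds rather than minutes” concrete, here is an illustrative comparison using the raw signalling rates (real-world throughput is lower due to encoding and protocol overhead, so treat these as best-case numbers; the 3 GB file size is just an example):

```python
# Best-case transfer times from raw signalling rates (8 bits per byte;
# real throughput is lower due to encoding and protocol overhead).
usb2_mb_s = 480 / 8    # USB 2.0: 480 Mbps -> 60 MB/s
usb3_mb_s = 5000 / 8   # USB 3.0: 5 Gbps  -> 625 MB/s
file_mb = 3000         # hypothetical 3 GB movie

print(f"USB 2.0: {file_mb / usb2_mb_s:.0f} s")  # 50 s
print(f"USB 3.0: {file_mb / usb3_mb_s:.1f} s")  # 4.8 s
```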

The Exynos 5 Dual’s USB 3.0 port can operate as either host or device, so besides transferring files to your PC, users will also be able to connect peripherals to the device, like keyboards, controllers, external storage, and LTE modems. USB host support will give Exynos 5 Dual devices a great deal of flexibility.

Conclusion

The Exynos 5 Dual is the chip you should look for in upcoming tablets and smartphones, as it should have the most powerful CPU and GPU of any new chips coming out by the end of the year, including the S4 Pro and OMAP 5. Next year, we can begin talking about the quad-core Tegra 4 and the Exynos 5 Quad, but, until then, Exynos 5 Dual should reign supreme in the mobile market in terms of performance and features.

When the Crysis FPS game came out on the PC in 2007, it pushed the boundaries of what gaming graphics should look like. The “Can it run Crysis?” catchphrase became an Internet meme, a question asked whenever a new CPU or GPU is announced.

ARM hardware has been evolving at an amazing pace, with the performance of mobile GPUs doubling every 12 months. Low-power ARM chips are very close to surpassing current-gen consoles in terms of performance and gaming visuals. Crytek, the company behind Crysis, seems to agree: if the next-gen PS4 and Xbox 720 don’t come soon, consoles will be surpassed by tablets as the main devices for gaming:

We can’t comment on or even speculate as to when, or if, the next generation will be announced. But we think that it’s time for the next generation and we think that it’s overdue already. The current generations are drying out and the longer we wait for the next generation of consoles, the higher the likelihood that they could fall behind tablets in terms of being the first thing people reach for when the time comes to play games. Tablets are putting pressure on the gaming industry, and taking over in some ways, so that should be kept in mind. – Crytek founder Cevat Yerli.

Some Mali T604 demos are already very impressive, but I think ARM GPUs will only start surpassing consoles next year, when the second generation of Mali T600 GPUs starts appearing, like the eight-core Mali T628 and the Mali T678.

But performance alone is not enough; you also need new graphics features. The newly ratified OpenGL ES 3.0 specification, which will be supported by all next-gen GPUs, starting with Mali T604 and Adreno 320, will help tablets surpass consoles in terms of visuals. If you doubt this, consider that the PS3 console mostly uses the OpenGL ES 2.0 standard, albeit a proprietary and modified version of it.

So, as long as developers start making games straight for OpenGL ES 3.0, OpenCL, and the new and more powerful GPUs, we should see games that look better on mobile devices than on consoles as soon as next year.

Now, is Crytek already working on porting the PS3 version of Crysis to tablets? Because even if the hardware can “run Crysis”, we still need Crytek to actually port the game to Android.

The OpenGL ES 2.0 specification was finalized back in 2007, but we didn’t get to use it in smartphones until 2009, when the iPhone and Android phones started supporting it in hardware. Even so, five years feels like a long time to wait for an update to the OpenGL ES specification. But the wait is over: Khronos, the group responsible for OpenGL, OpenCL, OpenVL (augmented reality), and other open standards, has finally announced that the OpenGL ES 3.0 specification has been ratified.

The OpenGL ES 3.0 specification is largely an implementation of the desktop OpenGL 3.3, with some features taken from OpenGL 4.x. Compared to DirectX, it sits somewhere between DirectX 9.0c and DirectX 10, mainly because of the lack of geometry shaders, which were probably omitted because supporting them this early in the development of mobile GPUs would use too much battery power.

However, considering that next year we should start seeing mobile GPUs that are more powerful than current-gen consoles, I expect Khronos to release an OpenGL ES 4.0 standard with geometry shaders a lot sooner than after another five-year wait. My guess is that we’ll see OpenGL ES 4.0 in 2014, or 2015 at most. Since OpenGL 4.0 supports tessellation, I wouldn’t be surprised if that feature were added to OpenGL ES 4.0 as well.

But the focus of OpenGL ES will always remain power consumption, as I’m sure nobody wants their mobile device to run out of battery after two hours of playing a game, no matter how impressive the graphics are. And since there are already hundreds of millions of OpenGL ES devices on the market, with billions coming soon thanks to Android and iOS, I think we’ll see indie developers port their mobile games to the desktop in OpenGL ES form, rather than full OpenGL. The newly announced OpenGL 4.3 fully supports OpenGL ES 3.0, so new desktop chips will support OpenGL ES 3.0 too.

We should start seeing mobile GPUs that support OpenGL ES 3.0 coming out this year, including the Mali-T604 and Adreno 320. Benchmark suites for OpenGL ES 3.0 should also start appearing by the end of the year.

GLBenchmark is now the standard benchmarking tool for comparing mobile GPUs. We’re going to see a GLBenchmark 3.0 this fall with all the necessary tests for the new OpenGL ES 3.0 standard, but until then we get a significantly improved GLBenchmark 2.5. The new version still tests only OpenGL ES 2.0 features, but does so in a much more aggressive way, leaving even the latest mobile GPUs struggling to reach 15 FPS in most tests, let alone 30 FPS.
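As an aside on how those FPS figures are computed: benchmarks generally report frames rendered divided by total render time, not an average of the per-frame instantaneous rates, because the latter overweights fast frames and hides stutter. A minimal Python sketch of the difference (my own illustration, not GLBenchmark’s actual code):

```python
def average_fps(frame_times_ms):
    """Overall FPS: total frames divided by total render time.

    This is how benchmark scores are usually reported; averaging the
    instantaneous per-frame FPS values instead would overweight fast frames.
    """
    total_seconds = sum(frame_times_ms) / 1000.0
    return len(frame_times_ms) / total_seconds

# One slow 100 ms frame among ten 10 ms frames: overall FPS is about 55,
# while a naive mean of per-frame rates would claim roughly 92.
frame_times = [10.0] * 10 + [100.0]
naive_mean = sum(1000.0 / t for t in frame_times) / len(frame_times)
```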

Considering that GLBenchmark 3.0 is just a few months away, and will probably arrive before any of the OpenGL ES 3.0 devices come to market, I’m surprised that the benchmark’s developers decided to make it this aggressive this early. Benchmark tools are supposed to “stress” GPUs, but I think they should also give a reasonably accurate representation of what mobile GPUs can do today.

Because current high-end GPUs fail to reach even 15 FPS, less informed readers could be forgiven for believing that the current crop of graphics chips is pretty poor, which is of course not the case.

The new Egypt test is now called Egypt HD; it keeps the same animation but renders a much more complex scene. In addition, the offscreen test now defaults to 1080p resolution, although you can customize it.
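That fixed 1080p offscreen resolution is what makes scores comparable across devices with different native panels: for a purely fill-rate-bound test, performance scales roughly with the inverse of the pixel count. A back-of-the-envelope sketch of that scaling (my own illustration, with hypothetical helper names, not part of GLBenchmark):

```python
def pixel_count(width, height):
    """Pixels rendered per frame at a given resolution."""
    return width * height

def estimated_fps_at(fps_1080p, target_width, target_height):
    """Rough estimate for a purely fill-rate-bound workload: FPS scales
    inversely with pixels rendered. Real workloads are only partly
    fill-bound, so treat this as an upper-bound sketch, not a prediction."""
    scale = pixel_count(1920, 1080) / pixel_count(target_width, target_height)
    return fps_1080p * scale

# 1080p pushes 2.25x the pixels of 720p, so a device managing 15 FPS
# offscreen at 1080p could reach about 33.8 FPS at 720p if the test
# were entirely fill-bound.
```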

AnandTech has run these tests on a range of Android devices to see how they compare:

And here are a couple of tests using the old Egypt Classic, which show that the new Egypt HD test is around 3x more demanding:

The smartphone tests show that the Galaxy S3’s overclocked Mali-400 GPU is still the leader of the pack, although its lead seems to shrink in the more complex Egypt HD test compared to the older and lighter Egypt Classic test:

This makes me curious to see how the benchmarks will look for the Adreno 320 and Mali-T604 GPUs later this year, especially in the upcoming GLBenchmark 3.0 for OpenGL ES 3.0.

OpenGL ES 2.0 is a lighter version of desktop OpenGL 2.0 (which is quite old by now), stripped of the features that would consume too much power, and it tries to strike the right balance between visual quality and battery efficiency for the features that remain. Keep in mind that OpenGL ES 2.0 was finalized about five years ago, when the high-end mobile chips were ARMv6 CPUs like the ARM11, which we now see only in the lowest-end Android smartphones!

But with the latest GPUs able to drive incredibly high resolutions while still keeping power consumption low enough for mobile devices, OpenGL ES 2.0 is starting to feel old and limited. We need something new to advance 3D games on mobile devices, and luckily for us, OpenGL ES 3.0, based on OpenGL 3.3 and a bit of OpenGL 4.x, is on the way. The release of the specification will probably happen at SIGGRAPH 2012, in August.

The good news is that you won’t have to wait too long to get OpenGL ES 3.0 on your device. Basically all the new mobile GPU architectures set to launch this year or next will support it, including the Mali-T604 and Adreno 320 (this year), and Tegra 4, PowerVR Series6, and Mali-T658 (next year). These new GPUs will also support OpenCL 1.1 for GPU compute, so OpenGL ES 3.0 together with OpenCL should allow developers to create some very impressive mobile games by next year.

Since WebGL is also based on OpenGL ES 2.0, I’m expecting an update and a transition to OpenGL ES 3.0 for WebGL as well, although they’ll first have to make sure that AMD, Nvidia, and Intel support it in their latest desktop drivers. And since Android 5.0 is launching this fall, it will hopefully support OpenGL ES 3.0 natively, so all the new devices coming in 2013 can make use of it by default.

There should be a new version (3.0) of GLBenchmark released by fall, which will make it easier to compare OpenGL ES 3.0 devices and give us more accurate information about the new GPUs’ performance.
