
An anonymous reader writes "The Khronos Group has published the first products that are officially conformant to OpenGL ES 3.0. On that list are Intel's Ivy Bridge processors with integrated graphics, which support OpenGL ES 3.0 via the open-source Linux Mesa driver. This is the best timing yet for Intel's open-source team in supporting a new OpenGL standard — the standard is just six months old, whereas it took them years to support OpenGL ES 2.0. There's also no conformant OpenGL ES 3.0 Intel Windows driver yet. Intel also had a faster turn-around time than NVIDIA and AMD; the only other hardware on the list is from Qualcomm and PowerVR. OpenGL ES 3.0 works with Intel Ivy Bridge when using the Linux 3.6 kernel and the soon-to-be-released Mesa 9.1."
Back in August, Phoronix ran a rundown of what OpenGL ES 3.0 brings.

Also, Windows Mobile and RT only use ARM (AFAIK), while there are a few Intel Android phones, and Ubuntu phones are expected on both architectures as well. Linux actually has a need for new OpenGL ES version support.

That would be like asking why the software that runs on an airplane doesn't run on your cellphone. Windows Embedded, just like embedded Linux, runs a stripped-down kernel and a MUCH thinner OS, because when you are dealing with an embedded system you just aren't gonna be doing as much as with a full OS; they are designed for specific functions.

On modern embedded systems the Linux kernel is the same. In the old days, or in cases where Linux is used on processors without VMM support, the kernel might be substantially different. Today most embedded systems use an x86 or ARM architecture. They don't use a "stripped down" kernel; they use the same kernel, configured at build time with the features they want. The same is true on the desktop. The only difference that you may be mistaken

Uhhh...yeah, kinda what I was talking about: you just don't include anything that isn't gonna be specifically needed for the limited functions you are gonna support, hence a lower memory and space footprint. Just as you wouldn't build Windows Embedded with support for calc and WiFi and Windows sharing if it's running in a kiosk, so too would you not include anything in embedded Linux except the kernel and whatever modules you need to support the specific hardware and perform the specific task you had in mind.

" Windows Embedded, just like Linux embedded, runs a stripped down kernel "

Again. You are simply wrong. Linux doesn't run a "stripped down" kernel. Just as with a desktop version, the modules are either compiled into the kernel at build time or loaded as needed. There is no difference. In an embedded system you know which modules are needed and which aren't, so you build them into the kernel. There is no wasted storage space, since there is no need to have modules lying around on disk (including FLASH) jus

Except that ES 2.0 (2.X? 2.0 was directly followed by 3.0) supports very few features, even compared to various desktop GL 2.x implementations. Want to render to a depth texture (for shadow mapping)? That's not supported by the core spec; you need an extension. Want multiple render targets in your FBO? Same thing. Even the GLSL is limited — officially, the fragment shader only supports loops with a compile-time fixed iteration count (some implementations relax this slightly, though). Not to mention that the

The only things that OpenGL provides that ES doesn't are the big, ugly, slow things which are useful for certain kinds of graphic design and industrial apps, but are completely useless for high-performance games. You're really not missing much, and in general if you're using things which are not provided by OpenGL ES to write apps where the real-time user experience counts, you are doing it wrong.

I went from OpenGL 1.x over to OpenGL ES, so I don't know most of what modern OpenGL can do. But one glaring weakness is that OpenGL ES doesn't support drawing quads, only triangles. Yeah, the GPU processes a quad as two triangles internally, but if the API supports quads, there's less vertex data to generate and pass to the GPU. You can somewhat make up for it by using glDrawElements, which indexes into the vertex list, but in a lot of cases (especially for 2D scenes), it's still less efficient than if you had quads.

He explained it in the comment: "there's less vertex data to generate and pass to the GPU"

This is false. If you're drawing a quad, you pass 4 vertices. If you draw 2 triangles forming a quad, you also pass 4 vertices (using a triangle strip and index buffer). The index buffer is not updated every frame, just once.

In my experience that will show an artifact, like an odd line between triangles where there shouldn't be one. It's been a while since I've worked in straight GL, but you should reuse the verts to prevent that. Even if the verts are in the exact same position, it won't matter.

What SGI machine supports modern OpenGL with vertex shaders but no vertex caching? If you don't need vertex shaders, then you don't need to do things the newer way with later versions of OpenGL anyway. The drivers will just translate everything you do.

And SGI machines have split quads up into triangles for a long time now. It is faster than trying to check for coplanarity, and better than dealing with the artifacts that some pure quad rendering methods produce if the points are not coplanar (or even some dep

That's not always true. Sometimes it's more efficient to just stick the vertex data directly into the vertex buffer as your frame is generated, then do your draw order sorting at the end by rewriting the index buffer.

Triangle strips require everything in one draw call to be connected. If you want to draw N quads, you have to make N draw calls, passing 4 vertices each time. There's a significant amount of overhead involved in each draw call, so this is slow. With quad support, to draw N quads you can just make a big array of 4N vertices and process it all in one draw call.

Here's a trick you can do to draw all your two-triangle quads in one draw call: pass 4 identical vertices for each quad, plus a second attribute giving each corner of the quad: (-1, -1), (1, -1), etc. Then displace the vertex according to the corner attribute in the vertex shader. There's more data to pass, but I assume you're not passing the data every frame. If you are, don't. If you're moving the quads, just pass the position.

On the other hand, if you're not the one writing the apps, it can be infuriating to use a system that supports only OpenGL ES. Last time I tried to use Ubuntu on a system with only OpenGL ES support, I discovered that OpenGL ES basically meant "no graphics acceleration", because nothing in the repository supported it; everything wanted OpenGL.

That's probably changed since then (it was a few years ago), but it was pretty frustrating at the time, especially since the GPU itself was rated for full OpenGL, it was only that PowerVR charged extra for that driver and TI didn't want to license it.

Not true. Full-blown OpenGL supports geometry and tessellation shaders, for example, and loosens various restrictions or limitations of ES, e.g. you can draw quads instead of screwing around triangulating everything. And while the fixed-function pipeline is deprecated, it's still useful for just knocking something out, and far simpler than screwing around trying to compile, link and use a shader that does a matrix transformation and little else.

The only things that OpenGL provides that ES doesn't are the big, ugly, slow things which are useful for certain kinds of graphic design and industrial apps, but are completely useless for high-performance games.

You can do things like augmented reality on a smartphone. Use the built-in camera to take a live video stream of a particular location, the MEMS gyroscope and tilt sensors to determine the orientation of the system and GPS to determine the latitude and longitude. Combine this information together and render 3D information on top of this view. Maybe it's a terrain map, geological layers, the direction to the nearest public bar, train station, police station or A&E.

Beating out NVIDIA and AMD is marginally surprising, but I'd figure OpenGL ES is the lowest-priority 3D interface to implement for Windows, behind Direct3D and OpenGL. MS's attempt at targeting the lower-end market still emphasizes Direct3D, with OpenGL on Windows mostly only mattering for the occasional game engine and engineering application. OpenGL ES on Windows, I'm thinking, serves a very, very small slice of potentially interested parties.

Well, it depends on what the average consumer needs from their PC. If it's not gaming (for which a consumer would buy a discrete card anyway), most consumers just need some graphics for web surfing and the like. With the built-in graphics of Ivy Bridge, there is enough GPU power for the average consumer. Why would this average consumer need Direct3D for YouTube?

To say that they 'need' it would be a gross overstatement; but if they are doing their casual youtubing on a relatively recent wintel, they'll be using it anyway [msdn.com]...

According to Intel, the GMA 3150 can help the CPU decode MPEG2 videos. DXVAChecker shows hooks for MPEG2 (VLD, MoComp, A, and C) up to 1920x1080. Therefore, the performance of the N450 and N470 with GMA 3150 is currently not sufficient to watch H.264-encoded HD videos at resolutions above 720p. HD Flash videos (e.g. from YouTube) also do not run fluently on the Atom CPUs.

It supports MPEG2. I don't think I've ever played an MPEG2 video on this machine, and it could probably decode them fine.

I use it on the desktop for Android development because it's a pain in the arse to develop OpenGL ES at the best of times. Development turnaround is a lot faster than uploading to a device and discovering the shader is broken because of a syntax error.

I'm calling ES directly through JOGL bindings and the GLES2 profile. I don't care if the driver is doing it over OpenGL, DirectX or directly. As far as I'm concerned it's ES and that's the primary thing for me. Makes it vastly easier to develop code, sparing any actual android work until things are beginning to take shape.

I'm not using the Android SDK VM for OpenGL ES 2.0 work. I'm using Java and JOGL on the desktop to develop the rendering code in a test harness. The JOGL and the Android bindings are close enough that I can write two backends and an abstraction layer and have 99% of the rendering code common to either. I can turn stuff around probably 5x faster too, without the uploading to a device.

The Android SDK VM is slow enough at the best of times and the OpenGL software emulation is abysmally slow. It's real

There will be a Windows driver soon, I assume. I also assume NVIDIA and AMD will provide drivers for Windows.
Asking what specifically you would use OpenGL ES 3.0 on Windows for is like asking what specifically you would use OpenGL on Windows for. It's for portable 3D graphics. OpenGL ES 3.0 looks like it will be the most portable version yet.

On Windows, the GPU is driven by either DirectX or OpenGL. Native OpenGL ES drivers for Windows are ONLY needed for cross-platform development where applications destined for mobile devices are built and tested on Windows first.

Now, this being so, the usual way to offer ES on the desktop is via EMULATION LAYERS that take ES calls and pass them on to the full blown OpenGL driver. So long as full OpenGL is a superset of ES (which is mostly the case), this method works fine.

The situation is different on Linux. Why? Because traditionally, Linux has had terrible graphics drivers from AMD, NVIDIA AND Intel. Full-blown OpenGL contains tons of utterly useless garbage, and supporting it all is more effort than it's worth on Linux. OpenGL ES is a chance to start over. OpenGL ES 2.0 is already good enough for ports of most AAA games (with a few rendering options turned off). OpenGL ES 3.0 will be an almost perfect alternative to DirectX and full-blown OpenGL.

OpenGL ES 2.0/3.0 is getting first class driver support on Linux class systems because of Android and iOS. OpenGL ES 3.0 will be the future standard GPU API for the vast majority of computers manufactured. However, on Windows, there is no reason to expect DirectX and full blown OpenGL to be displaced. As I've said, OpenGL ES apps can easily be ported to systems with decent OpenGL drivers.

Intel is focusing on ES because, frankly, its drivers and GPU hardware have been terrible. It is their ONLY chance to start over and attempt to gain traction in the marketplace. On the Windows desktop, Intel is about to be wiped out by the new class of AMD fusion (CPU and GPU) parts that will power the new consoles. AMD is light-years ahead of Intel with integrated graphics, GPU driver support on Windows, and high speed memory buses with uniform memory addressing for fused CPU+GPU devices.

Inside Intel, senior management have convinced themselves (falsely) that they can compete with ARM in low-power mobile devices. This is despite the fact that Ivy Bridge (their first FinFET device) was a disaster as an ultra-low-power architecture, and their coming design, Haswell, needs a die size 5-10 times that of its ARM equivalent. The Intel tax alone ensures that Intel could never win in this market. Worse still is the fact that Intel needs massive margins per CPU simply to keep the company going.

PS Intel's products are so stinky, Apple is about to drop Intel altogether, and Microsoft's new tablet, Surface Pro 2, is going to use an AMD fusion part.

Sorry, but you didn't get very far into that: wglShareLists causes a few things to be shared, such as textures, display lists, VBOs/IBOs and shaders. You should call wglShareLists as soon as possible, meaning before you create any resources. You can then make GL rendering context 1 current in thread 1 and GL rendering context 2 current in thread 2 at the same time. If you upload a texture from thread 2, it will be available in thread 1.

Are you wanting to perform draw calls at the same time? That should nev

Since there is only one interface to the GPU (and thus only one command pipeline being used to the best of its ability for each context), multi-threaded access to OpenGL contexts only serves to slow rendering down.

Since you don't use Linux, or don't know how to configure it properly, you should refrain from speaking as though you do. NVIDIA and Intel have great Linux drivers. I cannot speak for AMD, since I haven't used them in years, but you seem to be confusing the open-source NVIDIA driver (nouveau) with the proprietary drivers, which work great and allow full use of the GPU through CUDA. Intel's open-source driver is also quite good.

You are forgetting how much people want their mobile devices to do everything their PCs do, and AMD does have a pretty big advantage when it comes to GPUs. It was a big enough advantage that not one, not two, but three of the next-gen consoles are using their GPUs, and two are using their APUs, so having lots of multimedia power is a big plus, and Intel is still behind in that area. I'm frankly shocked Intel hasn't bought NVIDIA by now; it would give them the graphics expertise they could really use.

The PS4's use of an APU is an unsubstantiated rumour at this point, the Wii U definitely doesn't use an AMD CPU at all (it's a PowerPC for Pete's sake), and the XBox 720's rumoured to have a GPU that's more than double the size of the biggest one they're shipping today (as far as I can tell), indicating a more traditional CPU/GPU architecture...

The PS4 is NOT a rumor, as dev kits have already been sent out. It's using an "octo-core" (I use quotes because in the AMD arch the new modules work more like HT than full cores, so in reality it's a quad with HT) Jaguar-based APU at 1.7GHz; the specs are already out there and they all say the same thing.

As for the 720, if the rumors are true then it IS using a similar Jaguar octo-core; it's simply also using an AMD graphics chip (but no discrete graphics memory other than a little bit of SRAM, which makes it even more likel

I find it highly questionable that console manufacturers, who are normally incredibly sensitive to die size and unnecessary complexity, would ship consoles with both an iGPU and a dGPU, requiring questionable multi-GPU techniques (which tend to add latency and microstutter), when they could simply ship a pure CPU with a more powerful GPU. Considering Intel's edge in manufacturing and IPC, there seems to be little advantage to using an AMD APU if you're going to rely mainly on an external GPU anyhow.

Then you haven't kept up on current events and how the new GCN cores work. You see, when you are in "multimedia mode" (say, watching Netflix or streaming video, or playing your "Cut the Rope"-style games), it shuts down the powerful GPU and switches to the weaker APU graphics to save a LOT of power. Then, when you are playing your CoD or Crysis, it hands all the GPU work to the bigger chip, and the APU does physics and other tasks that can be handled by a GP-GPU.

On Windows, the GPU is driven by either DirectX or OpenGL. Native OpenGL ES drivers for Windows are ONLY needed for cross-platform development where applications destined for mobile devices are built and tested on Windows first.

That's a really good reason to have native OpenGL ES drivers, IMHO. But why would you create an app on Windows and release it on any platform except Windows?

On the Windows desktop, Intel is about to be wiped out by the new class of AMD fusion (CPU and GPU) parts that will power the new consoles. AMD is light-years ahead of Intel with integrated graphics, GPU driver support on Windows, and high speed memory buses with uniform memory addressing for fused CPU+GPU devices.

Yet despite their allegedly superior technology, last year was an unmitigated disaster: a steady decline from Q1 to Q4, losing over 30% in revenue, and where they had a gross margin of 46% in Q4 2011, in Q4 2012 it was down to 15%. They're cutting R&D: in 2011 they spent 1453 million, in 2012 1354 million, and if they spend the whole of 2013 at Q4 2012 levels, then 1252 million. Yes, hopefully the PS4/Xbox 720 will give AMD a much-needed cash infusion, but their technology is not selling at all, so maybe t

Yet despite their allegedly superior technology, last year was an unmitigated disaster

Sales of personal computers are not doing well due to economic conditions and Intel has a tight grip on Dell etc, as well as closing the price gap that made AMD look like vastly better value for money in earlier years. It doesn't mean they are doomed.

the market of high end desktop/workstation/server chips that AMD has pretty much abandoned

Excuse me? Where did you get that from? There's a 32 core opteron coming out that

While Intel may have a tough time battling ARM on the low power front, AMD is totally lost.

Intel has already proven they can't battle ARM on the low-power front. Even when they made ARM processors themselves, they were higher-power than everyone else's. Literally. But your point about AMD power consumption is well taken; not since the Geode have they managed to be impressive in that department.

And it's basically irrelevant on x86/x64 Linux too. AFAICT, when software supports ES it generally has a compile-time switch between regular OpenGL and OpenGL ES. Pretty much all GPUs seen in x86 systems support regular OpenGL, while only a subset of them support ES, so the sane thing for a distro to do is to use regular OpenGL for the x86/x64 builds of their software and only build for ES on architectures where ES-only GPUs are common.