
Comment

If it's DX11.1 compliant then it should be OpenGL 4.x compliant, not 3.2.

You'd like to think that, wouldn't you.

While likely very true from a hardware perspective, the Windows drivers simply aren't there for it. Intel GPUs that offer D3D 10 features only offer GL 3.0 on the Windows drivers, despite the hardware theoretically being capable of full GL 3.3. I think the Ivy Bridge drivers will do GL 3.1, despite offering D3D 10.1.

This is one of the many, many reasons why OpenGL is just best avoided if you're only developing for Windows. You can argue all you want whether or not it's OpenGL's/Khronos' fault, but the reality is that 60% of the GPUs in the world have more features and better performance using D3D on the operating system used by 90% of people. :/

(The other reason is that GL is just a poorly designed API -- e.g. binding Uniform Buffer Objects by slot directly in the shader wasn't added to Core until 4.2, and the ARB extension that adds the support to older cards is only supported by NVIDIA's drivers; likewise, binding vertex attributes by slot wasn't added to Core until 3.3. Major D3D-class features came to GL over a year later, e.g. Uniform Buffer Objects, Texture Buffer Objects, primitive restart, and instancing weren't added to GL until 3.1 and it took until GL 3.2 to add geometry shader support to Core. Those features existed as extensions, but they were neither universally available nor universally high-quality, so you couldn't actually use them in a shipping product. Granted, even once in Core, the implementations tended to be buggy, likely due to a lack of any kind of test suite for implementations to be verified against.)
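To make the UBO complaint above concrete: before GLSL 4.20 (or the ARB_shading_language_420pack extension mentioned in the post), a shader could not choose its own uniform-block binding point, so the application had to query the block index and assign the slot through the API. A sketch of the difference, with "Camera" and its members being illustrative names rather than anything from the post:

```glsl
// GLSL 4.20+ (or ARB_shading_language_420pack): the shader picks its slot directly.
#version 420
layout(std140, binding = 0) uniform Camera {
    mat4 viewProj;   // hypothetical contents, for illustration only
};

// Pre-4.2 core GL: no binding qualifier allowed, so the C side has to wire it up:
//   GLuint idx = glGetUniformBlockIndex(program, "Camera");
//   glUniformBlockBinding(program, idx, 0);
//   glBindBufferBase(GL_UNIFORM_BUFFER, 0, cameraUBO);
//
// The vertex-attribute situation was the same: layout(location = N) on attributes
// needed GL 3.3; before that you called glBindAttribLocation(program, 0, "position")
// before linking, or queried the driver-assigned slot with glGetAttribLocation.
```

This is a shader-source fragment, not a runnable program; it assumes a GL 4.2 context for the qualifier form and shows the older API calls only as comments.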

Comment

The graphics unit on Haswell is expected to be Direct3D 11.1 and OpenGL 3.2 compliant.

This refers to the state of the Windows drivers, not the hardware limitations. Haswell could be fully GL4 compliant if the driver support is there.

Sandy Bridge has been quite impressive performance-wise for being Intel integrated graphics, but with Ivy Bridge this performance is going to be upped substantially (as much as twice as fast as Sandy Bridge).

I've been hearing more like 50% faster, but either way it should be a nice boost.

This will happen again with Haswell where I'm told its integrated graphics should be comparable to a mid-to-high-end discrete GPU.

Even if we're talking double IB, which is in turn double SB, that's definitely mid-range discrete territory, not high end. And that's current-generation mid-range - by the time Haswell is released, it will likely be low-end again.

Comment

Your main competitor has already fully embraced the open-source driver effort and releases hardware code a full YEAR before releasing the actual hardware. You don't see them losing sales or profits; in fact, they're doing pretty well. Why can't AMD just kill off their pathetic excuse for a driver bundle for GNU/Linux (proprietary, at that) and focus all efforts on Mesa/Gallium3D?

Comment

Your main competitor has already fully embraced the open-source driver effort and releases hardware code a full YEAR before releasing the actual hardware. You don't see them losing sales or profits; in fact, they're doing pretty well.

Yep. They don't make 3D workstation hardware, so an open-source-only model works quite well for them. If our hardware focus were the same, we would probably take the same open-source-only Linux driver approach; in fact, that's what we did in the pre-R200 days.

Why can't AMD just kill off their pathetic excuse for a driver bundle for GNU/Linux (proprietary, at that) and focus all efforts on Mesa/Gallium3D?

Because we would lose a big customer base which needs the 3D performance and features that can only be delivered cost-effectively by a proprietary driver. I have answered this question many times already.

If you ignore the 3D workstation market and then say you can't see any reason for fglrx to exist, it's hard for me to give a good answer.

Comment

(The other reason is that GL is just a poorly designed API -- e.g. binding Uniform Buffer Objects by slot directly in the shader wasn't added to Core until 4.2, and the ARB extension that adds the support to older cards is only supported by NVIDIA's drivers; likewise, binding vertex attributes by slot wasn't added to Core until 3.3. Major D3D-class features came to GL over a year later, e.g. Uniform Buffer Objects, Texture Buffer Objects, primitive restart, and instancing weren't added to GL until 3.1 and it took until GL 3.2 to add geometry shader support to Core. Those features existed as extensions, but they were neither universally available nor universally high-quality, so you couldn't actually use them in a shipping product. Granted, even once in Core, the implementations tended to be buggy, likely due to a lack of any kind of test suite for implementations to be verified against.)

Off topic, but I often have visions of elanthis walking into the Khronos offices with explosives and detonating them after yelling "D3D Akbar".