Yesterday, the ninth Firefox 4.0 beta was released. One of the major new features in Firefox 4.0 is hardware acceleration for anything from canvas drawing to video rendering. Sadly, this feature won't make its way to the Linux version of Firefox 4.0. The reason? X's drivers are "disastrously buggy". Update: Benoit Jacob informed me via email that there's some important nuance: hardware acceleration (OpenGL only) on Linux has been implemented, but due to bugs and issues, only one driver so far has been whitelisted (the proprietary NVIDIA driver).

"He further requests help from Xorg developers and distributors on this issue, since they are still working on it for the future. In other words, if you happen to know people from those parts, be sure to let them know about the difficulties the Firefox team is apparently having with X. "

Please, please - don't bother. The fact that the OpenGL implementations in current X drivers for many cards are buggy is hardly news to anyone, least of all the developers. Inundating them with 'OMG WHERE'S MY FIREFOX ACCELERATION U SUCK!' messages is not going to help.

Linux OpenGL implementations are not very different from what we (used to?) have with html+css+... implementations. They are a buggy, inconsistent mess but if you know the safe path across the minefield you can still produce a working product. Sometimes the obvious path is not the "proper" one.

It's likely that the Mozilla guys are performing some operations that don't match the semantics of the underlying layers well (after all, it's a multiplatform program). Such corner cases are more likely to have bugs or suffer from poor performance. This of course is not an excuse for the guys producing these bugs, but I can easily imagine another application doing the same things differently and managing to work around these bugs.

Yep, indeed: with WebGL we are basically exposing 95% of the OpenGL API to random scripts from the Web. So even "innocuous" graphics driver bugs can suddenly become major security issues (e.g. leaking video memory to scripts would be a huge security flaw). Even a plain crash is considered a DoS vulnerability when scripts can trigger it at will. So yes, WebGL does put much stricter requirements on drivers than, say, video games or compiz.

But is it the job of Firefox to shield users from blatant (security) bugs in the underlying OpenGL implementations, while neglecting the bug-free implementations in the process?

First of all, if an implementation is shown to be 'bug-free' then we'll gladly whitelist it in the next minor update.

And yes, it is our job to shield the user from buggy drivers, buggy system libraries, whatever. You don't want to have to wait for your OpenGL driver to be fixed to be able to use Firefox 4 without random crashes.

On the contrary, more use and exposure would motivate the driver developers to fix their buggy drivers.

That would be nice, but we also need to be able to ship Firefox 4 ASAP without lowering our quality standards.

Perhaps a blacklist could be implemented that notifies users that their driver is buggy and that Firefox will run unaccelerated? This would raise awareness without negatively affecting the "good systems".

This is information of a very technical nature that most users won't know how to act upon. For technical users, we *are* already printing this information in the terminal.

Linux OpenGL implementations are not very different from what we (used to?) have with html+css+... implementations. They are a buggy, inconsistent mess but if you know the safe path across the minefield you can still produce a working product.

There's a major difference between the two. With a sane (process-oriented) design, a bug in an HTML (etc.) component only crashes a tab, or at worst the web browser (if poorly designed). A bug in an OpenGL driver can crash the *whole* computer, and it is much, much more complex to debug, especially with hardware acceleration. And without hardware acceleration, OpenGL isn't very interesting!

As I already said, there's a difference between being unresponsive and being bloated.

Not all lean software is responsive. A single-threaded design where UI rendering is on the same thread as the number-crunching algorithms (like Firefox's, though thankfully they're working on that) is all it takes to make software unresponsive, no matter how well the rest is coded.
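To illustrate (a minimal sketch in C, not Firefox's actual code): move the heavy work to a worker thread and the event loop keeps ticking; call crunch() directly from the loop instead and the ticks stop until it finishes.

    /* Minimal sketch: heavy computation on the "UI" thread freezes the
     * event loop; on a worker thread, the loop stays responsive.
     * Build with: cc demo.c -o demo -pthread */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *crunch(void *arg)
    {
        volatile double x = 0;                /* the number-crunching part */
        for (long i = 0; i < 500000000L; i++)
            x += i * 0.5;
        printf("crunching done\n");
        return NULL;
    }

    int main(void)
    {
        pthread_t worker;
        pthread_create(&worker, NULL, crunch, NULL); /* off the "UI" thread */
        for (int tick = 0; tick < 5; tick++) {       /* stand-in for the UI event loop */
            printf("still responsive (tick %d)\n", tick);
            sleep(1);
        }
        pthread_join(worker, NULL);
        return 0;
    }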

Bloat is not exactly the best term when trying to make a Firefox vs Chrome comparison that favors Chrome. Firefox is now just about the mainstream web browser that consumes the least memory, AFAIK, while Chrome would be near the top with its multi-process model.

Being more responsive does not equate to being less bloated. Vista x64 is probably very responsive on a machine based on one of those upcoming Bulldozer CPUs from AMD, backed by 16GB of DDR4 RAM, 4 Vertex 2 SSDs in RAID 0, and an SLI setup of 4 high-end graphics cards. That wouldn't make it less bloated. Responsiveness depends on proper use of threads and having powerful hardware underneath, not so much on how heavy the software is (except when you go the Adobe way and make software so heavy that your OS constantly has to swap data in and out while you run it because your RAM is full).

Geez, so many fickle users out there. Most don't appreciate even a little that it was Firefox that stirred up the browser wars, back when the alternatives were a sluggish Netscape and an anti-standards IE. So your Firefox is 'sluggish'? Sounds like you have other issues on your system too. My primary box is a six year old P4 and Firefox launches/views pages pretty well.
Also don't forget Google only offered Chrome to Windows users for quite a while, leaving Linux users with a somewhat supported 'build your own' option of Chromium. Their excuse was a public statement about how it was too difficult and problematic to offer Linux or OS X versions. Yet Firefox and Opera have been popping out concurrent versions for multiple platforms for years. (OK, well Opera has been concurrent version-wise only recently, but their developers are too busy innovating unique ideas that other browsers pick up on.)

Couldn't the OpenGL mode be enabled on a whitelist basis? I thought the Nvidia proprietary drivers were pretty good, as far as 3D is concerned (you can run pretty recent games under Wine, for instance)?

I thought the situation was pretty good, with the Video Acceleration API, compositing & 3D accel all working well with nv cards.

If a driver can pass almost all these tests (and doesn't crash running them...) then it's quite probably good enough and we should try to whitelist it!

Looking forward to enabling the whitelist once we get more data. It must be said that the above WebGL test suite is, AFAIK, the first time that Khronos has published a complete, public test suite for a *GL standard. I hope to convince developers of GL drivers to test their drivers against it.

The NVIDIA proprietary driver is not buggy for what we are doing (which is pure OpenGL). We are enabling hardware acceleration on X with the NVIDIA proprietary driver. So the title of this OSNews story is inaccurate.

The FGLRX driver is crashier; it's blacklisted at the moment, but this could change (everything hopefully will change :-) )

Yes, you can turn the whole driver blacklisting off by defining the MOZ_GLX_IGNORE_BLACKLIST environment variable. Just launch firefox with this command (you can use it in the properties of your desktop icon, too):

MOZ_GLX_IGNORE_BLACKLIST=1 firefox
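
If you'd rather bake it into a launcher, the Exec line of the .desktop file can carry the variable too (a hypothetical snippet; the exact file path and arguments vary by distribution):

    Exec=env MOZ_GLX_IGNORE_BLACKLIST=1 firefox %u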

We did this blacklisting to put an end to the endless series of Linux crashes that were caused by buggy graphics drivers and were causing lots of grief among Linux users ("firefox 4 is crashy!"). This was the top reason for crashiness on Linux.

We are looking forward to un-blacklisting drivers as soon as they get good enough, see the discussion in this bug (scroll down past the first comments sent by an angry user):

Yes, you can turn the whole driver blacklisting off by defining the MOZ_GLX_IGNORE_BLACKLIST environment variable. Just launch firefox with this command (you can use it in the properties of your desktop icon, too):

MOZ_GLX_IGNORE_BLACKLIST=1 firefox

While I really like the fact that this is a runtime and not a build time choice, why do it as an environment variable and not in about:config?

An environment variable means that .desktop files for menus and other UI launchers have to be modified, or that some system- or user-level environment script has to be modified.

Especially since I read in one of your other comments that another related feature is switchable through about:config.

Really it's just because we're in a rush now and an environment variable switch can be implemented in 1 line of code (while an about:config switch is, say, 5 lines of code ;-) )
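
To illustrate what I mean (a sketch in C, not the actual Mozilla source; the pref plumbing described in the comment is hypothetical):

    #include <stdlib.h>   /* getenv */

    /* The environment-variable switch really is about one line: */
    static int ignore_blacklist(void)
    {
        return getenv("MOZ_GLX_IGNORE_BLACKLIST") != NULL;
    }

    /* An about:config switch, by contrast, means registering a default
     * pref, reading it through the preferences service, and handling
     * the "not set" case: a handful of lines plus plumbing. */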

I see
Still, if it is about 5 lines, it might result in more people reporting whitelistable combinations.

Eventually yes it'll be in about:config.

Excellent!

It is good to see that another big free software project (in addition to KDE) is now running into the driver bugs holding back implementations of current state-of-the-art interfaces.

Maybe you can share notes on whitelisted/blacklisted combinations with the developers of KWin. They've been in this very situation for a couple of months now and might have data which could be useful to you as well.

The interesting part is that KWin did not run into driver bugs.
The driver bugs ran into KWin.

To understand that, one has to look back to KDE 4.0:
KWin (!) worked quite well on most drivers.
Yet in KDE 4.5 suddenly things did not work. Why?
No, not because of KWin; the code in those areas was mostly untouched since 4.0!
Instead, drivers suddenly claimed to support features when this was not the case.

That's exactly what we are doing ;-) the NVIDIA proprietary driver is whitelisted at the moment. So you get WebGL right away. If you want accelerated compositing too (at the risk of losing the benefit of XRender) go to about:config and set layers.acceleration.force-enabled to true.

No, sorry, but you can't blame Xorg. It is an open source project and anyone can contribute. The question is why Nvidia does not contribute to Xorg instead of replacing that stack in its proprietary driver.
Xorg is an extremely complex piece of software but it is also extremely capable. It is understandable that it has more bugs than the Mac OS X or Windows graphics stacks. MS Office has more bugs than Notepad. Xorg just needs more developers and cooperation from hardware manufacturers.
What if you manufacture a good card but the driver for it sucks? Your product sucks overall. Manufacturers need to put more effort into the software part on Linux. They will lose customers in the long run if they don't.

uh, I am certainly blaming Xorg. It's overly complex, and has too many features that have no real place in today's computing environment. I use Linux every day, and Xorg is the weak spot in the whole OS: it's slow, it crashes (not often, but Windows 7 has never crashed on the same computer, nor did Vista). There is a reason that Red Hat and Ubuntu are looking at Wayland, and that is simplicity, reliability and speed.

Wait. You are too quick to dismiss Xorg features as irrelevant. They are relevant to many people. Wayland may be a nice alternative for you but it is still far from being as stable as Xorg. Xorg has problems but it has many strengths. You would not be using it if it had more problems than useful features.
For me there is no alternative to Xorg because I need network transparency. Yes, network transparency is relevant, today. On Windows you have to use a hack like VNC or a product like RDP, which both suck, or buy Citrix, which is also a hack, costs an arm and a leg, and sucks. When you are used to Xorg and NX this is a huge step back.

Both RDP and VNC are much more usable than Xorg's network transparency, whether over wireless or the Internet, where Xorg is unusably slow. RDP even supports 3D on Windows 7 and Vista.

People don't need network transparency, people need network access, which Windows does 100% better than Xorg, and VNC does a better job. FreeNX proves that Linux can provide a proper, usable remote GUI environment, but holding on to this broken functionality is part of the problem with Xorg. You can even use RDP with Linux, using xRDP, which is much more usable than network transparency.

VNC and RDP do not replace Xorg. Only Citrix does, but poorly. With Xorg you can have an application server and administer your applications in a single place. Just let your users connect and use their applications as if they were local. They can resize windows, put them next to their local windows, cut and paste, everything. It is integrated into their desktop. They don't need another desktop with poor quality graphics and scrollbars.

FreeNX is nice but it does not replace Xorg either. It depends on it, it is a layer on top of Xorg.

X is no longer as network transparent as it used to be, unfortunately.

That was perhaps the case 20 years ago, when most of the computer graphics, font rendering, etc. was done on the server side. Now we have XShm and XRender, which enable reasonably fast client-side rendering on local machines but no longer work across the network (at least not if you care about the user experience).

The network itself has changed too. Over the years bandwidth has increased dramatically but latency hasn't changed that much. (Hard)wired networks are now often replaced with wifi connections, VPNs and other ad-hoc networks.

X is still able to deliver its promise on LANs (ideally with NIS/NFS) and with some classes of applications (e.g. engineering apps using 2D vector rendering). But in most other applications, even if the program manages to start up properly, you still have to be very aware of the fact it is not running locally (if only for performance and reliability reasons).

Rdesktop and VNC chose a different way: if it is no longer possible to make the graphics rendering network transparent, let's make it obvious and put the user in control. Thus, having a remote session in a separate desktop is GOOD - it makes it easy to find out which application is running where. Having the possibility to disconnect from and reconnect to a remote session (and thus move your existing session between computers) is GOOD. Using protocols that benefit from increased bandwidth and don't stress network latency (asynchronous transfer of bitmaps, video) is GOOD. Having additional features (audio redirection, file transfer) is GOOD.

After all, with a network you can do much more than just open a window from a machine A on a machine B.

NVidia works better for OpenGL, and only OpenGL, because that is what they focus on. Even the ancient VESA driver is faster and more stable than the NVidia drivers when it comes to 2D graphics, and the nouveau driver is somewhere between 100x and 1000x faster while using 100 times less memory (XOrg with nvidia: 300Mbyte resident, XOrg with nouveau: 22Mbyte, where almost all of that is the binaries).

NVidia works better for OpenGL, and only OpenGL, because that is what they focus on. Even the ancient VESA driver is faster and more stable than the NVidia drivers when it comes to 2D graphics, and the nouveau driver is somewhere between 100x and 1000x faster while using 100 times less memory (XOrg with nvidia: 300Mbyte resident, XOrg with nouveau: 22Mbyte, where almost all of that is the binaries).

Sorry, but the nvidia 260.19.21-1 on Debian sits at 136MB presently.

4 tabs in Chrome Unstable, 60+ files open in Kate, 3 tabs in Konsole, Inkscape Trunk open as well, plus the usual crap running in the background for KDE 4.5.x.

It does help. The open drivers for both of those offer 3D as standard. The open drivers for NVidia only offer 'experimental' 3D, after much blood, sweat and tears of reverse engineering. The Gallium3D/DRM changes are not complete yet; as we get to the optimizing end of things, it's going to get interesting. Phoronix is quite a good place to keep up.

If Linux had to ensure that it preserve a stable source interface, a new interface would have been created, and the older, broken one would have had to be maintained over time, leading to extra work for the USB developers

This is not how responsible devs work. You tell them you are supporting the interface until date X and mark it as deprecated. E.g. I was fixing some Java code I wrote 3 years ago for Java 1.3... I added the fixes, compiled, and was warned that things were to be deprecated in a future version... so I updated accordingly.

Simple, get your kernel driver into the main kernel tree (remember we are talking about GPL released drivers here, if your code doesn't fall under this category, good luck, you are on your own here, you leech <insert link to leech comment from Andrew and Linus here>.)

This here is basically a big middle finger to any driver dev: "GPL or else".

A number of times this has caused internal kernel interfaces to be reworked to prevent the security problem from occurring. When this happens, all drivers that use the interfaces were also fixed at the same time, ensuring that the security problem was fixed and could not come back at some future time accidentally.

Any change to existing code is a risk. It can regress functionality and/or introduce new bugs... any first-year software engineer knows this.

There are some things that are not easy to talk about. I'll try to summarize the results of past conversations:

A binary-only driver is very bad news, and should be shunned. That proprietary software doesn't respect users' freedom: users are not free to run the program as they wish, to study the source code and change it so that the program does what they wish, or to redistribute copies with or without changes. Without these freedoms, the users cannot control the software or their computing. As Stallman says: without these freedoms, the software controls the users.

Also, as Rick Moen said: binary-only drivers are typically buggy for lack of peer review, poorly maintained, not portable to newer or different CPU architectures, prone to breakage with routine kernel or other system upgrades, etc.

In the article at http://www.kroah.com/log/linux/stable_api_nonsense.html it's explained that:
Linux does not have a binary kernel interface, nor does it have a fixed kernel interface. Please realize that the in kernel interfaces are not the kernel to userspace interfaces. The kernel to userspace interface is the one that application programs use, the syscall interface. That interface is _very_ stable over time, and will not break.

The author of the article says that he has old programs that were built on a pre-0.9something kernel that still work just fine on the latest 2.6 kernel release. This interface is the one that users and application programmers can count on being stable.
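
As a small illustration (my own example, not from the article): a program that sticks to the classic syscall interface builds and runs unchanged across decades of kernels.

    /* Uses only the stable kernel-to-userspace interface
     * (open/read/close); code like this has worked, unmodified,
     * since the earliest Linux releases. */
    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void)
    {
        char buf[64];
        int fd = open("/etc/hosts", O_RDONLY);      /* syscall: open  */
        if (fd < 0) {
            perror("open");
            return 1;
        }
        ssize_t n = read(fd, buf, sizeof(buf));     /* syscall: read  */
        printf("read %zd bytes\n", n);
        close(fd);                                  /* syscall: close */
        return 0;
    }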

That article reflects the view of a large portion of Linux kernel developers: the freedom to change in-kernel implementation details and APIs at any time allows them to develop much faster and better.

Without the promise of keeping in-kernel interfaces identical from release to release, there is no way for a binary kernel module like VMWare's to work reliably on multiple kernels.

As an example, if some structures change on a new kernel release (for better performance or more features or whatever other reason), a binary VMWare module may cause catastrophic damage using the old structure layout. Compiling the module again from source will capture the new structure layout, and thus stand a better chance of working -- though still not 100%, in case fields have been removed or renamed or given different purposes.
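
A toy demonstration of the hazard (my own sketch, not VMWare's code): insert one field and every later offset moves, so a module built against the old layout reads the wrong data.

    #include <stdio.h>
    #include <stddef.h>

    struct task_old {    /* layout the binary module was compiled against */
        int pid;
        int priority;
    };

    struct task_new {    /* a later kernel release inserted a field */
        int pid;
        int flags;       /* new field shifts everything after it */
        int priority;
    };

    int main(void)
    {
        printf("old offset of priority: %zu\n", offsetof(struct task_old, priority));
        printf("new offset of priority: %zu\n", offsetof(struct task_new, priority));
        /* A pre-built module keeps reading at the old offset, so on the
         * new kernel it would read 'flags' and treat it as 'priority'.
         * Recompiling against the new headers picks up the new offset. */
        return 0;
    }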

If a function changes its argument list, or is renamed or otherwise made no longer available, not even recompiling from the same source code will work. The module will have to be adapted to the new kernel, which is workable since everybody (should) have the source and (can find somebody who) is able to modify it to fit. "Push work to the end-nodes" is a common idea in both networking and free software: since the resources [at the fringes]/[of the developers outside the Linux kernel] are larger than the limited resources [of the backbone]/[of the Linux developers], the trade-off of making the former do more of the work is accepted.

On the other hand, Microsoft has made the decision that they must preserve binary driver compatibility as much as possible -- they have no choice, as they are playing in a proprietary world. In a way, this makes it much easier for outside developers who no longer face a moving target, and for end-users who never have to change anything. On the downside, this forces Microsoft to maintain backwards-compatibility, which is (at best) time-consuming for Microsoft's developers and (at worst) is inefficient, causes bugs, and prevents forward progress.

ABI compatibility is a mixed bag. On one hand, it allows you to distribute binary modules and drivers which will work with newer versions of the kernel (with the long-term problems of proprietary software already mentioned). On the other hand, it forces kernel programmers to add a lot of glue code to retain backwards compatibility. Because Linux is open source, and because kernel developers question whether binary modules are even allowed, the ability to distribute binary modules isn't considered that important. On the upside, Linux kernel developers don't have to worry about ABI compatibility when altering data structures to improve the kernel. In the long run, this results in cleaner kernel code.

Bad news for who? Users? They just want something that works. The current system already provides plenty of bad news.

That proprietary software doesn't respect users' freedom

Freedom as defined by Stallman's newspeak that only exists to push his agenda.

binary-only drivers are typically buggy for lack of peer review

Everyone in this thread agrees that the proprietary nvidia drivers are the best.

On the other hand, Microsoft has made the decision that they must preserve binary driver

Why does Microsoft have to be pulled into this? Why not limit the discussions to Unix systems that have a stable ABI?

Tell me where Linux would have been held back if they kept a stable ABI with a 3-year cycle. FreeBSD keeps a stable ABI with minor releases, so be specific and show in comparison how Linux has had an advantage.

That's why they don't want a stable kernel API.
You think you want a stable kernel interface, but you really do not, and you don't even know it. What you want is a stable running driver, and you get that only if your driver is in the main kernel tree.

Everyone in this thread agrees that the proprietary nvidia drivers are the best.

I certainly don't. They are the only driver on my system that crashes, regularly. They don't keep up with X developments, so you are left behind. I can't wait to not have to use them.

Why not limit the discussions to Unix systems that have a stable ABI?

Fine. Which Unix system supports the most devices and architectures? (In fact, more than any OS, ever.)

Tell me where Linux would have been held back if they kept a stable ABI with a 3-year cycle. FreeBSD keeps a stable ABI with minor releases, so be specific and show in comparison how Linux has had an advantage.

* LONG POST * (basically, the whole ABI-stability thing is a convenient thing to blame, but is just a distraction from the real problem).

I don't see that FreeBSD has gained any advantages by having a more stable ABI than Linux. In terms of graphics drivers, it has exactly the same problems that Linux has, for exactly the same reasons.

Those reasons are that Xorg is full of legacy crap that nobody uses anymore, which still needs to remain fully supported (and no, I'm not talking about network transparency). This makes Xorg far more difficult to maintain and improve without breaking everything, and slows development down.

Worse still - the newer stuff that people actually use / want to use doesn't work properly. Either because it's not had enough time spent on it, or because it interacts poorly with the legacy crap.

Not having a stable ABI doesn't hurt the open-source side of things. Xorg developers have no problems keeping up to date with Linux (and the FreeBSD developers have no problems keeping up with the latest DRI2 changes from Linux either). So, the only group it could possibly hurt are the closed-source guys. That'd be Nvidia and ATI, basically. Let's see what Nvidia have to say...

- The drivers are focused on workstation graphics (CAD, 3D modelling and animation) first, because that's where Nvidia make their money.
- Desktop or gaming features are added if they have spare time, but are a much lower priority.
- The driver is almost entirely cross-platform, with most of it being shared between Linux, FreeBSD, Solaris, Mac OS X, and Windows. The Linux-specific kernel module is tiny.
- The lack of a stable kernel ABI is "not a large obstacle for us", and keeping the Linux-specific driver up to date "requires occasional maintenance... but generally is not too much work".

As for other drivers... I don't see the problem. Nearly everything in a modern PC will run just fine with no special drivers. On Windows, you use Microsoft's drivers, on Mac OS X you use Apple's drivers (and they even work on general PC hardware with few problems), and on Linux you just use the standard kernel drivers.

The only exceptions are printers, video card drivers, and wireless network drivers.

Printer drivers are user-space (even on Windows these days), so the question of a stable kernel ABI is irrelevant. Besides, Linux and Mac OS X use the same printer driver system (CUPS, which is owned by Apple), yet only HP bothers to provide Linux drivers.

As for wireless network cards... the hardware manufacturers can not be trusted to make drivers that don't suck, for any OS. The in-kernel drivers for wireless devices kick the ass of any vendor-supplied Linux driver, or of the Windows drivers running through NDISWrapper.

One other point - remember the problems Microsoft had with third-party drivers on Windows? How the number one cause of BSODs was Nvidia's video driver? How much trouble lousy third-party drivers caused?

To solve this problem, Microsoft had to develop a huge range of static test suites, and a fairly comprehensive driver testing regime. They then had to force hardware manufacturers to use these tools and certify their drivers, by adding scary warnings about unsigned drivers. Later on, they even removed support for non-certified drivers entirely.

The Linux community can not do that, for a whole heap of licensing, technical, and logistical reasons. Plus, we don't have the money, and we don't have the clout to force hardware manufacturers to follow the rules. So they won't - they just won't release Linux drivers at all.

I don't see that FreeBSD has gained any advantages by having a more stable ABI than Linux. In terms of graphics drivers, it has exactly the same problems that Linux has, for exactly the same reasons.

FreeBSD does not have even close to the same desktop marketshare or mindshare as Linux and as such does not get the same amount of attention from hardware companies. The point of bringing up FreeBSD is that it has had a stable ABI for minor releases, and yet no one has told me how Linux was able to leap ahead in terms of specific features that could not wait a minor release cycle.

Let's see what Nvidia have to say...

Your link doesn't work. Try this one:
The Challenge In Delivering Open-Source GPU Drivers
http://www.phoronix.com/scan.php?page=news_item&px=ODk3MA
"For proper Sandy Bridge GPU support under Linux you are looking at the Linux 2.6.37 kernel, Mesa 7.10, and xf86-video-intel 2.14.0 as being the critical pieces of the puzzle, while also an updated libdrm library to match, and then optionally there is the libva library if wishing to take advantage of the VA-API video acceleration."

So you cherry picked a few positive quotes. Would it have taken more or less labor for them to provide a binary driver for a 4 year interface or their shim / open source shenanigans? Actions speak louder than words and by their actions they clearly prefer to release binary drivers for stable interfaces. Users prefer binary drivers to having an update break the system. Users just want something that works.

The only exceptions are printers, video card drivers, and wireless network drivers.

Wait so you are saying everything else works fine in Linux? What about webcams, sound cards and bluetooth? No complaints about Audigy then?

Printer drivers are user-space (even on Windows these days), so the question of a stable kernel ABI is irrelevant.

The question is obviously related to video card drivers and most of your long winded post is irrelevant. I asked a simple question that you haven't been able to answer.

As for wireless network cards... the hardware manufacturers can not be trusted to make drivers that don't suck, for any OS.

Bullshit, I can list numerous network cards that have excellent customer ratings. Intel cards especially have been stellar for me.

How the number one cause of BSODs was Nvidia's video driver? How much trouble lousy third-party drivers caused?

No, I don't recall that actually. If you tally up video card driver issues then Linux definitely comes out on top. There are endless cases of video card drivers being broken in Linux, and that requires more than a restart.

Your link doesn't work. Try this one:
The Challenge In Delivering Open-Source GPU Drivers
http://www.phoronix.com/scan.php?page=news_item&px=ODk3MA
"For proper Sandy Bridge GPU support under Linux you are looking at the Linux 2.6.37 kernel, Mesa 7.10, and xf86-video-intel 2.14.0 as being the critical pieces of the puzzle, while also an updated libdrm library to match, and then optionally there is the libva library if wishing to take advantage of the VA-API video acceleration."

What a mess.

You don't understand what you're talking about. What the above quote says, is that Ubuntu 11.04 should support Sandy Bridge out of the box.

"Bad news for who? Users?"
Yes, for users, not for monopolists. You just have to think long-term.

Something that forces users to depend on a company whose goal is to extract as much money from them as possible... works for the company. You know, Bill Gates got to be the richest man and Microsoft got to be a convicted monopolist (at least three times).

"That proprietary software doesn't respect users' freedom

Freedom as defined by Stallman's newspeak that only exists to push his agenda. "
You can try to modify free software to your needs... and you can try to modify proprietary software to your needs, and see where we have our hands tied :-(

"binary-only drivers are typically buggy for lack of peer review

Everyone in this thread agrees that the proprietary nvidia drivers are the best. "
The word "typically" doesn't mean "always", that's why people use the word "typically" instead of the word "always". Also when Nvidia stops mantaining a driver (in Windows, Linux, etc) we start seeing what happens, so we have to think long-term.

"On the other hand, Microsoft has made the decision that they must preserve binary driver

Why does Microsoft have to be pulled into this? Why not limit the discussions to Unix systems that have a stable ABI? "
It's to show what happens with the "choose ABI" alternative.

Tell me where Linux would have been held back if they kept a stable ABI with a 3-year cycle.

If there are problems in this thread with elementary facts, imagine if we start speculating.

You can try to modify free software to your needs... and you can try to modify proprietary software to your needs, and see where we have our hands tied :-(

I really like how GPL advocates proselytize the basics even on a site called OSNEWS, even to someone who clearly knows about Linux and who Stallman is. Reminds me of Mormons who knock on doors and ask if you have heard of Jesus. Jesus Christ? No I have never heard of him. I've lived in America all my life and have not heard of the guy. Is he somehow related to that holiday...whats it called.... Santaday or something?

It's to show what happens with the "choose ABI" alternative.

OSX has a stable ABI and has clearly not been as successful as Linux on the desktop.

If there are problems in this thread with elementary facts, imagine if we start speculating.

I see you can't answer the question either.

Perhaps I should write a formal proposal and see if the Linux devs can answer it. Stable_api_nonsense was written years ago, so where are the benefits? Which specific feature could not have waited a 3-year stable ABI cycle?

OSX has a stable ABI and has clearly not been as successful as Linux on the desktop.

OSX also has a billion-dollar marketing campaign, and limits itself to running on specially chosen hardware, because they either don't want to or can't support as much hardware as Linux can. Poor choice of example, there.

Some of the BSDs have stable APIs, and that hasn't seemed to help them be successful. I'm sure your argument would be that they don't have enough market share for that to make a difference. And you're right - where you're wrong is thinking that Linux would be any different. Linux doesn't have enough marketshare for hardware companies to be very interested in it either, and for those that are the changing ABI is a relatively small inconvenience.

Perhaps I should write a formal proposal and see if the Linux devs can answer it. Stable_api_nonsense was written years ago, so where are the benefits? Which specific feature could not have waited a 3-year stable ABI cycle?

If having a stable API was that important, the distros would just freeze on a particular kernel/X/etc. version for 3 years while all the devs kept working on newer code that could change. In fact, that's exactly how corporate support is handled. So, why doesn't everything work that way?

It's not difficult to figure out - general Linux users are more interested in getting the new features that the changing API provides as soon as possible, and are willing to give up the stable API which could get them more binary drivers on old distros. Because this is OSS, there is no way to control what users pick - you can't simply dictate that people use the old distros, because they are free to grab whatever they want, and they've chosen otherwise.

OSX also has a billion-dollar marketing campaign, and limits itself to running on specially chosen hardware, because they either don't want to or can't support as much hardware as Linux can.

Poor choice of example, there.

Right, but FreeBSD has a smaller budget than Linux and can still maintain a stable ABI for minor releases.

Some of the BSDs have stable APIs, and that hasn't seemed to help them be successful.

Linux drew popularity by being successful on the server, where the unstable ABI is less of an issue. FreeBSD has numerous advantages but Linux has the inertia.

Linux doesn't have enough marketshare for hardware companies to be very interested in it either,

I already posted a Phoronix article about the troubles Intel has gone through. A stable ABI would mean less work for video card companies, end of story.

If having a stable API was that important, the distros would just freeze on a particular kernel/X/etc. version for 3 years while all the devs kept working on newer code that could change.

I've already gone over this. If a distro freezes the kernel then they run into a host of compatibility issues. For a desktop distro it is more trouble than it is worth. Then on top of it you have the fragmentation problem: a distro that maintains a stable binary interface for video drivers won't matter much to GPU companies, since most distros would still have the standard kernel.

Linus has designed Linux in a way that discourages forking and binary drivers. He doesn't care if Linux is a success on the desktop or even the server. It's a hobby kernel to him and the Linux desktop legions need to learn this and accept that at its core Linux is not designed to compete with Windows or OSX. Distros like Ubuntu aim for the desktop but have to continually deal with disruptive changes made upstream. It's a big mess but Linus prefers it that way. He is on record as stating that Linux is software evolution, not software engineering. If kernel changes break working hardware downstream that is all part of evolution. If Linux only gains success as a server and embedded OS then that is fine with him.

"You can try to modify free software to your needs... and you can try to modify proprietary software to your needs, and see where we have our hands tied :-(

I really like how GPL advocates proselytize the basics [...] "
Nt-JerkFace talked about freedom that "only exists to [...]", and it was answered by pointing out where we are all free to do something and where we are not.

You would have better GPU drivers if Linus provided a stable ABI. ....
Because having a stable ABI in a *nix is just unthinkable... like OSX, Solaris, oh wait, never mind. Where are all the benefits from the unstable ABI? How has Linux leaped past other *nix systems?

*Cough Bullshit *Cough.
As someone who actually maintains a fairly large out-of-tree kernel project (with >200K LOC), I find your comment to be misguided, at best.
Less than 1% of my team's time is spent on making the code compatible with upstream kernel.org releases, and I'm using far more APIs than your average graphics card driver (sockets, files, module management, etc).

How would Linux have been held back if they kept the ABI on a three-year cycle?

No idea.
From my -own- experience, I can't say that maintaining Windows kernel code with its own semi-stable ABI is any easier compared to Linux. (Actually, the availability of the complete kernel source makes Linux far easier - at least in my view)

Getting back to the subject: you claimed that the lack of a stable ABI is the main problem with writing good drivers; I claimed, from my own personal experience (which may or may not be relevant in the case of graphics card writers), that this is -not- the case.
Now, unless you have some actual experience and/or evidence to prove your point, your initial argument is pure speculation.

I can't say that maintaining Windows kernel code with its own semi-stable ABI is any easier compared to Linux.

Define semi-stable.

Write a binary driver for Windows and it will work for the life of the system. Write one for Linux and it will likely be broken with the next kernel update.

Getting back to the subject, you claimed that the lack of a stable ABI is the main problem with writing good drivers

No I didn't claim that. I claimed that Linux drivers would be better if it had a stable ABI. There is a difference. Hardware companies would produce higher-quality drivers, and in a more timely manner, if there were a stable ABI. This is partly due to IP issues and companies wanting to get drivers out on release day.

Microsoft should give Linus millions in stock for being so stubborn with binary drivers. It's a needless restriction that has held back the Linux desktop, especially during the XP days. That single decision has helped Windows keep its dominant position.

Write a binary driver for Windows and it will work for the life of the system.

In general it's true, but I had drivers getting broken by SP releases and between different classes of Windows. (E.g. XP vs 2K3).

Write one for Linux and it will likely be broken with the next kernel update.

Again, at least from my own experience, this is complete (!!!) bullshit.
ABI changes in the kernel are -few- and -far between-.
In the same fairly large kernel project mentioned above, we have 35 (!!!) LINUX_VERSION_CODE checks required to support Linux 2.6.9 -> 2.6.35.
This means that, in order to support all the kernels used from RHEL 4.0 till RHEL 6.0 and Fedora 14 (~6 years), we only had to make 35 adjustments, or fewer than 6 changes a year.
... At an average of 10-60 minutes a change (and I'm exaggerating), we spent on average ~3 (!!!!) hours a year on keeping our project current.
Color me unimpressed.
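
For those who haven't seen what such an adjustment looks like, here is the usual pattern (a generic sketch, not from my project; the net_device example is just one well-known interface change):

    /* Out-of-tree compatibility shim: select the right API variant at
     * compile time, based on the kernel headers being built against. */
    #include <linux/version.h>
    #include <linux/netdevice.h>

    static int my_open(struct net_device *dev)
    {
        return 0;
    }

    #if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 29)
    /* 2.6.29 moved callbacks like .open into struct net_device_ops */
    static const struct net_device_ops my_ops = {
        .ndo_open = my_open,
    };
    #endif

    static void my_setup(struct net_device *dev)
    {
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 29)
        dev->netdev_ops = &my_ops;
    #else
        dev->open = my_open;    /* the old, pre-2.6.29 field */
    #endif
    }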

No I didn't claim that. I claimed that Linux drivers would be better if it had a stable ABI. There is a difference. Hardware companies would produce higher-quality drivers, and in a more timely manner, if there were a stable ABI. This is partly due to IP issues and companies wanting to get drivers out on release day.

As much as I enjoy reading Phoronix, in this particular case I wasn't too impressed.
Plus, see my comment below.

Microsoft should give Linus...

You're completely mixing Linux stable ABI (as in Linux kernel stable ABI) and Xorg and Mesa ABI.
Two completely different things.
(Plus, I have zero experience with the latter, so I can't really comment on that...)

millions in stock for being so stubborn with binary drivers. It's a needless restriction that has held back the Linux desktop, especially during the XP days. That single decision has helped Windows keep its dominant position.

Wow, you're mixing so many different things I don't know where to start...
Binary drivers, stable ABI in the kernel, stable ABI in Mesa and Xorg... you're really making a salad over here.

I'll start by pointing out that nVidia (undoubtedly the best binary driver in Linux) is not really concerned by the lack of a so-called stable ABI [1].
I'll continue by pointing out that other OSes which do have a stable ABI (Solaris?) haven't fared better than Linux, quite the contrary.

In short, thus far you haven't really provided any proof for your POV - not from personal experience and not from actual binary driver developers (see below).
Maybe it's time to reconsider?

1) The lack of a stable API in the Linux kernel. This is not a large obstacle for us, though: the kernel interface layer of the NVIDIA kernel module is distributed as source code, and compiled at install time for the version and configuration of the kernel in use. This requires occasional maintenance to update for new kernel interface changes, but generally is not too much work.

No I'm not; a stable ABI for video cards would reduce the total amount of work required of GPU companies, and that extends into Xorg, as seen in that article.

I'll start by pointing out that nVidia (undoubtedly the best binary driver in Linux) is not really concerned by the lack of a so-called stable ABI [1].

Cherry-picking positive PR comments. Do you expect a major company like NVIDIA to come out and say that Linus is a stubborn asshole? Would a stable 3-year ABI be more or less work for NVIDIA and other hardware companies? Just answer that question. Oh, and please don't claim that opening their specs would be the easiest route. AMD has already done this and now we have heard that there is a lack of open source driver developers.

I find it hilarious that the Linux defenders are so adamant about this issue. How dare I question the resounding success of Linux on the desktop. Linus and Greg KH have already stated that the kernel is a minefield for companies that want to release binary drivers. Year of The Desktop Linux would have happened years ago if the guy at the top was interested in meeting the needs of third parties like Nvidia that can help the success of alternative systems.

Oh, and you still haven't answered my question, along with everyone else here. Show me what couldn't have waited 3 years.

Graphics is a whole other kettle of fish. As far as I know, writing a graphics driver involves writing multiple high-quality JIT compilers, a memory management layer, and a bunch of difficult-to-debug libraries. Plus you need a minimal OS on the ASIC side. The statistic I heard (and believe) is that the NVidia driver on Windows contains more code than all the other drivers on a typical system combined.

As you pointed out, unlike, say, kernel-based deep packet inspection software (ummm....) that's forced to use 70 different kernel APIs (from memory management to files, sockets, module management, and an assortment of contexts and memory spaces), a video driver such as the nVidia driver is fairly light on kernel APIs, making it far less susceptible to kernel changes.
Most of the code (JIT, HW register management, etc) can easily be shared between Windows and Linux.

To quote nVidia [1] ~90% of their code is shared between Windows and Linux.

I'd estimate greater than 90% of the Linux driver is cross-platform code. The NVIDIA GPU software development team has made a very conscious effort to architect our driver code base to be cross-platform (for the relevant components). We try to abstract anything that needs to be operating system specific into thin interface layers.
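
That "thin interface layer" approach is a common pattern. Roughly (my own sketch, not NVIDIA's actual code), the cross-platform core calls through a small table of OS services that each port fills in:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* The cross-platform core sees only this interface. */
    struct os_services {
        void *(*alloc)(size_t size);    /* memory allocation */
        void  (*release)(void *ptr);    /* memory release    */
        void  (*log)(const char *msg);  /* driver logging    */
    };

    /* A "port": here mapped onto libc for demonstration; a real kernel
     * port would map onto kmalloc/kfree/printk instead. */
    static void log_stderr(const char *msg) { fprintf(stderr, "%s\n", msg); }

    static const struct os_services demo_port = {
        .alloc   = malloc,
        .release = free,
        .log     = log_stderr,
    };

    /* Cross-platform "core": knows nothing about the OS underneath. */
    static void core_init(const struct os_services *os)
    {
        char *buf = os->alloc(32);
        strcpy(buf, "core initialized");
        os->log(buf);
        os->release(buf);
    }

    int main(void)
    {
        core_init(&demo_port);
        return 0;
    }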

What's blacklisted on buggy X drivers is OpenGL. It is used for WebGL, and for accelerated compositing of the graphics layers in web pages.

However, for the latter (compositing), we are still working on resolving performance issues in the interaction with XRender, and that won't make it into Firefox 4, so we don't enable accelerated compositing by default (regardless of the driver blacklist). So if you want accelerated compositing (at the risk of losing the benefit of XRender), you have to go to about:config and set

layers.acceleration.force-enabled

to true. I'm happily using it here, and it can double the performance in fullscreen WebGL demos.

Interesting test. With Chrome/Chromium (same version), it's faster in Linux with both fglrx and Gallium3d than in Windows7. Not much between fglrx and Gallium3d. With Firefox 4 Beta9, fglrx is slower than Chrome (but faster than Firefox 3.6 Linux and Windows), whereas both Windows and Linux w/Gallium3d are very fast indeed. Which is to say that the Gallium3d Radeon 5xx0 driver finally does something very well.

Gallium helps X too. It means X has a single driver for Gallium3D/KMS/DRM that works on multiple cards. Removing drivers from X will greatly help X, as it will mean much less code and make changing things much easier. It doesn't just make X alternatives possible. Everyone is a winner.

It's open source; someone else can do it if they can't. There is a graphics drivers problem, but it's getting much better and the future is bright (Gallium3D and friends), but even with what we have now, many many many applications manage to do OpenGL just fine on X (even with the crappy closed NVidia drivers I must run, which crash X about once a month). They will look like fools if someone else does a fork of Firefox with working OpenGL. My guess is that this is what will happen, because they are effectively throwing down the gauntlet. If this does happen, it will be the sole purpose of the fork, and Mozilla will probably quietly take the code, grumbling under their breath. ;-)

It's open source; someone else can do it if they can't. There is a graphics drivers problem, but it's getting much better and the future is bright (Gallium3D and friends), but even with what we have now, many many many applications manage to do OpenGL just fine on X (even with the crappy closed NVidia drivers I must run, which crash X about once a month). They will look like fools if someone else does a fork of Firefox with working OpenGL. My guess is that this is what will happen, because they are effectively throwing down the gauntlet. If this does happen, it will be the sole purpose of the fork, and Mozilla will probably quietly take the code, grumbling under their breath. ;-)

pffft... the fact that they can't do this as easily as they can on other platforms (Windows, Mac) already tells a lot about Linux and X.

It's open source; someone else can do it if they can't. There is a graphics drivers problem, but it's getting much better and the future is bright (Gallium3D and friends), but even with what we have now, many many many applications manage to do OpenGL just fine on X (even with the crappy closed NVidia drivers I must run, which crash X about once a month). They will look like fools if someone else does a fork of Firefox with working OpenGL. My guess is that this is what will happen, because they are effectively throwing down the gauntlet. If this does happen, it will be the sole purpose of the fork, and Mozilla will probably quietly take the code, grumbling under their breath. ;-)

If some simple tests supplied by WebGL's vendor can already lead to this result, I agree that WebGL should not be enabled by default for this chipset. As jacquouille said, it's too much of a security risk.

No, do like they do with plugins: separate process. Let it crash, and if/when it crashes, say it's probably the graphics driver's fault >insert card name here<. With the open drivers, someone will try to fix it; with the closed ones, well, let's hope they care enough about last year's device.

But things which manage just fine only use a subset of the OpenGL API. As jacquouille said, the goal of WebGL is to put 90% of said API in the hands of scripts, without knowing which parts of it said scripts will use...

Unless you advocate supporting only a subset of WebGL, the part which doesn't crash on the currently used drivers. Then we simply don't agree. We've had too much partial web standard support in the past, I think.

Wine is one of the things that manage, and for OpenGL it will probably do very little bar pass it on. But the DX implementation is more complex than the OpenGL one. Crashes in Wine are normally because of the nature of it, i.e. reimplementing a bag of closed APIs to run closed programs that use those APIs, not due to graphics drivers. That's what I think anyway; I don't know of any real data on this.

Xorg drivers are buggy. Yes... sure. What's really buggy is the crap called GFX hardware, which is

1. Not standardized
2. Under-documented

If GFX hardware had a standard for access, more people could improve the OpenGL stack (coincidentally this happens with Microsoft; however, vendors write blobs to conform to their interfaces). We live in 2010, and after all the technological advancements graphics cards still could not export a common hardware access API. It makes me wonder about the author's unfair and inaccurate description of the GFX situation in general. Why not have HW-accelerated browsers in Haiku or Syllable? Because people would be involved in an eternal hunt for documentation. The only answer is standards. There are enough FPGAs out there to burn a standard driver into them. If you want my 2 cents:

1. GFX cards should handle the interface to the monitor and do elementary 2D acceleration in a standardized way (have you heard of VESA?)
2. 3D/GP computing should be refactored onto another chipset (APU, AMD's term, is good terminology) that could be put on a PCIe card to provide standardized access. Put an FPGA in to do the translation from standardized calls to vendor HW.

For example, I buy a cheap standard 2D gfx card and a standardized accelerator board, cheaper because it is more oriented toward GP computing and weaker in 3D (I want to solve differential equations with Octave on FreeBSD, for example). If you want to go cheaper, buy only the first and let your 8-core CPU do the rest.

So we could have two markets: cheap standardized 2D cards (like OHCI, OHCI1394, PCI-ATA cards), and accelerator/co-processor cards that should also be standardized. Less unemployment.

What do we have now? Everything combined in a proprietary, non-standard-compliant, uncompetitive manner, and older vendors killed off. OSS is part of the global market, and making drivers for special OSes is uncompetitive. Even the mighty Windows needs a vendor driver.

There is always the cheapness factor. But would you sacrifice freedom and standards compliance for price? If yes, then, in my opinion, computing is not for you.

The field of graphics cards is advancing at a rapid pace. Bridling it with some committee-derived standard would be extremely hurtful to the companies involved and mostly unnecessary anyway. They already provide drivers for the platforms that matter, and since they control both the card and the driver, they can develop at a much faster pace.

By the way, there already is a standard interface and it's called OpenGL. DirectX would count too. Adding yet another layer is just bloat and unnecessary.

err, no. Supporting a standard API doesn't magically make your hardware less capable. It's not that the standard is a feature superset; it's that what the standard requires is a subset of the hardware's features. A good standard should provide an extension point for specific/proprietary features and a means for probing capabilities.
I'm not a gamer, but I guess there are several cards from distinct manufacturers that support DirectX 10. Are all those cards incapable of doing anything that isn't in the DX10 API? I doubt it.
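
OpenGL itself already works this way: capabilities beyond the core standard are advertised as extensions and probed at runtime. Roughly (a C fragment that assumes a current GL context has already been created with GLX, GLUT or similar):

    #include <GL/gl.h>
    #include <string.h>

    /* Check whether the driver advertises a given extension. Note:
     * strstr is the quick-and-dirty way; robust code should match
     * whole space-separated tokens to avoid prefix collisions. */
    int has_extension(const char *name)
    {
        const char *exts = (const char *)glGetString(GL_EXTENSIONS);
        return exts != NULL && strstr(exts, name) != NULL;
    }

    /* Example: only take the optional path when it is advertised.
     * if (has_extension("GL_ARB_framebuffer_object")) { ... }      */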

"Sadly enough, GL drivers on Windows aren't that great either," he notes, "This is why WebGL is done via Direct3D on Windows now... But that mostly a matter of performance issues."

Colour me confused, but why is it a bad thing that WebGL is implemented on top of Direct3D instead of OpenGL? If the outcome is consistent with WebGL implemented using OpenGL, then why is it even a problem? I mean, if the outcome is the same, then why is it a 'sad situation'?

""Sadly enough, GL drivers on Windows aren't that great either," he notes, "This is why WebGL is done via Direct3D on Windows now... But that mostly a matter of performance issues."

Colour me confused by why is it a bad thing that WebGL is implemented on top of Direct3D instead of OpenGL? if the outcome is consistent with WebGL implemented using OpenGL then why is it even a problem? I mean if the outcome is the same then why is it a 'sad situation'? "

WebGL is based on OpenGL ES 2.0. All smartphones outside of the Windows world are OpenGL ES 2.0 compliant.

It should be obvious you want WebGL as a layer abstracted from OpenGL ES 2.0.

WebGL is based on OpenGL ES 2.0. All smartphones outside of the Windows world are OpenGL ES 2.0 compliant.

It should be obvious you want WebGL as a layer abstracted from OpenGL ES 2.0.

That makes absolutely no sense whatsoever - the issue is layering WebGL on top of Direct3D, and a programmer who is programming for WebGL doesn't care what happens under the hood and behind the scenes, because all he is concerned about is the fact that WebGL is provided. If the WebGL-on-Direct3D implementation provides the whole WebGL stack, a programmer can program against WebGL and it runs on Windows, Mac OS X and Linux regardless of what the back end is, and then the whole commotion is for nothing other than for the sake of drama.

I think the whole sadness has to do with the fact that they have to maintain two separate back ends instead of a single one. Sorry to sound pathetic, but boo-f--king-whoo. It's time that the Firefox developers stop writing their code for the lowest common denominator and start taking advantage of the features which operating systems expose to developers. So apparently they're OK with using Direct3D/Direct2D/DirectWrite, but maintaining an extra back end for WebGL is 'one step too far'? Good lord. There is a reason I refuse to use Firefox on Mac OS X.

Performance is an issue, too. Having WebGL code translated to Direct3D on Windows is akin to DirectX-based Windows programs running on top of Wine, which get all their Direct3D calls translated to OpenGL.

Sure, those programs don't know about it, but the call translation overhead results in very poor performance in the end. And THAT they care about.

ATI is partially to blame for bad OpenGL drivers on Windows (since ATI cards are quite widespread). They never invested the same effort in them as in their DirectX drivers. Nvidia, on the other hand, produces decent OpenGL drivers across all platforms.

Ideally these open-source developers will be able to get the WebGL issues on Mesa straightened out quickly. However, it already would be too late to get them fixed and then white-listed for Firefox 4.0. Mesa 7.10.1 / Mesa 7.11 will likely not be out for a couple of months and if these next releases do carry the WebGL fixes, for most users it's then a matter of waiting for the distribution vendors to pick-up the new packages. Maybe in time for Mozilla Firefox 4.1 these Linux GPU acceleration issues will be sorted out.

...*clears throat*... ahem, so isn't this going to push Wayland developers to move even faster so Linux can finally have a proper graphics server? About damn time they retire that stupid kludge of software called X.