And over the weekend, the saga regarding Canonical, GNOME, and KDE has continued. Lots of comments all over the web, some heated, some well-argued, some wholly indifferent. Most interestingly, Jeff Waugh and Dave Neary have elaborated on GNOME's position after the initial blog posts by Shuttleworth and Seigo, providing a more coherent look at GNOME's side of the story.

It's really simple to let distributions get away with murder and blame the upstream.

nt_facejerk really was not to know that in the topic area he took me on in, I have previously taken on one of the lead developers of ReactOS and some of the most important reverse engineers of NT tech. It's a topic area I have never been defeated in and know extremely well.

The Linux kernel project technically is causing no issues. The issues are being introduced downstream of it. If you take stock kernel source from kernel.org, build it with no patching, and try to run Ubuntu in its default configuration on it, it doesn't run very well.

Fedora, Red Hat, SUSE, CRUX, Debian, Arch: all that I have tested personally can operate on a kernel built from kernel.org source without alteration.

What is the most common kind of distribution tested? Ubuntu and its relatives, running Ubuntu-patched kernels. This test of placing a distribution on a pure stock kernel and seeing what happens is a good measure of the odds that closed-source drivers will work perfectly: the worse the distribution performs, the less likely a closed-source driver is to work. This is purely a matter of tampering.
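The stock-kernel test above can be sketched roughly as follows, assuming the usual kernel build tools are installed; the version number is only an example, and reusing the running distro's config is one reasonable way to keep the comparison fair:

```shell
# Sketch only: fetch and build an unpatched kernel.org kernel, then boot
# the distribution on it and see how well it copes.
build_vanilla_kernel() {
    wget http://www.kernel.org/pub/linux/kernel/v3.0/linux-3.0.4.tar.bz2
    tar xjf linux-3.0.4.tar.bz2
    cd linux-3.0.4 || return 1
    # Start from the distro's own config, taking defaults for new options.
    cp "/boot/config-$(uname -r)" .config
    yes "" | make oldconfig
    make -j"$(nproc)"
    sudo make modules_install install   # then reboot into the stock kernel
}
```

If the distribution misbehaves on this kernel but works on its own patched one, the breakage came from downstream patching, not from kernel.org.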

Next is what is called dependency hell. Distribution packaging systems have a known flaw that prevents you from taking binaries from wherever you like: in most cases only one version of a .so (the equivalent of a Windows .dll) can be installed at a time.

It's really the lack of multi-version install support in the package managers and the dynamic loader that distributions use that causes massive amounts of incompatibility.
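The single-version limit can be mocked up with dummy files in a temp directory (libfoo here is hypothetical, standing in for any shared library):

```shell
# Two runtime versions of a hypothetical libfoo can coexist on disk...
libdir=$(mktemp -d)
touch "$libdir/libfoo.so.1.0" "$libdir/libfoo.so.2.0"
ln -s libfoo.so.1.0 "$libdir/libfoo.so.1"
ln -s libfoo.so.2.0 "$libdir/libfoo.so.2"
# ...but the unversioned name that the toolchain and many packages
# resolve can only point at ONE of them, and typical package managers
# only ship that one version in the first place.
ln -s libfoo.so.2.0 "$libdir/libfoo.so"
readlink "$libdir/libfoo.so"    # -> libfoo.so.2.0
```

The soname scheme itself allows parallel versions; it is the packaging layer that usually refuses to install more than one.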

Windows uses two items, SxS (side-by-side assemblies) and compatibility shims, both in fact user space not kernel space, to give Windows its magical backwards-compatibility support for applications. Is there any technical limit in the Linux kernel preventing this? Answer: no.
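Nothing kernel-side stands in the way, because the dynamic loader already honours per-process search paths. A toy user-space sketch of an SxS-style layout (all paths and libfoo are made up for illustration):

```shell
# Each application bundles the library version it was built against
# in a private directory, SxS-style.
root=$(mktemp -d)
mkdir -p "$root/app-old/libs" "$root/app-new/libs"
touch "$root/app-old/libs/libfoo.so.1"   # dummy stand-in, old ABI
touch "$root/app-new/libs/libfoo.so.2"   # dummy stand-in, new ABI

# A generated launcher prepends the private dir to ld.so's search path;
# the same effect can be baked in at link time with -Wl,-rpath='$ORIGIN/libs'.
cat > "$root/app-new/run" <<EOF
#!/bin/sh
LD_LIBRARY_PATH="$root/app-new/libs:\$LD_LIBRARY_PATH" exec "$root/app-new/bin/app" "\$@"
EOF
chmod +x "$root/app-new/run"
```

Each application gets its own library version without the two ever colliding, all done in user space.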

These massive incompatibility issues also exist on FreeBSD, NetBSD, Solaris and most other Unix-based systems. So it's not a unique Linux defect.

One of Linux's biggest historic mistakes was copying how Unix systems did their dynamic libraries. Debian using the FreeBSD kernel shows just as much trouble as Debian using the Linux kernel. This is one of the first things that clued me in that the kernel argument could be complete crap.

Like another case of only providing one: why does Ubuntu have to provide only one version of the X.org server? They could have provided one version for open-source drivers and one version for closed-source drivers, avoiding failures. Applications using the X11 protocol really would not have cared. The kernel would not have cared. But no, disk space to fit the most flashy features is more important than not hurting the user.
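One way two parallel server builds could have been wired together is a small selector script run at session start; the package paths and marker file below are hypothetical, since Ubuntu ships nothing like this:

```shell
# Hypothetical selector: start the X server build that matches the
# driver in use. Both install paths are made up for illustration.
cat > /tmp/pick-xserver <<'EOF'
#!/bin/sh
if [ -e /etc/X11/closed-driver-in-use ]; then
    exec /usr/lib/xorg-stable/Xorg "$@"   # older ABI kept for closed drivers
else
    exec /usr/lib/xorg/Xorg "$@"          # current server for open drivers
fi
EOF
chmod +x /tmp/pick-xserver
```

Closed-source driver users would keep a server with a stable driver ABI, while open-source driver users get the newest features, and neither breaks the other.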

Ubuntu claims to be user-friendly, but a lot of what it does is the worst kind of user-friendly: a baited trap waiting to bite you.

Ubuntu's claim of user-friendliness needs to be read as: we will be flashy, and we don't give a stuff if your computer crashes or does other bad things to you.

Of course Ubuntu is not the only one that needs to be picked on for providing packaging systems and dynamic loaders that are second-rate for modern-day requirements. Fixing these things is not pretty.

While people keep getting the problem wrong, pressure is not applied where the problem actually is, so the problem never gets fixed.