It's clear that the KDE Project has done a very poor job of communicating our policy on releasing binary packages. I say this because, as the primary contact on the release blurbs, I am the one who gets swamped with emails asking "where is the <insert-your-distro> package?" and "how does this package work?" and "why are you discriminating against that-distro?" These emails obviously stem from the (incorrect) belief that the KDE Project is responsible for creating those packages. The following document will hopefully clear up just what our policy is in this situation.

KDE Package Policy Explained

The KDE Project only releases source code. Period. When we make a
release, we package up our code into source code archives (.tar.bz2)
and put them on our FTP
server. Those are the only packages that we release and
support.

We do recognize, however, that most people want binary packages for
their particular distribution or platform. As a result, in the days
before a release, we make the source packages available to "packagers"
who then create binary packages from them. The packagers send us
their results and we put them up on our FTP site and mirrors for the
convenience of our users.

This explains why some packages are available immediately and some
take a while to appear. While we do "pre-release" the source packages to packagers
with ample time to create the binaries, sometimes a few packagers are
busy and they don't upload their packages in time for the release
date.

In the case of Linux distributions, the packagers are the Linux
companies themselves. For instance, if you inspect a SuSE RPM from
our FTP site, you will see that it was created by SuSE. Mandrake,
Caldera, and Slackware all do the same. This ensures that the RPMs
fit into the distribution in the way that it was intended, rather than
as a third-party "add-on".

We also accept some binary packages from individuals where the
companies or groups behind the platform do not distribute KDE
themselves, and so individual volunteers contribute packages to our
FTP server. Examples are cases like Tru64, *BSD, Solaris or HP-UX (or
similar).

The Red Hat packages are a special case in that while the company does distribute KDE, they don't officially make the binary
packages that are found on our FTP server. In prior releases, the
Red Hat packages were created by a Red Hat employee packaging KDE in his
spare time. When his other responsibilities (the ones that he was
paid to do) took precedence, the KDE packages were (understandably)
slow in coming. Creating packages is very hard work, so we don't
fault him for this. As a stop-gap measure, we are looking for a Red Hat user to contribute binary packages for 2.1.1. Stay tuned.

The Future

We are looking into changing this policy in the future.
Rather than getting the packages from the vendors and putting them on
our FTP site, it might be best if the vendors put them on their own
sites and we just pointed to them. This would have the advantages of
freeing up bandwidth on our servers (which are always overloaded on
release days) and making it clear where the responsibility of support
lies.

Comments

Three cheers! This message needs to be delivered. Sometimes I cruise the newsgroups and answer a few questions. Very often questions go like this: "I just installed KDE on Linux 7.0 and it doesn't do such and such". That, and the whining about where packages for a distro are. It often seems as if Linux is no longer a community but two communities: one that produces things and one that berates the other for not doing it to their expectations.

It should be noted that packaging is no small deal. I've said many times there would be much less code if KDE had to focus on all the binary packages for people. In the Quanta project we had more problems with donated binaries than anything else. I am very happy now NOT posting binaries. Distros are handling it and life is smoother.

I strongly advocate that the binary packages be moved to the distros!!! I can see no reason why I should have to wait several days for enough bandwidth that my DSL is actually faster than dial-up. Also I really get tired of people whining about how KDE developers are shorting them. It should be posted in big letters and very clearly: YOUR DISTRO IS RESPONSIBLE FOR BINARIES.

That's why I want to change from Red Hat to Mandrake.
If my distro doesn't make what I want, and I didn't pay them to do it (since I got the install from a magazine or from ISO files), I should keep quiet and find someone who does the job.
But if you paid for your distro, complain about not having the RPMs you want!
Just complain to the right people, OK? :)

I always wonder.. why do I have to upgrade the _entire_ KDE just to get that ripper module inside konqueror working, or to get the latest bug-fixes in KMail, or to get latest Konqueror for that matter!

So, it would be really nice if we could upgrade the following components separately:

1. KDE-Base
2. KDE-Apps
3. KMail
4. Konqueror

2, 3 & 4 obviously depend on 1. But their development should go on against the latest stable version of KDE-Base, just like other applications (Quanta, Aethera etc.). I know Konqueror and KMail are deeply integrated, but there has to be some solution to this! Maybe the parts of KDE-Base used by KMail and konqi could be upgraded when 1> KDE-Base is upgraded or 2> KMail/konqi are upgraded. I am not totally sure.. but this is just food for thought. IMHO, KDE development is going way fast and that's excellent.. but the users lose in the sense that they have to upgrade the whole of KDE every 2 months!

Also, it'll be cool if we were able to install KIOSlaves dynamically thru a GUI in KControl. Thus, if I come up with a cool IOSlave, I can put it up on my website and anybody can download it and install it and the entire KDE will be able to use it!

The way things are right now, the whole modularity is of no use to a normal user coz he/she cannot upgrade just kmail or just konqueror or add that new cool IOSlave!

Does anybody have any solutions for this problem? If this continues, we might just end up with the developers and enthusiasts (with a good connection) remaining up2date with KDE and others using an ages-old KDE!

> If this continues, we might just end up with
> the developers and enthusiasts (with a good
> connection) remaining up2date with KDE and
> others using an ages-old KDE!

Gotta agree with this. Complaining about users not understanding that vendors are responsible for binary packages may be justified but it's also missing the point.

Are you (KDE developers) not concerned that your hard work is lost on many who could benefit from it the most? The promise of KDE is ease of use -- and the pain of updating is an "ease of use" issue!

I use Linux-Mandrake, a distro that has contributed a great deal to KDE. But even Mandrake can't seem to get it right when it comes to updates. They post the binaries with little or no documentation -- 2.1.1 appeared without even a README explaining how to uninstall older versions and replace with newer, how to deal with dependencies, etc. The Mandrake mailing lists are full of posts from frustrated users, many of whom end up reinstalling the whole OS because their KDE update went off the rails.

KDE is frankly rocking the *nix world right now with its development schedule. It would be great if a mechanism could be developed that would make it easier for average users to actually use these frequent updates.

Very true! I am using SuSE linux 7.0 (previously used RH6.2, so had to get used to SuSE first). I always end up downloading a version of kde just when the next version is released. It takes nearly 2 nights and a fat phone bill (coupled with angry relatives complaining that the phone is engaged) to download kde fully. Couldn't get the sources to compile (I saw the solution in the compilation faq tho'), so I used the SuSE rpms. These weren't very friendly either. I finished downloading kde2.1 just when 2.1.1 was released. Then I see reviews of the next kde and curse myself. *damn*
I am mostly using Konqueror now (it works even better than the IE bundled with Windows ME, particularly with ftp). Konqueror is great for everyday use, but lacks the one feature I would like, i.e. a good download manager. I have to switch to Windows to use Flashget, which seems to make the best use of the bandwidth available to me. Is there a download manager for kde2.1? I used Caitoo from kde1.2 but did not like it much.

Hi,
there are several good download managers (with stop/restart in case the connection breaks) on Linux.
A text-based one is wget, which is very reliable. I believe there are front-ends for it; go take a look at apps.kde.org.
The one I use is 'Downloader for X', which is not KDE and not wget but easy to use: http://www.krasu.ru/soft/chuchelo/

Don't use Win for these tasks, because it is no good staying 2 nights online with that OS, unless you like getting in trouble ;-)

I think some of us are missing the point that you don't _have_ to update KDE if you don't want to. Just because 2.1.1 comes out 2 months after 2.1 doesn't mean you _have_ to download it.
You can adopt a policy of downloading every second or third release. Sure, updates could be compiled for older versions of KDE, but that would create quite a bit more work, and at any rate it's up to the distributions to do that, not KDE.

Well, I obviously don't speak for the KDE developers, but here is my take on it....

The KDE developers are just that: developers. Their _self-appointed_ (i.e. voluntary) task is to create this tremendous architecture which all of humanity is free to benefit from.

So, why don't they extend this just a bit further and provide actual binaries? Here are three reasons that I can think of:

1) TIME. They are already extraordinarily busy creating the technology. Would you prefer development slowed down significantly, or that people would do what they do not enjoy doing thereby stripping them of the joy they find in this pursuit (and increasing the odds that they will stop)?

2) EXPERTISE. The organizations behind each Linux distribution, BSD variant and commercial Unix all create their own environments. For the KDE developers to take on the task of supporting these operating systems with binaries is not only doing the job of these distributors, but means that the KDE people would have to work with all the idiosyncrasies of each OS. Every OS has its own way of doing things and therefore the different sets of packages will and do vary. Who knows the OS best and is therefore in the best position to create binary distributions for it? Why, the companies who make them in the first place.

3) FAIRNESS. To officially provide binaries for any given OS, but not for others, would imply a favoritism that is not a part of the KDE project. Within KDE all *nix, *BSD and Linux systems are treated with interest (assuming that there are KDE'ers on that given system). The amount of work that goes into making sure KDE builds on a wide variety of platforms is very impressive. Therefore, since the KDE team cannot provide quality binaries for each and every supported system themselves, I think it is a good idea to treat all of them equally and request that either the OS manufacturers or interested users create the binary packages.

Finally... remember: there is nobody stopping you, as a loyal user of your operating system, from stepping up to the plate and creating wonderful binary distributions of KDE. In fact, this would probably be more than welcomed by your fellow users if your OS provider isn't doing their job well. This is how Debian, HP-UX, Solaris and (to my knowledge) FreeBSD get their binaries: users stepping up to the challenge.

Around and within the KDE project there are programmers, artists, documenters, translators, testers..... and there are packagers. I think this only makes sense. I also think that with enough encouragement from that other group of KDE people, the USERS, the OS manufacturers can and will step up to the plate and offer quality KDE binary packages. Vote with your voice and with your dollars, and perhaps even some of your own energy. That is how this world of Open Source Software works....

I don't think an IOSlave installer would be hard to make. Just a shell script could do it, I think. That's pretty easy to install - just type the script's name at the bash prompt and you're done. A GUI would be nice, but KControl already has too many panels (or maybe they're just badly organized), and a shell script would do the job nicely.
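For what it's worth, such a script might look something like the sketch below. To be clear, the directory layout, file names (kio_example.*) and the KDEDIR convention used here are assumptions for illustration, not a documented KDE interface:

```shell
# Hypothetical IOSlave installer sketch. The paths and file names below
# are assumptions, not a real, documented KDE layout.
install_ioslave() {
    slave_lib=$1        # compiled slave library, e.g. kio_example.la
    protocol_file=$2    # matching .protocol service description
    kdedir=${KDEDIR:-/opt/kde2}
    # copy the slave and its service file where (we assume) KDE looks for them
    cp "$slave_lib" "$kdedir/lib/kde2/" || return 1
    cp "$protocol_file" "$kdedir/share/services/" || return 1
    echo "installed $(basename "$protocol_file")"
}
```

Something like this could later be wrapped in a GUI, but as the comment says, the shell version already does the job.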

> I always wonder.. why do I have to upgrade the _entire_ KDE just to get that ripper module inside konqueror working, or to get the latest bug-fixes in KMail, or to get latest Konqueror for that matter!

It depends on whether a fix is backported or if a program is built to a base specification. It's possible to build to the KDE 2.0 specs or to KDE 2.1.1 or even the 2.2 pre specification. It is all up to what the author wanted to accomplish. If you wish to take advantage of a new feature then its libs are required.

Here is all you need to upgrade. kdesupport, kdelibs and kdebase in that order. That also gives you konqueror. Kmail is in kdenetwork. That's far from all the packages.

> The way things are right now, the whole modularity is of no use to a normal user coz he/she cannot upgrade just kmail or just konqueror or add that new cool IOSlave!

Again I repeat... that is arbitrary. If the io slave is built to your specification (and to my knowledge there has been no substantial change to the base architecture there for a while) then you DON'T need to upgrade to make it work.

> does anybody have any solutions for this problem? If this continues, we might just end up with the developers and enthusiasts (with a good connection) remaining up2date with KDE and others using an ages-old KDE!

KDE 2.x took a long time to come out. People complained. When I looked at it and realized that it meant KDE would be able to produce faster releases I got excited. Somehow I failed to imagine that people would begin bitching about it releasing too fast. Here's a clue... it will always take distros time to catch up no matter how fast or slow a program releases. (duh?) However you don't have to upgrade, or you can. For the most part KDE 2.x programs are compatible unless a developer writes to newer or enhanced libs.

Pay attention. This will get you the latest bleeding edge KDE with the least bandwidth requirements. It will also work on any system. Grab a copy of Cervisia and compile it. Now this is difficult so pay attention. Extract it using the graphical archiver tool Ark. Go to the directory in konq and use the tools menu to open a console. It will open in your directory. Now type
$ ./configure && make
$ su
[enter root password]
# make install
Now go to kde.org and get the information for CVS and enter it into Cervisia. You can then fetch only the updated files at any time. With RPMs and tar files you have to get them all. After your first build you will only recompile what is affected by the updates. This is fast and clean. You can use Cervisia to update to a tag level for fixes or let 'er rip for bleeding edge.

Oh, one more command for CVS. On the first build, or when directories are added, you need one additional command:
$ make -f Makefile.cvs
I have also seen shell scripts to handle this and they are easy to write.
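As a rough sketch of what such a script might look like (this assumes the module has already been checked out and configured once; it is an illustration, not an official KDE tool):

```shell
# Sketch of a CVS update-and-rebuild helper for one KDE module.
# Assumes the module was already checked out and built once before.
update_module() {
    module=$1
    cd "$module" || return 1
    cvs update -dP           # fetch only the files that changed upstream
    make -f Makefile.cvs     # regenerate configure in case directories were added
    ./configure && make && make install
}
```

Run it as, e.g., `update_module kdelibs` from the directory holding your checkouts.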

I do recommend you build your KDE in a separate directory, and kde.org has files explaining how to run two KDEs. Once you get it started you can update at any time using graphical tools, and it is bandwidth-sensitive and quick.
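The usual trick is to configure the second KDE with its own --prefix and point your environment at it. Here is a sketch; the variable names (KDEDIR, LD_LIBRARY_PATH) are the conventional ones, but check the documents on kde.org for the authoritative setup:

```shell
# Sketch: point the environment at a second KDE installed under its own
# prefix, e.g. one built from CVS with ./configure --prefix=$HOME/kde-cvs
use_second_kde() {
    prefix=${1:-$HOME/kde-cvs}
    KDEDIR=$prefix
    PATH=$prefix/bin:$PATH
    LD_LIBRARY_PATH=$prefix/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
    export KDEDIR PATH LD_LIBRARY_PATH
}
```

Calling `use_second_kde` in a shell (or from your ~/.profile) before starting X makes that session run the second KDE, while leaving your distro's KDE untouched.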

As a side note, I came to Linux from OS/2 at the end of 1999 and had been mostly working with languages like REXX and Basic. So compiling programs in Linux was a foreign thing to me... but it is not hard and in my experience has far fewer problems than running RPMs.

>It is all up to what the author wanted to accomplish. If you wish to take advantage of a new feature then its libs are required.

and the problem is that every developer has updated his/her machine to the latest and greatest KDE (even CVS, probably, as you suggest in the end). What about the users of the application? In simple English: "you won't get the next version of KMail unless you upgrade to KDE 2.2". Now suppose the next version of KMail has "IMAP" as the new feature.. does it have anything to do with KDE 2.2? There could be millions of KDE 2.0.1 users out there.. and they'll never get KMail with IMAP unless they go through the pain of upgrading the entire KDE. Similarly with Konqueror.. A lot of my friends are still on the default LM7.2, which has KDE 2.0.1.. they keep complaining about Konqueror bugs that were fixed a long time ago.. but the problem is, Konqueror is not released outside of the KDE base packages.. hence to get the latest konqi, a full upgrade is needed!

>Here is all you need to upgrade. kdesupport, kdelibs and kdebase in that order. That also gives you konqueror. Kmail is in kdenetwork. That's far from all the packages.

why isn't Konqueror separate and developed individually? As I understand it, Konqueror uses KParts (khtml etc.) to show the contents.. so why does the Konqueror code itself need to be in the base libraries? Also, why isn't Konqueror released periodically with the latest and greatest khtml (since browsing is the main use of Konqueror), which would upgrade the khtml the user installed with the KDE he/she has? Similarly, it's clear that KMail is outside of KDE's base.. so why can't we have KMail released separately just like Quanta, Aethera etc.? On the other hand, the release schedules of Konqueror, KMail etc. can be synchronized with those of KDE-base and we'll get the same effect as we get now.. i.e. KDE 2.2 will come with KMail 1.4, Konqueror 2.2 etc... but people preferring to only upgrade KMail can do so too..

Now the question arises of new features.. to tackle that, KDE people definitely need to find a solution so that applications can automagically use the new feature of a newer KDE-base if it exists, else fall back to the old one. E.g., KDE 2.2 comes out with new printing.. there has to be some way so that KMail will automatically use this new printing scheme (read: classes) instead of the old QPrinter if and when it finds the new libraries installed. KDE has a lot of talented people and I am sure someone out there will be able to find a solution!

>If the io slave is built to your specification (and to my knowledge there has been no substantial change to the base architecture there for a while) then you DON'T need to upgrade to make it work.

Then why don't we get the latest and greatest CD-ripping io-slave? Why don't we have it for KDE 2.0.1 users?

>Pay attention. This will get you the la [snip]

You didn't get my point, Eric. Let me first tell you that I am using KDE 2.1.1 on my LM 7.2.. I am a pro in CS and have been using Linux/Unix for 4-5 years. I can very well download RPMs, install them and figure out problems myself. However, here, I am trying to solve the problem for newbies and even normal users or expert users who want to use Linux for production purposes and not for playing with it. You cannot expect a corporation to upgrade KDE on all of its 1000s of machines every 2 months!

The solution of getting the latest CVS and compiling is perfect for KDE developers themselves.. but do you think of users?? Anybody who is doing productive work on his/her machine doesn't want to be on the CVS bleeding edge and have an unstable desktop!

> has far less problems than running RPMs.

this is plain wrong. I have been using RPMs and never had any problems with them. The reason being I always use RPMs provided by Mandrake. Everything is perfect then.

Also, the whole point of KDE being user-friendly goes away when you ask your users to compile!! It's easy for developers and pros like us, but put yourself in your grandma's shoes and then think of it.. and if you don't want to do that, then our motives are totally different!

Ok, back to the main point.. some possible solutions:
-one solution seems to be to allow major API changes only in distant releases. E.g., any application written for 2.0 - 2.1.1 should run w/o problems on 2.x.. this will ensure that for at least 9 months you can afford to not upgrade ;)
-another is to provide backward-compatible code.. e.g., provide the KPrinter library for KDE 2.0 so that applications written for KDE 2.2 work with 2.0
-segregating base and apps can reduce this problem to a certain extent. Major KDE releases should be once a year, with dates synchronized such that all applications that form KDE are released with these major upgrades. Minor upgrades could be frequent and should only include base library package upgrades. The application developers should make sure that applications run across the minor KDE revisions. One advantage would be that it'll be easier for people to upgrade, as they'll have to deal with fewer RPMs. Consider this: I have KDE 2.0.. new KMail comes out.. but they say I need to upgrade the KDE-base packages coz it has new printing functionality.. so I go out and download 2-3 RPMs and upgrade my KDE base to 2.2.. then KMail installs peacefully and works perfectly. All other apps continue working as before since the base libraries are always backward compatible (or rather, should always be!!)

>I failed to imagine that people would begin bitching about it releasing too fast

I am not bitching.. I love the pace at which KDE is developing.. but that pace shouldn't kill it either, right? If the user base decreases because of the tremendous confusion between releases and the pain of upgrading, then KDE will lose its following... which means Linux will lose its following.. and that shouldn't happen. Hence I am raising this issue here. Somehow this entire development needs to be synchronized and some new solution has to come out to maximise the time between full upgrades for people. A one-year life-cycle would be a good thing IMHO. You can expect people using Linux to upgrade once a year.

Please guys, think of something cool to solve this issue.

I request someone who is on KDE lists to post on the lists so that KDE developers read this discussion and contribute. I am not subscribed so can't do that.

> Now suppose next version of KMail has "IMAP" as the new feature.. does it have anything to do with KDE 2.2?

Yes. It could use new stuff from the base.

Look, what you suggest is completely unrealistic and would lead to total chaos in no time. Right now, upgrading KDE is very simple: download everything, recompile/install everything. That may take a long time, but it's still very simple. And most importantly it's simple to manage from both the developer's and the user's point of view.

Compare this with schemes like "upgrade this package to this version and that package to that version if you need feature X in package Y" etc... By the time you figure out all the dependencies (which most of the time will end up being pretty close to "the whole shebang"), you'd have downloaded everything already. Plus it would be a total nightmare for developers and package builders.

You're right, this is totally unrealistic. You can't expect to upgrade an application without upgrading all of its dependencies! This is why you can't upgrade Adobe Acrobat without buying a new copy of Windows, right? This is why you have to recompile all of Red Hat from source entirely every time you upgrade to a new version of KBiff, right?

Of course not! There is a huge time savings from only upgrading packages X, Y, and Z instead of the whole desktop. If I want a new KMail feature I shouldn't have to upgrade my entire desktop, possibly breaking the configuration files for konqueror, knode, etc.

And "just" downloading everything and recompiling everything is a huge pain in the ass IMHO. I have yet to successfully compile KDE from start to finish without hitting all kinds of strange problems. This is *not* the solution if you really expect to be the "user-friendly" desktop that brings Linux to the masses.

It would be nice of the developers to at *least* document which config files are broken by new releases, letting the user know what kind of bumps to expect in the upgrade.

>And "just" downloading everything and recompiling everything is a huge pain in the ass IMHO. I have yet to successfully compile KDE from start to finish without hitting all kinds of strange problems.

I hear you! I've never been able to get KDE to compile.

>Sorry if I'm ranting, but I've spent *far* more time nursing KDE upgrades along than I ever spent rebooting my Windows machine.

Yeah, you sound like me, before I switched to Debian. Then suddenly, all my KDE problems magically went away. Now I get KDE upgrades *before* they've been announced on Slashdot, and they always install over the previous version perfectly, keeping all of my settings. I even have anti-aliased text in KDE, with _NO_ effort whatsoever! A simple command (apt-get install task-kde) finds, downloads, installs, and configures KDE _and_ all its dependencies, in one gigantic automated step! apt-get and kde.debian.net are the greatest!

Sorry if I sound like a broken record here, but I just can't get over the incredible coolness of apt-get. I recommend that you try Debian out (or for an easier graphical install, try that Progeny Debian that was just released).

Something like you describe will probably happen in the near future, but right now it's perfectly understandable that "everything" is just moving forward, from base to apps.

The base libs are bound to stabilize (API-wise, e.g. no new features) while the apps will keep on evolving. Right now that's just not the case, and maintaining two versions of each app is a nightmare in the long run. They already do it between minor releases (2.1 vs 2.2), but fortunately they try to keep that period as short as possible, and limit the changes in the "old" branch to critical bug fixes only. They don't backport new features, and IMHO they are right in not doing so.

As for compiling KDE, for me it has always been "tar -xvf ; ./configure; make; make install". The only problems I've had are when I try to use --enable-final, which doesn't work for every package. It's true though that config files are a serious problem. I had to clean up my ~/.kde several times after a new install.

>You're right, this is totally unrealistic. You can't expect to upgrade an application without upgrading all of its dependencies! This is why you can't upgrade Adobe Acrobat without buying a new copy of Windows, right?

Wrong example ;-) The right example would have been: you can't upgrade M$IE without upgrading a whole lot of the libraries that come with Windows. The only difference is that all the new libraries you need are included in the installation package of the new IE, whereas with KMail you also have to update kdelibs (because AFAIK some KMail-related bugs have been fixed in some kio_slaves which are part of the libs) and maybe kdebase. But I think updating kdelibs could suffice.

If OTOH you use the packages provided by some distribution, you may also have to install the latest version of QT and some other libs this particular KDE build depends on. But for this the packager is to blame (was it really necessary to use the latest version of QT for this build?) and not the developers.

At least for KMail I can explain why that is not possible. I even thought about releasing a separate version of KMail, independent of the KDE release schedule, when IMAP is ready for use, but that is no longer possible. Exactly because of IMAP, a few additions have been made to kdelibs to make implementing it much easier, and now KMail of course depends on those changes. That is one of the advantages of keeping everything together. If an application needs a new feature in kdelibs, this feature can be added and the application is immediately able to use it. Otherwise it would be necessary every time to wait for the next stable release, and that would slow down development very much.

Isn't this supposed to be the benefit of components though? You write components that can operate independently (and can co-operate) so you don't have to upgrade EVERYTHING to add one feature to your suite/app/whatever. You just have to upgrade the component, which is dynamically loaded when needed. If the component wasn't there to begin with, it should be just a matter of installing it and having it registered with whatever service links the components with the services they provide.

Sure reuse, protection, network transparency are all cool component features too, but the main benefit of componentization is supposed to be complete separation of implementation from interface. If this isn't currently the case, then I'd say it's time to rethink the strategy behind the current component framework.

This whole thing helps back up the Minnows(*) argument that GNU/Linux is nothing but a geek toy. The arguments put forward regarding upgrading everything en masse are untenable. To expect your average user to upgrade through successive versions (4 in the matter of a few months) is asking too much.

If you examine the Windows release schedules, you'll see that although major functionality is brought in on every major release, it doesn't stop major application upgrades in the mean time. And these certainly do not require a complete low-level upgrade.

Regarding binary distribution vs. source compilation, let me just say: if you have ever seen a genuine newbie (or even a casual computer user) get to grips with Windows Update, you'll know that most people are too afraid of "messing around" with what they perceive as a working system. They may need the upgrade, but if something were to go wrong, they wouldn't know how to fix it.

Telling these same people to download and compile source code (and try to unravel the compilation error messages !) is ridiculous.

Even RPMs should be considered too difficult. There is no reason why the whole upgrade process cannot be handled behind the scenes. The compilation process (if it's to be done) should be hidden behind a GUI. The RPM installation should be hidden behind a GUI. There should be no reason why anyone has to open a command line to upgrade KDE.

If this means that a generic installer is created for each binary distribution, this should be a small price to pay for increased ease of use.

>If you examine the Windows release schedules, you'll see that although major functionality is brought in on every major release, it doesn't stop major application upgrades in the mean time. And these certainly do not require a complete low-level upgrade.

Every new version of M$Office or M$IE installs a lot of new libraries. And you have to reboot several times. If this isn't a low-level upgrade, why does Windows have to be rebooted?

OTOH if you upgrade KDE you don't have to upgrade the kernel or X. Only some libraries and the KDE packages. So upgrading KDE is certainly not low-level. You don't even have to reboot after upgrading. ;-)

I've found rpms for KDE 2.1.1 in the RedHat rawhide tree. To install them, I needed several other packages, but after all was said and done, it's all working fine on my RH7 box. Probably not good for those with limited bandwidth, though...

Several other packages?
I installed the rawhide binary RPM packages for KDE 2.1.1 on my RedHat 7. It took a whole day. When I had gotten all of KDE and all the packages it depends on, I noticed I had downloaded about 86,876K. And the installation was not easy at all. I had to install quite a few new libraries. However, many applications on my box depend on the old versions of those libs (notably openssl, openldap). My box runs apache and sendmail and I didn't want to take any chances and break something just to get the latest update of my favourite desktop environment. Although I was finally able to resolve all conflicts, I'm not sure I want to go through that procedure again in a few weeks when KDE 2.2 is released.

"...the lack of an easy to use contemporary desktop environment for UNIX has prevented UNIX from finding its way onto the desktops of the typical computer user in offices and homes...It is our hope that the combination UNIX/KDE will finally bring the same open, reliable, stable and monopoly free computing to the average computer user."

So tell me, does KDE hope to bring its oh-so-easy-to-use environment to the "average computer user" by only releasing the source code? There's a cognitive disconnect here, and one that needs to be seriously examined.

In fact, if KDE's goal is to win the desktop space, then they should actually make this their first priority, in both development and delivery. Binaries should be released first, and they should be released simultaneously for all major Linux distributions. Furthermore, the development of said binaries should not be outsourced, or at the least should be tested by KDE developers.

It seems KDE feels its responsibility ends where a user's desktop experience actually begins. Mr. Granroth identified wanting to move binaries off of the KDE servers so users would know, in essence, that it wasn't their fault if things didn't work! This sort of hands-off hacker ethic has no place in an ostensibly user-centric project.

It isn't that the KDE project doesn't want to release binaries, it's that it's impractical. If KDE had to support every single distro out there (Red Hat, Caldera, Mandrake, Slackware, SuSE, Debian, and of course lots more) not to mention the several different OSes that KDE will run on (*BSD, Solaris, etc) with binary packages, there would be no time for coding! Besides, they would need access to a free machine that ran each of these, so they could compile.

Why should KDE do that when a small army of independent packagers can do it better, faster, cheaper? If there were no packagers working to provide binaries, then the KDE project would have an obligation to release their own binaries. But since all these nice people already do it for them, there's no point. Yes, sometimes the volunteer packagers make mistakes, but would the KDE project really be able to guarantee that their own volunteer packagers would be that much better? I don't see the benefit from the KDE project taking on that enormous burden.

Is it more impractical to end users or developers? Are we forgetting the goals of the KDE project?

What's really impractical is hoping that the KDE source code will somehow be cobbled together by users or individuals not associated with KDE into something consistently workable.

"Why should KDE do that when a small army of independent packagers can do it better, faster, cheaper?"

If the "small army" could actually do it better, faster, or cheaper then this news item would not exist. Outsourcing the package creation and then simply not caring/not supporting it is what makes this problematic. It also assures that some distributions get packages first, and that some (such as Red Hat) may not get them at all.

The point I'm trying to make is that spending time on the binaries should be an official part of the project. KDE developers should _collaborate_ to make sure that binaries are of the highest quality, and not simply hope that they will be "good enough" in the hands of one busy individual.

The biggest argument against this seems to break down to "it's hard," and "we don't want to do this." Well, I'm sorry to sound like a grump, but I just don't have a whole lot of sympathy. This sort of argument could be made against almost any developer-unfriendly activity, such as writing documentation or bug fixing. Unfortunately, both activities are absolutely essential to creating a positive and affirming user experience.

Similarly, creating packages that work well, and that have the KDE stamp of approval is part of being a desktop provider.

It's the job of linux distributions to support KDE packages made for their distro.

How could you possibly expect the kde project to know all the various quirks and library differences between every linux distribution??

If you want to blame someone, blame RedHat not putting up the resources to hire a packager to do the job. Do you really want to use a distro that only half-ass supports the best desktop software for linux anyway?

> "It isn't that the KDE project doesn't want to release binaries, it's that it's impractical."
>
> Is it more impractical to end users or developers? Are we forgetting the goals of the KDE project?

In which way did _you_ help KDE, to be able to say "_we_ are forgetting"?
IMO the goals of KDE are to deliver a high quality DE to the user and to have fun developing; most of us do it in our spare time.
It is simply impossible for us developers to create binary packages for each and every distribution/OS.

There is no denying that it would be nice to have all the packaging covered... but how about reality? Here is reality...

1) Pretty much every vendor modifies their KDE install. Should that be acknowledged or circumvented?

2) KDE is not just a set of applications. It is integrally tied to a user's graphical shell. This means that it has to address start up issues which vary not only between distros but often between point releases of distros. How many dozen times should you have to build your program and get it tested?

3) There are a number of dependency issues such as alsa and SSH, but these are more critical when it comes to compiled binaries. So there are lots of issues introduced here, especially if your install is not vanilla.

Here is a thought. Ximian has been at their installer idea for some time and are now heavily financed... yet to my knowledge they do not at this time support Mandrake or SuSE. Why? Has anyone here ever tried to make an RPM? I say tried because I know hot shot programmers who tried and gave up!

Here's a final consideration. We release Quanta Plus as source but previously had people donating RPMs. I also follow newsgroups. As I see it the biggest problems out there are with RPMs! Personally I'm getting sick of them, but grateful when I need one and it works right. Still they are so easily hosed. Files from one version move to another package... dependencies want me to install beta compilers. No thanks!

KDE is not open binary... it is open source. Doesn't that mean anything any more? I would love to see every user convenience... but I would hate to see two to three weeks of hell added to a release to do packaging and half the developers walk away because they are too busy handling support emails to program and it's no longer any fun to develop.

If you find this to be a problem I suggest you either get your money back from the KDE developers (hint, they are giving this away) or you pitch in to make your vision reality. Telling them they are hypocrites for not having the resources to pacify you and act on their every idea seems to be missing the spirit of open source!

"1) Pretty much every vendor modifies their KDE install. Should that be acknowledged or circumvented?"

Whatever works the best for each distribution. As developers, these and other questions are matters that need to be addressed internally.

"KDE is not open binary... it is open source. Doesn't that mean anything any more?"

It means that KDE is open source. It doesn't mean anything with regard to this discussion, which is about the focus of the KDE project. Are you or are you not out to create a user-friendly Linux desktop? What is the desktop market? Is it dominated by C++ developers, or people who simply want to get work done? You tell me.

And with that in mind, what is more valuable to end-users: source, or binary?

"Telling them they are hypocrites for not having the resources to pacify you and act on their every idea seems to be missing the spirit of open source!"

I don't expect KDE developers to "pacify" me though I do expect them to give some thought as to the ultimate goals of the KDE project, and just how they expect to get there with an end-user-comes-last mentality when it comes to binary distribution.

Is this IE (this is not my computer) being a bitch or am I just plain stupid ?!

I just finished a very long reply to the very comment I'm now replying to, and wanted to see how it looked -> preview. Ok, it looks fine, so I hit the browser's back button, overlooking the fact that the add button is on this preview page as well. Comment gone ! I hit forward again : "page expired".

Tell me, does IE just plain suck ? I tried this with konqueror and it remembers my comment.

Well, I don't feel like thinking about what to reply once more. Sorry.

The reality here is that Underthumb is, without a doubt, 100% correct.

Oh, now I know I've just stepped on the toes of every single Linux user/developer who's been in it for the long run, but that's just the way it goes. See, the problem here are the GOALS that KDE is going for... an "easy to use" or "user-friendly" desktop environment for Linux. Notice that no specific Linux distro is mentioned. (That's the nutshell explanation - Underthumb has quoted it earlier on in the discussion).

*If* that is what KDE is _actually_ shooting for, then it's nothing short of functionally retarded for developers to hide their heads in the sand and/or use the "not my job" excuse. Getting the software up and running is the _very first step_ in a user's desktop experience! It's the step that everything _else_ depends on! Not addressing this part of the process or foisting the responsibility of it off on others is pretty much the textbook definition of hypocritical. Publishing just the source code with an announcement, pat on the butt, and a "good luck" message simply _does not cut it_.

And questions like "exactly what have *you* done lately" and "how are you helping to contribute" are well beside the point, and all of you who ask this (rather childish) question damn well *know* it's beside the point. And in the (very unlikely) event that you don't understand why this is, then I'll tell you: put simply, *neither I, nor Underthumb, nor the legions of would-be KDE users posted a manifesto stating that we would make the Linux desktop more user friendly*.

That is why it isn't about my contributions, or Underthumb's contributions, or any other individual's contributions to the project. The people who work on KDE said that they are working on making the Linux desktop a more user-friendly experience, which makes it completely their responsibility to do so - *especially* with regards to the all-important installation process. My vote is that they should very well stick to that manifesto or change it to reflect the reality of their situation, whatever that reality may be (currently, I think it's "we want to create a user-friendly desktop shell that you'll only get to experience if someone knowledgeable sets it up on your machine.")

Binaries (and decisions regarding how and when these binaries are released) should be an integral part of KDE's development process. Period. Not to pacify myself or others, not because I want to see the developers suffer, not because everyone who is new to Linux is stupid; rather, because KDE said they were working to make things easier, and I'd like them to stick to their word and address what is possibly the biggest user-friendliness issue facing KDE at the moment.

If you're going to volunteer to do something, then damn it, put your best efforts into doing it.

That's my $0.02, take it for exactly what it's worth.

--WorLord (At least we can all rest easy knowing that Gnome has made many more - and far worse - mistakes in the last year or so)

I still do not see _any_ benefit from the KDE project going out and telling all the existing packagers "Thanks, but we don't need you anymore" and doing the whole thing themselves.

Look, the KDE project couldn't do a better job itself! First, it would have to go out and get people to do the packaging. Who would do it?

The core KDE developers? Not if you still want KDE to be developed! They would have to spend all their time making packages, fixing dependencies, resolving conflicts, answering package-related e-mails, etc. We can rule them out.

Should KDE go out and find all new packagers? Who would they draft into their service? It would be really hard to find new packagers.

The remaining option would be for KDE to adopt the existing packagers to be part of the KDE project. And this doesn't fix anything! Giving the packagers "honorary KDE developer" status wouldn't suddenly increase the quality of their work, and there's no "KDE stamp of approval" that will magically make packages function better either.

The fact is, the current packagers DO work almost as part of the KDE project. They have complete access to all the resources that are available to the developers (as do you, me, and everyone else who has Internet access).

I really don't see how to improve the situation. It's not like binaries aren't available, or something! Maybe the RedHat rpms are a week late. I'm sorry, I really am. (I'm sorry for anyone who can't use apt-get) But giving KDE the burden of providing packages, WHEN THERE ARE ALREADY THESE GREAT PEOPLE WHO DO IT, is silly.

What amazes me about your last post is not how well you criticized what I've said - which you didn't do at all, IMO - but rather, how well you've criticized things I *didn't* say. You've read so much into my post that you missed the boat entirely. Here, let's review:

"I still do not see _any_ benefit from the KDE project going out and telling all the existing packagers "Thanks, but we don't need you anymore" and doing the whole thing themselves."

Please quote where I suggested that the KDE project do this. (Clue: I didn't).

"Look, the KDE project couldn't do a better job itself!"

Yes, they could, but I'll get to that in a minute.

"First, it would have to go out and get people to do the packaging. Who would do it?
The core KDE developers? Not if you still want KDE to be developed!"

Oh, don't lay that one down. I ask you, who BETTER is there out there to supervise the compilation of a working binary (notice I didn't say "working binarIES") than the people who WROTE THE PROGRAMS themselves?? After all, it should be blatantly obvious (simply by the program's existence) that they are, shall we say, MORE than familiar with the whole ./configure, make, make install routine.

Learning to compile is the first part of learning to code. The first lesson I ever got in C was how to write and _compile_ a working "hello world" program. If they can _write_ it, it should be more than obvious by now that they can successfully _compile_ it.

"The fact is, the current packagers DO work almost as part of the KDE project. They have complete access to all the resources that are available to the developers (as do you, me, and everyone else who has Internet access)."

And they will probably still make their distro-specific packages regardless of what I'm about to propose. In fact, they will probably make their distro-specific packages BECAUSE of what I'm about to propose.

"I really don't see how to improve the situation."

That's because you've restricted your options to the point of futility. If you're willing to read it, I'm willing to offer a simple solution:

KDE should go the way of Opera, and give the option of a single _statically linked_ binary distribution that is completely distribution-agnostic (which is kind of the point of static linking). One huge *.tar.gz file (and it WILL, in all likelihood, be huge), maybe KDE-everything.tar.gz, that unzips to /opt/kde and contains all the lib.xxxxxx.so.x files (and such) required to run a self-sufficient, latest and greatest, shiny and new version of KDE. Perhaps an installation script that unzips the file for the user, and edits the .xinitrc file accordingly (not unlike WindowMaker) - and this is something simple enough even for peons like me to write.
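A minimal sketch of what such an installer script could look like, under stated assumptions: the archive name KDE-everything.tar.gz, the /opt/kde layout, and the startkde entry point are all hypothetical here. So the sketch can run anywhere, it builds a stub archive and installs into a scratch directory; a real installer would unpack into /opt/kde and edit "$HOME/.xinitrc" instead.

```shell
# Hypothetical installer sketch; archive name and paths are stand-ins.
PREFIX=$(mktemp -d)                  # a real installer would use /opt/kde
XINITRC="$PREFIX/.xinitrc"           # a real installer would edit "$HOME/.xinitrc"

# Build a stub KDE-everything.tar.gz so this sketch is self-contained.
mkdir -p stage/kde/bin
printf '#!/bin/sh\n' > stage/kde/bin/startkde
chmod +x stage/kde/bin/startkde
tar -czf KDE-everything.tar.gz -C stage kde

# The actual "install" is just two steps:
tar -xzf KDE-everything.tar.gz -C "$PREFIX"         # 1. unpack everything in one go
if ! grep -q startkde "$XINITRC" 2>/dev/null; then  # 2. hook KDE into X startup, once
    echo "exec $PREFIX/kde/bin/startkde" >> "$XINITRC"
fi
echo "KDE unpacked to $PREFIX/kde"
```

The point of the sketch is that the whole procedure reduces to unpack-plus-edit-one-dotfile, which is exactly the kind of thing a small GUI front-end could wrap.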

Thus endeth all of the problems, and everyone is happy. Would the file be huge? Yes. Yes, it would. But would it provide a simple, reliable, and easy to install version of the Latest KDE (read: a "user-friendly desktop experience")?

Yes. Yes, it sure would.

(And the idea can even be expanded upon! KDE-Barebones.tar.gz would be just KDE and the minimal amount of apps in the core packages. KDE-MidGrade.tar.gz would be business oriented. You get the picture by now, probably.)

NOW, whether or not the KDE crew would want to sully their hands with close integration with X or Y distribution or package format is entirely up to them, and a different matter altogether. If I were on the team, I'd vote against such a thing, though - and rightfully so, IMO. While I may believe that it's the KDE crew's responsibility to provide a working binary of their product, I do *not* believe that it's KDE's responsibility to bow and scrape to the various and strange niceties and inconsistencies of the various Linux distributions out there, and at no time have I ever stated or implied such as that. If Mandrake users want a version of KDE that ties in with the Mandrake Menu System and ends in i586, then the Mandrake Packagers can make it (which they will doubtlessly be doing for cooker, anyway). If the debian people want to use apt-get to update their KDE, then I'd say it's up to Debian to provide such things (again, they'll be getting around to this eventually anyway).

But I do think that they are fully responsible for offering *some kind of a working binary distribution of their product* in the interest of remaining true to their aforementioned _Stated Goals_. And I think a single statically linked binary that doesn't care WHAT distro of Linux it gets unzipped to would be just the trick to accomplish this.

>What amazes me about your last post is not how well you criticized what I've said - which you didn't do at all, IMO - but rather, how well you've criticized things I *didn't* say.

I'm sorry, I misconstrued your statement that "Underthumb is, without a doubt, 100% correct" as indicating that you actually agreed with what he said and were going to back up his argument. You were both apparently arguing that KDE should take on the full responsibility of providing binaries itself. You made no mention of your scheme for KDE to provide a single "official" set of portable binaries.

Now that you have, in fact, revealed your scheme in all its glory, I can respond to your actual argument. (by the way, why didn't you just say what you meant in the beginning?)

First of all, I agree that of course the developers are the ones who are best suited to compile KDE (although they are not best suited to adapting to the quirks of each and every distribution, which you seem to agree with).

However, I think your idea of providing a "single, statically linked binary" is not feasible for other reasons.

Here's the problem: KDE is not a single binary (not even close), so how do you propose to make a single statically linked binary out of it? Statically link every component, you say? My God, that's bloatware extreme! KDE memory usage would skyrocket! Only people with _over_ 256 MB of RAM would even be able to _think_ about running a statically linked KDE. A KDE with every component statically linked against QT would be impossible to run. Include QT as a shared library and statically link the rest, you say? Still extreme bloatware. The KDE project would _not_ be doing itself or anyone else a service by providing unusable binaries.

Besides, the utterly gigantic tar.gz file would be out of the reach of all but the most persistent modem downloaders.

These statically linked binaries would appeal only to a very small audience: Those people who have broadband connections to the Internet and also have truckloads of RAM that they don't mind wasting for no reason. Anyone else who tried these binaries would be repulsed by their large size and insane memory usage, and they would probably think of KDE as bloated and slow from then on.

This distribution method works well for Opera, and I think it's a great idea in their case. The difference is that Opera is one small binary. KDE is much larger, and is composed of many smaller binaries. This makes static linking impractical.

What really should happen is changes should be made in the GNU environment to facilitate moving binaries around much like Windows does. However, the focus of the GNU project has never been and never will be portable binaries, so it is unlikely that this will happen.

"Now that you have, in fact, revealed your scheme in all its glory, I can respond to your actual argument. (by the way, why didn't you just say what you meant in the beginning?)"

I *did* say what I meant in the beginning... but I had no idea you'd read quite so much into it.

"Here's the problem: KDE is not a single binary (not even close), so how do you propose to make a single statically linked binary out of it?"

KDE is a gaggle of programs (executables) that require a flock of files to run (libraries).

Provide all of them in one zip (tar.gz) file. Or, ideally, provide a bare-bones, mid-range, and full set of each in separate tar.gz files.

"Statically link every component, you say? My God, that's bloatware extreme! KDE memory usage would skyrocket! Only people with _over_ 256 MB of RAM would even be able to _think_ about running a statically linked KDE."

Actually, I completely fail to see how the RAM requirements would change one iota. It takes RAM to load the executables + libraries on ANY system. I don't really think it matters *where* those libraries are located on the hard drive. The only difference would be that the static KDE binaries would provide these library files in one single directory (as opposed to how different distributions place them in different spots on the filesystem).

Now, there WOULD be wasted resources and redundancies WRT hard drive space; there would be whatever version of QT came with the distribution AND a KDE-provided version of QT in the KDE directory (for example). And this would, undoubtedly, make the download bigger. But KDE is already out of reach of most modem users - I think I d/led the 2.1.1 Mandrake packages at roughly 80M. At that scale, what's another 20M tacked on? For downloads of that scale, I set it to download and go to bed either way, and it's all going to ZIP or CD-R anyway... both of which can handle the size.

"The KDE project would _not_ be doing itself or anyone else a service by providing unusable binaries."

They would be perfectly usable. Just a bit larger.

Whether or not they would be the most _practical_ solution for the _advanced_ Linux user is another matter entirely, and one that falls out of the scope of the argument. I suspect this is exactly where the Cooker/Rawhide/Woody people would step in and do what they've always done best, and I personally would probably wait for that (if for no other reason then to get the Pentium-optimizations that MD packages provide).

Does this make more sense, or am I off on understanding how Static things work?

The difference between shared and static libraries is that shared libraries are shared - mapped into memory once and used by every running app - while static libraries are simply copied into each executable, and therefore are not shared between different apps. Which means a statically linked KDE would multiply memory usage, and it would also multiply disk usage (it wouldn't be another 20M, but maybe another 80M, or more probably 160M, or maybe even 320M, or maybe even more; I'm really not going to find out).
This makes your 'universal static KDE binaries' idea useless and we're back at what's the best thing to do - to let the distros create the right packages... after all, they get paid to create packages for people, unlike KDE developers, right?
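The multiplication is easy to put numbers on. A back-of-envelope sketch, where both figures are invented purely for illustration (real sizes depend on which libraries are involved and how much the linker can prune):

```shell
# Back-of-envelope cost of static vs shared linking (sizes hypothetical, in MB).
lib_size=8      # one copy of a big library, e.g. a GUI toolkit
n_apps=30       # number of KDE programs that all link against it

shared_cost=$lib_size                # one copy on disk/in memory, shared by all
static_cost=$((lib_size * n_apps))   # every binary carries its own private copy

echo "shared: ${shared_cost}M  static: ${static_cost}M"
```

The exact numbers don't matter; what matters is that the static cost scales with the number of programs, and KDE is made of a lot of programs.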

Is it then possible to compile it shared, and simply include all the necessary shared *files* (libraries) in the distribution tar.gz? I've done this before on my local machine (with KDE library files of a different version) so I know something like it could be done.

And will you all stop saying that I advocate the idea that the distro packagers should stop, or are going to stop, making packages already? I've already clarified *twice* now that I don't believe that this should be the case, and frankly, I'm tired of hearing it stated or insinuated.

Including all of KDE's shared libraries with it would reduce the memory requirements compared to a statically linked KDE. It still wouldn't be an optimal solution, though, because there would be duplicates of nearly every shared library in memory. For example, KDE would come with its own glibc, and the system's glibc would also have to be in memory at the same time for all other applications. It might be worth looking into, though. Certainly it's much better than a statically linked KDE.
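The usual trick for shipping an app together with its own shared libraries is a small wrapper script that puts the bundled lib directory first on the dynamic loader's search path via LD_LIBRARY_PATH. A sketch, assuming a hypothetical self-contained tree under /opt/kde (the path and the startkde name are illustrative, not a real layout):

```shell
#!/bin/sh
# Hypothetical launcher for a KDE tree that bundles its own .so files.
KDEDIR=/opt/kde
# Bundled libraries take priority; fall back to the system's copies after.
LD_LIBRARY_PATH="$KDEDIR/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export KDEDIR LD_LIBRARY_PATH
echo "loader search path: $LD_LIBRARY_PATH"
# A real wrapper would end with: exec "$KDEDIR/bin/startkde"
```

This is roughly how Opera's Linux packages worked; the downside is exactly the duplication described above, since every bundled library loaded this way sits in memory alongside the system's own copy.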

Still, though, I think that KDE would be better off letting people download their distribution's packages, and staying out of the binary distribution business. It reduces the support load that KDE must bear, and it provides a smaller download, better performance, and better distro integration for the end user.