As much as possible they're going to share the same code, e.g. message display (using a KPart, I think), so there's no duplication of functionality. There's no chance of them merging, though; most of the KDE developers prefer to have a separate app per function. It would be nice if someone duplicated KOShell with KMail, KNode and KOrganizer parts, though...

I agree that for most things individual apps are best, but that's not necessarily true all the time, and each case should be judged on its own merits.

In this case I think a combined app would be a good thing. Being able to save news articles and emails of common interest side by side in the same folder would be very useful for me. You'd only have one folder hierarchy to manage, and finding/searching for information would benefit too.

Being able to save messages from Usenet into a mail app does not require both of these items to be the same application, though. It would require KNode to be able to save messages to KMail folders. More chit-chat between the two applications would be sweet, but I wouldn't want to see a Mozilla-style solution making them the same application.

This is similar to the notion of how KMail pulls from KAB to get addresses, or KNode now offloads sending E-Mail through to KMail. Better integration, not a merger.

Is this a joke?
Sorry if this sounds harsh, but what do you mean by "sleep"?
Power management? That is a kernel issue: it should put your computer
into power-save mode after, say, 30 minutes of no user activity.
You mean something like "shutdown -z"; see "man shutdown" for further details.

I don't even have a laptop, but rather a desktop machine. I live in California where power is expensive and in short supply, why be wasteful?

I can't run "apm -s" from an xterm because that leaves me logged in, and since I've got multiple users, I may not be the user that wakes up the computer! (There is probably a way to work around this, but that's not the point.)

Currently I set up my BIOS to put the computer to sleep automatically after a couple of hours, but this tends to screw me up during long FTP transfers when there isn't any keyboard/mouse activity.

A "suspend" option next to "logout" and "shutdown" in KDM would be very nice. Windows 2000 does something very similar that I use regularly at work.

What about those of us with SMP (multiprocessor) machines?
... someone must have seen this coming out of the kernel at boot time: "APM is not SMP safe... omitting" (or something really close to that; I can't check right now since I'm on a friend's 'net)

So a way to tweak it would be in order. I imagine something like:

# KDM cfg

shutdown_exec=shutdown -h now
sleep_exec=apm -s

# /KDM cfg

this would give me an opportunity to write a script that spins the disks down and shuts off the monitor... and hope the HLT instructions kick in :-)
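The kind of helper script imagined above might look like this minimal Python sketch. Everything in it is an assumption about the local setup: hdparm, xset and apm must be installed, /dev/hda is a guess at the disk device, and the --dry-run flag is a made-up convenience for seeing what would run:

```python
#!/usr/bin/env python3
"""Hypothetical sleep_exec helper: spin the disk down, blank the monitor,
then ask APM to suspend. Adjust the commands for your own hardware."""
import subprocess
import sys

# Commands to run, in order, before going to sleep. All are assumptions
# about the local machine; replace them with whatever your setup needs.
SLEEP_COMMANDS = [
    ["hdparm", "-y", "/dev/hda"],      # put the IDE disk into standby
    ["xset", "dpms", "force", "off"],  # turn the monitor off via DPMS
    ["apm", "-s"],                     # finally, suspend via APM
]

def run_sleep(dry_run=False):
    """Run each command in order; in dry-run mode just collect and report them."""
    executed = []
    for cmd in SLEEP_COMMANDS:
        if dry_run:
            executed.append(" ".join(cmd))
        else:
            subprocess.run(cmd, check=False)
    return executed

if __name__ == "__main__":
    for line in run_sleep(dry_run="--dry-run" in sys.argv):
        print(line)
```

Pointing the hypothetical sleep_exec= key at a script like this would keep KDM itself hardware-agnostic.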

Some wishes I have for KDE3:
putting in the new ksplash that's at kde-look.org, it's really cool.
putting a shutdown option in the KDE logout dialog - IMHO KDE2 should already have had one.
but most of all, speed. YES, I KNOW about the problems with C++, but why do they happen? Because of too many libs and fat programs. Can't they try to simplify the apps more? I mean at the source level. I know how much better a program can get after two or three partial rewrites, because while you're programming the thing you learn a lot, and the original code isn't as optimized as it could be because the programmers didn't know as much yet. This really happens, even with skilled programmers, and it happens more with open source and newbie programmers.
So it would be a good idea for programmers simply to take a look at old code and try to clean it up a lot after adding new functions.
(remember, I'm not saying all programmers do it, but some do, I do) :)

> putting in the new ksplash that's at kde-look.org, it's really cool.

Which one? There are tons at kde-look.org. The development version does have a new splash picture: it's Konqui on an orange and green background, and it says "KDE 3.0, coming soon to your desktop", or something like that. KDE 3 will definitely have a new splash screen.

> but most of all, speed. YES, I KNOW about the problems with C++, but why do they happen? Because of too many libs and fat programs.

No, they happen because the GNU linker is inefficient. They have nothing to do with "too many libs". Shared libraries are *good* because they reduce program size.

A point Mosfet raised at some point, which I think would be really useful, would be to add an alpha channel to the theme format so you could have translucency in normal pixmap themes instead of having to write a custom engine like Liquid. Then we'd have one *foxy* desktop. Better yet, how about doing the same thing for KWin themes, so we could have shadows under the windows too? Of course, I suppose this'll all have to wait till KDE 3.1.

I want this too!!! Most of the users I've convinced that KDE is better fell in love with it because of its looks.

Please hear my plea, O great and worthy developers: beg-beg-beg-please-please-please get this in before the code freeze... please?

pro: this would give us an extra point in comparisons like the one posted here on Mon, Nov 05

con: it would take some time away from more important things... but from what i've seen so far you guys know some sort of magic? (can I get a copy of your bible, "how to become an enormously productive and cool hacker in no time at all"?)

I also agree. KDE is so sloooow... :-(
I keep using it, but the shitty Windows 2000 is a lot more responsive on the same machine. Explorer, for example, loads in a flash, and the same goes for the file manager...
Everybody is blaming the GNU linker, but maybe that's not the only reason. We should also review the KDE code. Please. Stability is OK - KDE doesn't crash - so the next step is speed, and we should work on it now.

What about spending six months only on optimizing the code?
It doesn't necessarily mean that we would stop adding features, just that we'd concentrate a little bit more on the already-written code, improving it where possible...
Please, please, please, just think a little bit about that.

Huh? My IE and Explorer crash constantly, and on W2K at least, this NEVER brings down the entire system, it just automatically restarts. So if it is true that IE runs in the kernel, then it is at least done very well.

>Explorer loads in a flash because it's part of the Win2k kernel.
>If konqueror was in the linux kernel - it would load just
>as fast (if not faster) than Explorer does in Win2k.

Under MacOS, IE also loads really quickly, and it certainly doesn't have any privileged status there. It loads much faster than Konqueror running on the same hardware under Linux, and far, far faster than Mozilla on MacOS.

Don't be ridiculous. Of course Explorer isn't part of the Windows 2000 kernel! Explorer is a userspace application just like everything else. Even if it gets preloaded at startup (like Office supposedly does), it shouldn't make a difference after the first run, because the (superior) Linux filesystem cache will have kept the data in RAM. KDE 2 is great, but it's really bad speed-wise (of course, so is GNOME). Just resizing windows (using the very fast DotNET theme, I might add) is enough to give my poor 300 MHz/256 MB computer fits. In Windows, even complex apps like Visual Studio resize faster than I can move the freaking cursor. And don't blame X either: ROX-Filer (which uses GTK+) also performs really well. And it's not Qt; Qt on Windows performs almost as well as native Win32 widgets.

if resizing windows is too much for your system, turn off the setting to show window contents while resizing (in Look 'n Feel -> Window Behaviour, IIRC). it's that simple. and i do believe that this is an X-related issue.

as for Linux caching files in RAM making for faster subsequent execution, this does not (i.e. cannot) affect run-time issues such as linking.

this is a valid point - windoze (any version) can resize windows a whole lot faster than kde.
also: try to grab a window and move it around quickly. i am running on solaris here, and it sure does not respond at all. at home under linux it is a lot better, but it does not even come close to the snappiness windoze shows.

on the other hand, try the RMB menu in win2k - it's dog-slow.

basically my experience (i.e. feeling) is that startup times are slower in kde, while application responsiveness (once the app has finished loading) is faster.

and graphics operations (like moving or resizing windows) are (a lot) faster in windoze.

Try disabling hardware acceleration in Windows (somewhere in Display Properties) and you will see that moving/resizing is not so fast there either. And I think there is no hardware acceleration for stuff like that with X/KDE, at least not with my graphics card (NeoMagic 128XD). But I think that sooner or later XFree86 will provide efficient hardware acceleration for stuff like scaling, and hopefully software such as KDE will benefit greatly from it.

yeah - that is what i think too. and i am glad that there are great guys working on making X even better.

but for a "normal user" all the technical stuff does not really matter - he sees how "slow" (read: not hardware-accelerated) window resizing in KDE is, and concludes that KDE (or Linux, for that matter) is slow.

i mean: it's not like i resize my windows the whole day, right?!

often it's more a question of how it "feels" - and long startup times and slow window resizing/moving make a system feel slow.

Actually, XFree86 does have hardware acceleration; there is just a problem with resizing windows. Something interesting to note, though, is that Windows XP is *much* slower at resizing windows than previous versions of Windows. It is almost as slow as KDE.

let me reassure you... they did not wait to look into that.
kde 2.1 was noticeably faster than 2.0 for me, and 2.2 even more so.

so, they did not "forget" (!!) about that issue... far from it. if you have ideas on how to make it even better, though, you are welcome. the kde developers are already doing a lot.

PS: such messages are useless and rather irritating for developers (i remember a message in which some kde developer got really upset: "what do you think we are doing? twiddling our thumbs????"), and they are not wrong either.

so thank you for the improvements already made, and i'm looking forward to seeing even more of them!!

first of all, as somebody already stated, the developers most definitely do not "forget" about speed when coding. to say such a thing is an unjustifiable slam on the people working on KDE.

now, for some answers (hopefully =) to your questions...

during the software creation process, correctness comes first and optimization last. if you pay attention to correctness, not only do you get robust apps (which is more important than speed), but you also get programs that can be optimized much more aggressively later on, since the algorithms in use are efficient and designed for future optimization. are these optimizations occurring? compare the speed of 2.0 to 2.2.1 and i think you'll have your answer.

with kde3, the libs are being streamlined in ways that have not been possible since 2.0, since binary compatibility is now allowed to be broken. there are even hackers who have been hired full time solely to work on efficiency issues in kde3 (Lubos Lunak comes to mind; thanks SuSE!). kde3 also marks a new point in robustness and completeness for kde, which means the optimization phase is just beginning for some components/apps.

finally, it is not a myth that the GNU linker is to blame for most of the start-up time of a KDE app. with prelinking, apps start up much faster. an alternative would be to statically link all the apps so there are no relocations due to shared libs, but this increases the total memory usage of the desktop as a whole, probably severalfold for most users. this in turn causes new types of efficiency problems (including, but not limited to, slowness due to swapping). so static linkage is not a win. instead, efficient shared library linking (which includes prelinking) is the answer to that part of the speed puzzle.
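Since the claim above is that relocation work at load time dominates start-up, a tiny harness like this can at least put numbers on it. This is just a sketch: /bin/true is a stand-in target (you'd point it at a real KDE binary before and after prelinking to compare), and the averaging over a few runs is a simple way to smooth out cache effects:

```python
#!/usr/bin/env python3
"""Measure average wall-clock start-up time of a command, e.g. to compare
a binary before and after prelinking. /bin/true is only a placeholder."""
import subprocess
import time

def startup_time(cmd, runs=5):
    """Launch cmd `runs` times, waiting for exit; return mean seconds per launch."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        total += time.perf_counter() - start
    return total / runs

if __name__ == "__main__":
    avg = startup_time(["/bin/true"])
    print(f"average start-up time: {avg * 1000:.1f} ms")
```

Note this measures the whole launch (fork, exec, dynamic linking, and the app's own init), so it only isolates linker cost when the two binaries being compared differ in nothing else.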

I do agree, but notice that KDE is progressing to a new version that may (or may not) be slower than 2.x because of tons of new features BEFORE getting 2.x to a decent speed.
I really don't want to blame the developers, just to point out that they're putting the cart before the horse.

well, i can't notice it, since that isn't what is happening at all. 3.x is merely a port of 2.x; in fact, 3.x is a step towards making 2.x run at a "decent speed". personally, i'm fine with the current speed on my PII 400 w/256MB RAM, but it can be better, and the developers are working towards that.

this perception that developers simply pile on tons of features without also doing optimizations, cleanups, etc. is complete nonsense. each release of KDE2 was faster, and if you actually follow the devel lists you will know why.

as a point of fact, a commit was made to CVS just today to speed up KListView so that traversing a 5k-entry tree dropped from 800+ ms to <30 ms. that sort of thing happens rather regularly and is definitely what i would call "optimizations to make things run at decent speeds".

heheh, you're right.
What people simply wanted was for the KDE team to stop for a while and focus more on speed than on new features.
And I am not comfortable with KDE's speed on a K6-2 500MHz with 192MB of RAM, probably because of the lack of obj-relink in Red Hat's RPMs :(
But let's wait and hope that KDE3 comes with lots of speed improvements, not like 2.0 - the way 2.2.1 is far faster than 2.0 shows that the speed work was done too late, IMHO

actually, it shows the speed work was done at the right time: post-stability. you can't really do it before then. and yes, 3.0 will be faster than 2.0 was. 2.0 was a completely new architecture, which meant tons of new code that lacked real-world testing and had very little in the way of optimizations. 3.0 is 2.2 ported to Qt3 plus a few new features, so expect the sort of improvements we would normally have seen with a 2.3.

konsole is a great example of work being done to speed up KDE apps. in 2.0, konsole took quite a while to start up. a lot of work went into speeding it up, and it really, really shows: load times dropped from >4s to <2s on my system. truly impressive. thanks for the hard work, it really paid off!

The best thing you can do to reduce program size is not stuff so much cruft into the programs! I'm sorry to say this, but the high point of program evolution was reached in Windows NT 4.0. It provided just the right amount of features (object embedding, etc.) at just the right "bloat level". These days KDE offers NT4 features at WinXP bloat levels, and I don't see it getting much better from here on in. I think the BeOS coders had the right idea: force all developers to work on slow machines so everyone will be pleasantly surprised to see how fast things run on normal systems.

PS> And no, features and speed/bloat are not inherently at odds. One just needs to be willing to take a little time out from implementing features to make things fast. It makes development go slower, but as Linus himself will tell you, that's often a good thing. Something like an alternating feature/quality cycle would be great (in essence, isn't that what major/minor releases are for?). That way you would do a major release with tons of new features, then spend a year making the code top quality, then repeat the whole process.

Why does everyone think that the KDE developers are totally oblivious to performance issues and must be told how to develop KDE or else they'll get it wrong? The developers know that performance is important. They are always improving the performance (as can be seen by the progress from 2.0 to 2.2). They do not need to be told that they should develop a different way to get more code speed. If the progress is not fast enough for you, then buy a C++ book and get hacking! The current KDE developers already are going as fast as they can, and the progress is visible.

"Something like an alternating feature/quality cycle would be great (in essence, isn't that what major/minor releases are for?)"

In essence, isn't that how every major/minor KDE release has been? What is the problem?

"That way you would do a major release with tons of new features, and then spend a year making the code top quality, then repeat the whole process."

Yes. That's the idea of KDE's release schedule. They don't need to be told this, because they are already doing it!

1) DCOP. It's really cool and nifty and all that, but it is in essence just an IPC mechanism. There is a lot of overhead in DCOP, involved mainly in turning messages into function calls, that doesn't exist in a pure messaging system. Most IPC systems can easily pass a hundred thousand (small) messages per second on a modern system, while DCOP is somewhere in the tens of thousands. By switching to a pure messaging model, you might lose some of the niftiness (everything is an object and I can make remote function calls!) and it might not be quite as OO in terms of programming, but you'd get rid of a lot of cruft.

2) aRts. There is no real need for a sound server these days (except for networked audio, and that's a special case). Sound servers are throwbacks to the days when most Linux audio drivers didn't do hardware mixing and a program had to do it in software. These days, almost everything has an ALSA driver (which can emulate mixing in software for the cards that don't have programming specs for their mixer), and newer Open Sound System drivers (like the SB Live!'s) support hardware mixing as well. Plus, aRts doesn't provide a nice interface to hardware DSP features and such. Now, the other part of aRts, the media framework, is quite nice. However, it could be implemented much more efficiently using a generalized messaging system and shared memory instead of MCOP.

3) I/O slaves. I/O slaves provide interesting functionality, but they could be better implemented as shared modules with async I/O.
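For a rough sense of the raw message rates being compared in point 1, here is a minimal sketch. It is not DCOP or any KDE API: it just pushes small fixed-size datagrams through an AF_UNIX socket pair within one process, so it shows the ceiling a "pure messaging" design starts from before any marshalling or server mediation is added:

```python
#!/usr/bin/env python3
"""Micro-benchmark of raw local message passing over a Unix-domain
socket pair. Illustrative only; real IPC systems add marshalling,
dispatch, and (for DCOP) a mediating server on top of this."""
import socket
import time

def message_rate(count=100_000, size=32):
    """Send `count` messages of `size` bytes through a socketpair;
    return messages per second."""
    a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)
    payload = b"x" * size
    start = time.perf_counter()
    for _ in range(count):
        a.send(payload)   # enqueue one small datagram
        b.recv(size)      # drain it immediately so buffers never fill
    elapsed = time.perf_counter() - start
    a.close()
    b.close()
    return count / elapsed

if __name__ == "__main__":
    print(f"{message_rate():,.0f} small messages/sec")
```

The gap between a number like this and what a full RPC layer achieves is exactly the "overhead in turning messages into function calls" the post is talking about.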

RPC is not an optional item for a modern desktop to function effectively, and neither are many of the rest of DCOP's features. the ability to transmit thousands of msgs per second is quite fast enough for its application: a graphical desktop. if it were a large transaction server it would be too slow, but that isn't its purpose. in fact, DCOP was created as a faster and more stable alternative to the CORBA tools that were used early in KDE2's development.

creating a DCOP client takes extremely little time and only requires ~100K of RAM across the entire desktop, so it isn't a start up time or memory problem. and it does all of this with a rather clean interface that integrates well with the rest of the KDE framework.

the overhead in DCOP you refer to is not really due to RPC but to the use of strings in the msgs and the fact that it is mediated by a server (vs. direct process <-> process communication). but even then it still isn't bad enough to be an issue in a desktop setting, and the benefits of doing it this way outweigh the problems. for some decent info on the performance issues with dcop, take a look at: http://www.arts-project.org/doc/mcop-doc/why-not-dcop.html

2: aRts ... i'm not into multimedia so i shouldn't comment on this one...

3: I/O Slaves. well, they are shared out-of-process components that provide async I/O (and recently Charles Samuels released an early version that does synchronous I/O if you need that). so i don't see what your gripe is, as they do exactly what you suggest. making them shared libraries wouldn't improve anything.