The much-anticipated release of KDE 3.1, originally scheduled
for this week, has been delayed, most likely until early next month.
On the positive side, the delay could not have been for a better reason.
Dirk Mueller, the KDE 3.1 Release Coordinator, explained
that the delay was caused by a security audit of the 3.1 CVS tree. The audit was prompted by the identification of a class of vulnerabilities by
FozZy from the "Hackademy Audit Project" (thanks to FozZy and all others
who help identify security issues in KDE, and a big thanks to Dirk
Mueller, Waldo Bastian, George Staikos, Lubos Lunak and the others
who are leading or helping in the current security audit).
After discussing the issues with the packaging engineers and KDE
developers, and in
light of the upcoming year-end Holidays, the decision was virtually
unanimous to wait until early January for the official 3.1 release.

While the decision was a difficult one, and is sure to disappoint quite
a few people, hats off to the KDE Project for making the right decision
and treating security with the importance it warrants.
The security fixes will be backported to KDE 2.2.2.

In the meantime, what was to have been KDE 3.1 (with some, but obviously
not all, of the security audit completed) has been re-tagged as KDE 3.1 RC5 and is now available for testing. The KDE Project
hopes that with this release more bugs will be found and reported by the
community so they can be fixed while the security audit continues.
Stay tuned.

Okay, I too think that KDE should indeed wait until all severe bugs are fixed before releasing a final version. But I think bugs would be found much faster if RPMs were made for the release candidates.

In the thread about KDE 3.1rc2 on this site, in answer to the question whether RPMs would be made available, someone said:
"no, because kde 3.1 final is right around the corner.. too much work. this is meant for people willing to test anyways, and that means ppl who compile from src"

Wrong on two counts: that was posted on November 5th, so 'around the corner' was a little too optimistic, and I am willing to use a release candidate and report bugs, but I really don't have the time to compile the source myself.

"Okay, I too think that KDE should indeed wait until all severe bugs are fixed before releasing a final version. But I think bugs would be found much faster if RPMs were made for the release candidates."

I have to agree with this. I tried compiling KDE from source once, and I'll *never* try that again. If RPMs were put out, I'd download them, install them, and it'd take up about four hours of my time--mostly in the downloading, which I can automate and spend elsewhere. But if I try to compile on my 450MHz Celeron/96M RAM (which can handle Red Hat 8.0 with KDE 3.0.5 just fine--I'd never even *think* about Windows eXPloit on this box), I'd be waiting for it to finish compiling when the next release came out. Testing RPMs--whether they came from KDE or from Red Hat--would definitely be a major convenience (hint, hint).

You can 'nice' the build so it won't interfere with other work. Install it into "/usr/kde3/" and then change your KDEDIR and PATH after the whole thing is installed.

It is a real pain the first time you build it, and from time to time I do screw up something on my system and it won't build again. But, if you don't install other things from source, you won't have that problem.
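A minimal sketch of that approach, assuming a generic KDE 3.x source tree and a Bourne-style shell (the prefix and flags are illustrative, not an official recipe):

```
# Illustrative only: build a KDE RC into its own prefix so the
# stable install stays untouched; nice the build so it doesn't
# hog the CPU while you keep working.
./configure --prefix=/usr/kde3
nice -n 19 make
make install          # usually as root

# Afterwards, point your session at the new tree:
export KDEDIR=/usr/kde3
export PATH="$KDEDIR/bin:$PATH"
```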

The idea is about building RPMs for those that use that package manager (I use Gentoo, so...). An RPM person (and I guess a DEB person too) could install it in about 5 minutes (minus download time, of course).

That is what the discussion is about. Again, I'm on Gentoo. I EXPECT things to take a while, sometimes even a couple of days to finish. But an RPM user that *WANTS* to help the KDE project by helping debug it will want RPMs (or DEBs, of course).

"Okay, I too think that KDE should indeed wait until all severe bugs are fixed before releasing a final version. But I think bugs would be found much faster if RPMs were made for the release candidates."

But surely these sorts of bugs, security bugs, are (as has just been shown by their being found by an audit rather than by actual use) /not/ likely to be found faster if binaries are available? Sure, bugs *in general* will be, but these?
That's not to say, of course, that security bugs aren't found if the source isn't available - IIS is proof of that!

If KDE-3.1 had to be delayed because of _security_ bugs, the RCs should be _immediately_ deleted by the users that decided to compile them when they went out. So, what makes RC RPMs necessary in all this?

In initial message: "The security fixes will be backported to KDE 2.2.2."

I conclude from this sentence that these security issues are not new at all; people have just become aware of them now. So why delay the release? Almost everyone already has these bugs on their system...
Why don't they get the security fixes ready for KDE 3.1.1 or something like that and release KDE 3.1 now?

I don't think so. Take a look at the stats. You've got 670MB in the page cache, 45MB free, and 40MB in buffers. All of that is available memory which is serving as a file cache. So you've got 750+ MB free. That's only about 250MB used, which seems about right. I've been running for a few hours now, and I've got 175MB used.

I think you are reading the numbers wrong.
I'll bet you are only looking at this bit: "Mem: 1033592k total, 989508k used" and then you think all your memory is used up - NOT TRUE!
Most of that memory is being used for buffers, cache, etc. If a program on your box were to request a 100MB block of memory, there would be no problem satisfying that demand.
The reason you see such a high number as "used" is simply because Linux will always try to use as much memory as possible for cache and buffers to improve system performance, but as soon as some program needs the RAM, it is immediately released and handed over to the program.
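The arithmetic behind this is easy to show. The sketch below uses a canned "Mem:" line with the figures quoted in this thread (a 1GB box) rather than live output; on a real system you would pipe `free` straight into the awk step:

```shell
# Sample "Mem:" line as printed by `free`, columns:
# total used free shared buffers cached (all in kB).
mem_line='Mem: 1033592 989508 44084 0 40960 686080'

# Memory that is really available = free + buffers + cached.
echo "$mem_line" | awk '{ printf "available: %d kB\n", $4 + $6 + $7 }'
```

With these numbers that prints "available: 771124 kB", i.e. roughly the 750+ MB mentioned above.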

This does not signify a memory leak, it only signifies that you do not know how Linux works.

If you want to go hunting for memory leaks, I suggest you take a look at the actual memory usage of a specific application, not of the system as a whole.
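For example, assuming a procps-style `ps`, you can sample the resident set size of a single process; here our own shell stands in for a KDE application such as konqueror (whose pid you would get from `pidof konqueror`):

```shell
# Print the resident set size (RSS, in kB) of one process.
# $$ (this shell) is only a stand-in for a real KDE app's pid.
rss=$(ps -o rss= -p $$)
echo "RSS: ${rss} kB"
```

Sampling this value over time for one application is far more telling than staring at the system-wide "used" figure.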

3) Turn the eye candy down a bit. Stuff like transparency, antialiasing, animated menus, a huge background image, etc. all take a chunk out of your performance.

4) Make sure your X configuration is making the best possible use of your graphics card. There are plenty of ways to improve general X performance.

5) Go over your system configuration and make sure you are not running a lot of programs you don't need/use anyway. Many distributions run lots of daemons and other stuff by default that you may not need - disable some of that to free up some CPU cycles and RAM.

I'm sure you (or someone else) can come up with other good ideas, but now you have a place to start if you want to improve performance :-)

And how do you expect that compiling with "-pipe" will improve the performance of KDE? As far as I can tell, it only controls the communication between the various build tools during the build process and has no effect on the contents of the output files. From "info gcc":
"
`-pipe'
Use pipes rather than temporary files for communication between the
various stages of compilation. This fails to work on some systems
where the assembler is unable to read from a pipe; but the GNU
assembler has no trouble."

> 16. Add your hostname to /etc/hosts (if it's not already there)

What about all the people who do not get a static IP address/hostname from their ISP but only a dynamic IP address? Should they have anything more than "127.0.0.1 localhost" in that file? How and when do the contents of that file affect the performance of KDE?
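For what it's worth, the point of the tip is presumably that a local hostname which cannot be resolved makes lookups stall until a DNS timeout, which can slow application startup. A minimal /etc/hosts for a dynamic-IP box (the hostname here is made up) might look like:

```
127.0.0.1   localhost
127.0.0.1   mybox.example.org   mybox
```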

> And how do you expect that compiling with "-pipe" will improve the performance of KDE? As far as I can tell, it only controls the communication between the various build tools during the build process and has no effect on the contents of the output files. From "info gcc":

Can you tell me how to achieve these optimisations, please? Or point me to the appropriate documentation.
I'm running a dual P3 (Katmai core) with 256MB of memory and I've never checked whether the things I compile are optimized for my CPU.

My recommendation is to always build KDE with --enable-debug=full and then strip the installed binaries and libraries. When you need to report a bug you don't have to rebuild; just reinstall the binary and the libraries it uses.

I assume that strip --strip-all will remove all the overhead produced by --enable-debug=full, except for the debug output that is seen when running applications from Konsole.

> 4) Make sure your X configuration is making the best possible use of your
> graphics card. There are plenty of ways to improve general X performance.

And what would that be, except for decreasing resolution/colors, which I do not want to sacrifice?

You clearly don't understand what Konqueror is or how it works. It can easily be removed, as can all of its plugins and components. This has been demonstrated time and time again. If you can honestly find a reproducible way to trigger a leak then file a detailed bug report and we will fix it. Valgrind is a great tool for that.
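A typical Memcheck invocation for this kind of hunt (flag names as in valgrind's documentation; this is a sketch, not an official KDE debugging procedure) would look something like:

```
valgrind --tool=memcheck --leak-check=yes konqueror
```

The leak summary printed at exit is what you would attach to a bug report.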

I can reproduce a leak in Konqueror 100% of the time. I just use it to browse the web, and slowly but surely its memory usage increases. However, if I run Konqueror in valgrind it is unusably slow, so that's out. Konqueror does leak memory, and KDE developers denying that isn't going to help. :)

I have reported two scriptable memory leaks in 3.1rc3, and these seem to have been fixed. However, I think the Konqueror/KDE developers should be putting some serious effort into tracing memory leaks (or switch KDE to a garbage-collection strategy).

It's normal for the process size to grow somewhat with time due to fragmentation of the free memory. However, over time the memory use should stabilize; e.g. under normal workloads, if you have used it for 2 days, it should hardly increase on the 3rd day.

If that's not the case then there is something wrong indeed, the key is then to identify the site or construct that causes it :-}

Please do not troll about Linux with such silly argumentation.
Linux without KDE is fast on low-memory machines.
KDE requires a huge amount of RAM, and this is definitely not the OS's fault. Rox is much, much faster/lighter than KDE or Gnome and does not require that much, and so is XFCE. Linux/FreeBSD/whatever OS is not responsible for the amounts of RAM that both Gnome and KDE require.

KDE does require 256MB of RAM to run well. That is NOT the OS's fault.

However, it *is* the OS swapping virtual memory to disk that results in KDE being slow. This actually does not affect the speed when running an application. However, when starting an application or switching to a different application, the first thing that happens is a VM disk swap. This is what slows it down.

In today's market, 0.75GB of memory is not expensive. So I don't think that the 256MB (0.25GB) of memory that you need to run KDE is a huge amount of RAM.

I suppose the key (and variable) term is "runs well". I have it on a few 64MB systems around here and it runs just fine with recent releases on recent distros (e.g. MDK9 and SuSE 8.1). No fancy wallpapers or other memory-consuming items and only 3-4 apps running at a time, but it's snappy enough for daily use...

And by the way, is there a doc/FAQ anywhere on how to compile KDE 3.1?
I searched and searched... but the guys from KDE have apparently forgotten to put any instructions on the website, and the FAQs there are only for old versions...
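For what it's worth, a source build of that era went roughly like this; the package list is abridged and the directory names are illustrative, so check each tarball's own README/INSTALL for the specifics:

```
# Build the core packages in dependency order:
# arts first, then kdelibs, then kdebase, then any extra packages.
for pkg in arts kdelibs kdebase; do
    cd $pkg-3.1
    ./configure --prefix=/usr/kde3
    make && make install
    cd ..
done
```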