
"this bug has long past the point where it is useful.
There are far too many people posting with different issues.
There is too much noise to filter through to find a single bug.
There aren't any interested kernel developers following the bug."

It is not even a bug report; it is just a random flame fest.

Yeah, but that's only one of the reports. And it seems you didn't even read it: Jens Axboe is still working on it, and his patch is there too. Your reaction is very funny.

This is a problem in Linux with scheduling I/O and many cores. One process gets all the bandwidth and others can't get a word in edgewise.

It seems it's not:

Fair queuing would allow many processes demanding large levels of disk IO to each get fair access to the device, preventing any one process from denying the others.

Even SFQ allowed this, and CFQ went even further. However, I don't expect you to know this (if you don't even understand kernel versioning...). You and your friend already proved this in another thread.

My desktop applications freeze when there's heavy I/O. Of course others might pretend everything is OK simply because they don't understand the issue and think these freezes are normal/acceptable/unavoidable or they don't get them at all.

I do. There's a problem with Linux I/O and graphics. I don't know who's at fault. The problem is there. I always had it, with every PC I ever used. Windows does not have this problem; the GUI is always fluid no matter how heavy the I/O load is.

Yes, there's definitely a problem with I/O on some configurations. It's been there since 2.6.18, as mentioned in the bug report (but it's due to a bug, not due to design like some trolls want to claim; it's a long-standing one, though, and not everyone is affected). Graphics is another case.

An easy way to check whether you're affected is to copy a file that is bigger than your RAM. The system becomes unresponsive for some amount of time.
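A minimal sketch of that check, assuming GNU coreutils on Linux. SIZE_MB and the /tmp paths are my own choices, not from the thread; the default size is tiny so the script is safe to run as-is, and it has to be raised above your RAM size (e.g. SIZE_MB=8192 on an 8 GB box) to actually provoke the stall:

```shell
#!/bin/sh
# Sketch of the "copy a file bigger than RAM" check described above.
# SIZE_MB defaults to a tiny, safe value; set it larger than your RAM
# to trigger the effect. GNU dd is assumed (status=none).
set -eu
SIZE_MB=${SIZE_MB:-16}
SRC=$(mktemp /tmp/iostall-src.XXXXXX)
DST=$(mktemp /tmp/iostall-dst.XXXXXX)

# Create the test file from /dev/zero in 1 MB blocks.
dd if=/dev/zero of="$SRC" bs=1M count="$SIZE_MB" status=none

# Copy it; on affected kernels the desktop freezes while writeback runs.
cp "$SRC" "$DST"

rm -f "$SRC" "$DST"
echo "copy finished"
```

While the copy runs, try switching windows or typing in a terminal; the freezes being discussed are exactly the stalls you would feel there.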

I work with bugs every day. When the developer marks a bug as closed, that means "I'm not working on this any more."

So check the date it was closed and when Jens uploaded the patch. There are also other reports like this one. If they close all the reports, new ones will appear, because the bug is still there... Believe it or not, I'll probably switch to FreeBSD or Solaris because of this (if it really starts to piss me off). However, I don't copy big files that often and I have Windows installed, so it's a hard decision.

I copied a big file from an NTFS partition (using ntfs-3g) to my home directory, ran "top -d 0.2" as root in another VT (to notice eventual slowdowns), and then started copying a file from home to the NTFS partition, so both files were being copied simultaneously. There wasn't a single visible latency! (I can do the same with previous kernels, but after some time the system becomes unresponsive.)

It seems rc5 behaves much better, or the bug is even fixed. However, I need to try this in some DE, because it can be hard to catch latencies in a VT.
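The manual test above could be scripted roughly like this. Everything here is an assumption on my part: the original copies went between an ntfs-3g mount and $HOME (temp files stand in for them), and a crude wakeup-gap probe stands in for watching "top -d 0.2":

```shell
#!/bin/sh
# Sketch of the two-way simultaneous copy test described above.
# Paths and sizes are placeholders, not the original poster's setup.
set -eu
SIZE_MB=${SIZE_MB:-16}
A=$(mktemp); B=$(mktemp); A2=$(mktemp); B2=$(mktemp)
dd if=/dev/zero of="$A" bs=1M count="$SIZE_MB" status=none
dd if=/dev/zero of="$B" bs=1M count="$SIZE_MB" status=none

# Start both copies at once, as in the post.
cp "$A" "$A2" & pid1=$!
cp "$B" "$B2" & pid2=$!

# Crude latency probe: print the gap between wakeups every 0.2 s.
# On a healthy system the gap stays near 200 ms; multi-second gaps
# are the freezes being discussed.
(
    prev=$(date +%s%N)
    while :; do
        sleep 0.2
        now=$(date +%s%N)
        echo "gap: $(( (now - prev) / 1000000 )) ms"
        prev=$now
    done
) & probe=$!

wait "$pid1" "$pid2"
kill "$probe" 2>/dev/null
rm -f "$A" "$B" "$A2" "$B2"
echo "test finished"
```

With SIZE_MB raised well above RAM, the gap readings should show the stalls (or their absence on rc5) more objectively than eyeballing top in a VT.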

Regression testing?

"CLOSED NEEDINFO" and "CLOSED WORKSFORME" doesn't mean there's no problem. It just means that "The Bazaar" failed.

Apparently "The Bazaar" does not do regression testing, either.

How do bugs like this make it into "RC" kernels? Does not "RC" mean "we have tested this and we think it is good"?

This is one reason why Linux has crummy market share. There are so many regressions. Normal non-hacker type people do not want to deal with regressions. They want to turn their computers on and get to work.

I wonder what can be done to deal with the regressions. Linux has no central testing lab and no formal process.

With a formal process, you are not even half done when you fix the bug. Next you have to write the regression test for the bug and then test the regression test. This usually takes more effort and more resources than fixing the bug. And then you have to run all the regression tests all the time. This requires an automated framework to run the regression tests and report the results. This is all enormous work but it needs to be done if you want to ship a quality product every time.
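A toy sketch of the kind of automated runner that paragraph describes: run every test, report each result, and summarise, instead of stopping at the first failure. The two demo tests it creates are invented purely for illustration:

```shell
#!/bin/sh
# Toy regression-test runner: execute every test_*.sh, report PASS/FAIL
# per test, then print a summary. The demo tests are stand-ins.
set -u
dir=$(mktemp -d)
printf 'exit 0\n' > "$dir/test_writeback.sh"   # pretend test that passes
printf 'exit 1\n' > "$dir/test_latency.sh"     # pretend test that fails

pass=0; fail=0
for t in "$dir"/test_*.sh; do
    if sh "$t"; then
        echo "PASS $(basename "$t")"; pass=$((pass + 1))
    else
        echo "FAIL $(basename "$t")"; fail=$((fail + 1))
    fi
done
echo "ran $((pass + fail)) tests: $pass passed, $fail failed"
rm -rf "$dir"
```

A real framework would also archive and publish the reports, run on every commit, and bisect failures; that ongoing machinery is the "enormous work" being talked about here.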

When you look at bugzilla.kernel.org, there is not even a bug status for "needs testing". When something is "fixed" it gets marked as RESOLVED and that is that.

RedHat etc. have to do this testing and their kernels have hundreds of patches to the stock kernels to fix the problems that are not caught by the "Bazaar" process. When you look at these patches you see that most of them are fixes for regressions, things that used to work and then stopped working for some reason, and the regression was not caught. Or else they are driver patches for new drivers that never worked right in the first place because they did not get tested well. Some of the distribution patches stick around for years because they do not get accepted upstream for one reason or another. These patches need to be maintained as the code changes and that requires even more effort.

I don't know how it can be fixed. Nobody "owns" Linux, so nobody wants to take the responsibility to do all the regression testing that should be done. The distributions "own" their kernels, but if they all do their own regression testing then there is enormous duplicated effort.

I worry that Linux is going to turn into even more of a chaotic mess as it gets bigger and gets more features. It is not the slim and trim kernel that it was back in the 90's.