
eldavojohn writes "We've heard a bit about the completely fair scheduler previously, but now Kernel Trap looks at the implications this new scheduler has for 3D games in Linux. Linus Torvalds noted, 'I don't think any scheduler is perfect, and almost all of the time, the RightAnswer(tm) ends up being not one or the other, but somewhere in between. But at the same time, no technical decision is ever written in stone. It's all a balancing act. I've replaced the scheduler before, I'm 100% sure we'll replace it again. Schedulers are actually not at all that important in the end: they are a very very small detail in the kernel.' The posts that follow the brief article reveal that Linus seems quite confident that he made the right choice in his decision to merge CFS into the Linux kernel. One thing's for certain: gaming on Linux can't suffer any more setbacks, or it may be many years before we see FOSS games rival the commercial world."

Neither the summary nor the FA did a great job of summarizing the issues.

I agree to some extent. Notably, the test specified in the article is "open a game and then sit there without hitting the keyboard." In my mind, this means the game isn't responding to any I/O, so it gets pushed to the background, and adding more tasks just means it gets 1/tasks of the timeslices. Seems reasonable. I'm not sure why CFS would keep the game running more often than SD if there was no I/O. An interesting comparison would be to see not only the FPS/CPU usage for the game but also for the "loop" tasks. (Those tasks also are not I/O bound.)

Fundamentally I think the name CFS is a little bit odd - how does one define "fair"? In fact, I probably don't want my scheduler to be fair at all - I want it to run the stuff I want fast, and the other stuff it can run slow. That's not very fair.

So, I would say there is not enough information given in the article to tell exactly why the systems had different FPS performance for different schedulers - just looking at that number doesn't tell how it's splitting the time among all the processes.

Fundamentally I think the name CFS is a little bit odd - how does one define "fair"?

Fair, in this context, means that the scheduler will give all the running tasks CPU time in proportion to their priority (nice level). It follows from this that all the tasks at a given nice level are given an equal amount of CPU time, and a higher-priority task (lower nice level) is given more CPU time than a lower-priority one.

The SD scheduler (but not CFS, AFAIK) also had an idle priority, meaning a task at that level only runs if nothing else at any nice level wants to run. Very useful for running FoldingAtHome.

In fact, I probably don't want my scheduler to be fair at all - I want it to run the stuff I want fast, and the other stuff it can run slow. That's not very fair.

All schedulers attempt to do that. However, the old scheduler design collected statistics based on which process was running at each clock tick. So if a process always yielded right before the clock tick, the scheduler would mark it as having used 0 ticks worth of clock time. So it would always stay near the top of the list, and could monopolize the CPU.

A fair scheduler basically times the actual CPU usage. It starts timing when it gives control to the process, and stops timing when the process yields or the scheduler decides to interrupt it. It tracks processes not by ticks but by actual time used. (This post is based on my understanding of the issue. I may be incorrect.)

Also (I believe), if you give up your time slice early, say to block on IO, your process will be scheduled earlier when that block is released. So while you will get your percentage of CPU time, you should also get reduced latency.

For instance, Windows will give a priority boost to the foreground application and/or programs the scheduler thinks are interactive. (I forget exactly what it does.) In theory, this means that the program you're working with at the time gets the attention. It's conceptually like a window manager renicing the processes you're working with when you change focus.

The performance of the "patched SD" mentioned near the bottom shows SD to be slightly better than CFS on the test system. However, a few FPS one way or the other really amounts to testing "noise" -- it doesn't mean anything. If there are any problems with 3D, they obviously aren't common to all systems, which suggests to me that the scheduler isn't the problem. Not unless the problem systems have some background job running that the others don't, which is messing up CFS in some way (and this seems unlikely).

Why not actually write a scheduler with a tunable performance option? Basically, pass a real-time instruction to it to optimize how it's working, so it can switch between SD and CFS on the fly. Seems very possible.

FPS is a poor measure of the feel of a game. I know it's what all the graphics card benchmarks use, and it does do a good job of measuring the total processor and video card throughput, but that's not the most important thing.

The most important thing is the time between you pressing a key and the changed game state being reflected on your screen and how consistent that delay is.

One of the arguments that CK has made about kernel development is that kernel developers have become obsessed with throughput to the exclusion of all else and that this leads to very poor desktop performance because throughput is a poor measure of 'interactivity'. Someone posting 3D game framerates as evidence of one scheduler being better than another is exhibiting exactly this bias.

IMHO latency is a better measure, but still not perfect and it can be hard to measure in some cases.

I don't know enough about the scheduler to know which one is better or which one exhibits particular properties. But I can see that the throughput bias is evidenced in force in the thread the article points to.

And CK is also right that big iron shops care more about overall throughput than any measure of 'interactivity'. IMHO there ought to be some kind of pluggable scheduler system that allows you to completely change the algorithm to reflect the preferred behavior of the computer you're using.

I believe that you can swap the scheduler out. It might require you to rebuild the kernel, but there is nothing stopping a distro, you or anyone else from using a different scheduler.

I think you're thinking of the IO scheduler, which you can select at compile time. The CPU scheduler is not a choice -- you must apply a patch and change the kernel's source for that. And while distros do extensively customize compilation options, the patches they apply are generally small (besides Gentoo, which is very proud of the patchset it applies to its kernels). For almost any distro, it would be too much work to support multiple kernels (where one is based on unmaintained code).

Linus rejected that because it meant more schedulers to support, and also because he was annoyed that C.K. wrote it mainly to prove that his scheduler was better. It was called "plugsched"--google for it if you want.

FPS is a poor measure of the feel of a game. I know it's what all the graphics card benchmarks use, and it does do a good job of measuring the total processor and video card throughput, but that's not the most important thing.

I disagree. Responsiveness is important, but I've never encountered a situation where the frame rate was good and the machine couldn't read my key presses fast enough.

You are on to something here. CK's scheduler uses arrays and has a lower context switch time, whereas CFS uses red-black trees. What this means is that running a bunch of "while :; do :; done" loops that always use their full timeslice is a test that favors CFS as much as possible, really.

As an aside, what kind of retard benchmarks a scheduler using a game that is doing nothing, plus 100% CPU tasks? Put some disk access in there -- maybe a set of folders that will easily fit in cache, and then find . them. Have some fixed-seed random busy/sleeps of different ratios. Have the game play a demo reel on repeat and record avg *and* min/max fps. Come on, Ingo must be somewhat familiar with CK's work, so he must know that these tests where CK's scheduler comes out roughly the same are biased toward CFS to begin with. If you are going to say 'look, I did these benchmarks and it's a wash' and use that as justification, then at least do good benchmarks.

I think this more than anything else confirms my impression that Ingo is just hacking shit until it kinda works OK. Note that this is exactly the same kind of rationale Linus gave for dissing Con, so flame off.

Besides, the biggest barrier to 3d games in Linux is video card drivers (ATI, I'm looking at you!) as 3D drivers in Linux, even the proprietary ones, have tended to be unstable.

I would think that the biggest barrier to 3d games in Linux would be the inherent paradox concerning the lack of 3d games.

No gamers = no profit for game companies = no games being produced. No games = no gamers = no profit for game companies.

The one thing that I would agree on is that video card support brings game developers and gamers closer to a certain extent. Having better drivers might get both gamers and developers to consider Linux a *little* more. However, even if Linux had terrific video card drivers that were just as good as or better than the Windows drivers, I still wouldn't consider Linux for games, just because there are very few good games available.

Better drivers can only help. But I can't consider that the "biggest" problem. The biggest problem is that there are too few people who use Linux. So video card manufacturers don't care about Linux, game developers don't care about Linux, and lastly (most) gamers don't care about Linux.

I realize there were a lot of bad management decisions involved, but look at what happened to the last company that tried to make a business out of porting titles to Linux (*cough* Loki *cough*). I have just about every Loki title that was developed, and I really wish they had stayed afloat. Maybe it was bad business decisions, and maybe it was just that there was no profit in porting titles to Linux. The situation might be different today, and I hope that someone has the desire, balls and money to step up and try what Loki tried 7 or 8 years ago. But Loki's fate did send a clear message: there's no profit in Linux games. John Carmack also said back then that releasing Q3A for Linux saw no profit.

Hopefully, as more desktop companies like Dell jump on board and push Linux, maybe both the game developers and video card manufacturers will start to see the potential for profit, and as a result gamers will jump on board. But even the Mac has suffered from the same problem for 20 years, and there's way more profit in developing for Mac than for Linux. And it shows: there are more commercial Mac games than Linux ones. But both Linux and Mac have next to no games at all when you compare to the titles available for Windows.

Just a little nitpick, but Loki was not the last company doing commercial Linux ports. The new standard-bearer is Linux Game Publishing [linuxgamepublishing.com], though they've taken a low-and-slow approach and haven't done many big-name games yet.

I made the switch from w2k to Red Hat 8 or 9 about the time that XP came out. At the time I was a serious q3 player, practising about 5-6 hours a day, playing in leagues, etc. One thing everybody playing did was lower their resolution and raise the refresh rate up to 120 or 125hz; you get a smoother view of the game. In both operating systems my machine would easily sustain the 125fps you need in q3, but there were subtle differences in the game.

In w2k at 125hz, other players would appear to be moving smoothly. In Red Hat they would have a constant stutter, like the other players' positions were only being updated every 2 or 3 frames rather than every frame, as they appeared to on w2k. This made a difference when playing the game; I ended up moving around distros until I found that the preemptive and low-latency patches made the stuttering go away.

Aren't going to happen until artists in the medium -- 'good' artists, rather -- decide to start working for free the same way coders do. Some artists will work for publicity alone, but they seem to be by far in the minority. On a technical level, I've not seen much problem with Linux. Ogre, for example, runs quite smoothly for me.

I'm no artist -- In fact, I can barely sign my name -- but I do think 3D artists work for free, just not on games. Instead, they tend to focus on high-polygon renders of things, like cars. They want something that is fun and challenging, just like open source programmers. Low-polygon video game characters aren't going to cut it.

Maybe I'm wrong, but aren't many of the low poly characters in games just simplified from original high poly models? Doesn't normal mapping take that high poly, high quality render, then simplify it, and use lighting tricks to give it the appearance of the high poly model while being significantly simpler?

Maybe I'm wrong again, but making those high poly models isn't easy either.

Actually, creating a low polygon count model that looks good is quite a challenge. With today's tools, it's almost getting trivial to make something look good, provided nobody cares if you use billions of polys.

I think that, with the exception of a few select coders, FOSS developers are mediocre at best, but luckily our computers are more than fast enough to make up for poorly implemented code. Unfortunately, a mediocre artist doesn't have the same latitude that a mediocre coder does. If an artist does a half-assed job we will notice it right away; if a coder writes a routine that takes twice as long to execute as it should, chances are the end user will never notice.
That's not to say that there are not a lot of

Maybe not, but FOSS is a way of becoming known and getting hired by companies.

The first thing you get asked when trying to apply for a position as an artist is to show them some of your work. Which is kinda hard if you never participated in any project. And to get a project, they first of all want to see some of your work...

Art is not modular like code. You can mash-up code from 20 different coders and still get something that works well. If you did that with art it would look terrible because you would not have a consistent style.

> Are you saying all of "Snow White" was drawn by one person? All 24 fps, 83 minutes worth?

Bad example. The team of animators on Snow White and the Seven Dwarves was managed to an unprecedented degree, with an obsession for consistency and quality control. This is not the sort of thing you get from hobbyist contributions.

Coders are a dime a dozen, minus the dime. Modelers, sketch artists, musicians, and actors are the fellas you're just not going to easily get on a FOSS project.

Aren't going to happen until artists in the medium -- 'good' artists, rather -- decide to start working for free the same way coders do. Some artists will work for publicity alone, but they seem to be by far in the minority.

Of course they're in the minority. For them, there's nothing to be gained in providing their services for free.

The publicity for working on games is almost nonexistent. For example, can you name any artist that worked on any one of the most popular games? I can, but then I know a bunch of artists who work in the games industry.

Besides, artwork doesn't work as FOSS. Unlike code, artwork for games isn't inherently "sharable" - it's designed for the purposes of that game and that game only. Game engines can be used for multiple different kinds of game. Artwork almost always can't. It may be used for sequels (but generally isn't as the requirements change from game to game) but it can't be used across different types of games.

Of course I can't name the artists. I'm not in the game biz. And I doubt many artists care whether I know them, as long as game studios do. The studios will hire them, I won't. You know a bunch of artists, and I have to assume that you have some ties to the game industry. Now, how many IT security researchers do you know? Probably a few, if that. Doesn't matter though. They don't care if you know them, as long as people who're in the IT sec biz know them, know their name and know their value. You won't hire them.

Aren't going to happen until artists in the medium -- 'good' artists, rather -- decide to start working for free the same way coders do. Some artists will work for publicity alone, but they seem to be by far in the minority.

I really don't think that's fair or accurate for either programming or art. A lot of artists give away at least some of their work on the Internet, and a lot of programmers don't do that. Whether it's more prevalent in one field or another, that's a question that can't be definitively answered.

I'm sure they do - in a shape where they create what they want to create. Creating free art for a game means:
a) It needs to be a set, often of considerable size
b) What you need to create is usually set by the developers
c) You have to create it within the game's restrictions
d) It all needs to be consistent in style

In short, it's not free experimentation or creativity. I imagine the average aspiring artist would rather create art than design up a large set of reasonably similar graphics.

Free and open source is a horrible model for any non-subscription based games. Think about a game like Oblivion, for example. If Bethesda had released that under the GPL, where would they make their money from?
Unless you make money from subscriptions, like Second Life or World of Warcraft, FOSS games make no business sense.

Artists usually have a huge archive of unused material, either done in their free time for practice and fun, or for games that never used it due to a redesign or cancellation. Even some of the dummy objects most artists produce to give the coders something to work with can be better than actual graphics made by a hobbyist.
I always wondered why they wouldn't just contribute at least some early works to the open source community. Is it maybe just the lack of a good website where stuff like this could be indexed, or isn't there a good enough standard license model to release something like that for free? I thought the Creative Commons license [creativecommons.org] would be quite suitable for it.

Everybody who has spent some time trying to find a good textured and rigged low-poly character model, preferably with basic animations, on the net for use in an open source game knows that there is next to nothing available. Well, at least not when I could have used one about two years ago.
It really doesn't have to be that professional or finished - even that untextured rat someone made a decade ago to have something to shoot at, later to be replaced by other creatures, could maybe be of use to someone; and if he textures it, maybe does a simple animation, perhaps records some sounds, and then uses it in his project, he should give the additional stuff he made to other developers as well.
Soon there would be a nice-looking 3D rat with some textures to choose from, various sounds, walking and death animations, etc., and everybody did his part. That's the open source way - why does it seem it's not very common among artists, only coders?

Anyway, I think it could really be just the right website that is missing - some Sourceforge-style page with a nice upload frontend, where stuff gets properly indexed by categories, tags and styles, and with a feedback option where contributors can see which projects are using their works. Add some voting to rank it, karma, apply a fair license to it upon upload, and I think something like this could really take off.

Is this really true? Most major games are written using pre-made engines like Unreal, which typically are cross-platform. If they're hand-rolling an engine, making it cross-platform is rather easy compared to, say, a UI application. Games don't exactly use a lot of platform-specific code.

Is this really true? Most major games are written using pre-made engines like Unreal, which typically are cross-platform.

They might be cross-platform, but they still cost hundreds of thousands of dollars to license. Game engines are among the most expensive proprietary software out there.

Moreover, it is false to say "most major games" (my emphasis) are built from these engines. Quite a few are, yes, but sometimes the technical baggage of the engine is so great - in that it steers design directions - that it's cheaper

There is also typically a lot of customization done to licensed engines by the licensees, at least according to my game dev friends. Even if the engine is more or less cross-platform out of the box, it seems unlikely that it will remain that way for long unless it's a specific goal of the developers working for the licensee. Given how many complaints I've heard about the Unreal engine in general, I'd have to imagine that with the apparent headaches of getting licensed games to run right just on Windows and t

1. Linux machines
2. Development setup
3. QA
4. Support
5. Platform-specific APIs added to, say, make shortcuts or turn off screensavers or whatnot

Part of this can be solved by rogue developers, but not all. And believe me, once a developer is accustomed to the system, he'll realize his effort is in vain because no management would let him release untested and unsupported code, nor would management want to test or support something with so little ROI (return on investment). You could invest that money and time

People seem to forget that someone actually tried to build a company [wikipedia.org] on Linux games. It was a disaster. The trouble wasn't the OS. The games ran great. The trouble was that no one (but me, I guess) bought them.

Why not just write a cross-platform game instead of a Linux or Windows game? It is actually easier than writing a DirectX game, because of the advanced libraries. Try writing a game with the Irrlicht library, for example. You can even select whether to use DirectX or OpenGL as the rendering engine for the same code you write, and it takes only about 20 lines of code to write a working program that will load a 3D object from a file and display it. http://irrlicht.sourceforge.net/tut012.html [sourceforge.net]

I would think that 3D games would be considered a 'real time task' (i.e. you must draw the next frame within 1/30th of a second or else it won't look right), and therefore you would want to run them using the real time scheduler (SCHED_FIFO or SCHED_RR). Given that, it wouldn't much matter what fancy scheduling algorithms the non-real-time tasks were using.... your game would always get the cycles it needs when it needs them (up to to the CPU's capacity, of course).

I would think that 3D games would be considered a 'real time task' (i.e. you must draw the next frame within 1/30th of a second or else it won't look right), and therefore you would want to run them using the real time scheduler (SCHED_FIFO or SCHED_RR).

Absolutely not; you want your games to have a high priority, but not that high a priority. Real-time tasks have a high potential for locking up the system: if there's a bad loop in a normal priority application then you can stop it - if there's one in a real-time task

I think the difference between hard and soft real-time has been discussed countless times. My impression at least was that the penalty for making the guarantee is so great that you normally only want to use it if you REALLY need it -- say, a capture application that must capture or the frame is lost. In any case, I think IO priority is more important these days. With multi-core I've found myself also multi-tasking a lot more, but having some background IO task running is usually devastating to real-time IO. Y

One of the reasons I like the Linux kernel is that it's very modular. Why can't both schedulers be included in the kernel, and the person compiling the kernel set which one they want? Kinda like how you can select between ALSA or OSS, or a myriad of other features that are different but serve a similar purpose?

There's no reason you can't go to the man's website, download the code, and then compile it into the kernel. However, including it in the kernel by default, even as an option, means that it must be maintained and cared for.

Linux is like a car... :) A tail light or the license plate might be mostly modular, but the drive train isn't. The scheduler is very fundamental to the kernel, and must be running at all times. No doubt a system could be devised to make it work, but the point that Linus and others have been trying to make is that the most important reason for selecting one scheduler over another at this time is the dedication of the scheduler's maintainer and developer, not whether 3D games experience a slight decrease

Actually, the quote in the summary from Linus is part of a larger email where he's dismissing the idea of using the pluggable scheduler (I can't seem to recall the exact name) that would make the CPU scheduler plugin-based, like the IO scheduler currently is. His reasoning against it was largely BECAUSE of what they learned from having the IO scheduler plugin-based, and it's something Linus as well as the subsystem maintainers DON'T want to repeat.

Anyways, you can still just apply Con's patch to the kernel to use his scheduler instead of the old scheduler (and if he keeps maintaining it, you'll be able to use SD instead of CFS). Don't forget that we haven't even had a kernel released using CFS!

That is why Linus should have listened to Con Kolivas when he tried to introduce a pluggable scheduler system. With a modular system we could have CFS and the staircase scheduler and both problems solved.

Schedulers are actually not at all that important in the end: they are a very very small detail in the kernel - Torvalds

Actually, I'd like to see the OS kernel consist entirely of only the scheduler and the thinnest APIs to secure drivers granting access to the HW. Everything else, including IPC, could be in userspace.

That would make distributing the OS a lot easier. And the simplicity could make it a lot easier to secure, to develop for, and to customize a deployment for minimum HW (e.g. a "self-winding" 10mW Bluetooth ring with "accessory" features). Practically every device could run the same "OS", with modules bolted on for increased functionality on heavier HW.

That's called a nanokernel. And you don't even need the scheduler in kernel space either -- the whole notion of "processes" is not something the OS necessarily has to concern itself with. All a nanokernel has to do is make hardware available on demand. You can more or less engineer the kernel concept out of existence until it's nothing but an interrupt handler and a call gate. However, since the reality of commodity CPUs is that they're designed with hardware contexts and even C stacks (or perhaps I shoul

That's what a kernel is, but not the Linux kernel. Linux is a monolithic kernel, including all kinds of stuff that isn't the scheduler and isn't the driver API (or just the drivers). That's why Torvalds correctly said that the scheduler is a tiny part of the kernel. If customizing Linux to the specs I mentioned were as easy as downloading source and compiling (you skipped the hard part: factoring and looping back the extra codepaths), then all the distros that try it (probably starting with the "Linux on a

Alright, just got done with some testing of UT2004 between 2.6.23-rc1 CFS and 2.6.22-ck1 SD. This series of tests was run by spawning in a map while not moving at all and always facing the same direction, while slowly increasing the number of loops.

CFS generally seemed a lot smoother as the load increased, while SD
broke down to a highly unstable fps count that fluctuated massively
around the third loop. Seems like I will stick to CFS for gaming now.

--
Kenneth Prugh

Sayeth Matthew

My experience was quite similar. I noticed after launching the second loop that the FPS dropped to 15 for about 20 seconds, then climbed back up to 48. After that it went rapidly downhill. This is similar to other benchmarks I've done of SD versus CFS in the past. At a "normal" load they're fairly similar, but SD breaks down under pressure.

The only other thing of interest is that the -ck kernel had the WM
menus appear in about 3 seconds rather than 5-8 under the other two.

Linux does not need to support gaming. Windows does that quite well. Anyone that wants to game can dual-boot with Windows, or buy a console. Linux will not support gaming, for the same reasons AIX or Solaris are not chock full of gaming goodness. It isn't required or desired, and the OS is far more suitable for other, often more "serious," applications.

While I agree that what I am about to describe can very much be considered a "niche environment", pretty much every single kernel developer develops Linux for his own desktop. Linus created Linux in the beginning so that he could have a full Unix-like OS on a 386.

So Linux's entire existence is for the desktop. It has proven to be a very great server OS as well, and a lot of people develop it for that purpose. But Linus himself, when responding to Con's claims

I completely agree. Every OS has its strengths and weaknesses. I see a lot of ego among Linux users and developers; they want their OS to do EVERYTHING under the sun. I'm a Windows guy primarily, but anything I do that relates to computer security I do on Linux (Snort, nmap, etc). It's all about flexibility and choosing the right tool for the job.

When there was more than one OS that ran on different HARDWARE, games could be differentiated, and selling points could be made for buying an Amiga or an Atari over the Mac and PC of the time. Other than security, to the average user who is using Windows there just isn't the "whoahh" that people used to get back then when somebody saw the Amiga or Atari. Compelling reasons just don't exist for the average user to switch from Windows to Linux, except maybe fear of viruses and malware.

I keep wondering... X is a single-threaded server, communicating with a (generally) single-threaded game. Worse, Wine inserts the wineserver process, so I have three single-threaded things trying to synchronize to get interactivity. A low-latency event like a keypress might require all three processes to be scheduled in succession to get a response on the screen. A poor man's way to do this is with the kernel's scheduler, but a far superior way is to have multiple threads in the X server. Scheduling an interactive event isn't hard. Getting crap on the screen in the same scheduling timeslice is hard (impossible?), since it requires a second scheduling point. As I understand it, this is how BeOS achieved substantial interactivity in the presence of load -- by having a multi-threaded graphics server *and* kernel.

So, how much can be gained by rewriting X, or going to a different graphics server? Or do I completely misunderstand the effect of X?

[[I don't know how many of the calls need to go back to the X server, but my guess is only the ones dealing with windowing and user input.]]

Well, the GP was talking about pressing a key, so that's definitely user input. And the latency of the reaction to user input is quite critical for a game, but do the kernel --> X server --> game context switches really induce a measurable latency for a player? I don't know...

Both SD and CFS are superior to the old one. Between the two, the one that gets merged into mainline will be the best eventually. There were a lot of testers when SD came out, because it clearly beat the pants off the old one, and that was exactly why Ingo went ahead and threw together his own version of a fair scheduler - otherwise his code would not survive.

Which one is better, SD or CFS? Technically it was hard to say, but it's not about technology -- it's like the browser war: the one with the bigger market share wins.

Usability research has shown that variation in waiting time is actually a bigger irritation for users than the waiting time itself.

I have seen several projects where user-interface response-time problems were "improved" by adding a minimum response time. The average response time increases, but the variation decreases, and users often report that the program has become faster... The logic seems to be that the user wants the user interface to have a predictable response.
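The trick described above can be sketched in a few lines. This is a hypothetical illustration, not code from any real toolkit: pad every response out to a fixed floor, trading a slightly higher average latency for much lower variance.

```python
# Hypothetical sketch: enforce a minimum response time so that fast and
# slow operations both "feel" the same to the user.
import time

MIN_RESPONSE = 0.20  # seconds; an assumed floor for every operation

def respond(operation):
    """Run operation, then sleep so the total time is at least MIN_RESPONSE."""
    start = time.perf_counter()
    result = operation()
    elapsed = time.perf_counter() - start
    if elapsed < MIN_RESPONSE:
        time.sleep(MIN_RESPONSE - elapsed)
    return result

# A fast operation (5 ms) and a slow one (150 ms) now both take ~200 ms,
# so the interface responds predictably even though the average went up.
fast = lambda: time.sleep(0.005)
slow = lambda: time.sleep(0.150)
```

Whether 200 ms is the right floor is an empirical question; the point is only that the variance collapses to near zero.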

I think the reference for this is Søren Lauesen's books on usability, but I cannot remember for sure right now.

Yeah they are too busy bitching about the difference between CFS and SD.

And then the news post here says "Linux cannot suffer any setbacks in gaming." I think you'll find that compared to the original scheduler, CFS pretty much rocks for gaming. As much as or less than SD -- who the fuck cares?

It's better than the original scheduler, so where's the setback?

If it's not as good as SD, oh well, cry me a river. I don't agree with Linus' "there is no maintainer" idea, but more with the argument that CFS removes more lines from the kernel than it adds, and does a better job, whereas SD adds complexity for roughly the same effect. What could have been a perfectly good technical reason in previous LKML posts got turned into politicking.

Difference between SD and CFS.. fractions of a frame per second. WOW. That really means Linus made the wrong decision! The impact on games, where 1/500th of a second really MAKES A DIFFERENCE is too high! Put the old scheduler back you fucking crazy-ass Finn!!

I have Windows XP and Gentoo Linux running side by side, and strangely, Gentoo scores 10 to 12 FPS faster in World of Warcraft, Warcraft III and even Doom 3. Granted they are commercial games, but if they can run in WINE that fast, I wonder what a direct Linux implementation would do. I just love seeing folks buying the headlines instead of blazing their own paths.

That's why the world is in the shape it's in... the majority is always waiting for someone to save the day. You want desktop Linux? Then make it your desktop. Otherwise stop bitching and post some valid comments.

Gentoo scores 10 to 12 FPS faster in World of Warcraft, Warcraft III and even Doom 3. Granted they are commercial games, but if they can run in WINE that fast, I wonder what a direct Linux implementation would do. I just love seeing folks buying the headlines instead of blazing their own paths.

Doom 3 is a native Linux game, as are most, if not all, id Software games.

I get a few FPS more in RTCW: Enemy Territory in Linux (natively), though I generally have fewer background apps/services running than in Windows. But that's just an old game that I still like to play. I'll have to see how the much higher spec'd ET: Quake Wars handles when it comes out.

At least when it first came out, Doom 3 under Wine was faster than the "native" Doom 3 for Linux. The port was quick and dirty, with all the inline ASM stuff not handled by GCC, so it was dropped. Doom 3 under Wine, compiled with the VC++-compatible (I guess) ASM inlines, was faster. That said, I still play(ed) the native version.

Lots of the development tools built into the doom3 executable didn't work at all under Linux, either. Not sure if they did under Wine at day 0, or if they do at all today.

Performance improvements could come from, e.g., unsupported and thus unvirtualized eye-candy DirectX features, which would have a negative impact on performance under normal circumstances. Anyway, AFAIK all the mentioned games have OpenGL renderers, but I assume that you use the standard DirectX renderer in World of Warcraft.

On the other hand, performance drawbacks due to Wine's virtualization are very small, but naturally they do exist. Adding an extra layer of wrapping takes time. Of course, maybe Wine's handling of Win32-specific calls and systems is more efficient than Microsoft's implementation in their own operating systems. ;-)

An efficient scheduler gives processes in general the most bandwidth. A fair scheduler gives processes in a priority class the most equal bandwidth shares. A real time scheduler gives any given process the most predictable wait for bandwidth.

Each of these notions is somewhat different. Achieving a high frame rate over the course of a test on an unloaded system tells you nothing about the scheduler, other than perhaps it is not truly awful. On a moderately loaded system, the scheduler may be giving your game more than its share of CPU time, but if from time to time your game seizes up for a fraction of a second, it would be an irritation, even if on average it's getting enough bandwidth to give you a good playing experience. At the same time, this situation would be fine for data processing applications like image analysis, where an operation might take several seconds, or even minutes to complete. As long as the process gets plenty of cycles over the course of the operation, it's ok, even though your operation might have "frozen" for up to a second in the process.
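The "fair" notion above can be made concrete with a toy simulation (my own illustration, not kernel code): always run the task whose weighted CPU time so far is smallest, so that over time each task's share is proportional to its weight. This mirrors the virtual-runtime idea behind CFS, stripped of all kernel detail.

```python
# Toy fair-share scheduler: at every timeslice, run the task with the
# lowest "virtual runtime"; heavier-weighted tasks accrue virtual
# runtime more slowly and therefore run more often.
import heapq

def fair_schedule(tasks, slices):
    """tasks: {name: weight}. Returns how many slices each task received."""
    # heap of (virtual_runtime, name); the lowest vruntime runs next
    heap = [(0.0, name) for name in tasks]
    heapq.heapify(heap)
    ran = {name: 0 for name in tasks}
    for _ in range(slices):
        vruntime, name = heapq.heappop(heap)
        ran[name] += 1
        # higher weight -> vruntime grows slower -> task runs more often
        heapq.heappush(heap, (vruntime + 1.0 / tasks[name], name))
    return ran

# A weight-3 "game" against two weight-1 CPU-bound "loop" tasks:
# the game should end up with ~3/5 of the 1000 slices.
shares = fair_schedule({"game": 3, "loop1": 1, "loop2": 1}, 1000)
print(shares)
```

Note this is exactly the sense in which "fair" differs from "responsive": the game gets its proportional share, but nothing here guarantees it runs *soon* after a keypress.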

A perfectly reasonable question, but the answer may well be "about the same." The NE in WINE stands for "Not an Emulator." In a sense, WINE *IS* a native Linux graphics implementation, albeit aided or hindered by using the Windows API interfaces. If I recall the WINE documentation correctly, it says that WINE is sometimes faster than Windows on the same hardware and application, and sometimes slower.

I'm not a gamer at all, but for years I've said that Quicken was the only thing keeping me from switching to Linux full-time. (Yes, I know there are FOSS bookkeeping packages out there. Quicken is excellent, though, so any switch is a step down in my mind.) Then I realized it doesn't matter which operating system I run as long as I can do what I need to do. I was only trying to switch to Linux for ideological reasons, not for any practical reason. The act of switching over was going to involve a lot of time, ef