I agree with that. I've noticed a big divide in developers coming out of college in the 21st century; there's a lot less focus on lower-level development and hardware interaction in schools/colleges than there was in the '80s and '90s. I think there's a more general focus on high-level languages at best (e.g. Python, web frameworks, à la .NET, Java-this-and-that, etc.) that work 'on' a hardware/development architecture, not 'with' it.

I also wouldn't say there is a lack of support for the Linux kernel, but Linus is still a full-time driver of changes/additions in the kernel, and with him comes his ego and experience. Rightfully so, but we've seen it drive away brilliant maintainers and contributors in the past decade.

The bigger problem appears to be that CS programs now focus on teaching tools and how to Google as opposed to thinking or problem solving, in order to meet perceived industry demand. In industry, I've had to teach too many youngling graduates about basic data structure and database concepts, memory and hardware addressing, protocol encapsulation, AAA, synchronous vs asynchronous operation, and other fundamentals which would be needed to understand why things such as kernels are implemented as they are. The way in which FOSS support forums and listservs generally respond to noob developer and user questions--some variation of RTFS without providing a way of understanding which documentation to read--does not invite exploration of concepts embedded in the current software architecture, let alone ways to identify entry points into interesting sub-components. This is a barrier, since popular CS program tools to which students are exposed, such as RHEL, gcc, etc., are provided as finished products in the same way as Access or BOS.

Separately, I have a dream where all of the Alans Cox get together to write an operating system.

Maybe it's because there are basically ZERO jobs in most places for real hard-core CS. What few jobs DO exist require the ability to produce actual usable products...applications programming, especially web where the ability to hand-code a balanced search tree won't help at all. Ask a Flash "developer" about registers and instructions on the stack.

My college changed the first 3 CS classes to Python instead of C++ because it's easier. Then when they get to Data Structures, they have to learn C++ for the first time, at the same time as the material itself.

Now cue the arguments about the logic skills that HR doesn't care about. Colleges all over the nation (and other nations, I'm sure) already crank out degrees no job market asked for... I'm not sure I can blame the CS depts for trying to stay relevant.

A data structures class shouldn't only tell you how to create a linked list, or a binary search tree, or a hashtable. It should also teach why and when. Part of understanding why and when involves understanding how exactly those things work. Granted, that part is common sense and/or comes naturally to some people, but it is an entry-level course; don't expect to be challenged that much.

You certainly can teach students how to create various data structures in python, however I expect the general response from the students would be very poor. It seems somewhat silly to have students implement things that a language natively supports. You're likely to get many "why do I have to reimplement this if python already supports it" type complaints.

I did however read the comment I was replying to as "why should we need to learn things in Data Structures that python can already do". Perhaps I misread it.

Really though, a good class would be completely language agnostic. In my data structures class the professor demonstrated concepts with Pascal, but expected assignments to be completed using either C, C++, or python, at the student's discretion. Most students opted to use C.

I don't see this happening really. Students do assignments because they're told to. They somewhat EXPECT that the code they're writing is just to learn material. Even C++'s STL provides a pretty decent amount of data structures built in. We just implemented them for academic purposes (with our instructor flat out telling us "When you get out of this class - just use the built in structures afterwards. They're very finely tuned and it's HIGHLY unlikely that your homemade versions will outperform the provided ones."). And it worked. I wrote my crappy hash table, linked list, binary tree, b-tree, and other classes in C++ for my assignment that will never get used, but it taught me how the things work, so that now even though I can just use Microsoft's built-in version provided with .NET and it's great, I still can recognize when and how I need to use the structure itself.

FWIW, I started in Fall 1999. We did Java for our introductory classes (101, 102, 201, 202), with a required intro to C in the second year. Most of the stuff in years 3 and 4 transitioned to C++ instead, though towards the end some of our classes literally became language agnostic. As long as we could write a Makefile to build it on the departmental Solaris machines and it compiled to the executable name the professor asked for, he didn't care. The final project in that class was to write a LISP interpreter, which I found particularly enlightening. Nothing teaches you the ins and outs of a language like having to parse and execute the code yourself.

I know they're still doing the C/C++ towards the end, but I'm not sure if they're still teaching Java for the intro stuff.

There's nothing unsafe about pointer arithmetic, unless you do things in a lazy way and don't think things through. As long as you design your objects ahead of time, and make sure to cover all of your edge cases, you'll be fine.

I think a major problem with modern CS education is that students are taught that pointer math is "unsafe", and end up afraid to even try it. Maybe this is why so much software ends up getting written in Java or C#, and my 3 GHz Core 2 doesn't feel any faster than the 2 GHz Athlon I had 8 years ago...

This is a very good point. What's more, if there's ever a time when you shouldn't be relying on your language to keep you safe, it's in a data structures class. You shouldn't ever need your language to keep you safe if you are doing it correctly.

> There's nothing unsafe about pointer arithmetic, unless you do things in a lazy way and don't think things through. As long as you design your objects ahead of time, and make sure to cover all of your edge cases, you'll be fine.

Not to be cynical, but my real-world experience is that the case in which the developer isn't lazy, is knowledgeable enough, and thinks everything through is itself an edge case.

Weird, I've got 10+ headcount open for hard-core CS guys. I'm lucky to see one candidate/month who is even in the ballpark. 95% fail when asked the basics, e.g.:

1. What is a hash table, why would I use one, what's the expected cost to insert/find/delete, and how might you implement it?
2. Write a bug-free binary sort in the language of your choice.
3. Here's real-world problem XXX: sketch out a solution and describe the algorithms + data structures involved.

Had to tell HR to stop filtering resumes: they are set up to look for specific skills, not talent. Also had to explain to them that they are not in the compensation-package-deciding business: we do that; HR gets the process working smoothly. It's no wonder that if HR talks to college placement folks, pretty soon the college profs start feeling the pressure to teach the wrong stuff.

Two questions. 1) Are you just looking for CS knowledge to weed out the riff-raff, or do you actually have jobs available that require such knowledge? And 2) if the answer to #1 is the latter, why haven't you hired me?

1. I have jobs available that require such knowledge. Open source contributions, a good grasp of functional programming, graph algorithms, etc. are a plus. Being interested in big systems is also a plus.
2. Because you didn't send me your CV yet. mmn@bellatlantic.net

No comparison-based sorting algorithm has complexity better than O(n log n). However, there are sorting algorithms that are not comparison-based and thus can have "better" complexity. For example, counting sort is a sorting algorithm with complexity O(n + M), where M is the size of the range of values.

> Maybe it's because there are basically ZERO jobs in most places for real hard-core CS.

There are close to zero jobs for hard-core anything, but that's not the goal of earning or producing a BA/BSc in CS.

The most important goal of a university degree program is to teach students how to think critically, how to evaluate and apply information, and how to perceive and act professionally outside themselves. Theoretical knowledge is helpful not because industry does a lot of relational algebra or computability analysis directly, but because CS graduates should be able to make a critical business case that an Oracle implementation would be better/worse than a Sybase implementation for a particular use.

Local technical/trade school graduates can and should easily out-implement degree holders in identical fields. If the post-secondary education market is segmented ideally, CS graduates should have the knowledge and conceptual tools to run rings around technical graduates in terms of understanding and designing principles and solutions in a broader corporate or societal context. As it stands, recent crops of technical graduates seem to understand more about what they're doing locally, how they got there, and why they're doing it in the community than the CS graduates, who seem weak with both theory and implementation. Some of my colleagues have been seeking out CCNAs, biologists, and ex-military officers, and training them to program, because they have better tools to understand systems and business context than recent CS graduates (who *should* be able to apply the "science" part of their tools to figure it out).

Relating this back to the Linux kernel, good luck finding a recent CS graduate who understands that MPLS exists, let alone one who can grok the concept (let alone specifications) well enough to understand why Linux will want to support MPLS if it wants not to be locked out of an important part of the enterprise market relating to the current network neutrality debate. (Incidentally, I've pointed some international relations friends at the Wikipedia page who understood the consequences immediately.)

Perhaps there should be a return from quantity to quality in undergraduate CS programs.

And the biggest problem is that very few people grow up with a C64 or an Amiga nowadays. They don't have any reason to learn how a computer actually works (or what a computer is), if they're just put in front of a computer that can load a game into the OS with a double-click. No reason to actually learn anything if that "just works".

> And the biggest problem is that very few people grow up with a C64 or an Amiga nowadays. They don't have any reason to learn how a computer actually works (or what a computer is), if they're just put in front of a computer that can load a game into the OS with a double-click. No reason to actually learn anything if that "just works".

And the next generation of gadgets kids will grow up with won't even have root access to their computing devices. *cough* iPad *cough*.

> And the biggest problem is that very few people grow up with a C64 or an Amiga nowadays.

As someone who grew up with Ataris and Apples, I actually think that is a good thing to some degree. For every 1 coding genius that came out of the 8-bit home computer scenes, you also had 10 other self-taught Visual Basic hackers making a mess of everything.

Generally speaking, kids coming out of school nowadays are much more professional and exacting in their approach than young programmers were in my day, probably because they didn't learn programming until they were taught "the right way to do it".

Well personally, I'm more annoyed by old coworkers who have limited knowledge of such basic things, write large chunks of unmaintainable and inflexible code, and dump maintenance on you when they retire or move to another project. At least the young ones you can do something about.

> Separately, I have a dream where all of the Alans Cox get together to write an operating system.

Yeah, they'd spend 5 years perfecting every single line of code for the USB subsystem. Meanwhile, nobody cares because the optimizations from those 5 years were completely leap-frogged by hardware improvements, and normal people want an OS that they can use to get shit done.

Sometimes I think the concept of "pragmatism" is entirely absent from this field.

And now that they've graduated, they find the world has moved on and everyone wants C# developers. Ho hum, ever was the way.

See, if they'd been taught computing fundamentals, they'd be able to enter any job and quickly learn whatever language/framework/system that company uses. Every company I've ever worked at had its own frameworks in place anyway, so no matter what you learn in college, you still have to learn something new when you become employed.

Otherwise, if colleges really wanted to teach graduates something that would allow them to get entry-level jobs, they'd teach Word, Excel, Outlook and PowerPoint.

My computer course was maths, but also Prolog, Simula, Pascal, Concurrent Euclid, C, and assembler. I think the idea was to teach a range of languages suited to different tasks. I commend this idea to colleges around the world, though in keeping with modernity, I'd do C (of course), Python, JavaScript, C#, assembler, and OpenGL.

If some college kid can get better results than coders who have been working on the kernel for 20 years, then that's great.

Thing is, that is very rare at this stage of Linux maturity.

Hence fewer and fewer young new developers working on the Linux kernel each year. At this point, most of the new kernel developers who actually contribute are going to be experienced developers from other areas who have decided to work on the kernel, and young developers will need to work on smaller, less complicated projects to build experience.

Linus was able to start the Linux kernel because he was bright and nobody else was doing it. He got it to work, and work pretty well, but it was nowhere near as good as it could have been. Every year since then the experience needed to be able to work on the kernel has grown. This is not some arbitrary level they are setting; as the quality of the code improves, the quality needed in order to contribute to the project increases. Quality code generally comes from experience in dealing with the myriad of programming pitfalls one experiences throughout the years. Linus and the other early kernel developers have simply grown with the project; they are much better programmers than they were when they started out, so they move right along with it.

I still use Emacs proudly. I find big bloaty IDEs like Eclipse get in the way.

Really, the only thing you are saying here is that you like YOUR big, bloaty thing over someone else's big, bloaty thing. There's really nothing insightful about that at all.

You have a set of tools you are comfortable with, and others have theirs. Each has its merits and each has its drawbacks. What is new is not necessarily an improvement and what is old is not necessarily the best. To discard either out of hand on its "whiz-bangedness" or "tried-and-truedness" rather than on its merits is the mark of a fool.

You are confused. One minute you claim that 30 years of long-winded text entry beats tools designed to replace the drudgy or trivial stuff, then you claim you write noddy scripts for toy server duties. A developer, as in a real developer and not some perl dweeb, doesn't touch server admin. So which is it? I suspect you're a low-skilled UNIX package user who's been using packages for your job, but you dabble in code for trainee-level work.

IDEs have a place; they can get the cruft out of the way. If your development time is cheap, you're not a real developer. Get back to your 10-line "scripts".

> A developer, as in a real developer and not some perl dweeb, doesn't touch server admin.

The best C developers I know, are or were admins at one point.

Writing software for servers is a fuckton easier when you actually understand what a server does and what goes on from the admin point of view.

A decent developer who understands administration is FAR better than some hot coder who doesn't have the security insight of a gnat.

You know, at those 'BIG' sites you see on the Internet... Facebook, Wikipedia, MySpace, Google... guess what: all of their senior-level developers regularly play admin as well to deal with large problems.

I'm going to have to wager that you are a developer with no admin experience and little actual development experience, since you don't recognize something that's pretty common.

Recently, I was able to rewrite in several days a piece of SHIT which an 'old school' developer wrote in VIM over half a year. It was absolutely unstructured (all files in the same package), full of commented-out 'print' statements used for debugging, and generally ugly (who needs a debugger?).

Modern development tools _really_ give a productivity boost. If you know how to use them.

That's certainly a big part of it. The Linux kernel is moderately mature, so there's not a lot of low-hanging fruit for a new developer to pick. Things like embedded hardware drivers require special hardware that most people don't have/want/need, which tends to put a damper on people doing work in that space (again, unless they work for MontaVista or somebody). The graphics subsystem requires fairly specialized knowledge that most new devs (and most old devs who don't work for ATI, NVIDIA, or Intel) don't have.

My best guess [and I am not trying to be facetious] is that unless you were in on kernel development in the very early days [so that you had some hope of learning it when it was still tractable], then the thing has gotten so big now [what is it - like 20,000 files which get compiled in the basic kernel?], and the learning curve has gotten so steep, that no new developers have any realistic hope of grokking it anymore.

Seriously - at this point, just learning the kernel would be akin to a 6- or 8-year PhD project [in something like a Department of Archaeology, studying ancient Egyptian hieroglyphics].

> ...the thing has gotten so big now [what is it - like 20,000 files which get compiled in the basic kernel?]

But that's counting each and every file system, each and every architecture and most significantly each and every hardware driver. The amount of code you need to understand to be able to, for instance, write a new network driver, is substantially less than the totality of the Linux kernel source.

Out of ~25k *.[ch] files I count ~9k in drivers alone, plus ~1k in sound. There's ~1.5k in fs and kernel has ~200. Although arch has ~10k only ~700 of those are for x86. Yes, this is a very rough and ready, not to mention incomplete, set of figures, but you get the idea.

I'm not a kernel developer, but I've poked around specific parts of the kernel for various reasons. You do not have to even think about the existence of most of the code to work on a particular segment. Hell, I've created small kernel modules that compiled against a kernel without even having the kernel source on my system.

It might be strictly monolithic in overall architecture, but from a development standpoint much of it isn't meaningfully that different from a modular implementation. Most of the differences manifest at runtime, not at development time.

> My best guess [and I am not trying to be facetious] is that unless you were in on kernel development in the very early days [so that you had some hope of learning it when it was still tractable], then the thing has gotten so big now [what is it - like 20,000 files which get compiled in the basic kernel?], and the learning curve has gotten so steep, that no new developers have any realistic hope of grokking it anymore.

Not to mention the lousy documentation. The kernel docs for Linux are stunningly poor, verging on non-existent --- most of the design appears to live only in people's heads.

I have a project that involves lots of grubby work with the Linux system call interface (<plug> LBW [sf.net], a tool for running Linux binaries on Windows <plug/>). The man pages are of very little use here. Not only do they not go into enough detail, but they're frequently horribly out of date. What futex(2) actually does now bears very little resemblance to the futex man page. I eventually had to resort to groping through the Linux kernel code simply to try and figure out what structures were used where --- and to determine the layout of struct stat I actually had to start comparing hex dumps to find the binary layout (tip: gcc's alignment attribute does not work the way you think it does).

What's worse is that there appears to be very little recognition that this is a problem. Asking on the newsgroups about futex(), for example, I just got pointed at a years-old PDF entitled 'Futexes are tricky'. I don't believe that any proper spec for what futex() does actually exists. Without prescriptive and definitive documentation, how do you know if it's working correctly?

Compare to BSD culture: OpenBSD's man pages are a joy to behold --- everything is documented in copious detail, including internal kernel functions!

Yeah... sorry. In-depth documentation written by a novice is worse than no documentation at all. Not only that, it will take about one hundred times the amount of time to write. Kernel developers should be writing the majority of it. It's simple work ethic, little different from maintaining code quality, and is beneficial for other reasons, such as aiding in design (documentation before code).

I don't quite agree with the AC, but he has a legitimate point: design documentation *should* be written by the developers who authored and understand the design, and not by someone reverse engineering the system after the fact.

When we are talking about a large undocumented system-level codebase, any newcomer is by definition a 'novice', will spend an inordinate amount of time trying to determine *intent* from the code, and will likely get many such guesses wrong. In most software shops, this would be discarded as a horrible idea because you simply have better options.

Still, I'd disagree with the poster that for OSS documentation-by-API-user "is worse than no documentation at all".

Having a novice contribute such documentation could force developers more experienced on that area to review it, encourage them to correct it or even replace it with their own, and at the very least highlight the fact there is a documentation gap.

It's not that bad. If you are interested in getting into it the easy way, here is a very nice book [oreilly.com]. O'Reilly has another good book about drivers, if you just want to write drivers. The kernel is also well organized so if you want to work on the USB section, for example, it is not hard to figure out where to look. I've seen projects with 20,000 lines of code that were harder to understand.

It's true that there are ~30,000 files in the Linux kernel, but 25,000 of those are either driver code or architecture specific code, so if you only care about the x86 and aren't interested in drivers, you really only have 6,000 files you need to worry about. If you are interested in a specific part of the kernel, it is even easier: for example, if you are only interested in the ext3 filesystem, that's around 160 files. Which is very manageable.

> Seriously - at this point, just learning the kernel would be akin to a 6- or 8-year PhD project [in something like a Department of Archaeology, studying ancient Egyptian hieroglyphics].

This is totally off-topic, but Egyptian hieroglyphs are actually substantially easier to learn than modern written Japanese or Chinese, at least for Middle Egyptian, which is the version of the language one usually starts with. (Late Egyptian in many ways devolved into a deliberately complex secret code for the priesthood, but still involves knowing fewer signs than the average Japanese office clerk.)

I only mention this in case you're actually interested in learning ancient Egyptian -- I was put off by its complexity for many years until I started to tackle written Japanese and realized that I was attempting something much more difficult than Egyptian, at which point I attacked Egyptian with fresh vigor. That said, fluency in Japanese will get you further in a Shibuya nightclub than fluency in Middle Egyptian will get you anywhere. ;)

"My best guess [and I am not trying to be facetious] is that unless you were in on kernel development in the very early days [so that you had some hope of learning it when it was still tractable], then the thing has gotten so big now [what is it - like 20,000 files which get compiled in the basic kernel?], and the learning curve has gotten so steep, that no new developers have any realistic hope of grokking it anymore. "

One of my problems with Linus Torvald's "Linux Rules Everything" goal was that this is p

I scrolled down a ways. I don't think anyone has hit on the underlying reason.

About 15 years ago, MS started giving their products away for free, or very nearly free, to the education system. Schools ate it up. 80%+ of all schools in the US teach kids on MS systems. That goes for elementary, high school, and college. The kids learn how to do things the fast and easy way, and the Microsoft way.

Today's young developers learned the MS way, and they aren't about to go wandering into the open source ways of doing things, unless there is some really big incentive. And, the fact is, Linux really isn't high profile and high dollar, like MS.

Also, it would be nice to know whether the total number of Linux developers is staying the same, shrinking, or growing. In other words, are older developers being attracted to Linux, or is Linux just keeping its current developers as they age?

Another bit of data I'd like to see, if anyone out there has it, is where the young developers are going.

Knowing where developers are developing and why would be very useful to many, many people.

would also imply more experienced developers. And that's not (necessarily) a bad thing.

The suggestion that "the Linux kernel no longer has the same appeal to young open source developers that it did 10 years ago" is a statement that shields an assumption: the Linux kernel is losing young developers to other, similar projects.

The article points out the iPhone has had more success attracting younger developers than the Linux kernel, which makes me wonder how anyone possibly thought that might sound anything other than idiotic alongside concepts such as "declining sense of community".

Writing apps for the iPhone is not the same as developing an OS kernel's code base in practically any way. It often draws a furore when I point this out, but all areas that fall under the umbrella name "Computer Science" are not equal. A web developer is not an Apache developer. An iPhone app developer is not an iPhone OS kernel developer (or any other kernel developer). Of course, I'm not making a broad statement that web or iPhone app developers don't have the capacity to be skilled programmers in other areas. I'm just pointing out the obvious: iPhone apps do not have the learning curve and project scope that a kernel project has, and yes, less talented programmers can more easily find a place wherever learning curves are lower and language complexities/nuances are fewer.

10 years ago, there weren't as many developers, nor were there as many platforms, toolkits, IDEs, SDKs, etc. Apple, behind the iPhone, has every reason to do as much as possible to make it as easy as possible for as many people as possible to produce software for their platform (subject to their approval, of course), since available applications are so important to young platforms and/or OS's. The Linux kernel project does not have cause to do the same, and let's hope it never does. The only proper way to judge the interest in the Linux kernel project with young developers is to compare it to how well other kernel projects are attracting young developers. Or, at the very least, compare it to projects that deal with a similar area and scope.

I know a few people who have been turned away from Linux whenever seeking help online from linux users. The whole "you're stupid if you can't figure it out" attitude by some users is really off-putting.

This. You ask for help, and they direct you to a generic one-page man page without the information you requested. You offer a suggestion and they tell you to provide a patch (fair enough), and then when you ask for assistance with making a patch you cannot get any, not even documentation.

If you are lucky enough to actually provide a patch they often don't even want to import it into the main codebase because the feature isn't useful to them or it would just take too much work. All you get is "branch it off."

Frankly and depressingly I find closed source developers to be much more helpful and even willing to accept suggestions and help than elitist open source jerks.

Every time you ask a question, you are asking someone to donate time to you. A lot of people are either volunteers, or they're working for a company with their own priorities and schedule. So turn it around. Why do you expect people to just give you time for free?

If you'd ever seen e-mail after e-mail of someone wanting to contribute something / get into coding on a project, and spent hours of your life (via e-mail) trying to help them hobble along, only to find out that they are completely incapable of doing simple debugging, or sometimes even of interpreting a very plain gcc warning ("It says, variable X may not be initialized." [I glance at the 20-line function.] "What happens if Y is false? What will variable X be set to?" "Oh, good catch!") you'd understand why people are short on mailing lists.

I genuinely want to help people become developers for my project. But I don't have the time or emotional energy to teach basic OS primitives (like, what a spinlock is and how to use it), much less teach people basic debugging skills. Often you'll spend a lot of effort trying to describe something (say, 20-30 minutes writing an e-mail) and the person asking for help will only write 2 lines back asking for more, without any evidence of having spent at least 20-30 minutes trying to get it working themselves. So where I am now is this: spend no more than 5 minutes, and give them just enough hints to get them to the next question. If they manage to sort out how to do X on their own, and to ask the next question, I'll give them another 5 minutes. If they've shown evidence that they're really stuck and have tried a bunch of different things, I'll spend more time, but not more time than I think they've spent.

But the fact is, the vast majority of time, the interaction eventually shows that the person is not (at this point, perhaps ever) capable of contributing to the project. And rarely does the person asking acknowledge the time they're asking me to commit to helping them. I'm a natural optimist, and I naturally love to teach people. So at the moment, hope (plus a handful of positive interactions) keeps me trying, even in the face of overwhelming defeat. I can easily understand why people of a different character come to despise those kinds of questions.

My experience is, if you make it clear that you respect someone's time, and have spent a reasonable amount of effort trying to figure it out yourself before asking for help, people are more than willing to give you a hand.

To be fair, in the real world (i.e. working for a living), if you come across an obscure function written in a "clever" way, it is sometimes a far better use of everyone's time to ask the person that wrote it what it does.

Otherwise you risk missing some subtlety about why it needed to be that way (and not another) and spending lots of time and effort writing a solution to the problem in hand that may work superficially but not in detail.

The Linux kernel isn't fun any more. It's corporate now. It's mature. It has nowhere left to go. We need something new.

I know I shouldn't be feeding the troll (who is showing exactly the kind of attitude I try to avoid), but in case anyone is genuinely confused by what I said before, I want to clarify: I've taken a look at only small portions of the kernel code, and have not (yet) made a concentrated effort to know more about it. The interface complaints are just what I keep hearing from programmers on various tech sites, including Slashdot, every time a story comes up regarding filesystems or somesuch. If kernel development doesn't, in fact, suck, then maybe there needs to be a better PR campaign to get the word out, because I've heard nothing but bad things.

I've seen a lot of promising college-aged open source devs that seem to have an overwhelming reverence and awe towards the kernel, thinking it far too complicated for them to work on with their own programming abilities. In reality, most of them could pick up the kernel and figure it out quite quickly, but they'll never convince themselves of that.

I celebrated my 26th birthday yesterday. Some years ago I was involved with Linux and open source, so I guess that made me a "young developer". I mostly worked on Wine, because it was a technically demanding project with a pretty mature and ego-free set of developers who were willing to tutor me (even though I didn't have a good grasp of C when I started).

I wrote a kernel patch once too. It was a waste of time. The code wasn't too hard to figure out, but the general nature of kernel development with its constant reboots was annoying. And the patch I sent predictably got some snarky comments and then vanished. With Wine, it was clear who made the final call - Alexandre. He wasn't always informative, but when push came to shove you could jump on IRC and talk to him about it. How the hell does one even contact Linus? The kernel project has this complicated structure in which some stuff is "owned" by some guys, but it's never really clear who, and everyone weighs in with an opinion even if they don't own that area. Very frustrating.

Anyway. I long since lost interest in Linux after it became clear that it missed its opportunity to have an impact on the desktop. OS X blew it away, and now computing seems to be moving onto whole other paradigms based on mobile or web operating systems. What motivation is there to do kernel hacking anymore?

Those 4 million lines are spread across about 22 different architectures, a couple dozen file systems, and thousands of device drivers. The actual amount of that which one needs to understand to work on something is vastly smaller.

If you're talking high school and university students.. then yeah.. probably..

If you're talking people working as programmers.. then I think a big part of it is the scarier and scarier policies big (and sometimes even small) dev shops are putting on what people do with their free time.

And if it's not that.. it's the fear of legal action and of who owns ideas and skills. There is often a lot of overlap in what people do at work and what people contribute to at home.. and this is becoming a thinner and thinner line.

Could it be that the Linux Kernel isn't state of the art anymore? Linux is boring... it's bloated... it's no wonder that young blood aren't interested in developing it, they want to do something really cool and cutting edge to light their careers on fire!

Could it be that the Linux Kernel isn't state of the art anymore? Linux is boring... it's bloated... it's no wonder that young blood aren't interested in developing it, they want to do something really cool and cutting edge to light their careers on fire!

I can't speak for the "young blood", being about to turn 40 in a few months and well past the age when I thought "lighting my career on fire" was a worthwhile goal, but I'd certainly agree that the kernel is boring. Part of it is definitely the emphasis on business applications; my interest in free software was always driven by what I wanted to do with it on my own time for my own edification, not to pursue wealth for myself or my employer. An even greater part of it, though, is that operating systems just aren't that damn interesting by themselves as long as they do what they're supposed to do, which is to provide a platform for actual applications. No one owns a computer to run an operating system any more than anyone owns a car to use tires. The OS is incidental to what users (and most programmers) want to use a computer for.

To be perfectly frank -- and to expand the scope beyond the operating system -- the thing that I have found increasingly unattractive about FOSS in general is that it all too often becomes an exercise in cliquishness and faddishness to the exclusion of actually serving users, to say nothing of just plain rudeness. The lkml is notorious for its rudeness (though it's a garden of civility compared to its OpenBSD counterpart). Any number of application projects are focused more on being proving grounds for a particular design methodology and/or programming language of the week than on delivering a good application to end users -- witness the gazillion projects whose name prominently features its implementation language, a detail that only the developers or would-be developers could possibly care about.

The end result is that FOSS projects all too often go out of their way to diminish their value and degree of interest to anyone outside their current circle of developers. Add to this the other common flaws of FOSS -- lack of decent (or any) documentation and poor or eccentric user interfaces -- and it's no wonder that, despite considerable strides over the last twenty years, most FOSS projects, Linux included, remain niche products at best.

Scratching an itch is fine, but when that itch is so narrowly defined as to be your itch and no one else's, no one else can be blamed for not giving a hoot. Follow that with an insistence that it would scratch someone else's itch if only they were hip and smart enough to itch like you, and you have a perfect methodology for achieving irrelevance.

...the thing that I have found increasingly unattractive about FOSS in general is that it all too often becomes an exercise in cliquishness and faddishness to the exclusion of actually serving users...

(emphasis mine)

You've just hit the nail on the head - FOSS comes from a Unix culture, and Unix has never been concerned about the end user. In the Unix world, the System Administrator is the end user, so the entire thing is geared toward making things easier from an administrative point of view. This is why everything is command line based, everything is kept in plain-text config files, etc. Linux obviously inherited this from Unix, and FOSS has inherited this from Linux. Only the rare project like OO.org and others that are plainly and obviously intended for people who are not going to be willing or able to modify the software have any kind of focus on serving the user.

Case in point, look at GUIs in Linux. KDE gives you a billion options, GNOME gives you three. For heaven's sake, is there no middle ground? And neither of them looks as nice as OS X or Windows, though they do now seem to be competing with a version of Windows that is almost a decade old.

Seriously, this is why only nerds and masochists use Linux. Anybody who doesn't feel like spending all of their time tweaking the operating system just uses Windows or OS X. There is nothing Linux can do that either of those can't; am I supposed to torture myself just to save 50 bucks on the price of a computer? Get real.

That turned into a bit of an anti-Linux rant, but it all comes down to the fact that people are going to develop for the systems they use. If more people want to use Windows, more people are going to develop for Windows. Add to that a barrier to entry of 4 million lines of code, and it's no wonder new developers are shunning the Linux kernel.

Probably not. Of all of the open source kernels available, Linux is the one that I'd be least interested in working on. For interesting features and clean design, I'd look at FreeBSD. For code quality, I'd look at OpenBSD. For interesting research type things I'd look at something like Coyotos or HURD (which does exist, is doing cool stuff, can run most POSIXy code, but isn't a mainstream OS), or something more esoteric like SqueakNOS. For something with a beautiful design that's relevant to modern platforms, I'd look at Symbian (nice kernel, shame about the userland).

Linux? It's become the antithesis of the UNIX idea of doing one thing and doing it well. For any given problem, Linux is probably okay. It's probably not the best solution, but it will do, and it has the advantage that it's a workable-but-not-ideal solution everywhere you want to use it. But exciting to work on? Absolutely not. The code is good in places, but horrible in others. There's no overall coherent design, bits are tacked on, different architectures implement the same thing in different ways without bothering with any kind of platform-independent abstractions.

Could it be that, since Linux has become somewhat mainstream, kernel development is considered a "solved problem" by young programmers looking for an interesting project? Maybe new programmers are tackling other open source problems instead.

In a way, but I think it's a case of Linux not being as "sexy" as it used to be. Back in the 90's, when Linux first came out, a lot of its young developers were looking for more of an adult OS to work with. DOS and Windows 3.1 were quite flimsy in the OS department, and even Windows 95 and 98 were only slightly better. For real computing you needed Unix or VMS. Linux offered us a way to use a Unix-like system, and we found that, compared to the Windows and Mac platforms of the time, it was that much better in terms of stability, security and performance; it being free (as in beer) helped too. And so we loved our new-found OS and wanted to support it and make it grow.

However, nowadays consumer Windows runs on the NT kernel, making it much better in stability and performance, even in security; Windows is now a grown-up OS. (Whether it is better or worse than Linux is another debate.)

Also, during the past 10 years, when Microsoft was still stuck with XP, Linux development didn't really do much to get ahead. They had a chance to trounce the evil Microsoft and, in my opinion, they blew it. The timing of the GPL 3 was one thing; for Linux to grow and be more popular we needed TiVoization, and we should have had a bunch of Linux-enabled smart phones. Linux developers didn't understand end users: they made it good for Grandma and for advanced users but left out a big middle. There are still troubles with sound and video drivers; heck, even cut and paste is still a problem.

When commercial development came to Linux, from the likes of IBM, it changed the shape and direction of Linux. It became a cheap way for big companies to run their servers and get some good press at the same time.

Young programmers today are interested in games, mobile devices, Internet-based applications and social media. Young programmers in my day were interested in games, web sites, server-side applications, and desktop software. So my generation was a better fit for Linux than the current one is.

Also, with all the stuff we are bombarded with in the media, getting outraged over whether software is free or not just doesn't seem like a big thing to fuss about anymore. And in a bad economy, kids are focusing on jobs that will make them money. If they want experience: don't waste your time supporting open source, get an internship; heck, you may get paid, and it looks better on your resume.

Employers today don't care too much about open source development. At my place we often use it as a way to weed out people when hiring (and this is from a C-level boss who chose to have Linux as his only OS). Why? Because on the job you have to write code that isn't fun or interesting all the time.

Linux is at a local maximum. You can't really make it much better at being a Unix-workalike general-purpose system in any hugely interesting ways, and if you want to do really interesting operating-systems work you have to go for a radical redesign that breaks with the Unix Way and abandons backwards compatibility.

Could it be that there aren't as many young coders with the skills required? We've been trying for years to dumb down development, and this may be part of the result. Perhaps if the kernel were written in PHP and JavaScript...

Back when that kernel was first written, it was done by a bunch of young coders that learned kernel development by developing the kernel. They didn't have some huge insight right off the bat, they learned from experience, like the rest of us. As the kernel became mature, coding maturity and experience became more of a requirement (in theory) as well.

There are plenty of young coders with a lot of passion, intelligence, and problem solving abilities that haven't been spoiled by the admittedly poor quality formal education system. Are they developing a Linux kernel? No, but they're in the garage tinkering with their language of choice, becoming smarter.

As the field of software development has opened up, there are a lot more dummies that joined the ranks that need their hands held, but that certainly doesn't preclude very smart developers from joining in.

Could it be that the massive code base and declining sense of community from corporate involvement has driven young open source programmers elsewhere?

Nah, they have all just decided to get paid, rather than work for free... (end.sarcasm)

In all seriousness, a lot of the new generation of programmers are starting out in large corporations, as a means to repay student debt, get themselves established - and are able to do that code work in the open-source world, as corporate acceptance and utilization of OSS for application development grows. This, unfortunately, comes with a flipside - those same developers are not available to do the work the hobbyists were doing a few years back, leading to the perception that the OSS movement is losing developers. The movement actually isn't losing developers - more and more of them are just being absorbed by NDA's:)

Either that, or they have all decided to start writing flash games for Adult Swim.

Linux is now mature and nearly unchanging. A young programmer isn't going to be able to leave any mark on it. Mobile is the active space where new things are being designed and developed. In enough time that will mature, and they will move somewhere else.

Contributing to an old and large codebase is much more difficult than contributing to a small one. Getting your head around a large code base is no small task, and documentation is often lacking. Even if the code is well commented, it can be very difficult to understand the overall design of the software and how things interact with each other.

I know from my limited work patching the kernel, this is the biggest barrier for me (and I'm an experienced C programmer). The code is clean. Individual parts are relatively self-documenting, but there's little documentation about subsystems. There's little documentation about why things are done certain ways. Many kernel systems (e.g. network drivers) are part of larger abstracted systems designed to reduce the amount of duplicate logic, but these abstract systems either aren't documented at all or, due to the rapid pace of kernel development, have out-of-date documentation. Furthermore, when people do have questions, they're directed to the kernel mailing lists, which are overwhelming and, dare I say, unfriendly to the new developer. The mailing list archives are littered with unanswered questions and reprimands from older developers to newer developers just trying to contribute.

Some on the kernel development team may like it that way because it keeps out the uneducated and lets them maintain their way of doing things. I think that's also why we're seeing the fragmentation of Linux development, as larger corporations that count on Linux pull the source in-house, where they can introduce new staff to it in a friendlier way. Of course, when they do that, oftentimes the work doesn't make it back into the mainline kernel, so that's really a detriment to the kernel itself.

I'm an intermediate-level programmer, with more than 4 years of practical experience coding C, even more experience learning theoretical computer science concepts, have been using linux for 9 years, AND my top leisure-time activity is to devote time and energy towards learning more about computers and important software systems.

The Linux kernel is super complex. This is not due to poor design (compared to other popular OS's, anyway), but a programmer must still contend with it. The level of uberness one must achieve (still considerably above my capacity) to participate in kernel hacking is intimidating, to say the least.

Documentation, while plentiful, is almost always in ASCII form (vastly inefficient for illustrating things like dependencies or the form and use of data structures) and mostly found in the code. There are decent enough books, but most of those I've read provide a narrow window into individual concepts, as opposed to a bird's-eye view or a survey of the overall architecture. Maybe I'm out of the loop on the best books (pretty likely), but the interest has certainly been there, and every time I've bent towards the possibility of playing around with the kernel, the sheer complexity of the task and the difficulty of finding information to answer my questions has made me shy away towards simpler things. It's not 'too' hard, but it certainly is 'definitely' hard, even for enthusiasts with a healthy mind and great curiosity.

If effort were as widespread in making documentation as it is in making top code, I'm sure many more people would dabble, and talk about it.

25 years ago, when I started, it was a literal *TURN* in technology. We got personal computers (Amiga, Commodore, Thomson, Atari) not only to play on, but also to *program* and show off to others. Heck, on my first PC I cracked Ultima 5 because the disk stopped working, and found out which instruction to NOP to go on (it had a very weak encryption using a XOR key increased by 3 every byte). I digress, but basically many nerds - and by that I mean a lot of nerds, even non-nerds - started programming, got a taste of it, then went on to open source, etc. A lot of oldies from mainframes are also part of that group. Nowadays? *ALL* systems are either closed or too complicated to really get into (remember how easy it was to use CGA, or even later mode 10h?), and among the young nerds I know, not many really start programming. There you have it. That, in my opinion, is alone enough to make people who would be interested in programming less numerous, and therefore fewer young people interested in open source. Naturally I might be wrong and just be a grumpy old man ("it was better in my day, now get off my lawn"), but it looks that way from my anecdotal viewpoint.

Nowadays? *ALL* systems are either closed or too complicated to really get into

It's nowhere that bad, you just have a case of selective memory. Recall "Hello, World!" in Windows 3.1? It's 100+ LOC, with derived window classes done by hand, in C. Remember RS-232? You had to do it *all* by hand, probably in assembly language. Sure, a simple console application was easy - but it's still easy today; it's just nobody wants them.

And recall how marvelously easy it was back then to put together a multithreaded UI with shaped windows and dockable/floating toolbars. Windows 3.x had only one thread for everything, and had no RAD tools; VB was the first one, IIRC - you had to code pixels and create your widgets programmatically. Today - in WPF or in Android - you do that in XML.

What happened is that the inner workings of the computer are farther away, better hidden. Libraries got developed that changed software from an individual, unique, handcrafted project to a generic, common, and heavily automated process. For example, if you need an application that receives a challenge over the network, shows it to you, gets your response and sends it back, it could be put together (not even written!) in minutes. So things changed indeed, but not all of that change is for the worse.

That's all, folks: the youngsters are much more socially connected and skilled than we were at their age; they also get the clue of the social context much better than we did 15 years ago. And what they see is a career in an unregulated domain, totally chaotic, where abuse and overwork are the norm. There are no career-path marks to follow, and nobody can tell you where you're going to be in 2 or 3 years. A continuously changing baggage of professional knowledge is not attractive; its consequence is obvious - your whole lifetime has to be allocated to keeping up.

The dreamland of computing is not there anymore - the harsh reality has taken its place, and young people are not stupid; they want to be able to enjoy their lives normally instead of enslaving themselves to the corporate world. 15 years ago, Linux and Open Source got started with lots of fuel from people holding strong to a beautiful idealism - that is gone. They are not to blame - myself, I have respect for a generation that has the power of dignity and the will to say NO! STOP! This is my life! We should all do the same. If a profession takes away your life - forget it, it's just not worth it.

I wish I had mod points right now; this is the most interesting post I've seen in a long time. It gives credit to social networks and current 20-somethings in a way I hadn't quite considered... thanks. Now that I think about it, this "conscious slacker" attitude is a very green one as well. It fits with current realities (concentration of wealth, resource exhaustion) while turning many of its downsides to advantages.

First, it takes a certain amount of financial security before most people are willing to contribute their time to any effort. I think this is true for everything from the Linux kernel to Habitat for Humanity projects.

Second, this greybeard phenomenon is occurring throughout not only the entire s/w industry, but other technical fields in the USA as well. Not enough CS majors, engineers, scientists, etc. Math literacy is suffering, and practically every company is screaming for more H1B visas. Or just sending the work offshore.

Finally, some of the noteworthy exceptions to this trend (Microsoft, for example. But also many other big corporations) have an ulterior motive behind keeping their staff green. Hire CS grads straight out of college, put them on a couple of projects and get them built. Once your developers start to get some industry experience and a peek at the big picture of the company, they'll start to second guess management decisions. Out the door with them and bring in some fresh meat.

I think it's the fact that students these days are first taught to program in Java, and very few spend any time gaining experience in C. I'm TA'ing a class in database internals this semester, and the class project is to implement a simple DBMS in C/C++; about half the class is having a hard time because they're unfamiliar with C++. (And if you asked them to eschew OOP and program in straight C, there would probably be even fewer people who could handle it.) The skills just aren't as common as they once were.

One only has to remember what things were like with Linux 10 years ago, in the year 2000, to know why the interest just isn't as strong today.

At the time, it had a massive advantage over the Windows 98 platform, which was the common desktop at the time -- it crashed constantly and required reformatting every few months, and was vulnerable to total crap like TCP/IP flooding, running unlimitedly powerful .vbs scripts, typing "con con" into a console, and giving IE basically Admin access to your system through ActiveX. Doing anything from zipping a file to hex editing to writing code to making simple video and sound files required outright piracy and the use of horrible freeware -- friendly, open source, cross-platform apps and web apps weren't common. Winamp was a shining example of a great, free program back then, and it wasn't open source and came bundled with AOL crapware.

Linux, on the other hand was rock solid. It didn't crash, it had anything you needed readily available and installable. Need a web server, an IDE, a hex editor, an image editor more advanced than mspaint, PERL, an audio player, an IRC client or anything else? It was there, no running keygens or installing adware. Same with using existing things like ICQ, IRC, the web, usenet, etc. And they were actually competitive in terms of friendliness compared to what was on the Windows platform. You could also script them no problem from a totally OP command line.

But it was a terrible pain to install for a young amateur compared to just popping a LiveCD today. Have fun partitioning your HD with raw fdisk (cfdisk if lucky) and setting up XFree86 by hand to see any graphics. Try setting up non-PNP ISA devices with screwy drivers -- often you had to go hardware swapping for something specific, like a $10 Crystal Sound card. Try rebuilding the Kernel with an ALSA patch to get that to run. Try not using a packaging system for anything -- RPM was terrible at the time, you were better off just compiling things.

But socially, if you could pull it off, you were pretty elite. You had a solid, invulnerable, insanely powerful OS with every tool you'd want at your hands. It was rebellious against the suits and it had the promise of an open source world. The programming was much better -- OpenGL was way, way easier to write for than DirectX 6, which was just nasty, and was cross-platform to boot. The internet population was far more technical at the time and also respected it. Social networking / multimedia was years away from being mainstream at the time. Anyone who ran Linux wasn't a 'n00b' or a 'lamer' on primitive web forums, Usenet, IRC, etc.

Today? Windows XP/Vista/7 has been comparatively stable and isn't nearly as vulnerable, unless you're just stupid. There's mountains of OSS software out there for every task that runs under Windows, if it wasn't built to run under Windows. No one cares that you run Linux, and will just get frustrated if you can't run the 10% of things a PC can. Ten years ago, the biggest PC game -- Quake 3 -- ran great under Linux, but try getting MW2 to run under it today.

So there's no real motivation to get into it now -- it doesn't have the appeal comparatively it did 10 years ago.

I see it everywhere, in every aspect of life. Back when I was a teenager (18-20-ish years ago) there was still the illusion that you could 'make it' if you pushed hard enough: you know, good education, good career, a decent upper-middle-class life, etc. And it was true back then here too (Turkey). There were many career paths open; there was demand for many high-profile jobs in many sectors. It was good back then.

But naturally, after 20 years, the market saturated. There isn't noticeable demand for high-profile engineering, computing, etc. jobs anymore - not enough to meet the supply that is being pumped out. The youth noticed that as time went by over those two decades. Salaries got lower and evened out; promotions and management positions lessened. They also discovered that not everyone could be managers, or entrepreneurs, and so on.

So they have increasingly let go. They are trying to find ways to 'make it', or to live a life that will not necessitate exerting themselves too much while getting little back.

From what I see, this is no different in other countries in the West: similar situations, as dog-eat-dog corporatism pushes forth and sectors are consolidated, and more work is being done by fewer people. And ironically, the people who are employed are made to work more and more - 20 years ago it was natural for the workday to end at 17.30 or 18.00; now everyone is worked until at least 19.00, even in top-profile jobs. Working on Saturday became the norm, with the exception of Europe - though weekends are still a reality in the USA.

So the youth see these prospects and get disillusioned. No one wants to slave away their life for a pitiful number of management jobs and promotion opportunities, with little time to spare for themselves.

This is a direct result of the system we are in, and it is irreversible for any sector unless the system and our approach change.

One exception, though, is the Scandinavian countries. In these countries, where there are strong reassurances about the future thanks to a solid social security system that has worked for over 40 years, the youth go after whatever they want to chase. And they are productive, too. Leaving aside the ones that go to Africa or similar places to volunteer feeding people for the U.N., there is a good deal of contribution to both the Linux kernel and other open source projects from these countries. There are a lot of web apps that are coded and released open source, too.

The contrast clearly proves history right again: back near the end of the Roman republic, big farm holders consolidated the farm sector by flooding the market with produce grown with slave labor, lowering prices and causing the small farm holders to go into debt. In the end these small farmers had to sell their farms to the big holders and move to the cities. Since they didn't have anything to root for in life, any aims, the middle class of the society wasn't so keen on the country anymore; they just let it go for free bread and circus games. In the end Rome declined and declined in culture, leading to many weaknesses that led to its downfall.

Today is no different. Big companies consolidate sectors and make people work endless hours for slave wages. In the end, the youth either let go, or just refuse to enter the system and become drones in the first place.

Browsing through this thread I can't help but recognise the same old arguments: "It is too hard to use for casual users", "it is no fun", "windows is good enough", "developers want to make money", "the gui is ugly" and so on. These arguments are not new, you would find the exact same arguments if you look at old slashdot stories.

The reality is (as usual) quite different, and the old arguments have nothing to do with the kernel anyway. Look at the latest statistics of who actually writes the kernel: http://www.linuxfoundation.org/publications/whowriteslinux.pdf. From this paper it is clear that the rate of changes has increased quite a bit, and that the latest Linux release probably had something like 1800 different contributors. If you go back 5 years that number is just 400, so the assumption that there are "no new developers" is clearly false. What the first article is really about is that there are "no new subtree maintainers", but that should hardly surprise anyone. The Linux kernel is a huge pyramid (similar to a big corporation in a way): the people on the bottom of the pyramid are not the ones who get sent to the kernel summit, and the people on the top tend to like it there. I doubt that the _average_ age of all the Linux kernel developers has changed all that much in the last 5 years; it might even have gone down a bit, as more of the development is done in China, Japan and India these days.

Linux on the desktop might not be growing as quickly as some might hope, but it keeps growing faster and faster in almost every other market segment. When was the last time you heard about a new mobile phone, set-top box, web service or computer science project which was not based on Linux? Sure, Microsoft and Apple might launch their new products now and then, but they are tiny compared to the rest of the market.

Most software development on Linux is driven by developers - they are the guys in charge. If a UI designer tells them to do such and such, and they do not like it (excuses vary; e.g. "this is dumbing down to idiot level", or "this is not elegant"), they tell him to GTFO. Knowing this, UI designers often don't even bother.

You know, I can hate Linux with the best of them, but.. I know that in Ubuntu, when you are in Firefox, and the first time you browse to a Flash site, it goes, "hey, want to install flash?", and it installs Flash once you click yes.

Yup. Unix was developed in 1969, making it 41 years old. Linux was developed in 1991, and Linux today is a far cry from Linux back then.

To get started with Linux, even old people like me need to know some of the history of XENIX, UNIX, SCO, NFS... Some of those things remain unformatted, text-based, console-style (not even VT100). The GUI is good, but underneath it is still all of those things; that's why Mac OS X hides them all. Linux needs to clean up that history and simplify those things.

I don't know if you are referring to using a Linux distro or to programming the Linux kernel.

I was born in 1988, making me 21 years of age. I've been using Linux since 2001 or so.

My fiancee also uses Gentoo Linux, as I got fed up with supporting WinXP and all the junk that accumulated on it. She's been using it fine for the past few years, running a very similar setup to mine. We are the same age.

I don't do any kernel programming; however, I do various application- and web-level programming. Never anything past user space... and that is simply because that is where my interests lie. I've always been more into building programs that do stuff for me than kernel programming / hardware interfacing (at the kernel level).

I can do so many more things in a lot less time at the command line than with a GUI - even web browsing (love links).

1) You can do a tiny subset of the things you can with a GUI in less time.
2) But the things the GUI can do that the CLI can't, you can't do at all.
3) And the things you can do on the CLI faster, you can still do in the GUI pretty damned fast (assuming you're adept at using one).

When people say "oh the CLI is great, it's all I need", that's a good way to tell that that person doesn't compose music, edit photos, lay out pages, edit video, etc etc etc. If all you do with your life is copy and rename text files, then sure: use the CLI. But that's a pretty sad life.

The Linux GUI is still ugly. There's still a "non-graphical mindset" in the Linux community, and it is totally alien to anybody under 30 (40? 50?).

Eh... Pardon me? What exactly does the GUI have to do with the kernel? Most kernel development I've been involved with, on embedded devices and all kinds of different kernels and microkernels, used a serial console for development - to hardware-test stuff that was written on an emulator - or even wrote directly to something like memory-mapped video (e.g. 0xB8000 on PC-like hardware).

One of the side effects of open source development is that you get a slightly different driver for every device, instead of generic drivers.

This is the case for every OS. Some things are done generically (e.g. an AHCI driver can support a number of SATA chips without much drawback), others that could be done generically are generally crappy when accessed via generic interfaces (e.g. VBE graphics vs. specific GPU drivers), and others are simply impossible to write generic drivers for once firmware services stop (e.g. most SAS controllers, sound cards, etc). WHQL does *not* limit the number of drivers; it simply formalizes the testing and signing process.

WDM does divide drivers into class drivers that handle the common device functionality. You can then develop minidrivers or filter drivers which provide device-specific functionality. This creates a set of generic driver classes and sort of controls the types of drivers you can create, but you aren't restricted from creating monolithic or legacy drivers which don't fall under that model. I'm not sure about WDF; it may be more standardized.