Category Archives: Microsoft

I was asked by a journalist to comment on the NSW government decision to distribute Windows 7 “mini notebooks” across schools. Here’s my reply:

I used to work with satellite networks, providing Internet access to
most of NSW before wired broadband was widely available (and it still
isn’t in a lot of places). We had many rural schools and local
councils as customers. The difficulties of getting computing and
Internet resources to remote areas (with associated infrastructure,
training, etc.) cannot be overestimated.

Firstly, examining this from a business perspective: how is this to be
funded, given that NSW is in a poor financial state and the government
has been axing projects left, right and centre? What alternatives were
considered? How were they evaluated? Was there an open tendering
process?

What matters most is what we can achieve with this programme. Simply
throwing a computer to every student won’t cut it. There needs to be a
clear plan and set of outcomes defined, as you would have with any
reasonable business arrangement. This press release doesn’t touch upon
any of that.

What is the opportunity cost of funding this scheme? Could the
resources have been spent on better facilities for the children or
better teachers’ salaries?

The phrase ‘new era’ implies some sort of major change. Has this been
adequately planned for?

Teachers have a hard enough time keeping up with technology. Will they
be given training and continued assistance?

How will these devices be integrated into curricula? How can they
become effective teaching aids and not just expensive appendages?

Will the focus be on teaching or training? I am a firm believer that
schools should teach children to be clever and think for themselves,
creating the basis for a flexible workforce. They should not simply be
trained to memorise the functions of a particular version of a piece
of software. Rote-learning like that will be worthless when they
graduate and enter the workforce.

Will there be any additional costs required to properly use the
equipment? Are classrooms adequately equipped with appropriate
electrical wiring and capacity to charge all of these? What about
network connectivity? What will it take to maintain the infrastructure
required for these, including hardware and software for servers,
routers and so on?

In fact, there is no mention of supporting infrastructure at all. What
are the costs of the entire life cycle of these devices, the software,
maintenance, infrastructure and so on?

Who will own the notebooks? Will students be free to explore and learn
about their computers, or will they be locked down? Can they install
whatever software they want? Will they be tied to particular
applications and file formats?

There is no mention at all of what software will be installed on these
computers. An operating system without applications is useless. Will
the included software be enough to empower and teach our children?
Have deals been struck with other software suppliers? Will there be
additional costs to acquire the software for particular subjects? Who
bears this cost – the school system or parents?

Has open source software been considered at all? There’s plenty of
open source software that works happily on top of Windows. Microsoft
may have discounted Windows, but did they include an office suite?
OpenOffice would do the job just fine.

Even if you believe the tired-old argument that the state MUST
purchase Microsoft Office for each and every student (which works out
to tens of millions of dollars), wouldn’t it be better to choose
OpenOffice for free, and spend those millions on new library books or
hospital beds?
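The ‘tens of millions’ figure can be sanity-checked with some back-of-the-envelope arithmetic. To be clear, both numbers below are illustrative assumptions for the sake of the calculation — neither the student count nor the per-seat price comes from the actual NSW deal:

```python
# Back-of-the-envelope licensing cost comparison.
# Both figures are hypothetical, not actual NSW numbers.
students = 750_000       # assumed number of NSW public school students
price_per_seat = 50.00   # assumed discounted per-student licence (AUD)

proprietary_cost = students * price_per_seat
saving = proprietary_cost - 0.0  # an OpenOffice licence costs nothing

print(f"Proprietary suite: ${proprietary_cost:,.0f}")  # tens of millions
print(f"Saved by choosing OpenOffice: ${saving:,.0f}")
```

Even with a heavily discounted hypothetical price, the total lands squarely in the tens of millions — money that could go elsewhere.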

I’ll admit that OpenOffice isn’t exactly the same thing (it’s better
in some ways, not as good in others), but it’s so similar that it
doesn’t really make a difference. Is it worth tens of millions of
dollars just to get the Real Thing? Does learning MS Office 2003 in
school really prepare you for using Office 2007 (with its completely
new interface) once you hit the workforce? Refer to my earlier
comments about teaching versus training.

Are they including graphics software for the art and design classes?
Are taxpayers going to have to pay for a copy of Adobe Creative Suite
for everyone? How about we save the hundreds of dollars per student
and use the GIMP and Inkscape instead? Examples such as these abound,
and there are plenty of other open source applications that simply
have no good parallel in the proprietary world.

I find it strange that the country’s largest state would tie the
education of its children to a totally unproven operating system. A
smart purchaser – especially one purchasing at such a grand scale –
would wait until the software had been out for a while and had been
thoroughly tested by consumers around the world. Internal testing is
one thing, but you cannot beat real-world experience.

A point-zero release is sure to have rough edges, and it would have
been far wiser to wait for at least the first service pack like most
organisations do. Can you imagine the fury that would have been
unleashed if the NSW Government had decided to kit out the state with
Windows Vista before its release? Sure it sounded good before it came
out (“The wow starts now!”), but it lost its lustre very soon after
unveiling. Many people today still cling onto Windows XP, and others
have switched to Linux and Mac OS X, in response to Vista’s abysmal
state.

The OLPC Project has already identified and addressed many of the
issues that may be faced. They have done this through developing a
combination of hardware, software, infrastructure, training,
procedures and learning material. It would be wise to learn from their
experiences.

The whole mini notebook revolution started with Linux. Beginning with
the OLPC XO laptop, Linux has proven to be a flexible and capable
operating system suitable for small devices. Its resistance to viruses
and other network nasties is legendary. The last thing I’d want is for
my child’s computer to get infected and start showing kiddie porn.
Anti-virus and anti-malware software are band-aid solutions. I’m not
going to build a castle on a swamp.

Commercially, devices like the Asus Eee PC could not have existed if
it were not for Linux. It forced Microsoft to actually compete for
once, by resurrecting Windows XP and slashing its price to a more
reasonable level.

The press release claims that this scheme is ‘unparalleled in
education globally’. There is considerable risk in being first off the
block. I’ve already explained the risks of using an unproven operating
system. It would be more prudent to learn from other large scale
rollouts in education.

Take the Republic of Macedonia, for example. Despite being one of the
poorest nations in Europe, they are the only nation to have one
computer per student. They achieved this through the use of Edubuntu,
a variant of the popular Ubuntu GNU/Linux operating system that is
specially tailored for education and learning. With that, they got a
vast library of open source educational software, which was all
translated into their native language.

Similar stories abound in places like Brazil, Russia, India and China.
Collectively known as the BRIC countries, they are considered to be
the up-and-coming nations to watch over the next few decades. Their
economies have been growing at breakneck rates, partly because they
have been clever in their investments. These nation states recognise
that education is the key to long-term economic success.

You might say that these countries are poor and that is why they are
choosing to use open source software. It is true that they don’t have
plenty of money to throw around, but does New South Wales? Does
Australia? Where would you want your tax dollars spent?

Bill Gates was interviewed by the BBC’s Money Programme. As he prepares to significantly reduce his direct work for Microsoft Corporation, Bill reflects upon what got him started in the first place and what kept him ahead of the ‘competition’. The video provides a brief glimpse into the character that founded and guided Microsoft. Regardless of whether you love him or hate him, he is indeed a fascinating character.

Skip ahead to the 40 second mark, to the segment titled “How the teenage Gates and his friend Paul Allen got access to a computer”. The story according to Gates was that he and his friends were allowed to hack on a company’s computer “like monkeys” at night to find bugs. He spent hours reading manuals and experimenting to figure out this “fascinating puzzle”. However, they were stuck at the “tinkering” stage until they stumbled across the source code in a rubbish bin. Only then could the monkeys evolve.

I don’t think the producers of the show realised the significance of this admission, since they quickly cut to another segment. Reading between the lines, Gates is essentially confessing that he would not have progressed had he and Paul Allen not found the source code. Without this knowledge, and without this opportunity to understand and experiment with how the internals of a computer worked, Gates and Allen would have been severely constrained in their ability to found a software company and develop products.

I would go so far as to say that Microsoft owes its very existence to this access to source code.

To anyone with a passing familiarity with how things worked back then, this comes as no surprise. Source code was expected to be free, and this in turn nurtured a generation of computer hackers. But whereas Richard Stallman saw the amazing potential of this freedom and wanted to preserve it for all, Bill Gates appears to have perceived it as an advantage for himself that he must deny to others.

Microsoft claim that their UAC security prompts in Vista are designed to annoy you. I’m trying hard to take them seriously and to not laugh them off… but did they really think it’d work? OEMs and users have been disabling it in droves. Other users have probably taught their muscle memory to automatically click the Continue/Allow button without the slightest acknowledgement or thought. I think Microsoft need to get their act together when it comes to UIs. Some of their recent efforts have been frustratingly inconsistent.

A major reason given by Microsoft in their UAC scandal was to encourage developers to avoid privilege elevations as much as possible. A noble cause, especially in the security-inexperienced world of Windows development, albeit poorly executed. It reminds me of Apple’s perpetual opposition to the multi-button mouse. One stated reason is to enforce more ‘sane’, ‘usable’ and consistent UI design, and overall I think they’ve done well. They don’t ban multi-button mice (‘XY-PIDSes’?), but given the simple one-button default there’s less need for them. I might prefer using a conventional 3-button scroll mouse, or even Apple’s own Mighty Mouse (a cleverly-disguised multi-button mouse), but I don’t lose any functionality by not using them.

It goes to show how much the graphical interface can be influenced by its physical input, something a lot of us don’t acknowledge in today’s world of >100-key QWERTY keyboards, multi-button mice and multi-finger touchpads. The real innovation in that space seems to be happening in the mobile and embedded sector, the iPhone being a good example. Players of games on both desktop computers and games consoles might notice the difference in ‘look and feel’ between games designed for keyboard/mouse versus control pad. Particularly for action and strategy games, ports from desktop to console (or vice versa) often aren’t successful. The software was designed with the assumption of particular input devices, and anything that deviates from this will also alter the feel of the game.

Anyway, you can get the video and slides here (the links in the original announcement are no longer functional). It’s been pointed out to me that the slides in the video vary slightly from the PDF, but the difference is minimal. It’s three months old now — so don’t expect any revelations — but it’s still an interesting watch.

Sam Varghese over at iTWire asked me a couple of days ago for input on whether FOSS would be affected if the Windows source code was released. I started drafting a response, expecting to be finished quickly, but the ideas just kept flowing. The end result was a touch over a thousand words! I was expecting Sam to maybe quote a token sentence or two in his article. To my surprise, he basically reproduced (with a little paraphrasing) the whole thing! 🙂

Here is my complete response to Sam. As you can see, very little was left out of the article.

The impact on FOSS would depend on what circumstances the code was released under. Windows code is already available under Microsoft’s ‘shared source’ programme. In this state, you must sign a restrictive NDA to see the code, and after that your mind is forever tainted with Microsoft’s intellectual property. Write anything even remotely similar to the code you were permitted to see, and you leave yourself open to litigation. In other words, taking part in shared source is a sure-fire way to torpedo your career in software.

Microsoft have for years been experimenting to find a licence that they can convince people is ‘free enough’. Fortunately they haven’t succeeded. The danger if they did would be to shift the balance in the open source world away from free software and towards a model that is more restrictive but still accepted. They have enough code to seriously upset the balance, ignoring for the moment the complexity (which also includes legacy cruft, bloat and so on) and hence the difficulty for anyone to actually comprehend the code and participate in development.

Quality (or rather, lack of quality) aside, Microsoft’s code could be useful to see how formats and protocols are implemented. Linus Torvalds once wrote, “A ‘spec’ is close to useless. I have _never_ seen a spec that was both big enough to be useful _and_ accurate. And I have seen _lots_ of total crap work that was based on specs. It’s _the_ single worst way to write software, because it by definition means that the software was written to match theory, not reality.” It’s one thing to have documentation (as the Samba team have recently managed to acquire), but there’s nothing to guarantee that there are no mistakes or deviations (intentional or otherwise) in the actual implementation. The WINE project is a classic example – consigned to faithfully reimplement all of Microsoft’s bugs, even if they run counter to documents you might find on MSDN.

There are many ‘open source’ licences. Too many, in fact. Many of these are incompatible with each other, and a ludicrous volume of them are just MPL with ‘Mozilla’ replaced with $company. What keeps open source strong are the licences that either have clout in their own right or ones which can share code with those licences. The GPL is right at the centre of this, and we should be proud that the core of open source’s superiority is Free Software. Microsoft could try and release code that meets the Free Software Definition but is intentionally incompatible with the GPL, as Sun did with OpenSolaris and CDDL. It remains to be seen whether OpenSolaris will be a success, and I think GPL incompatibility is certainly a factor there (for example, they can’t take drivers from Linux, so its hardware support remains poor). OpenOffice.org, on the other hand, is a prime example of a large proprietary project that has been released under a GPL-compatible licence (LGPL) and has gone on to be successful as a consequence. That success would not have happened if code could not be shared with other FOSS projects, integration could not be made (direct linking, etc.) and mindshare not won (FOSS advocates to write code, report bugs, evangelise, etc.).

The big stinger here is patents. Sun have addressed this in the past with a strong patent covenant, and more recently they’ve been trying to do it properly by, for instance, relicensing OpenOffice.org as LGPLv3 (hence granting its users the inherent patent protections of that licence). Would a mere ‘Covenant Not to Sue’ suffice for Microsoft? In the case of Microsoft’s recent releases of binary Office formats documentation, their covenant only covers non-commercial derivations. Similarly, their Singularity Research Development Kit was released a few weeks ago under a ‘Non-Commercial Academic Use Only’ licence.

It is vital that companies have the same full rights to use the code as non-commercial groups. Otherwise, the code would be deemed non-Free (Free Software doesn’t permit such discrimination). The contributions made by commercial entities to the FOSS realm are immense and cannot be ignored. To deny them access would be a death sentence for your code. Microsoft would be stuck improving it on their own, and in that case what was the point in releasing it in the first place? Don’t malware writers have enough of an advantage?

Don’t trust what a single company says on its own. Novell was for a short while the darling of the FOSS world… then they made a deal with Microsoft. I’m glad that many of us were sceptical of Mono back before the Novell-MS deal, because I sure as hell ain’t touching it now. .NET might be an ECMA ‘standard’, but like OOXML it is a ‘standard’ controlled wholly by Microsoft. Will such a standard remain competitive and open? We’ve seen this in other standards debates, a good example being the development of WiFi. Companies jostled to get their own technologies into the official standard. The end result might indeed be open, but if it’s your technology in there you already have the initiative over everyone else. If Windows is accepted as being open source, Microsoft will continue to dominate by virtue of controlling and having unparalleled expertise in the underlying platform.

To raise the most basic (and in this case, flawed) argument, free software is fantastic for all users no matter what. Free (not just ‘open’) Windows means that Free Software has finally achieved global domination – a Free World, if you will. By this argument, we should simply rejoice in our liberation from proprietary software and restrictive formats/protocols.

Of course, I have already demonstrated that this cornucopia likely will not eventuate even if Microsoft released the Windows source code as open source (even GPL). The software on top will remain proprietary (the GPL’s ‘viral’ nature aside). We’ll still have proprietary protocols and formats – and even digital restrictions management (DRM) – at the application level. In the grand scheme of things, the end consequence on FOSS of Windows source code being released might possibly be zilch.

As promised, Microsoft have released documentation on their old binary formats, meeting their February 15 deadline. I haven’t taken a look yet, but the comments on the article don’t look too encouraging: some people contend that elements are missing or incomplete. It’ll be interesting to see how Microsoft respond to this feedback. Hopefully the kinks will be smoothed out with little fuss. As far as I am concerned, a complete spec needs to cover full formatting, embedding, scripts, macros, formulae, schemas, images, binary blobs, password protection and DRM (and I’m sure I’ve missed some other important stuff too). It should also list exactly which patents are covered, in a manner similar to the Samba/PFIF deal.

Additionally, Microsoft have announced a binary-to-OOXML translator project. How well this will pan out is anyone’s guess. They say that the “project is developed and released under a very liberal BSD-like license (sic)”. IANAL — is this licence GPL-compatible? Could it be used to create a GPL binary-to-ODF converter (using OOXML as an intermediary), that we can embed into applications like OpenOffice.org or Xena?

Obviously these moves are focused on getting OOXML approved by ISO, but I’m also hopeful (though not optimistic) that it is a sign that Microsoft are willing to play more fair with the public and industry. We need to take advantage of this predicament they’ve put themselves in, and pressure them into opening their formats as much as possible. If OOXML is ever going to be approved, it should be so open that it’s no longer an issue. I don’t seriously expect this to happen, so I still hope it fails 😉 .

But standard or not, we’re still going to have to deal with it. Office 2007 has its own variant, lovingly dubbed MS-OOXML by some. The more they open up the format, the more independent and complete implementations there will be, hence there will be more inertia for MS to go with the flow and not deviate any further. Then at least it’ll be a de facto open standard. Maybe I’m dreaming, but it’s at least an interesting theory 🙂

“The Linux community has matured from my university days. … It seems like the linux community has a much more sensible, pragmatic approach now”

“Geeks are geeks, no matter what OS they use. I think this often gets lost in the religious divides and flamewars. All that geek-anger would be much more useful targeting lawyers and investment bankers.”

“The crowd was pretty friendly and they took us out to a Chinese restaurant afterwards. In an interesting act of irony, the FLOSS community paid for our dinner.”

For those wondering about the video, we just have to wait on a few things before we can release it. I’m sure we’ll get this sorted soon, so no conspiracy theories please 🙂 .

I mentioned in my write-up of the Microsoft visit to SLUG that Microsoft are going to release the specifications to their binary file formats. I wasn’t aware at the time that this had already been announced: the specs will be released on February 15. Groklaw has decided to look a little closer at the pledge.

Is this a win for information and software freedom worldwide, or just the next step towards a new stage of vendor lock-in? It remains to be seen, but it does show that our keeping Microsoft’s nose to the grindstone is generating some effect. Don’t stop now, we’ve only just begun! 🙂

A note about the video: it will be released as soon as we are able. We’re at linux.conf.au at the moment, so it’ll more likely be out next week. I’ve currently got 20GB of glorious HD video sitting on my hard drive, which we need to edit and convert to something more Internet-friendly. The transcoding alone will take a while!

This unsurprisingly caused much consternation and controversy within the Australian FOSS community in the weeks leading up to the event, and I (being its organiser, and hence the target of much vitriol) ended up spending much time gauging and responding to the opinions and ideas raised.

We wanted this to be an open community-led Q&A session, and to their credit Microsoft were obliging. Admittedly, I would have saved much sanity and hours of work if people had posted to the wiki as asked, but having to transcribe from the mailing lists to the wiki allowed me to think more about the questions and how they should be worded and ordered. I need no reminder of Microsoft’s transgressions, but I made sure to keep IBM in mind (as a company that was once considered an anathema to software freedom but has now largely reformed) and take an optimistic approach.

Pia was of great help here (as always!). With so many questions and only an hour and a half in which to ask them, we decided to cull the non-constructive, accusative and just plain trolling questions. By the end, Pia had compiled a list that was fairly encompassing of the major issues concerning supporters of competition, technology and freedom.

As I arrived at the venue, I found that our guests had beaten me and were actively helping to get the furniture into place. This allowed us to get better acquainted before the meeting. It was clear (and they openly admitted) that they had been following our open discussion process on mailing lists and the SLUG wiki. Really, they would have been daft not to do so 🙂

I handled the introduction, then turned the microphone over to our guests to introduce themselves. Sarah Bond launched into a presentation on OOXML, in the process answering several of the questions we had on the wiki. I left Pia to officiate most of the meeting, but I chimed in on occasion with both pointed and irreverent questions and comments that were not on the list.

We will be releasing the video of the meeting as soon as we are able, so I shan’t explain its contents too much. Some interesting points though:

In the list of rules for the meeting, I put ‘Asking “Why do you eat babies?” doesn’t help anyone.’ I initially felt bad when I met Sarah and realised that she is pregnant! She was a good sport about it though, and we all had a good laugh 🙂

In her presentation, Sarah mentioned that Microsoft will be releasing the specs to their binary Office file formats in mid-February (UPDATE: it’s confirmed!). I’m still not sure if I heard this one right (it’s a lot to swallow!), so if someone can confirm this I’d appreciate it. They made no bones about this being part of their drive to promote OOXML acceptance.

Not new, but news to us, is the fact that Windows 2003 has a DRM infrastructure which they call RMS, short for Rights Management Services. I did cheekily ask them if the name was deliberate, and their attempts to seriously and politely address the question were priceless 🙂

As with any other SLUG meeting, we went out for Chinese food afterwards. Three of our guests joined us (it’s a shame that Sarah couldn’t come, but being pregnant isn’t easy). Did we have dinner with the Devil? It certainly didn’t feel that way. Once we put our differences aside, we realised that we have an awful lot in common. We are all geeks at heart, and some of the MS people have dabbled, and continue to dabble, in Unix and FOSS technologies such as Python.

Were we successful? It depends on how you look at it. From my perspective of trying to build trust and understanding, without dwelling too much on (but certainly not ignoring) the past, I think so. Asking loaded questions and making our guests feel uncomfortable might have brought some short-term satisfaction to some of us, but would it have achieved anything? There were some inappropriate comments from the audience going in both directions (one of the loudest people actually seemed to be pro-Microsoft), but those people were easily outnumbered by the more sensible majority. My original fears of the crowd devolving into a senseless rabble dissipated rapidly, and I am very pleased and proud of our community for that.

I was initially disappointed by our turnout, but that feeling changed as the meeting progressed. Due to it being January, linux.conf.au being just around the corner (which siphoned off a lot of our best and brightest) and the sensitive nature of the subject matter, we had a crowd that was smaller than expected, but felt more conversational and manageable.

If you were at the meeting, please let me know what you thought of it by posting a comment.

Sarah will be speaking again at LUV on February 5. If you’re in Melbourne for linux.conf.au, it might be worth extending your trip by a few days to see it. I would also suggest that you take inspiration from the list of questions that we have compiled. If our video is out by then, watch it to avoid repeating the questions that we’ve already asked (or pose follow-up questions).

My warmest thanks go to:

the rest of the SLUG Committee (Lindsay Holmwood, Silvia Pfeiffer, Matt Moor, Ken Wilson, John Ferlito and James Dumay), for their support throughout

Intel have for years pushed the line that megahertz (MHz) equals speed. Apple used to call this the ‘Megahertz Myth’. Intel competitors AMD and Cyrix were for many years forced to resort to using a ‘Performance Rating’ system in order to compete. The fact is that computing performance is far more complicated than raw clock speed.

As the marketing droids at Intel gained political superiority within the company in the late 1990s, its architectures devolved into marketectures. The Pentium 4’s NetBurst is a classic example. Unleashed in 2000, in the wake of Intel’s loss to AMD in the race to release the first 1GHz chip, it was widely panned for being slower than similarly-clocked Pentium 3s in some tests. While less efficient clock-for-clock, it was designed to ramp up in MHz to beat AMD in sheer marketing power.

In recent years, Intel have been hitting the limits of their own fallacy. Higher clock frequencies generate more heat and consume more power, and start pushing the physical limits of the medium. You may have noticed the shift in Intel marketing from megahertz to composite metrics like ‘performance per watt’. What they are trying to indicate is that they are innovating in all parts of the CPU — not just the clock speed — to deliver greater overall performance. Through greater efficiencies, they are able to improve performance per clock cycle, whilst also addressing heat and power usage (which is especially important in portable devices and datacentres).

You should also notice Intel’s sudden emphasis in recent years on model numbers (e.g. ‘Core 2 Duo T7200’) rather than just MHz (e.g. ‘Pentium 4 3.0 GHz’). They are trying to shift the market away from the myth that they so effectively perpetuated over a series of decades. My laptop’s Core 2 Duo T7200 (2.0 GHz) is clearly faster than my Pentium 4 desktop running at the same clock speed. Reasons for this include (but are not limited to) the presence of two cores (each running at 2GHz), faster RAM and a much larger cache.
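The gap between those two machines can be illustrated with a toy throughput model, in which clock speed is just one factor alongside core count and per-clock efficiency. The IPC figures below are made-up illustrations, not benchmark results:

```python
# Toy model: throughput ~ cores x clock (GHz) x instructions per cycle (IPC).
# The IPC values here are illustrative guesses, not measured figures.
def throughput(cores: int, clock_ghz: float, ipc: float) -> float:
    return cores * clock_ghz * ipc

pentium4 = throughput(cores=1, clock_ghz=2.0, ipc=1.0)  # deep pipeline, low IPC
core2duo = throughput(cores=2, clock_ghz=2.0, ipc=1.5)  # same clock, higher IPC

# Identical 2.0 GHz clock, yet the Core 2 Duo comes out well ahead.
assert core2duo > pentium4
print(core2duo / pentium4)  # → 3.0 (relative speed-up under these assumptions)
```

The exact ratio depends entirely on the assumed IPC values; the point is simply that two chips at the same MHz can differ wildly in real performance.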

It is interesting to note that the design of the current Core line of CPUs (and its Pentium M predecessor) owes far more to the Pentium 3 than to the marketing-driven Pentium 4.

Now, Stuart makes the mistake of presuming that Intel’s CPUs are not getting any faster since they have not increased in megahertz. Instead of berating Intel for finally being honest, why can’t we praise them? Addressing real performance (not some ‘MHz’ deception), including the previously-ignored factors of power consumption and heat generation, is of benefit to us all.

If there is anyone to criticise, it is the hardware vendors. They have successfully countered Intel’s message by continuing to market their systems using MHz as a key selling point. The general public (and evidently most of the press) are left to believe that computers aren’t getting any faster. Given the convenience of a single number as an indicator of performance, who can blame them?

When end-user experience is taken into account, software developers fall under the microscope. Windows Vista is the obvious posterchild — I’ve seen dual-core 2GB systems that once flew with GNU/Linux and (even) Windows XP, now crippled to the speed of continental drift after being subjected to the Vista torture.

Update: The article’s content seems to have been edited to remove any criticism of Intel, but the sceptical title (‘Intel’s new chips extend Moore’s Law, or do they?’) remains.

Update 2: Now that I have explained that megahertz on its own is only of minor consequence to CPU performance (let alone overall system performance), we can see that it is often not even a conclusive way to compare different CPUs. A Pentium 4 can be slower than a similarly clocked Pentium 3. This inability to compare becomes even more stark when scrutinising completely different processor families. Apple had a point when they trumpeted the ‘Megahertz Myth’ back when they were using PPC CPUs. Clock-for-clock, a PPC CPU of that era was faster than the corresponding (by MHz) Intel chip, often by a considerable margin. Apple countered Intel with benchmarks demonstrating the speed of their CPU versus Intel’s. Benchmark quality aside, their intent was to show that a seemingly ‘slower’ PPC chip could outperform its Intel competition. It is a shame that the promotion didn’t convince more of the general populace.

These sorts of articles come out all the time, and they are always written by people who have not used Linux much and therefore don’t understand how it works and how it is developed. The article is not without merit, but it does display many misunderstandings. Most telling are the omissions — the fact that the real strengths of Linux are ignored and the deficiencies of Windows overlooked. It gives undue weight to proprietary software development and totally forgets about the free alternatives that are available for Linux. And by ‘free’, I mean the proper ‘free as in freedom’ definition, not the tired-old ‘freeware’ misconception that the author makes. As for the antique ‘too many distros’ argument, people only need to use one, and some quick reading would easily narrow the choices down to a small handful, if not one. I personally find the different ‘distros’ of Windows (including WINCE and so on) to be more confusing.

Most Linux people are very well versed in Windows, so they generally know of which they speak. My experience is that many Windows people expect everything to work exactly like Windows, and they complain whenever something is even slightly different, even if it is better. For some reason, they accept crashing, viruses and poor security as a fact of life, and so aren’t attracted to Linux. In fact, it goes further than that: to most people, Windows is computing. Anything else is just heresy.

These critical articles about Linux aren’t new, but they should not be ignored. Linux has many rough edges to smooth out, but then again so does Windows. At the end of the day, it often comes down to people being set in their ways and being afraid of the unfamiliar.

I’ve seen this happen even with Microsoft products: Windows Live Messenger, Internet Explorer 7, Office 2007 (Word, Excel, Powerpoint, but mysteriously not consistently in Outlook) and Windows Vista have been widely criticised for adopting odd and inconsistent interfaces. The first three lack a basic menu bar (each using its own weird alternative), and Vista doesn’t have a Start button (it’s a round circle with a Windows logo). It’s a tech support nightmare. Yet despite the resistance, people force themselves to adapt until they eventually accept them. Some even grow to defend the changes. What possessed people to behave in this way? Is it the marketing, or even the cult of personality that Bill Gates has managed to build, as the article proclaims? We are now in a position where it is easier for an MS Office 2003 user to move to OpenOffice.org than to Office 2007. Why aren’t we seeing this happening more often?

Never underestimate the power of inertia and marketing.

The fact that Linux can prove to be such a great system despite its minuscule desktop market share and lack of resources compared to the proprietary world (which is much bigger than just Microsoft) shows the strength of the free and open source software (FOSS) model. One needs only to look at Mac OS X to see a desktop that is almost unquestionably superior to Windows in every way, thanks in part to its extensive use of FOSS.

By the way, the ‘year of the Linux desktop’ thing is not taken seriously by more established Linux users. The phrase is used mainly by journalists looking for attention, or by more recent Linux users. For everyone else, it’s become more of a running joke, much like Linus Torvalds’ faux ambition of ‘world domination’.