Posts by Robert Pogson

Thin clients can be rockets

John Brown wrote, "We did a deployment for a client a couple of years ago. Upwards of 800 locations around the UK. As we got near the end of the deployment it was visibly obvious at 9am when everyone was logging in that it was getting slower and slower the more offices were converted. Day to day use may have been ok, but the system couldn't cope with the peak demand times."

That's trivial to deal with. It's a predictable slowdown and can easily be solved by installing local caches and faster networks. The designer didn't do his maths. One trivial solution is to remotely set the clients to boot a few minutes before office hours. Add a few servers for log-in peaks. I've seen M$'s OS take 2 minutes to give a usable desktop on legacy PCs. I've seen thin clients open a session in 5s, even less if the session is left in RAM on the server. Every solution has scaling problems. The legacy PC is just about the worst solution for many usage cases. The worst case of legacy PCs I've seen, besides complete failure to boot, was a lady who would go to her desk and fire her PC up so it would be ready by the time she had her first cup of coffee. After that, it took 2 minutes per click for the PC to respond. How convenient/efficient/effective was that? She was amazing. Despite the performance of her PC she still got her work done, but imagine how much better she could have done her other tasks if she was not always waiting, waiting, please-waiting... She wouldn't permit me to install GNU/Linux until she had completed her contract; when I did, the identical hardware turned into a rocket. It was still a thick client, but as a thin client she could have been running software on a brand-new, powerful server instead of on an 8-year-old PC. Until you've seen thin clients done properly, don't state they can't do the job.
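Booting the clients a few minutes before office hours, as suggested above, needs nothing more than Wake-on-LAN. A minimal sketch in Python (the MAC address and broadcast address are placeholders for your own LAN):

```python
import socket

def wol_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN 'magic packet': 6 bytes of 0xFF
    followed by the target MAC address repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet; the NIC with that MAC powers its host on."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(wol_packet(mac), (broadcast, port))

# e.g. wake("00:11:22:33:44:55")  # placeholder MAC
```

Run it from cron a few minutes before 9am over your list of MAC addresses and the log-in peak is spread out before anyone arrives.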

Thin Clients Work and They Save a Bundle

Anonymous Coward wrote, "Thin clients are an IT department's wet dream but terrible for end users. The amount of money you would have to spend on hardware to make the experience tolerable would put a gaming PC on everyone's desk."

That's nonsense. If you are not doing full-screen video all day long (goofing off), thin clients work very well. Nowadays we have gigabit/s LANs and huge RAM, storage and computing power in servers at a reasonable price. Savings realizable in hardware really add up: $50 or so per user in storage, $100 or so in RAM, $100 or so in CPU, and, with GNU/Linux, a pretty sum saved by not paying M$ for permission to use your hardware. Then there's energy consumption: a few watts per station for the box instead of ~50W. It all adds up. The biggest item is not the hardware but the maintenance. A secretary can hand out a replacement thin client to plug in instead of dispatching an IT guy to replace yet another infected PC. Thin clients, being fanless, can last ~10 years with virtually no maintenance except in very dirty conditions.

If you think you need huge powerful machines to run a keyboard and monitor, you're dreaming. Smartphones can do it. Why pay for a huge box full of air and blow-driers when a tiny box will do?

I once worked in environments which did not use thin clients. At one place when there was no cooling and sweat was dripping from my nose, I did the maths. In my room there was 2.5 kW of body heat and 4 kW of PC heat. There was 12 GB of RAM and 1 TB of storage where thin clients would have required 2 GB and none. There were 72 fans running where none were required. There was 1000 pounds of electronic waste being generated every few years. There was 100 cubic feet of clutter near desks. Use your head. Legacy PCs work but they're terribly wasteful and inconvenient. These days, thin clients can be embedded in keyboards or displays, even mice, and forgotten. They just work.

Personally, despite all the numbers, I still think having just one copy of every bit of software in the place is a huge advantage compared to updating a gazillion file-systems. PCs are make-work projects and provide free slave-labour to outfits like M$. If you want to work for yourself, use thin clients. Use GNU/Linux. Don't pay M$ for nothing but permission to use the hardware you own.

Oh, user-experience? That can be better using thin clients because files cached in RAM on the server are instantly available whereas legacy PCs seek all over some hard drive. If you use SSD, thin clients get faster too, if stuff is not in RAM. My users saw 2-5X faster response on legacy PCs converted to thin clients because the last user to load some software into RAM left it there to be shared with the next user. That's performance. Further, if a user did something that would have made a PC's CPU sag, a user of thin clients can have a huge cluster of 64-bit machines at his/her bidding. Besides, many are using web-applications mostly and those are already on the servers... Do you know any user who thinks Google or Facebook are not serving them well? I don't.

Fact-Checking Needed

There are a few obvious errors in TFA:

1) Efficiency — Diesels are more efficient than gasoline engines because of thermodynamics. For a given amount of heat energy released in the cylinder, more mechanical energy is obtained, largely because the temperature of the working gas is higher. It has nothing to do with the nature of the fuel except that diesel is less expensive and can ignite at the higher temperatures found in the engine. Diesel fuel would not burn well in a spark-ignited engine with a 10:1 compression ratio. Diesel engines have compression ratios near 20:1. The air is heated by compression (as in a bicycle pump), and the fuel burns as soon as it is injected into the hot air. The expression for the efficiency has a major factor, (1 − T_exhaust/T_combustion), where the Ts are the absolute temperatures of the working gas. The higher compression ratio raises the temperature at which stuff happens in a diesel engine.

2) Clatter — The clattering noise of a diesel engine has nothing to do with detonation. It's the noise of the fuel injection system. Typically, hydraulic pressure of the fuel from the pump forces the injector open at the right time in the cycle and this action makes a pronounced click.

3) Burn — Diesel engines take in the same amount of air on each cycle and inject a variable amount of fuel into the heated air according to the load. The injection pump controls the amount of fuel by the length of its stroke.

4) Gears — People who drive diesels care about efficiency, and more gears help improve acceleration and efficiency at cruising speed. A diesel engine has a wider range of torque vs. rpm than gasoline engines, simply because of the higher compression ratios and the longer power strokes. Further, a gasoline engine may red-line at 4000 rpm while a diesel engine may red-line at 3000 rpm. High rpm is a great waste of energy in most cases, as the viscous forces in the cylinder/lubricants and gas-flows increase as the square of the velocity. The slowest diesel engines are on ships and may exceed 50% efficiency.
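The compression-ratio argument in point 1 can be checked against the ideal air-standard cycle formulas. A rough sketch (γ = 1.4 for air; the 10:1, 20:1 and cutoff ratios are illustrative, and real engines fall well below these ideal figures):

```python
def otto_efficiency(r: float, gamma: float = 1.4) -> float:
    """Ideal air-standard Otto (spark-ignition) cycle efficiency
    at compression ratio r."""
    return 1.0 - r ** (1.0 - gamma)

def diesel_efficiency(r: float, cutoff: float, gamma: float = 1.4) -> float:
    """Ideal air-standard Diesel cycle efficiency at compression ratio r;
    `cutoff` is the volume ratio during the constant-pressure burn."""
    return 1.0 - (cutoff ** gamma - 1.0) / (
        r ** (gamma - 1.0) * gamma * (cutoff - 1.0))

# A 10:1 spark-ignition engine vs a 20:1 diesel (cutoff ratio ~2):
print(f"Otto 10:1:   {otto_efficiency(10):.2f}")      # ~0.60 ideal
print(f"Diesel 20:1: {diesel_efficiency(20, 2):.2f}")  # ~0.65 ideal
```

The diesel's advantage comes entirely from the higher compression ratio, exactly as the text says: at equal r the Otto cycle is actually the more efficient of the two.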

Efficiency of diesel is not just about consumption of fuel and work done. Diesel engines last about twice as long as gasoline engines, saving fuel, manufacturing costs and much other energy consumption around the planet. TFA should have mentioned that the NOx problem exists in all engines but is worse in diesel engines because of the higher temperatures. Many manufacturers inject urea into the exhaust to decompose the NOx. VW opted to cheat on tests instead. They still make fine cars and I would not hesitate to buy a diesel Jetta or wagon. They manufacture a light truck in Asia/Pacific which I would like to have: efficient, geared for efficiency rather than pulling tree stumps... The real problem here is the bad rep of diesel engines. This situation will not help that, and the world will continue burning more fuel as gasoline instead of diesel as a result.

The right way to use a diesel engine in a car is as a hybrid. These ~2L diesels can be run at optimal efficiency instead of being chained by gears to the wheels. They would perform better in city and highway driving.

Affording Thin Clients

Some anonymous coward wrote, "the extra electricity, cooling and embodied cost of the servers, plus the thin clients, generally outweighs that of the fat client, given sensible power-saving policies".

Oh, the pain... DO THE MATHS...

Compare 20 thick clients in a lab or office using 100W for LCD monitor and ATX box combined. Include the capital cost of a powerful processor, a few GB of RAM and many GB of storage. That's 2 kW to start.

Compare 20 thin clients accomplishing the same thing with a server with 16 GB RAM and 512 GB of storage in a RAID array. The monitor may still use ~30W but the box may use only 10W these days. That's 800W for the clients. The server may run just fine on 200W, for a grand total of 1000W, half as much power. Power costs money. It matters especially if you have to pay to import it and then pay to get rid of it. You can save a few GB of RAM in the deal because so many files are cached, but server RAM may be more expensive, so that's about a wash. Storage, OTOH, is greatly saved because you may need only two or three hard drives instead of 20. When you consider all these savings in operation and increases in comfort and performance, there's just no reason to use thick clients unless some particular application requires it, like video editing etc. Lots of users just point, click and gawk all day long. They get improved performance with thin clients.
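The arithmetic above is easy to script. A sketch using the wattage figures stated in the text; the hours and electricity price are assumed examples, not measurements:

```python
STATIONS = 20
THICK_W = 100            # monitor + ATX box combined, per station
THIN_W = 30 + 10         # monitor + thin-client box, per station
SERVER_W = 200           # one terminal server for the whole lab

thick_total = STATIONS * THICK_W            # 2000 W
thin_total = STATIONS * THIN_W + SERVER_W   # 1000 W

HOURS_PER_YEAR = 2000    # assumed: ~8 h/day, 250 working days
PRICE_PER_KWH = 0.12     # assumed example rate, $/kWh

def annual_cost(watts: float) -> float:
    """Yearly electricity cost for a constant draw of `watts`."""
    return watts / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH

print(f"Thick: {thick_total} W, ${annual_cost(thick_total):.0f}/yr")
print(f"Thin:  {thin_total} W, ${annual_cost(thin_total):.0f}/yr")
```

At these assumed rates the thin-client lab draws exactly half the power and halves the yearly electricity bill, before counting cooling.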

On management: the thin clients may require little or no management. I like to keep a list of MAC addresses and locations. They don't need much more than that. The most work I ever did for thin clients was unplugging the hard drive on converted PCs or setting the BIOS to boot PXE, just a minute or two per machine for the duration. New thin clients last indefinitely with no moving parts. Nobody would steal one because most folks still don't use them and they don't do much without the server. "Fixing" one could be done by the office secretary swapping a unit. Yes, it is easier to maintain one copy of the OS and users' files than 20 copies. Oh, wait. You normally keep the users' files on a server anyway. Why not keep a single copy of the OS there too? Live with that. I've used GNU/Linux thin clients for more than a decade and maintenance was practically zero compared to That Other OS on thick clients.
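Setting the BIOS to boot PXE is about all the client-side work there is; the server side can be a few lines of dnsmasq configuration serving DHCP and TFTP. A minimal sketch, assuming a standard pxelinux setup — the subnet, lease time and paths are examples to adjust for your own LAN, not a drop-in config:

```
# /etc/dnsmasq.conf -- DHCP + TFTP for PXE-booting thin clients (example values)
dhcp-range=192.168.0.50,192.168.0.150,12h
dhcp-boot=pxelinux.0          # bootloader the clients fetch first
enable-tftp
tftp-root=/srv/tftp           # holds pxelinux.0, the kernel and the initrd
```

With that in place, a bare box with nothing on its disk fetches its whole OS from the server at power-on, which is why the secretary can swap units without any setup.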

Understand?

"I am fine helping the Reg earn money off their reader base and getting a little journalistic research into the bargain but a 6<->56 question bait and switch is too much."

The Register provides a great service to humanity. Giving back a bit of time and sharing wisdom is the least we can do, literally. Further, explaining the benefits of thin clients to folks who may be misled into thinking they are "dumb terminals", "laggy" etc. makes my day. I noticed the questions multiplied like rabbits but they were almost always relevant/useful to someone, somewhere. Thin clients are good for the economy, good for the environment (going and coming and in use), good for human spaces, clean, quiet, compact, cheap, and, done right, faster than thick clients. What a lot of folks miss is that a puny thin client is like a frog that can climb stairs. There's a threshold of performance that can do wonders. The network and the servers are the staircase, a marvellous machine that's efficient and provides great leverage. We should tell the world about them.

This crap again.

Extra wrote, "Thin computing will be out of fashion again about the time you realise that it would have been cheaper to buy desktops than pay the outlandish prices that Support, Hosting and lost productive time due to latency and outages. No thanks."

Uh, ever heard of an Android/Linux smartphone? They are essentially thin clients when folks are searching using Google, moving images to/from FB, fact-checking via Wikipedia, navigating using Google Maps and browsing web servers out there. There are about a billion of these sold each year and folks have them glued to their bodies. They aren't going away any time soon because they work for people. People love to have IT that works for them instead of having users/IT people constantly trying to fix what's wrong with their local thick client. Thin clients are simpler devices that leave the heavy lifting to servers somewhere else, anywhere else that doesn't trouble the user. Folks don't want heavy, tangled messes on their desktops, blowing hot air at them. They want little slabs of silicon, plastic and glass that just work for them, the way thin clients do.

Yes, it is possible to set up a system of thin clients that sucks but it's not a characteristic of thin clients but of the folks who don't understand them. I've repeatedly converted thick clients into thin clients and normal users are just amazed at how they get the performance of some newer machine or some server on that old piece of junk that now appears to beat the latest/newest/most powerful thick client. That's real to users. That's what thin clients can do. Real users don't want to be IT people. They want a box that's as reliable as a telephone and just keeps working no matter how fast people move. Thin clients usually have few moving parts and run cooler. That means they fail less often. People love that. People love that they can drop one thin client and pick up another and they are back in business in seconds. The last place I converted thick clients to thin, folks noticed the browser or word-processor springing to life in less than 2s while it took 7s on a thick client. Same for logging in. What's wrong with that? Nothing.

Thin Clients Don't Suck

"the latency between user action & Server sending back the results is a massive PITA"

This is an indication of a seriously flawed installation. Anyone familiar with that phenomenon hasn't used gigabit or 10 gigabit/s NICs on the server. A good server might have 4-6 such NICs, leaving almost no lag in the pipes for the usual point, click and gawk of typical desktop usage. Consider Largo, FL's setup. They have a few humongous servers, each running one or a few services, like one for the browser, one for the session, etc. There's very little lag because every file needed to do anything is almost certainly cached in many GB of RAM on the servers. They literally have hundreds of users simultaneously on those machines and they are not maxed out in any way. The capital cost per user on the servers is of the order of $100, so it's very economical. All the resources are where they are needed rather than wasted sitting idle all over the system.

If you really can't stand a few milliseconds waiting for a character to appear on the screen, learn to touch-type. Then you will know what will appear before it appears, a perfectly satisfactory situation.

I've been using Linux Terminal Server Project for ages and it performs much better than That Other OS on a thick client. e.g. You log in... With XP, the hard drive seeks for 30s or so to load needed files and the desktop appears but may not be usable for another 30s or so. With LTSP, most of those files are already cached in RAM on the terminal server because umpteen other users are already logged in and it's just a few seconds to display the user's desktop. I once had a user fall off his chair because of this difference and his habit of leaning back in his chair waiting, please-waiting for XP to show up. We had a good laugh and carried on.

I'll grant that full-screen video may suck on thin clients but there are many users who don't need that to maintain the database, deal with correspondence, plan the budget, ... It's not difficult to run a few local applications on the thin client if video is involved, unless you need to transcode/render/etc., in which case you almost certainly should have a thick client or better, a cluster of thick clients like the big boys and girls use.

I'll give a prime example of where thin clients work and thick clients don't. I was in a room with 24 PCs operated by 24 humans and sweat was dripping down my nose. It was 40°C in the room because the air-conditioning was not working, and each body shipped ~100W of metabolism and each PC shipped ~150W of electrical power. Add that up... 2400W of human power and 3600W of electrical power = 6 kW! Further, each thick client had 512 MB RAM and 40 GB of hard drive totally wasted. The only reason we needed 24 copies of the OS was so M$ could make more money. One copy would do with GNU/Linux. Imagine what could be done on a terminal server with 12 GB RAM... We could have reduced the electrical power to 50W or so per seat, cutting the waste by 2400W and doing everything on a server running 400W with just a couple of hard drives. Capital cost could be cut in half easily and performance improved with much less work than installing 24 copies of an OS.

I had a different experience on Debian

Actually, my booting was slowed. I ran a bunch of services on my desktop system: MySQL, PostgreSQL and Apache. The default behaviour of Debian was to start things in parallel, and X came up when it came up. With the move to systemd, every service had to be running before X was started. That was like waiting for every server on the Internet to be up before I could get a working desktop... It took twice as long to get a working desktop, just like the days of XP...

It turns out I had to tell Debian's sysvinit scripts not to run my services and to tell systemd to start those services after my desktop was up. Otherwise, any tweak I made to systemd was ignored, as systemd created virtual configurations based on the sysvinit scripts. This was not really a problem of systemd but of Debian's implementation of it. On Debian, systemd generates a bunch of *.service files based on what it finds in the sysvinit stuff. Many hours of my life were wasted finding this, which was revealed to me in a discussion on the web. I hope Ubuntu, as the flagship consumer distro, does better. A lot of people run databases and such to support complex applications, and this will slow their booting.
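Deferring a service until the desktop is up can be done with a small systemd drop-in, leaving the generated unit itself untouched. A sketch for a hypothetical mysql.service — the drop-in directory and `[Unit]` directives are standard systemd conventions, but verify the unit name against your distro:

```
# /etc/systemd/system/mysql.service.d/after-desktop.conf
# Drop-in override: order the database after the graphical session,
# so X no longer waits for it at boot.
[Unit]
After=graphical.target
```

After `systemctl daemon-reload`, the service still starts automatically, just later in the boot, which is exactly the behaviour described above.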

Re: progressively more disenchanted

Amen. How is any salesman, teacher, techie, OEM supposed to play for the team when no one knows what's going on? I love to search for data. I hate to search for applications. Is that so hard for Shuttleworth to understand? I have a monitor a mile wide. I have absolutely no use for an OS designed for a smartphone here.

Sometimes, but not always. I've heard of Hitler, Mussolini, Idi Amin, Gaddafi and some others who had plenty of vision but were just headed in the wrong direction. Shuttleworth seems intent on solving a problem that doesn't exist while ignoring the elephant in the room. The elephant is that the only thing blocking more widespread adoption of GNU/Linux is retail shelf-space. Canonical had made great progress there before Unity came to be. The GUI is not the problem. Trying to stuff a desktop into a smartphone is.

Re: Mark, you keep punching that straw man...

The issue is not just about developers but OEMs and ordinary users. I don't know any who share Shuttleworth's idea that Apple and M$ are absolutely wonderful. In fact, many OEMs and retailers are despairing of selling Wintel and are seeking an exit. When the battle is almost won, Shuttleworth seems to be disbanding his army.

Where has he been the last decade? Even my toddler granddaughter can run XFCE4 on Debian. If she doesn't need Unity, who does? Shuttleworth should stick to supporting OEMs and not designing software. What's Unity as a fraction of GNU/Linux, 1%? Why does he feel so important? He's not and the world will move on without him. While millions of PCs are shipping from OEMs with Ubuntu, millions of others are shipping with other distros and the hobbyists still visit Distrowatch.

Net Applications Data Has a Bias to Business Usage

Net Applications shows California has 9.64% GNU/Linux share. If you ask for California - Sunnyvale you get 2.93%. The difference? Google's 10K employees switching to GNU/Linux a couple of years ago. We know there are whole school divisions and others using GNU/Linux but they count as nothing because they are not businesses connecting from business domains to business sites during office hours. There are more than 30 million people in California. Is that bias or what?

Dalvik is not a Java virtual machine

Despite Oracle's claims, Dalvik is its own virtual machine running its own bytecode, not Java's. The apps that run in Dalvik are usually written in the Java language and cross-compiled/translated into Dalvik's bytecode. Oracle has no patent on virtual machines, so they have no basis to complain about Dalvik.

No licence is required to run your own virtual machine. Everyone does it and it is no business of Oracle's.

What's with all the angst about thin clients ?

I have been using small cheap thin clients for years and the performance is better than most desktop PCs: logins 5s, open big app window < 2s.

As for cost, what's the cost of 100 hard drives versus a few big ones on a server? Power consumption? Freight, space, whatever the measure, thin clients have a better price/performance ratio.

It must be that you anguished guys are using that other OS. GNU/Linux rocks with thin clients largely because most files users need to click something are already cached in RAM which is a million times faster than hard drives.

This VDI stuff, where the OS and all the data are sloshed over the LAN to get any work done, makes the network a bottleneck. If you use good old LTSP and such, the data and the application are together, with no latency.

Thin Clients are Suitable for a Wide Range of Applications

TFA: "thin client solutions are not appropriate for a wide range of business activities"

Not so. Only full-screen video, with its large bandwidth requirements, is a no-no for thin clients. Even then, full-screen video for a few users on a server can be done. For most other uses, bandwidth actually is less for thin clients than for traditional PCs with files on the server. It is much less effort to move a little text and a few pictures over the LAN than a bunch of data files. Thin clients may increase the average or minimum load on a network but the peak loads will be much less.

We can also look at the functionality of the working parts of a PC to see the waste. Besides energy, look at the wasted expenditure on hard drives. If you have 100 PCs with 100 hard drives instead of 100 thin clients with far fewer drives on the server, the advantage is obvious. Same for CPUs. Why have 100 powerful CPUs idling on thick clients when you could have a low-powered CPU working reasonably hard on the thin client and a few powerful CPUs working hard on the server? Thick clients just make no sense. The presumption should be that all client machines will be thin unless there is a particular reason to go thick.

Moore's Law will allow us to push more processes into the server room, but nothing will ever recover the resources wasted on thick clients. You can break even on the energy cost of a changeover to thin clients in a year or two, and there are immediate returns on investment in lower maintenance and longer life so solid that this technology should be the norm.

Where I work, the cost of old PCs is so low that we do not buy new thin clients but the performance increase obtained by using old PCs as thin clients of fast new servers is all the justification I need. My users boot and login twice as fast as they used to do with thick clients and I have almost no work to do to keep them running. It is a clear win.

Exactly!

Where I work we have a lot of good PCs 6-8 years old. We had one power supply fail out of 80 PCs. They are a bit slow running XP so we put GNU/Linux on them. We bought a few new PCs and put GNU/Linux on them. When we want performance we use the old machines as X clients of the new PCs, so everyone has a piece of large RAM, fast CPU and fast storage. They make excellent thin clients and we only need to upgrade a fraction of our PCs to stay current.

The "custom" of changing PCs frequently is incredibly wasteful but very profitable for Wintel.

Bloated Office

Only M$ has billions taken from locked-in buyers and can afford tonnes of research into what cute, useless features will amuse users. Only M$ has made an office suite a conduit for malware and a burden to open standards everywhere.

No Problem for Me

If you run on the old-fashioned thick client, you get what you deserve. I use a GNU/Linux terminal server because I like to share... My OpenOffice.org binaries are already in RAM when a user clicks on an icon and, using the shared-memory features of a 'NIX OS, everyone gets to use the single copy, so my window pops open in 2s or less. My login to a useful desktop is only 5s, unlike that other OS.

Eat your heart out, folks. This is the 21st century. Do your computing the right way and you can enjoy the benefits of modern hardware.

Go GNU/Linux

I work in places with lots of old systems and no record-keeping for licences. When that proprietary stuff needs to be re-installed, I just replace it with FLOSS equivalents. It simplifies my life greatly. If a certificate of authenticity is missing from a PC, I replace the OS with GNU/Linux as well. I am instituting backups/imaging/record-keeping so this may be less of a problem but FLOSS is much easier to manage. I only need one image of GNU/Linux that will run on all our PCs. I need four images of XP. When the time comes to kill XP, I can use the imaging system to deploy GNU/Linux in an evening. This proprietary stuff is too much work. I want to work for my employer, not some software vendor.

Virtualization is Big

It is a major tool for consolidation of servers and it is magical for thin clients, a very successful form of virtualization. Thin clients really save capital cost of equipment and power consumption because one server can run hundreds of $250 thin clients each using 30 watts or so. Where I worked last year, they could run all their services on one or two machines with virtualization. The saving in hardware and power consumption and space would have been huge. If you get improved security from virtualization it is a huge plus. We do not need more servers as much as we need more services. Virtualization does that very well.

Thin Client Terminal Servers

In this area, virtualization has been paying off handsomely for most workloads: point, click and gawk. I can reduce service calls on hundreds of PCs while concentrating on a few servers. The servers can be fire-breathing dragons with all the modern resources users want while I do not have to run all over the building, finding keys, fitting into schedules and such. For most loads, the load on the server can be large while the system stays responsive, and the end user gets better performance than they could on the usual thick client with slow discs and per-user malware fighting.

The folks who want to cling to XP longer are going to love virtualization. They can protect the virtual machines using state of the art stuff and use XP indefinitely. The folks who migrate to GNU/Linux are going to be in for a treat. You can run many more users on a GNU/Linux terminal server than with that other OS.

Glad To See So Many Posts

It means Ubuntu had lots of installations.

Mine was pretty smooth except my virtual machine did not have a virtual monitor so X was stuck in 800x600 by default. Supplied an /etc/X11/xorg.conf and it is beautiful. No other problems. Writing a book with LyX at the moment. Very nice.

Use Thin Clients

A lot of the advice given above applies to thick clients and servers but seems to ignore the huge benefits of thin clients: lower power consumption, footprint, noise and MAINTENANCE. Fanless thin clients will run until the screen resolution has evolved past their limit. They last several times longer than a thick client and avoid the need to walk around. They also cost less to buy because there is less material in them.

Once you are using thin clients you can concentrate on the servers and do whatever you want to keep them updated. Because one server can operate many thin clients this is a much lighter task. The entire system has much better performance because a server can specialize in some application and be tuned for it. You can cache all the files in RAM, for instance. You can use huge RAID or SSD to boost performance, too, at much less cost than optimizing the many clients.

Use thin clients. The savings are huge. I like GNU/Linux on thin clients and terminal servers to save on per-seat charges for licences.

GNU/Linux on Thin Clients

If a PC is five years old or older, consider using it as a thin client. Replace all dead machines with new thin clients. Use the simple X window system for all thin clients on the LAN. Replace all ten-year-old machines that have managed to survive with new thin clients, because their hardware is becoming hard to drive.

Replace/repair/update all servers every few years. That is where the performance is kept.

With this recipe we get state-of-the-art performance at least cost and we do not have to do anything special because Wintel wants it. We may start using ARM thin clients this year.

Don't Even Try

If you approach bosses as an extortionist you should be fired. You can keep records of failure rates but until the boss's PC dies, he will not care.

A much better approach is to persuade bosses to convert to thin clients for half the cost of an upgrade of thick-client PC hardware. Then you can upgrade the software on a few servers and let the thin clients last a decade, with fewer problems with bosses. It is much easier to persuade bosses to upgrade/consolidate a few servers every few years than hundreds or thousands of thick clients, and the bean-counters will be impressed with reduced power consumption and maintenance cost.

This recipe is doubly cost-effective if you can run GNU/Linux on the thin clients and terminal servers, as there are no per-seat licensing costs. There may be a few apps or users that are difficult to move this way, but the overall system will be in much better shape operationally and financially using a mixture than using the expensive solution everywhere.

Corrections

TFA has a few things wrong. GNU/Linux is in double digits whether people know it or not. M$ admitted to 7% in Steve's presentation to analysts. M$ is not even counting GNU/Linux thin clients accessing M$'s Terminal Services. 10% of the world's PCs are thin clients and a lot of them run GNU/Linux. The truth is closer to 10%. GNU/Linux passed MacOS around 2003 and has had a good rate of growth since. If the reader doubts this, answer this question: "Why did M$ subsidize XP to the tune of $2 billion if GNU/Linux was not breathing down their neck?" No answer? I thought not. Not many ARM netbooks are running that other OS. GNU/Linux works on them and they are ramping up production. The usability issues are gone on a well-configured OEM installation. The eeePC showed that. Many others were snapped up by consumers. The current low attachment of GNU/Linux is due to M$ being willing to forgo profit to keep the monopoly a bit longer. They will no longer be able to do that after Christmas. The armies of ARM netbooks will take a bite, and M$ will not run "7" on an ARM netbook.

TFA completely ignores the fact that GNU/Linux is on fire in the BRIC countries, where hundreds of millions will soon buy netbooks when the price drops just a little more. Once ARM production satisfies the need, no one will be laughing at GNU/Linux any longer. Some of the new ARM chips will give Intel a run for its money on notebooks, desktops and servers in a year or two.

Change is happening. Claiming it is not happening may make some people feel better but it is a lie. The monopoly is ended. M$ does not have enough money to buy everyone off.

GNU/Linux does not need to succeed on the desktop but it will happen because people want inexpensive PCs etc. that just work. M$ has failed miserably to fill that role. People do not need DRM, phoning home, and malware. Those make money for M$ but rip off the end-user.

GNU/Linux Can Specialize on Everything

Really. We have 200,000 FLOSS projects. We are much bigger than M$. They have more salesmen than coders.

The world of IT is worth much more than M$. They bought out netbooks and it cost them $2 billion. Let them buy out ARM netbooks. That will cost them another $2 billion. As many billions as they have, they cannot afford to buy us all. M$ cannot provide free IT to the world and still be a ripoff monopoly. By the time "7" sorts itself out, M$ will be a normal corporation scrambling to survive.

2009 is the year of GNU/Linux on the desktop. By the end of 2010, M$ will be struggling for share on the desktop. Virtualized desktops can access web applications and do everything without all of M$'s baggage. Red Hat and IBM see that and will sell lots of GNU/Linux desktops to business. OEMs will sell many GNU/Linux desktops to emerging markets like the BRIC countries.

Ask yourself why M$ is bothering to slander GNU/Linux with major retailers. They are trying desperately to hold onto their last stronghold. Everyone who has seen a GNU/Linux netbook will know a lie when they hear it. The jig is up.

Steganography

The region of the images involving the balloons contains a steganographic message for the Technological Evangelism department of M$ in each region. In the USA no evangelism was thought necessary. They are pretty thoroughly locked in. In Germany, the message was too large so they needed a larger region, hence the different image. Munich has finished buying M$'s licences/protection fees so the team was dealt a scathing reminder of the consequences of failure.

Lots of Paper Products but no Ass-wipes?

GNU/Linux is Bigger Than MacOS

TFA is very informative but puts illegal copies and MacOS ahead of GNU/Linux as competition. GNU/Linux has more than double the share of PCs that MacOS has. GNU/Linux does not have to improve to grow market share on the low-priced PCs. Emerging markets and newcomers to the PC market are very price-sensitive and GNU/Linux is the clear winner on prices. Do not forget ARM, where M$ is not a player.

In Russia and through a few OEMs, consumers are offered a clear price for GNU/Linux and can choose it. For cost-conscious businesses and schools moving to thin clients, GNU/Linux is already a good choice. That is 10% of PCs, about 2% of production. GNU/Linux is attacking on several fronts.

The big takeaway from TFA is that consumers will not be able to understand the pricing scheme. People like simple choices. This is not simple. GNU/Linux is simple. It is the same price whether you run a mainframe or a netbook, $0.

Two Out of Three Isn't Bad

Larry got it mostly right. Thin clients are 10% of installed base of PCs and they save users a bundle while giving improved performance on GNU/Linux or UNIX systems. The Java thing did not pan out but we have everything we need: speed, power, low costs.

More than 80% of users of PCs could use thin clients very well. They work well in place of the desktop so about half the market for PCs could go thin eventually. I would use thin clients anywhere folks did not need mobile computing or video/heavy graphics. Browsing/editing takes care of most tasks in business, government and education.

Immunization is Like That

You get a tiny pain but long-term protection.

I have migrated many groups. The small ones are easy. You can hold their hands. The large ones are more difficult. You have to explain logins and issue passwords to everyone and you cannot meet most of them.

I like the approach of Extremadura. They swooped in on a weekend and the users had new desktops to work on Monday. Sweet. I will bet that caused consternation but the improved performance afterward makes it acceptable. One needs to give the end-users reasons for the change and show them immediate benefits. It really does help to issue new monitors/keyboards/mice at the time of migration. The end-user does not expect everything to be the same on a new system.

You can go to the opposite extreme like Munich where from conception to finishing the migration is taking 8 years. The equipment will need replacement about the time they finish... chuckle...

Somewhere in the middle is the best compromise between cost of migration and pain. However it is done, it will pay off sooner or later. I like to go with thin clients and GNU/Linux. Often the payoff is instant because the cost per seat is about half. One can either double the number of seats for the same money or buy lots of toys with the savings. It is all good. Many of my clients go from XP on 5-10 year old machines to GNU/Linux running on a quad-core 64bit server and really appreciate the improvement in performance. They do not appreciate switching to that other OS next release and slowing down...

Moore's Law Does Mean Prices Fall

If the number of transistors on a chip doubles every 18 months or so and the cost of producing a chip does not double, the cost per transistor drops. In a competitive market, the other guy can use this to drop his price per chip so you have to drop yours, unless you have a monopoly. Intel made sure they retained their monopoly, illegally.

Moore's Law permitted going to larger wafers and multi-core chips. More transistors per mm^2 increases the incentive to put more on a chip/wafer. It does lower costs/prices per chip.
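The cost-per-transistor argument above can be sketched numerically. A minimal sketch with made-up numbers (the chip cost and growth rate here are purely illustrative, not real fab figures):

```python
# Illustrative sketch: if transistor counts double every 18 months while
# the cost of producing a chip rises only modestly, the cost per
# transistor falls each generation. All numbers are hypothetical.

transistors = 100_000_000      # transistors on a chip at generation 0
chip_cost = 50.0               # assumed cost to produce one chip, in $

for generation in range(4):    # four 18-month generations (6 years)
    cost_per_transistor = chip_cost / transistors
    print(f"gen {generation}: {cost_per_transistor:.2e} $/transistor")
    transistors *= 2           # Moore's Law: transistor count doubles
    chip_cost *= 1.2           # production cost assumed to grow slowly
```

Even with chip cost rising 20% per generation, the cost per transistor falls by nearly a factor of two each time, which is the competitive pressure on prices the paragraph describes.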

Did anyone else notice that when AMD temporarily had a monopoly on AMD64 chips, the prices for their top processors were a bit high? That was a legitimate monopoly. They innovated. Intel innovates but also bribes folks to retain monopoly. There's the rub. If Intel's products were better than AMD's, Intel would not have had to do that. No one would want to sell AMD chips.

Install GNU/Linux Once and You are Good

@Gordon Ross

"cheaper alternative, in my experiance, has been to re-install Windoze every 6 months or so. It's surprising how just a Windoze re-install will improve the speed of a PC."

Yes, I have seen this too. However, if you install GNU/Linux you can run for years with better performance and no slowdowns. Last night I installed Debian Lenny GNU/Linux on an old PC. It was loaded with undetectable malware which was clogging the uplink with something: download volume = upload volume. Download speed changed from a few kilobytes per second to more than 100 kilobytes per second and the desktop was very snappy. The owner should be able to run for years on that 1.4 GHz CPU.

A few months ago, I upgraded the motherboard and hard drives on my personal PC. I did not need to re-install, just copied all the files to the new storage systems. The OS was Debian Sarge GNU/Linux installed in 2004. To upgrade to Etch, all I needed to do was type apt-get update; apt-get dist-upgrade after switching to the new repository. Debian refreshes itself. No need to do much every 6 months except ask it to do so. Further, when you re-install that other OS, you need to re-install drivers and applications separately. With GNU/Linux, if all your stuff comes out of a repository, the package management software does it all.

If time is money, the time you save on re-installing and re-re-rebooting is worth far more than the cost of a new PC when you run GNU/Linux.

Fuel Cell

A liquid fuel and a fuel cell system can easily go 100 km. One can make methanol, for instance, from biomass pretty easily. I do it as a science experiment for kids (using a fume-hood, of course). Ethanol or any liquid fuel rich in hydrogen can work.

The Good

It is good that IT is an industry dedicated to doing things faster, better and cheaper. The bad and ugly are present but we can work around them. If you look hard, you can always find someone selling what you need instead of what the bad guys want you to buy. By making the right choices, we can cut our cost of operation by a factor of two or three and increase the useful lifetime of equipment and software. The bad and the ugly can only stay in business if we let them push us around.

I Say It's Past Time The World Dropped M$

It is closed-source software. Only M$ can fix what they made. They won't fix it in a timely fashion and there are too many vulnerabilities altogether. Depending on M$ to provide software for the world's computers is foolish.

It is time to go with FLOSS. At least with FLOSS we can fix anything that goes wrong and because the software is better designed there are fewer vulnerabilities.

CVE-2008-1436:

"Microsoft Windows XP Professional SP2, Vista, and Server 2003 and 2008 does not properly assign activities to the (1) NetworkService and (2) LocalService accounts, which might allow context-dependent attackers to gain privileges by using one service process to capture a resource from a second service process that has a LocalSystem privilege-escalation ability, related to improper management of the SeImpersonatePrivilege user right, as originally reported for Internet Information Services (IIS), aka Token Kidnapping."

For Pity's sake. They cannot get the basics right and then bury stuff ten levels deep in a GUI. It's a house of cards. Get out from under it before it falls on you.

2009 is the Year of GNU/Linux

GNU/Linux is on a roll. There is news that the French have realized huge savings with it. The trolls on the GNU/Linux boards are being drowned out by the weight of rational users of GNU/Linux. M$ is laying folks off and selling more vapour-ware. The recession is forcing many to opt for lower cost after waiting out Vista. Thin clients are steadily advancing and GNU/Linux works well on them. The netbooks continue to be a bright spot. BRIC countries are hardly slowing down in their adoption of GNU/Linux. Large businesses and the retail buyers are about the only customers M$ can rely on these days and the retail customers are seeing some GNU/Linux on the shelves.

The recent study by IDC shows a huge shift in sentiment. A couple of years ago KACE did a similar study. These results are consistent with those and show that GNU/Linux has matured on desktop and server. With virtualization/thin computing GNU/Linux continues to shine. By the end of 2009 there will be more than 10% of desktops using GNU/Linux. Some parts of the world will be at 20%. At these levels of adoption, few on the planet will not know about GNU/Linux and everyone will be able to make a choice. The current regime where M$ pays OEMs to install that other OS is not sustainable. If M$ cannot stop the slide in 2010, they will have lots of downside. M$ is effectively cutting its prices now, but hiding the fact with NDAs. The SEC filings continue to tell the tale.

Good Reason to Use Free Software

Free Software is distributed under generous terms, including making copies and redistribution, so allegations of doing so are moot. These terms mean a low or no licence fee is usual, so it costs less, too.

It Actually Saves Money and Increases Performance to be Greener

Any organization with multiple computer seats can be a lot greener by using thin clients. Thin clients are generally good for ten years of use, two or three times the life of a thick client with all its moving parts and high maintenance. That automatically cuts down negative environmental effects by a large factor and gives the benefits of reduced maintenance, lower power consumption, less noise, no dust collection, and a smaller footprint, all of which save money and increase productivity.

On the server side, consolidation and virtualization will do more than any change in manufacturing technique.

For software, using FLOSS saves a lot of money and increases the lifetime of equipment because FLOSS is not in the conspiracy to force upgrades. Money saved on licences and longer lifetime of equipment can be invested in other green technology or put in the bank.

GNU/Linux with Apache is the dominant combination on web servers according to Netcraft

see http://news.netcraft.com/archives/web_server_survey.html

M$ may hold the majority of file servers because of embracing/extending/extinguishing CIFS but they have no real merit over NFS+CUPS in the 'NIX world.

The world is not enamored of Vista and Vista II, so the dominance on the LAN could end precipitously. Another pressure against M$ is server consolidation. One can consolidate better with a more efficient OS. Any 'NIX is more efficient than M$'s product because of shared memory. Virtualization can work with M$ to consolidate but you can put more processes in a given amount of RAM with GNU/Linux.

In a Downturn Free Software will Survive. Will That Other OS Survive M$'s Downfall?

Pity all the developers hired to make software that costs lots of cash. They will be out of a job when folks decide to switch to free software. Free Software has a growth rate like 50-100% per annum even in a downturn. That other stuff takes a nosedive. If Vista II flops, expect layoffs at M$...

IEFBR14

IBM did it years ago. IEFBR14 could really make your computer run quickly. It consisted of one instruction, to return control to the operating system. That is provably the fastest possible programme so any patent for accelerating programmes should be trumped by prior art.

Sheesh! We need judges who can code a line or two of software before they opine on the originality of code. We have known we can exchange storage for speed since the 1960s. We have known that we can expand loops to inline code or optimize the innermost loops or use different/faster instructions therein. There is nothing new under the sun in software, only combinations yet unused. A permutation is not an invention. Move along, please.

Windows and Linux

So true. I have read several reports that the typical IT department needs three times as many servers running that other OS as GNU/Linux. One thing these big chips with huge caches will do for GNU/Linux is to allow more users to run on a single GNU/Linux terminal server. With dual-socket-quad-core a fairly large school can run most of the desktops from a single server. That is performance. Hexa- and octo-core chips will be able to do that with a single socket and we can use a second machine for backup for very little cost compared to a bigger cluster of servers.
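A back-of-the-envelope sizing for that single-server school scenario might look like the sketch below. The per-user and server figures are assumptions chosen for illustration (in the spirit of the lab numbers elsewhere on this page), not benchmarks:

```python
# Back-of-envelope sizing for a GNU/Linux terminal server in a school.
# All per-user and hardware figures are assumptions for illustration.

ram_per_user_mb = 80           # assumed server-side RAM per session
cpu_per_user_mhz = 100         # assumed CPU share per active user

# A hypothetical dual-socket quad-core box: 8 cores at 2.5 GHz, 16 GB RAM
cores, clock_mhz, ram_mb = 8, 2500, 16 * 1024
os_overhead_mb = 1024          # reserve some RAM for the OS and caches

users_by_cpu = (cores * clock_mhz) // cpu_per_user_mhz   # CPU-bound limit
users_by_ram = (ram_mb - os_overhead_mb) // ram_per_user_mb  # RAM-bound limit
print(min(users_by_cpu, users_by_ram))  # the binding constraint wins
```

With these assumed numbers, RAM is the binding constraint at roughly 190 concurrent sessions, which is why a single modern multi-core server can plausibly carry a school's desktops.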

Scale

This year I ran a lab in which I was able to observe XP, Vista and GNU/Linux running on a variety of hardware:

1) Vista sucked on an AMD64 X2 5000 with 2 GB RAM

2) XP was OK on a P4 with 512 MB

3) GNU/Linux on thin clients with 64 MB was the best.

I should explain the last item. I ran 24 users on thin clients from an old XEON server with a 2.4 GHz clock and 2 GB RAM: about 80 MB per user on the server plus 64 MB on the client = 144 MB, and 100 MHz per user on the system. This means Vista is many times less efficient than GNU/Linux. Vista may be designed to maximize profits for M$ and Intel but it sucks bigtime for the customer/user.
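As a sanity check, the per-user arithmetic from that lab can be worked out directly. This is a rough sketch using the figures quoted above (the 80 MB server-side figure is the text's estimate; dividing all 2 GB evenly gives slightly more, since some RAM goes to the OS):

```python
# Rough per-user resource arithmetic for the 24-seat thin-client lab
# described above (figures taken from the text, not measured here).

users = 24
server_clock_mhz = 2400        # 2.4 GHz XEON
server_ram_mb = 2048           # 2 GB on the server
client_ram_mb = 64             # RAM on each thin client

cpu_per_user_mhz = server_clock_mhz / users      # 100 MHz per user
server_ram_per_user_mb = server_ram_mb / users   # ~85 MB if split evenly
total_ram_per_user_mb = round(server_ram_per_user_mb) + client_ram_mb

print(cpu_per_user_mhz)        # 100 MHz per user, as quoted
print(total_ram_per_user_mb)   # ~149 MB, close to the ~144 MB quoted
```

Either way, each seat costs on the order of 150 MB and 100 MHz, a small fraction of what a Vista machine of that era needed per user.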

A recent survey by KACE found that 11% of IT professionals were in the process of switching from M$ this year, more next year and a bunch after that. In two years the M$ monopoly will be down the drain, thanks to Vista and M$'s contempt for users. M$ could fool all of the people some of the time and some of the people all of the time but they cannot fool all of the people all of the time.

M$ should invest its billions re-inventing itself and not the OS. If they do not they will be a dinosaur within five years. Once the monopoly is broken, they will have to compete on price/performance and a Vista-like OS will not make the cut. It's time to change to a UNIX-like OS, be it MacOS, openSolaris, FreeBSD, or GNU/Linux. I recommend GNU/Linux because it has been doing the job for ten years or more, has fantastic (and still improving) device support, is modular and configurable, and is lean enough to run on anything built in the last ten years or more.

In 2006, I built a complete IT system for a school using GNU/Linux. They were able to afford twice as many seats plus toys for the price of a system running M$. M$ makes no sense to anyone who cares about price/performance and who is not locked in. Emerging markets are not locked-in and M$ will be irrelevant soon. Get used to it, Mike.