One thing that occurred to me while reading his article is the idea of discontinuous user training versus something much more continuous. When Microsoft or anyone else puts out a major new release of a desktop product, a lot of retraining is required of existing users. This causes hesitation when people are considering upgrades, and it can be a hefty expense as well as a large amount of lost productivity.

I use Google Mail for my personal email. Every once in a while I’ll notice a new button or something that is in a new place. It’s not major and only takes a moment or two to figure out, and then I’m on my way again. If done correctly, these changes can be introduced in a more or less smooth way, and the “retraining” happens bit by bit. I guess I would say that I’ve been retrained and I didn’t realize it until a few minutes ago. Very sneaky!

5 Comments

Typical commercial software tends to go for one big upgrade every year or two in order to justify charging customers more $$$. What’s the point of putting resources into a minor release that adds more features when a) customers probably won’t pay you anything for it, since it’s a minor release, and b) it reduces the number of new features you can claim for the next major version, giving customers less reason to upgrade and pay you $$$?

OTOH, like most web apps, open-source projects tend to “release often,” with new features added gradually over a number of minor releases. However, even with open source and web apps, presumably including Google’s offerings, there is occasionally (say, every 4 years or so) a “big release,” with a whole bunch of changes at all levels.

The difference here between open source desktop apps and web apps is that, with open-source, *you* get to decide when to upgrade, and can retrain on your (or your company’s) own timetable.

Adam, I agree that open source allows you the choice of when to upgrade to the more frequent releases, but it is different if you are managing one or two machines versus managing three thousand of them. Small incremental updates that require minimal training yet improve functionality might be a better option in the latter case.

All this said, it’s great we’re making progress in this area, one way or another.

When “major new releases” are accompanied by inefficient UI changes, it becomes easier for CIOs to say no to the cost. And the hesitation starts with staff who ask, “We’re changing… again?” Overall, as Bob implies, it’s a loss of value.

If you’ve got 3000 desktops, you ought to have enough IT manpower to set up your own “testing” and “stable” apt repositories. New apps/upgrades go into “testing” to get beaten on by IT, and then get pushed to “stable” when they’re judged OK. If all the desktops are based off an image that has an “apt-get update && apt-get upgrade” cron job run every night, you shouldn’t have that many problems.
:-)
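A minimal sketch of what that setup might look like (the repository hostname and file paths here are hypothetical, not from the original comment):

```shell
# /etc/apt/sources.list.d/internal.list -- baked into the desktop image.
# Production desktops point only at the in-house "stable" repository;
# IT's test machines would point at "testing" instead.
deb http://apt.example.internal/debian stable main

# /etc/cron.d/nightly-upgrade -- the nightly upgrade job.
# Runs at 3:30 AM as root; -y answers prompts so it runs unattended.
30 3 * * * root apt-get update && apt-get upgrade -y
```

Once IT signs off on a package in “testing,” promoting it is just a matter of copying it into the “stable” pool on the repository server, and every desktop picks it up on the next nightly run.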