Brief but awesome story over at Ars Technica about a Netware 3.12 fileserver named "INTEL" running on an i386 with two 800 MB Quantum SCSI drives that's been in non-stop service for 16.5 years. From September 23, 1996 to March 28, 2013 - that's one amazing run! Read about it here:

A Netware server that's been running non-stop for 16+ years

Just goes to show what drives were capable of when engineered such that capacity (and price) took a backseat to reliability. Ah...the good old days! (Sometimes they actually were better.)

There's some truth to that. There was a story of a GE lightbulb that had burned for something like 20 years. It became a minor attraction in the town it was in. When it finally failed, it was returned to GE, who desperately wanted to know why.

The analysis determined that too much tungsten (about 4X) had been applied to the filament inside the bulb due to a random glitch in the manufacturing process. This extra tungsten significantly extended the service life of the bulb.

GE's official conclusion: The bulb burned that long because it was..."defective."

Seems that being defective has little to do with function and everything to do with form. Since the bulb was "out of spec," it was therefore "defective" - QED, as far as the manufacturing engineers were concerned.

I don't remember a time when disk product designers emphasized reliability at the expense of capacity and price. Indeed, up until the last few years, capacity was a huge concern.

And today, YOU can make the choice between capacity and reliability. The use of RAID technology allows you to decide whether you want to use part of your capacity for additional storage space, or dedicate it to redundancy in order to achieve reliability.
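That capacity-vs-redundancy tradeoff can be made concrete. Here's a minimal sketch (the `usable_capacity` helper is hypothetical, and the RAID-1 case is simplified to mirrored pairs) of how much usable space each common RAID level leaves from the same set of disks:

```python
# Hypothetical illustration: usable capacity vs. redundancy for common RAID
# levels, assuming n identical disks of the same size.

def usable_capacity(level: str, n_disks: int, disk_gb: float) -> float:
    """Return usable space in GB for a few common RAID levels."""
    if level == "RAID0":   # striping: all capacity, no redundancy
        return n_disks * disk_gb
    if level == "RAID1":   # mirroring (pairs): half the capacity survives a disk loss
        return (n_disks // 2) * disk_gb
    if level == "RAID5":   # one disk's worth of capacity goes to parity
        return (n_disks - 1) * disk_gb
    if level == "RAID6":   # two disks' worth of capacity goes to parity
        return (n_disks - 2) * disk_gb
    raise ValueError(f"unknown level: {level}")

for level in ("RAID0", "RAID1", "RAID5", "RAID6"):
    print(level, usable_capacity(level, 4, 800.0), "GB usable from 4 x 800 GB")
```

The same four disks give anywhere from all to half of their raw capacity, depending on how much you dedicate to redundancy - which is exactly the choice being described.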

I think that today, putting the tradeoff directly into our hands makes us all better off. If we're all tied to a single decision made by the product designers, then the solution is only optimal for a small set of people. Now we can all optimize.

(I get tired of people pining for the good old days. These are the good old days.)

Not really. The planned obsolescence came later. In the Netware era, servers were supposed to last indefinitely. They were over-engineered - and priced accordingly. When there wasn't a demand for a lot of them, you could easily justify putting the money in. But when demand for network services went through the roof, the old philosophy of "small numbers of high quality and expensive" gave way to "many inexpensive and easily replaced" when it came to hardware.

Each philosophy had its merits and strong points. But numerous, relatively cheap, and 'good enuff' seems to be the way the market and the field have gone. At least for your garden variety networked data requirements like serving web pages, tossing e-mail, hosting social networks, bootlegging media or software, and posting stupid or obscene pictures. But since that's what I'd say 80% of the overall computer use is these days - who really cares? Just get it up and running "good enough."

So it's not so much "planned obsolescence" (except when it comes to new versions of MS Office) as it is a question of economics and the "fix vs. replace" calculation. The old adage "Speed/Price/Quality - pick any two!" has never been so true as it is with computer hardware. The simple truth is you pretty much get what you pay for. And people are not willing to pay too much in the way of a premium just to get reliability. Most would just rather replace something when/if needed. And that lower reliability does keep the upfront costs down. So in an era of financial management where getting your boss to even look as far down the road as the current quarter (as opposed to the current month) when it comes to spending and "making the numbers" is quixotic at best, cheap, less reliable hardware wins the day.

From what I've seen, the newer stuff doesn't hold up as well because it simply wasn't built as well. Or tested as thoroughly before it was boxed and put into inventory. And oddly enough, most times it doesn't really matter. If it breaks - they'll send you a new one if it's still under warranty. If not, you just buy a new one. (That's also why reliable backups and continuity planning are more critical than ever.)

Don't try some FUD BS. Because it is total BS. And you darn well know it.

And don't tell me that you haven't bought some product with a 2 year warranty that died at 2 years + 1 month. Be it a toaster or TV or whatever.

C'mon... Stop with the crap. Planned obsolescence is brilliant engineering today. They do a great job of ensuring that you have to replace everything just after the warranty expires.

As for mission critical stuff... that is so far removed from the consumer experience as to be laughable. It's just not affordable at the consumer level.

And for typical IT... average stuff that is... C'mon... It might be available to major corporations, but none of that kind of durability is available for typical IT use by SMEs. Everything fails. I've seen ASP applications just deteriorate after being thoroughly tested. It's like some kind of digital rot sets in. I don't know how, but it happens.

When I was a kid, we had a Motorola TV that was 25 years old. It worked. What's the age of the average TV today?

Please... A little more honesty and not so much total bullshit, please?

My comment was meant to be taken as tongue-in-cheek. (Just thought I'd mention that in case it wasn't clear enough.)

And I agree. Things are generally 100% better these days than way back when. I wouldn't want to go back to them for much of anything - unless we could get our personal privacy back as part of putting up with the nonsense we used to take for granted.

And today, YOU can make the choice between capacity and reliability. The use of RAID technology allows you to decide whether you want to use part of your capacity for additional storage space, or dedicate it to redundancy in order to achieve reliability.

FWIW, RAID does not increase reliability. All it does (and can do) is minimize downtime. Two disks are twice as likely to fail as one disk. Three disks give you three times the likelihood of a drive failing at any point in time. The difference is that with RAID, a recovery is generally easier to perform and more efficient. (Unless the array failure came as a result of the controller going south.) But that's not the same thing as RAID being more "reliable" from a hardware perspective.

I think that today, putting the tradeoff directly into our hands makes us all better off.

I'm not sure I can agree with that as a general principle in that the consensus of 300 baboons is no better a decision (IMHO) than the opinion of one baboon. And from my experience, "the wisdom of crowds" is highly overrated at best - and wishful thinking more often than not. Especially when it comes to technology.

However, I do agree that 'choice' should be the exclusive prerogative of the chooser. For better or for worse, I have far too much respect for people (as individuals) to ever presume to micromanage their decisions.

Don't try some FUD BS. Because it is total BS. And you darn well know it.

And don't tell me that you haven't bought some product with a 2 year warranty that died at 2 years + 1 month. Be it a toaster or TV or whatever.

C'mon... Stop with the crap. Planned obsolescence is brilliant engineering today. They do a great job of ensuring that you have to replace everything just after the warranty expires.

As for mission critical stuff... that is so far removed from the consumer experience as to be laughable. It's just not affordable at the consumer level.

And for typical IT... average stuff that is... C'mon... It might be available to major corporations, but none of that kind of durability is available for typical IT use by SMEs. Everything fails. I've seen ASP applications just deteriorate after being thoroughly tested. It's like some kind of digital rot sets in. I don't know how, but it happens.

When I was a kid, we had a Motorola TV that was 25 years old. It worked. What's the age of the average TV today?

@ Renegade - I don't think anybody is trying to BS anybody here. Re-read my previous comment. I think you might have missed what I was saying.

What I was saying is that it isn't planned (as in conspiracy) so much as it is that people would rather pay less because they plan on replacing something to take advantage of new advances (like 3D or whatever) when they become available, rather than pay a fortune now. Look at TVs. We went digital. An old TV - no matter how reliable - can't receive current TV signals or produce an HD image. That's not to say it won't turn on, or work (with some ancillary devices attached) - but it won't work as well, or provide as good a user experience. When it comes to entertainment tech, most people want the newest available - not something that's gonna last for 25 years.

Cars are the exact opposite. They're immeasurably better than they were 30 years ago. They're better built, more reliable, and safer. You can (with routine maintenance) put two or three hundred thousand miles on one of today's cars without a major overhaul. Back in the 60s, having the odometer hit 100K miles without a full engine overhaul was a major event. So where's the planned obsolescence there?

In the case of cars, the market hit a point where the customer's requirements were met at a price level most were able to afford. Same deal with every other non-service technology I've seen. The public decides what it's willing to pay. The manufacturers then try to upsell them. But eventually a point of equilibrium is reached. So I guess I'm saying if obsolescence is part of the equation, it's the consumers who are largely dictating what that part of the equation is going to be - based on what they buy. And there's enough competition out there that there aren't too many businesses (like Apple) that can dictate things beyond a certain point. The best they can do is try to "differentiate" themselves in the marketplace and hope that the customers will appear.

So no...I don't agree that the theory of "planned obsolescence" is a sustainable business strategy. Unless you have a lock on the technology itself - which is what the motivation is behind the current push for hyper-restrictive IP legislation by many industries. We're not there yet, fortunately.

Please... A little more honesty and not so much total bullshit, please?

And by the same token, a little more courtesy and a touch less spleen please? We can be passionate about something without needing to become angry over it.

Actually, it seems to me that it was Google that pioneered the use of disposable computing hardware. Their server farms are based on consumer-grade computers packaged into rack mountings. And you don't get much more "major corporation" than that.

Everything fails. I've seen ASP applications just deteriorate after being thoroughly tested.

If you think that deterioration can be built into software, in a way that's not detectable, I think your tinfoil hat is too tight. And think about it: why would the software publisher want to invest the extra development cost in order to design that "feature" in?

FWIW, RAID does not increase reliability. All it does (and can do) is minimize downtime. Two disks are twice as likely to fail as one disk.

It's true that RAID just has you multiplying the chance of a *single* failure. But you're also multiplying your chances of surviving a failure, at least to the extent that multiple *simultaneous* failures are geometrically less likely. It's not a perfect solution, but generally speaking, this does greatly enhance reliability: even at home, I can pretty much always rely on my RAID-1 NAS to be available.
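For what it's worth, both sides of this exchange can be put into numbers. Here's a small sketch, assuming independent disk failures and a hypothetical 5% annual per-disk failure rate (real-world failures are often correlated, especially during rebuilds, so treat this as illustrative only):

```python
# Sketch of the probability argument, under the simplifying assumption that
# disk failures are independent events with per-disk probability p.

def p_any_failure(p: float, n: int) -> float:
    """Probability that at least one of n disks fails."""
    return 1 - (1 - p) ** n

def p_raid1_data_loss(p: float) -> float:
    """Probability that BOTH disks in a RAID-1 mirror fail (ignoring rebuild windows)."""
    return p * p

p = 0.05  # assumed 5% annual failure rate per disk (hypothetical)
print(p_any_failure(p, 2))     # ~0.0975: a failure event is nearly twice as likely
print(p_raid1_data_loss(p))    # ~0.0025: but losing DATA is far less likely than with one disk
```

So both posts are right about different things: the chance of *some* drive dying roughly doubles, while the chance of actually losing the array drops by more than an order of magnitude.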

I'm not sure I can agree with that as a general principle in that the consensus of 300 baboons is no better a decision (IMHO) than the opinion of one baboon. And from my experience, "the wisdom of crowds" is highly overrated at best - and wishful thinking more often than not. Especially when it comes to technology.

However, I do agree that 'choice' should be the exclusive prerogative of the chooser.

I think we're actually agreeing here. I'm not claiming that a committee can come up with a better answer applicable to everyone. Your latter statement is exactly what I'm trying to say: there does not exist any one single answer that's good for everyone, so the fact that each person can optimize it individually is a good thing.

UPDATE: 40hz and I cross-posted. I agree with everything in his later post.

It's hard to beat those old Quantum SCSI drives. I used to use them on my old system, and they were extremely fast and reliable. I think if they weren't so darned expensive more people would be using them...

As for the tungsten being 4X thickness, that would be a defect. That much more tungsten would cause the bulb to use more wattage and run a lot hotter, with the advantage of being brighter. If it was rated as a 60 W bulb and used in a socket that was only rated at 60 watts, chances are it would have caused a fire.
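A quick back-of-the-envelope supports that wattage concern, under the assumption that the "4X tungsten" went into the filament's cross-section at the same length (the anecdote doesn't actually say): resistance scales inversely with cross-sectional area, and at a fixed mains voltage the power drawn is P = V²/R. The 240-ohm figure below is just the resistance that yields 60 W at 120 V, not a measured value.

```python
# Back-of-the-envelope check of the wattage claim, assuming the extra tungsten
# quadrupled the filament's cross-sectional area at the same length.

def filament_power(volts: float, resistance_ohms: float) -> float:
    """Power drawn by a resistive filament at a fixed voltage: P = V^2 / R."""
    return volts ** 2 / resistance_ohms

r_normal = 240.0        # ~240 ohms gives 60 W at 120 V
r_thick = r_normal / 4  # 4x cross-section -> 1/4 the resistance

print(filament_power(120.0, r_normal))  # 60.0 W
print(filament_power(120.0, r_thick))   # 240.0 W - far beyond a 60 W socket rating
```

Under that assumption the bulb would indeed draw four times its rated wattage, so the fire-hazard point stands - though a thicker filament also runs cooler for a given current, which is presumably what stretched the service life.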

If anyone has ever used a shop lightbulb, you'll notice that the tungsten is thicker to aid against shock breakage, and the glass is also a little thicker to deal with the excess heat and punishment of being used in a shop-light. They're also a hell of a lot more expensive than a standard bulb.

Of course this is all a moot point considering the old incandescent bulbs are going the way of the dinosaur....