Posted
by
ScuttleMonkey on Monday November 02, 2009 @02:14PM
from the salesman-ejection-seat dept.

snydeq writes "InfoWorld's Dan Tynan surveys six 'transformational' tech-panacea sales pitches that have left egg on at least some IT department faces. Billed with legendary promises, each of the six technologies — five old, one new — has earned the dubious distinction of being the hype king of its respective era, falling far short of those promises. Consultant greed, analyst oversight, dirty vendor tricks — 'the one thing you can count on in the land of IT is a slick vendor presentation and a whole lot of hype. Eras shift, technologies change, but the sales pitch always sounds eerily familiar. In virtually every decade there's at least one transformational technology that promises to revolutionize the enterprise, slash operational costs, reduce capital expenditures, align your IT initiatives with your core business practices, boost employee productivity, and leave your breath clean and minty fresh.' Today, cloud computing, virtualization, and tablet PCs are vying for the hype crown." What other horrible hype stories do some of our seasoned vets have?

The bad news is that artificial intelligence has yet to fully deliver on its promises.

Only idiots, marketers, businessmen and outsiders ever thought we would be completely replaced by artificially intelligent machines. The people actually putting artificial intelligence into practice knew that AI, like so many other things, would benefit us in small steps. Many forms of automation are technically artificial intelligence — just very simple artificial intelligence. You might want to argue that the things we benefit from are merely heuristics, statistics and messes of if/then decision trees, but successful AI is exactly that. Everyone reading this enjoys the benefits of AI, though you probably don't know it. For instance, your handwritten mail is most likely read by a machine that uses optical character recognition to decide where it goes, with a pretty good success rate and a confidence factor to fail over to humans. Recommendation systems are often based on AI algorithms. I mean, the article even says this:

The ability of your bank's financial software to detect potentially fraudulent activity on your accounts or alter your credit score when you miss a mortgage payment are just two of many common examples of AI at work, says Mow. Speech and handwriting recognition, business process management, data mining, and medical diagnostics -- they all owe a debt to AI.
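The confidence-factor failover mentioned above (accept the machine's answer only when it's sure, otherwise hand off to a human) is a simple pattern. A minimal sketch, with the recognizer stubbed out and the threshold an assumed value:

```python
# Sketch of confidence-threshold failover: accept the machine's answer
# only when it is confident enough, otherwise route to a human operator.
CONFIDENCE_THRESHOLD = 0.90  # assumed cutoff; real systems tune this

def recognize_zip(image):
    """Stub for an OCR model: returns (guess, confidence)."""
    # A real system would run a trained classifier here.
    return "90210", 0.97

def route_mail(image):
    guess, confidence = recognize_zip(image)
    if confidence >= CONFIDENCE_THRESHOLD:
        return ("machine", guess)
    return ("human", None)  # fall over to manual sorting

print(route_mail(None))  # -> ('machine', '90210')
```

The point is that "pretty good success rate" is enough: the machine handles the easy 95% and humans only see the hard cases.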

Having taken several courses on AI, I never found a contributor to the field who promised it to be the silver bullet -- or even remotely comparable to the human mind. I don't ever recall reading anything other than fiction claiming that humans would soon be replaced completely by thinking machines.

In short, I don't think it's fair to put AI in this list, as it has had success. It's easy to dismiss AI if the only person you hear talking about it is the cult-like Ray Kurzweil, but I assure you the field is a valid one [arxiv.org] (unlike CASE or ERP). AI will never die, because the list of applications -- though small -- slowly but surely grows. It has not gone 'bunk' (whatever the hell that means [wiktionary.org]). You can say expert systems have failed to keep their promises, but not AI on the whole. The only thing that's left a sour taste in your mouth is salesmen and businessmen promising you something they simply cannot deliver. And that's neither new nor specific to AI.

<matrix-parody>I'd like to share a revelation I've had with you, it came to me when I tried to classify your programmers. Every programmer on this planet forms a natural equilibrium with the software project, but you PHP programmers do not. You multiply and multiply script snippets until every semblance of readability and logic is removed. And then you simply spread to another project. There is another organism on this planet that follows the same pattern. Do you

The people actually putting artificial intelligence into practice knew that AI, like so many other things, would benefit us in small steps.

Actually, there was a period very early on (the '50s) when it was naively thought that "we'll have thinking machines within five years!" That's a paraphrase from a now-hilarious film-reel interview with an MIT prof from the early 1950s — a reel shown as the very first thing in my graduate-level AI class, I might add. Sadly, I no longer have the reference to this clip.

One major lesson was that there's an error in thinking "surely solving hard problem X must mean we've achieved artificial intelligence." As each of these problems fell (a computer passing the freshman calc exam at MIT, a computer beating a chess grandmaster, and many others), we realized that the solutions were simply due to understanding the problem and designing appropriate algorithms and/or hardware.

The other lesson from that first day of AI class was that the above properties made AI into the incredible shrinking discipline: each of its successes weren't recognized as "intelligence", but often did spawn entire new disciplines of powerful problem solving that are used everywhere today. So "AI" research gets no credit, even though its researchers have made great strides for computing in general.

The other lesson from that first day of AI class was that the above properties made AI into the incredible shrinking discipline: each of its successes weren't recognized as "intelligence", but often did spawn entire new disciplines of powerful problem solving that are used everywhere today. So "AI" research gets no credit, even though its researchers have made great strides for computing in general.

Yeah that's when the prof introduced the concept of "Strong AI" (HAL) and "Weak AI" (expert systems, computer learning, chess algorithms etc). "Strong" AI hasn't achieved its goals, but "Weak" AI has been amazingly successful, often due to the efforts of those trying to invent HAL.

Of course the rest of the semester was devoted to "Weak AI". But it's quite useful stuff!

The problem is that, if it isn't that, then what is "artificial intelligence", rather than flashy marketing speak for just another bunch of algorithms?

Exactly. "Artificial intelligence" seems to serve various purposes—at best vacuous and at worst deceptive. How many millions of dollars have academicians raked in for various projects that involve research into "artificial intelligence"?

What makes all this silliness sustainable is the philosophical fog that surrounds words such as "intelligence" and "thinking". Such words easily slip their moorings in our common language, and acquire some very strange uses. Yet, because they are recognizably legitima

Maybe. But I think it's mostly just a disconnect between what the people who work in the field believe the term to mean and what the general public takes the term to mean. Some of that might just be naivete on the part of researchers. And maybe some bravado as well.

When I hear about intelligent anything to do with computers, I just think of a system that learns. That, to me, is the key differentiator. On the other hand, my mom's friend was telling me one night at dinner that her son was taking a cla

A very good example, that. That $20 DSP does nothing but a brute force search on certain sound patterns. This is not in any way similar to how humans process speech.

I am not in the camp that says humans have a certain ineffable something that computers can never replicate, but brute force pattern matching is not the way to find out how human perception works, let alone reimplement it in a machine. Chess, BTW, is an example of the opposite: even humans do a brute force search down the decision tree. Sometimes they're trained enough to prune the tree quickly, but that is no different from the common algorithms currently in use.
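That "search down the decision tree, with pruning" is essentially minimax with alpha-beta cutoffs. A toy sketch over an abstract game tree (the tree shape and leaf scores are invented for illustration):

```python
# Toy minimax with alpha-beta pruning over a hand-built game tree.
# Leaves are static evaluation scores; internal nodes are lists of children.
def minimax(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):   # leaf: static evaluation
        return node
    best = float("-inf") if maximizing else float("inf")
    for child in node:
        score = minimax(child, not maximizing, alpha, beta)
        if maximizing:
            best = max(best, score)
            alpha = max(alpha, best)
        else:
            best = min(best, score)
            beta = min(beta, best)
        if beta <= alpha:                # prune: opponent won't allow this line
            break
    return best

tree = [[3, 5], [6, [9, 1]], [1, 2]]
print(minimax(tree, True))  # -> 6
```

Pruning never changes the answer; it only skips lines of play that a rational opponent would never permit — which is roughly what a trained player's intuition does.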

As Douglas Hofstadter puts it, the most interesting things happen in those 100ms between seeing a picture of your mother and going 'Mom!', and we're nowhere near understanding that problem space enough to implement it in AI. At least, we weren't a couple of years back. I haven't kept up with current developments though.

It's a fascinatingly complex process. Seriously, read up a bit on Wikipedia and perhaps take a few foreign languages. There are many, many points of failure. I think it's interesting to consider Orwell's argument about language in 1984. When thinking of Orwell, I'm glad that I've had the opportunity to be exposed to as many languages as I have. The more languages I learn, even if only a few words and concepts, the more modes of thinking I open myself up to. A new language can sometimes introduce a whole new viewpoint on the world, simply through its specific connotations and denotations. Usually denotations are easy to translate, but connotations can pose such a problem that sometimes we prefer to just outright borrow a word from another language to express precisely our meaning. Language can evoke all 5 senses.

Personally, I'm fascinated by language, written and spoken. There are words I learned in Germany that I still use today even though I'm no longer anywhere near fluent (use it or lose it). For example, in English we have a "shortcut," but I can't readily think of the opposite unless I use the German word "Umweg." Another example: as I was looking at art in a store today I came across some Japanese characters (because we know that hanging up symbols you have no idea about is so cool), and I noticed that the kanji for woman was one of the radicals in a kanji that was translated as "tranquility." It made me wonder who, thousands of years ago, thought about the concept of tranquility and decided that the lower radical should be the symbol for "woman." I could go on like this. Suffice it to say, language is perhaps the single tool we use to define our consciousness as humans.

I'd further pontificate that unless we were to create an AI for whom language is as pervasive as it is in the human mind, chasing strong AI will always result in failure.

The reason why artificial intelligence still seems so distant is that no artificial computer has the brute force of the human brain. The average brain has tens of billions of neurons, each of which can process thousands of inputs a few hundred times per second.

Although computers have been able to simulate smaller assemblages of neurons very precisely, simulating the full scope of a human brain is still out of reach, even for Google.
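A back-of-the-envelope version of that claim, using rough figures (the counts below are order-of-magnitude assumptions, not measurements):

```python
# Rough estimate of the brain's raw "operations per second".
neurons = 8.6e10          # ~86 billion neurons (commonly cited figure)
inputs_per_neuron = 1e3   # thousands of synaptic inputs each (assumed)
updates_per_second = 1e2  # a few hundred firings per second (assumed)

ops_per_second = neurons * inputs_per_neuron * updates_per_second
print(f"{ops_per_second:.1e} synaptic events/s")  # on the order of 1e16
```

Even granting that a synaptic event is far simpler than a floating-point operation, that puts a whole-brain simulation in petascale territory before you account for simulating each neuron precisely.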

Unless you lock down the permissions so tightly that the system is unusable, your users will enter bad data. They'll add new entries for objects that already exist, they'll misspell the name of an object and then create a new object instead of editing the one they just created. They'll make every possible data entry error you can imagine, and plenty that you can't.

We'd see a lot more progress in business software applications if all vendors would follow two rules:

Every piece of data that comes from the user must be editable in the future

Any interface that allows a user to create a new database entry MUST provide a method to merge duplicate entries.
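The second rule is the one vendors skip. A sketch of what "merge duplicate entries" means in practice, using an in-memory SQLite database (the table and column names are invented for illustration): repoint every reference at the surviving record, then delete the duplicate, all in one transaction.

```python
import sqlite3

# Sketch: merge a duplicate "vendor" record into the canonical one.
# Schema and data are invented; the pattern is the point.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE vendors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE invoices (id INTEGER PRIMARY KEY, vendor_id INTEGER);
    INSERT INTO vendors VALUES (1, 'Acme Corp'), (2, 'ACME Corp.');
    INSERT INTO invoices VALUES (10, 1), (11, 2), (12, 2);
""")

def merge_vendors(keep_id, dup_id):
    with db:  # one transaction: either both steps happen or neither
        db.execute("UPDATE invoices SET vendor_id=? WHERE vendor_id=?",
                   (keep_id, dup_id))
        db.execute("DELETE FROM vendors WHERE id=?", (dup_id,))

merge_vendors(1, 2)
print(db.execute("SELECT COUNT(*) FROM invoices WHERE vendor_id=1").fetchone())
# -> (3,)
```

A real application has many more child tables to repoint, which is exactly why the merge has to be a built-in feature rather than something an admin improvises in SQL.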

Just out of curiosity... did you ever try to find out WHY people were making entries with invalid phone numbers? Is it at all possible that instead of your users being idiots, they HAD to make an entry, but the phone number was one piece of data that simply wasn't available?

If I've learned anything over a lot of years of programming, it's that when your users absolutely insist on doing something contrary to what your program wants them to do, it's time to sit down and listen.

"We had a simple field on a form to "Supply a Telephone Number". The users didn't, so we used JS to validate they had filled it in."

So instead of validation server-side you rely on validation client-side?
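Server-side validation is a few lines either way. A hedged sketch (the format rule and the junk list are assumptions, and note the explicit opt-out, which is the real fix the thread keeps circling around):

```python
import re

# Sketch: validate a phone field on the server, but give users a real
# opt-out instead of forcing junk like "1111111" into the database.
PHONE_RE = re.compile(r"^\+?[0-9][0-9 ().-]{6,18}$")  # assumed format rule
JUNK = {"1111111", "11111111", "0000000"}

def validate_phone(raw, declined=False):
    """Return (ok, cleaned_value_or_error)."""
    if declined:                       # explicit "prefer not to say"
        return True, None
    raw = raw.strip()
    if raw in JUNK or not PHONE_RE.match(raw):
        return False, "invalid phone number"
    return True, raw

print(validate_phone("1111111"))           # -> (False, 'invalid phone number')
print(validate_phone("", declined=True))   # -> (True, None)
```

Client-side JS is a convenience for the user; only the server-side check is a guarantee, since anyone can bypass the browser.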

"The more you Idiot-Proof a system, the smarter the Idiots become. Not smarter at actually entering the correct data, just smarter at bypassing the protections you put in place."

Hummm... Why are your users entering telephone numbers like 1111111? Are you *sure* they do it by mistake? Or might it be that they don't *want* to give you their telephone number, for their own valid reasons, and you still didn't add the option "I don't have, or don't want to share, my telephone number"?

So having all these idiot users put in 111111 made how many of those records useless? Did you ever send a memo saying "If you put in 11111111 you're wasting your time, since this record won't be used by anyone, ever"?

Simple, Mr. Web Guy. I don't trust you with my fucking number. I barely trust you with my email, but getting spam there is sort of a solved problem for me (Thank you GMail). But getting called because you want to upsell me on some $4 widget? No thanks. Stop REQUIRING my phone number. Just because your marketing guy wants it doesn't make it useful to get. That's the problem now... marketing is so good at getting your message across, you try at all costs to get the upsell and get your value back. Meanw

The more you Idiot-Proof a system, the smarter the Idiots become. Not smarter at actually entering the correct data, just smarter at bypassing the protections you put in place.

Sigh.

This depresses me.

The same old lazy "users are idiots" arguments.

Did you bother finding out WHY users were going to such lengths to get around your validation routines? Maybe...just maybe, they had perfectly good reasons for not entering this piece of data. For the most part I have found that users have very good reasons for do

I've never worked for a software company but as the "computer guy" I got to help move people from the "emailing spreadsheets around" workflow to basic MS Access database applications (I know just enough about databases to be horrified about the idea of using Access for critical business functions but it's better than Excel).

As the maintenance manager of a factory I got to help the plant manager make software purchasing decisions. I've come to the conclusion that mid-sized to large corporations should just bite the bullet and hire their own programmers. If it makes sense to design your product, design your own assembly lines, and design your own tooling, jigs and fixtures, then it makes sense to design your own software. Any cost savings you can achieve by outsourcing to a more specialized company never seems to materialize.

When a corporation passes a certain size, having a packaged ERP is a good idea (for legal compliance).

The problem is, generic accounting packages, ERP packages, etc. work best when you don't have a bunch of exceptional processes. Our PO process had 17 variations, from as little as 2 lines on the form to a full form plus multiple attachments; people used the same form area for many different meanings, so the 1-page form was really about a 4-page form.

It's just that when some aspect of symbolic computing is successful, it's not really considered AI anymore and the goal changes. Or, any computing technology that emerged from an AI laboratory was considered AI-ish.

Some researchers divided this into "soft" and "hard" AI. The latter would be something like a conversational, human-like mentality. The former is any software technology developed along the way.

Having taken several courses on AI, I never found a contributor to the field who promised it to be the silver bullet -- or even remotely comparable to the human mind.

Not today, after the "AI Winter". But when I went through Stanford CS in the 1980s, there were indeed faculty members proclaiming in print that strong AI was going to result from expert systems Real Soon Now. Feigenbaum was probably the worst offender. His 1984 book, The Fifth Generation [amazon.com] (available for $0.01 through Amazon.com) is particularly embarrassing.
Expert systems don't really do all that much. They're basically a way to encode troubleshooting books in a machine-processable way. What you put in is what you get out.

Machine learning, though, has made progress in recent years. There's now some decent theory underneath. Neural nets, simulated annealing, and similar ad-hoc algorithms have been subsumed into machine learning algorithms with solid statistics underneath. Strong AI remains a long way off.

Compute power doesn't seem to be the problem. Moravec's classic chart [cmu.edu] indicates that today, enough compute power to do a brain should only cost about $1 million. There are plenty of server farms with more compute power and far more storage than the human brain. A terabyte drive is now only $199, after all.

"CASE" isn't entirely bunk either. CASE as CASE might be, but computer aided software design isn't. Perhaps most here are now too young to remember when, if you wanted a GUI, you had to design it by hand, positioning all the elements manually in code and then linking things up manually, in code.

Now almost nobody designs a GUI without a RAD tool of some kind. You drop your GUI elements on the window and the tool generates code stubs for the interaction. That's way, way nicer, and way, way faster than, for example, setting up transfer records for a Windows 3.1 form.

AI already has successes. But, as an AI researcher friend of mine points out, once they succeed it's no longer 'AI'. Things like packet routing, used to be AI. Path-finding, as in games, or route-finding, as with GPS: solved. So yes, AI will never arrive, because AI is always 'other than the AI we already have.'

For instance, your handwritten mail is most likely read by a machine that uses optical character recognition to decide where it goes, with a pretty good success rate and a confidence factor to fail over to humans.

In fact, the Deutsche Post (Germany's biggest mail company) uses a neural network to process hand-written zip codes. It works rather well, as far as I know. Classic AI, too.

Plus, spam filters. Yes, they merely use a glorified Bayes classifier but, well... learning classifiers are a part of AI. Low-
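A "glorified Bayes classifier" really is small. A toy word-count version (the two-message training set is invented; real filters train on thousands of messages and handle tokenization far more carefully):

```python
import math
from collections import Counter

# Toy naive Bayes spam scorer. Training corpus is two invented messages.
spam_words = Counter("buy cheap pills buy now".split())
ham_words  = Counter("meeting notes attached see you tomorrow".split())

def log_odds_spam(message, k=1.0):
    """Sum of per-word log likelihood ratios, with add-k smoothing."""
    s_total, h_total = sum(spam_words.values()), sum(ham_words.values())
    vocab = len(set(spam_words) | set(ham_words))
    score = 0.0
    for w in message.lower().split():
        p_spam = (spam_words[w] + k) / (s_total + k * vocab)
        p_ham  = (ham_words[w]  + k) / (h_total + k * vocab)
        score += math.log(p_spam / p_ham)
    return score  # > 0 means "looks more like spam"

print(log_odds_spam("buy pills now") > 0)           # -> True
print(log_odds_spam("see you at the meeting") > 0)  # -> False
```

The smoothing constant k keeps an unseen word from zeroing out the whole product, which is the classic failure mode of an unsmoothed Bayes filter.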

Not sure why virtualization made it into the potential snake-oil of the future. It's demonstrating real benefits today...practically all of the companies I deal with have virtualized big chunks of their infrastructure.

I'd vote for cloud computing, previously known as utility computing. It's a lot more work than expected to offload processing outside your organization.

Yup, even for "just" development, virtualization has been a great gift. With one or two beefy machines, each developer can have an exact mirror of a production environment, and not cause issues on the production side or even for other developers while testing code and such.

because virtualization only works for large companies with many, many servers, yet contractors and vendors sell it to any company with a couple of servers. You should virtualize your email ($2,000 by itself, give or take a little), web server ($2,000 by itself, give or take a little), source control ($1,000 by itself, give or take a little), and a couple of others. So you have maybe $10,000 in 5 to 6 servers needed to run a small to mid-size company, and you spend tens of thousands to put them on one super-server running a complex setup of virtualized servers... oh no, the motherboard died and the entire biz is offline.

Having been involved in a business start-up for a year or so now, I'd have to disagree. Virtualization is indispensable for QA testing. Being able to run a virtual network on a personal PC lets me design, debug, and do proofs of concept without requiring the investment in actual equipment.
Virtualization isn't just about hardware consolidation: it's also about application portability. Small companies have just as much need for QA testing, hardware recycling, and application portability as the large ones.

How about using VMWare to make sure you are not doing something decidedly stupid. I have VMWare images of every platform our software supports. I can easily verify that everything works as advertised without running all over the place snagging time on different machines. And if I encounter an issue on a particular setup, I can save a snapshot for later or restore the machine to its pre-install state and try again.

Spoken like someone who investigated the technology five years ago, and hasn't updated their information since.

1. If a small business is running more than two servers, then it's likely it'll be cheaper, over the next five years, to virtualize those servers.

2. If a small business needs any sort of guaranteed uptime, it's cheaper to virtualize: two machines and high availability with VMWare, and you are good to go.

3. Setting up VMWare, for example, is relatively simple, and actually makes remote management easier, since I have CONSOLE access from remote sites to my machine. Need to change the network connection or segment for a machine remotely? You can't do it safely without virtualization.

There is more, but I recommend you check this out again, before continuing to spout this stuff. It's just not true anymore.

because virtualization only works for large companies with many, many servers

You're full of crap. At my company, a coworker and I are the only ones handling the virtualization for a single rackful of servers. He virtualizes Windows stuff because of stupid limitations in so much of the software. For example, we still use a lot of legacy FoxPro databases. Did you know that MS's own FoxPro client libraries are single-threaded and may only be loaded once per instance, so that a Windows box is only capable of executing one single query at a time? We got around that by deploying several virtualized instances and querying them round-robin. It's not perfect, but works as well as anything could given that FoxPro is involved in the formula. None of those instances need to have more than about 256MB of RAM or any CPU to speak of, but we need several of them. While that's an extreme example, it serves the point: sometimes with Windows you really want a specific application to be the only thing running on the machine, and virtualization gives that to us.
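The round-robin dispatch described there is a few lines of code. A sketch with the actual query workers stubbed out (the endpoint names are invented):

```python
from itertools import cycle
from threading import Lock

# Sketch: round-robin dispatch across several single-threaded backends,
# as when each VM instance can only run one FoxPro query at a time.
class RoundRobinPool:
    def __init__(self, endpoints):
        self._cycle = cycle(endpoints)
        self._lock = Lock()  # make next_endpoint() safe across threads

    def next_endpoint(self):
        with self._lock:
            return next(self._cycle)

pool = RoundRobinPool(["vm-a:9001", "vm-b:9001", "vm-c:9001"])
print([pool.next_endpoint() for _ in range(4)])
# -> ['vm-a:9001', 'vm-b:9001', 'vm-c:9001', 'vm-a:9001']
```

A production version would also health-check endpoints and skip instances that are down, but plain rotation already turns N one-query-at-a-time VMs into roughly N-way parallelism.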

I do the same thing on the Unix side. Suppose we're rolling out a new Internet-facing service. I don't really want to install it on the same system as other critical services, but I don't want to ask my boss for a new 1U rackmount that will sit with a load average of 0.01 for the next 5 years. Since we use FreeBSD, I find a lightly-loaded server and fire up a new jail instance. Since each jail only requires the disk space to hold software that's not part of the base system, I can do things like deploying a Jabber server in its own virtualized environment in only 100MB.

I don't think our $2,000 Dell rackmounts count as "super-servers" by any definition. If we have a machine sitting there mostly idle, and can virtualize a new OS instance with damn near zero resource waste that solves a very real business or security need, then why on earth not, other than because it doesn't appeal to the warped tastes of certain purists?

I disagree. There are some real benefits for smaller companies who can afford to virtualize, more or less depending on the types of applications. Yes, I can buy one server to run any number of business critical applications, but I've seen, in most cases, that several applications are independently business critical and needed to be available at least for the full business day or some important aspect of the company was shut down. So while a single virtual server running everything sucks, you really can get

Actually, the funny thing is, real snake oil actually does what it was originally supposed to do. "Snake oil" comes from traditional Chinese medicine (as a cure for joint pain), and was made from the fat of the Chinese water snake, Enhydris chinensis. It is extremely high in omega-3 fatty acids (particularly EPA), and is very similar to what is sold today as fish oil. Omega-3 fatty acids (in particular, EPA) are now known to reduce the progression and symptoms of rheumatoid arthritis.

Now, in the US, a variety of hucksters took fats from any old snake (if it even involved snake oil at all) and made all sorts of miraculous, unsubstantiated claims about what it would do. But in its original role in Chinese medicine, snake oil likely did exactly what it was claimed to do.

Yeah, I don't think this stuff can simply be called "snake oil". ERP systems are in use. They're not a cure-all, but failing to fix every problem doesn't make a thing useless. The current usefulness of "artificial intelligence" depends on how you define it. There are some fairly complex statistical analysis systems that are already pretty useful. Full on AI just doesn't exist yet, and we can't even quite agree on what it would be, but it would likely have some use if we ever made it.

Today, cloud computing, virtualization, and tablet PCs are vying for the hype crown. At this point it's impossible to tell which claims will bear fruit, and which will fall to the earth and rot.

I agree with your post (not the article): these technologies have all had success in the fields in which they've been applied. But ESPECIALLY virtualization, which is way past experimenting and is becoming so big in the workplace that I've started using it at home. No need to set up a dual boot with virtualization, and the risk of losing data is virtually removed (pun intended), because anytime the virtual machine gets infected you just overwrite it with yesterday's backup. No need to s

``Not sure why virtualization made it into the potential snake-oil of the future. It's demonstrating real benefits today...practically all of the companies I deal with have virtualized big chunks of their infrastructure.''

I am sure they have, but does it actually benefit them? In many cases, it seems to me, it's just people trying their best to come up with problems, just so they can apply virtualization as a solution.

I think the issue I have with both virtualization and cloud computing is a lack of concrete assessment. They are touted as wunder-technologies, and while they have their place and their use, a lot of folks are leaping into them with little thought as to how they integrate into existing technologies and the kind of overhead (hardware, software, wetware) that will go with it.

Virtualization certainly has some great uses, but I've seen an increasing number of organizations thinking they can turf their server r

I administer hundreds of virtual machines and virtualization has solved a few different problems while introducing others.

Virtualization is often sold as a means to completely utilize servers. Rather than having two or three applications on two or three servers, virtualization would allow condensing of those environments into one large server, saving power, data center floor space, plus allowing all the other benefits (virtual console, ease of backup, ease of recovery, etc..).

In one sense it did solve the under-utilization problem. Well, actually it worked around the problem. The actual problem was often that certain applications were buggy and did not play well with other applications. If the application crashed it could bring down the entire system. I'm not picking on Windows here, but in the past the Windows systems were notorious for this. Also, PCs were notoriously unreliable (but they were cheap, so we weighed the cost/reliability). To "solve" the problem, applications were segregated to separate servers. We used RAID, HA, clusters, etc., all to get around the problem of unreliability.

Fast forward a few years and PCs are a lot more reliable (and more powerful) but we still have this mentality that we need to segregate applications. So rather than fixing the OS we work around it by virtualizing. The problem is that virtualization can have significant overhead. On Power/AIX systems, the hypervisor and management required can eat up 10% or more of RAM and processing power. Terabytes of disk space across each virtual machine is eaten up in multiple copies of the OS, swap space, etc.. Even with dynamic CPU and memory allocation, systems have significant wasted resources. It's getting better, but still only partially addresses the problem of under-utilization.
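That overhead claim is easy to put in numbers (all figures below are illustrative assumptions, not measurements of any particular platform):

```python
# Illustrative: how hypervisor overhead and per-VM OS copies eat into
# the consolidation win. Every number here is an assumption.
host_ram_gb = 256
hypervisor_overhead = 0.10   # ~10% of RAM, per the figure above
vm_count = 20
os_ram_per_vm_gb = 2         # each guest OS's own footprint

usable = host_ram_gb * (1 - hypervisor_overhead)
left_for_apps = usable - vm_count * os_ram_per_vm_gb
print(left_for_apps)  # -> 190.4 of 256 GB actually runs applications
```

So even before dynamic allocation inefficiencies, roughly a quarter of the host in this sketch goes to the hypervisor and to twenty redundant copies of an operating system.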

Yes, it helps, but it really only helps with under-utilized hardware (and this is really only a problem in Microsoft shops). It doesn't help at all with OS creep; in fact, it makes it worse by making the upfront cost of allocating new "machines" very low. However, it has been and continues to be marketed as a cure-all, which is where the snake oil comes in.
VMware's solution to OS creep: run tiny stripped down VMs with a RPC like management interface (that will naturally only work with vSphere) so that the V

We need to bring about a paradigm shift, to think outside the box, and produce a clear synergy between cloud computing and virtualization.

Damn it all, man. You don't produce synergy! You leverage synergy. Please get it right, will you? The sooner you do, the sooner you can return to your core competency and synthesize some maximum value for your investors. M'kay?

IT snake oil: Six tech cure-alls that went bunk. By Dan Tynan. Created 2009-11-02 03:00AM.

Today, cloud computing [4], virtualization [5], and tablet PCs [6] are vying for the hype crown. At this point it's impossible to tell which claims will bear fruit, and which will fall to the earth and rot.

1. AI: Has to have existed before it can be "bunk".

2. CASE: Regarding Wikipedia [wikipedia.org], it seems to be alive and kicking.

3. Thin Clients: Tell that to the guys over at TiVo that thin-client set-top boxes are bunk.

4. ERP Systems: For low-complexity companies, I don't see why ERP software isn't possible.

5. Web B2B: He is right about this one.

6. Social media: Big companies like IBM have been doing "social media" within their organizations for quite some time. It's just a new name for an old practice.

And as for his first comment,

"Today, cloud computing [4], virtualization [5], and tablet PCs [6] are vying for the hype crown. At this point it's impossible to tell which claims will bear fruit, and which will fall to the earth and rot."

There's a pattern here. Many of the hyped technologies eventually find a nice little niche. It's good to experiment with new things to find out where they might fit in or teach us new options. The problem comes when they are touted as a general solution to most IT ills. Treat them like the religious dudes who knock on your door: go ahead and talk to them for a while on the porch, but don't let them into the house.

> 3. Thin Clients: Tell that to the guys over at TiVo that thin-client set-top-boxes are bunk.

Nevermind the Tivo. Web based "thin client computing" has been on the rise in corporate computing for over 10 years now. There are a lot of corporate Windows users that use what is essentially a Windows based dumb terminal. Larger companies even go out of their way to make sure that changing the setup on your desktop office PC is about as hard as doing the same to a Tivo.

Client-based computing (Java or .NET) is in fact "all the rage".

They've been doing that for years. Strangely, even when your desktop PCs are locked down so tight they may as well be dumb terminals, a lot of people will still scream blue murder if it really is a dumb terminal being put on their desk.

So don't tell them it's a dumb terminal. Put a thin client on their desk and tell them they're getting a 6 ghz octocore with 32 gigs of ram and a petabyte hard drive. They'll never know. Most of them, anyway.

2. CASE: According to Wikipedia [wikipedia.org], it seems to be alive and kicking.

As a programmer, CASE sounds pretty neat. I don't think it will obviate the need for programmers any time soon, but it has the potential to automate some of the more tedious aspects of programming. I'd personally rather spend more of my time designing applications and less time hammering out the plumbing. It's interesting to note that many of the CASE tools in that Wikipedia article are ones I'm familiar with, although they were never referred to as CASE tools when I was learning how to use them. I think the CA

Clouds are actually water vapour. So it literally is vaporware... and since it's water vapour, it's no surprise that letting it anywhere near your computer hardware will instantly make that hardware go on the fritz.

I was surprised to find ERP on this list. Sure, it's a huge effort and always oversold, but there's hardly a large manufacturing company out there that could survive without some sort of basic ERP implementation.

The fundamental problem with ERP systems is that they are integrated and implemented by the second tier of folks in the engineering pecking order. Couple that fact with an aggressive sales force that would sell ice to eskimos and you've got a straight road to expensive failure.

Sing it brother (or sister)! As one who is currently helping to support an Oracle-based ERP project, expensive doesn't begin to describe how much it's costing us. Original estimated cost: $20 million. Last known official number I heard for current cost: $46 million. I'm sure that number is over $50 million by now.

But wait, there's more. We bought an off-the-shelf portion of their product and of course have to shoe-horn it to do what we want. There are portions of our home-grown process that aren't yet implemented and probably won't be implemented for several more months even though those portions are a critical part of our operations.

But hey, the people who are "managing" the project get to put it on their résumé and act like they know what they're doing, which is all that matters.

Within limits, expert systems seem to work reasonably well. Properly trained software that examines x-ray images has been reported to have better accuracy than humans at diagnosing specific problems. The literature seems to suggest that expert systems for medical case diagnosis are more accurate than doctors and nurses, especially tired doctors and nurses. OTOH, patients have an intense dislike of such systems, particularly the diagnosis software, since it can seem like an arbitrary game of "20 Questions". Of course, these are tools that help the experts do their job better, not replacements for the expert people themselves.
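Stripped to its core, that kind of rule-based diagnosis is just a prioritized pile of if/then rules plus a fall-back to a human. A minimal sketch in Python, where the symptoms and rules are invented purely for illustration:

```python
# Toy rule-based "expert system": each rule pairs a set of required
# conditions with a conclusion. Rules and symptoms are made up.
RULES = [
    ({"fever", "cough"}, "possible flu"),
    ({"fever", "rash"}, "possible measles"),
    ({"cough"}, "possible cold"),
]

def diagnose(symptoms):
    """Return the conclusion of the first matching rule, trying the
    most specific rules (most conditions) first."""
    for conditions, conclusion in sorted(RULES, key=lambda r: -len(r[0])):
        if conditions <= symptoms:  # every required condition is present
            return conclusion
    return "no matching rule; refer to a human expert"

print(diagnose({"fever", "cough", "headache"}))  # possible flu
```

Real systems add confidence factors and ask follow-up questions to narrow the rule set, which is exactly where the "20 Questions" feel comes from.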

Worse, users resented giving up control over their machines, adds Mike Slavin, partner and managing director responsible for leading TPI's Innovation Center. "The technology underestimated the value users place upon having their own 'personal' computer, rather than a device analogous -- stretching to make a point here -- to the days of dumb terminals," he says.

So why does it look good now? Oh, right: different people heard the setup, and a new generation gets suckered by it.

This is a bit OT but I wanted to say that snydeq deserves a cookie for linking to the print version. I can only imagine that the regular version is at least seven pages. I hope slashdot finds a way to reward considerate contributors such as him or her for making things easy for the rest of us.

I don't know of a single IT department that hasn't been helped by virtualization of servers. It makes more efficient use of purchased hardware, keeps businesses from some of the manipulations to which their hardware and OS vendors can subject them, and is (in the long term) cheaper to operate than a traditional datacenter. IT departments have wondered for a long time: "if I have all this processing power, memory, and storage, why can't I use all of it?" Virtualization answers that question, and does it in an elegant way, so I don't consider it snake oil.

I kind of miss the crazy hotties that used to pervade the network sales arena. I won't even name the worst offenders, although the worst started with the word cable. They would go to job fairs and hire the hottest birds, put them in the shortest skirts and low-cut blouses, usually white with black push-up bras, and send them in to sell you switches.

It was like watching the cast of a porn film come visit. Complete with the sleazebag regional manager, some of them even had gold chains on. Pimps up, big daddy!

They would laugh at whatever the customer said wildly, even if it wasn't really funny. The girls would bat their eyelashes and drop pencils. It was so ridiculous it was funny, it was like a real life comedy show skit.

I wonder how much skimming went on in those days. Bogus purchase orders, fake invoices. Slap and tickle. The WORST was when your company had no money to afford any of the infrastructure and the networking company would get their "capital finance" team involved. Some really seedy, slimy stuff went down in the dot-com boom. And not just down pant legs, either.

Most of the technologies in the article were overhyped but almost all have had real value in the marketplace.

For example, AI works and is a very strong technology, but only the SF authors and idiots expect their computer to have a conversation with them. Expert systems (a better name) or technologies that are part of them are in place in thousands of back-office systems.

But, if you're looking for HAL, you have another 2001 years to wait. Nobody seriously is working toward that, except as a dream goal. Everybody wants a better prediction model for the stock market first.

I got interested in AI in the early '90s, and even then the statements made in the article were considered outrageous by people who actually knew what was going on. I use AI on a daily basis, from OCR to speech and gesture recognition. Even my washing machine claims to use it. Not quite thinking for us and taking over the world, but give it some time. :)

Same with thin clients. Just today I put together a proposal for three 100 seat thin client (Sunray) labs. VDI allows us to use Solaris, multiple Lin

We used to play buzzword bingo when vendors would come in for a show. Some of my personal favorites:

IT Best Practices - Has anyone seen my big book of best practices? I seem to have misplaced it. But that never stopped vendors from pretending there was an IT bible out there that spelled out the procedures for running an IT shop. And always it was their product at the core of IT best practices.

Agile Computing - I never did figure that one out. This is your PC, this is your PC in spin class.

Lean IT - Cut half your staff and spend 3x what you were paying them to pay us for doing the exact same thing only with worse service.

Web 2.0 - Javascript by any other name is still var rose.

SOA - What a gold mine that one was. Calling it "web services" didn't command a very high premium. But tack on a great acronym like SOA and you can charge lots more!

All those are just ways for vendors and contractors to make management feel stupid and out of touch. Many management teams don't need any help in that arena; most of them are already out of touch before the vendor walks in. Which is exactly why they're not running back to their internal IT people to ask why installing Siebel is a really BAD idea. You can't fix bad business practices with technology. Fix your business practices first, then find the solution that best fits what you're already doing.

And whoever has my IT Best Practices book, please bring it back. Thanks.

That's too much information. Before they know it, their scientists are talking to the competition and trade secrets are leaking out."

I don't think the author has a clue. Secrets that could be accidentally spilled in conversation are not worth keeping; if it's that short, it's bound to be trivial. The really essential results are megabytes and megabytes of data, code, and know-how. Treat your researchers like prisoners and you get prison science in return.

It was not too long ago that Java was going to:

- Give us applets to do what browsers could never do: bring animated and reactive interfaces to the web browsing experience!
- Take over the desktop: write once, run anywhere, and render the dominance of Intel/MS moot by creating a neutral development platform!

Yes, perhaps it's found a niche somewhere. But it's fair to say it fell short of the hype.

The most obvious counterexample to the "AI" nonsense is to consider that, back around 1800 or any time earlier, it was obvious to anyone that the ability to count and do arithmetic was a sign of intelligence. Not even smart animals like dogs or monkeys could add or subtract; only we smart humans could do that. Then those engineer types invented the adding machine. Were people amazed by the advent of intelligent machines? No; they simply reclassified adding and subtracting as "mechanical" actions that required no intelligence at all.

Fast forward to the computer age, and you see the same process over and over. As soon as something becomes routinely doable by a computer, it is no longer considered a sign of intelligence; it's a mere mechanical activity. Back in the 1960s, when the widely used programming languages were Fortran and Cobol, the AI researchers were developing languages like LISP that could actually process free-form, variable-length lists. This promised to be the start of truly intelligent computers. By the early 1970s, however, list processing was taught in low-level programming courses and had become a routine part of the software developer's toolkit. So it was just a "software engineering" tool, a mechanical activity that didn't require any machine intelligence.
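The point is easy to demonstrate today: the free-form, variable-length list manipulation that once required LISP is a few lines in any mainstream language. A small illustrative sketch in Python:

```python
# Recursively flatten an arbitrarily nested, variable-length list --
# the kind of structure LISP pioneered, now an everyday exercise.
def flatten(tree):
    result = []
    for item in tree:
        if isinstance(item, list):
            result.extend(flatten(item))  # descend into sublists
        else:
            result.append(item)
    return result

print(flatten([1, [2, [3, 4]], 5]))  # [1, 2, 3, 4, 5]
```

Nobody would call this "intelligent" code now, which is precisely the reclassification the comment describes.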

Meanwhile, the AI researchers were developing more sophisticated "intelligent" data structures, such as tables that could associate arbitrary strings with each other. Did these lead to development of intelligent software? Well, now some of our common programming languages (Perl, Prolog, etc.) include such tables as basic data types, and programmers use them routinely. But nobody considers the resulting software "intelligent"; it's merely more complex computer software, but basically still just as mechanical and unintelligent as the first adding machines.

So my prediction is that we'll never have Artificial Intelligence. Every new advance in that direction will always be reclassified from "intelligent" to "merely mechanical". When we have computer software composing best-selling music and writing best-selling novels or creating entire computer-generated movies from scratch, it will be obvious that such things are merely mechanical activities, requiring no actual intelligence.

Whether there will still be things that humans are intelligent enough to do, I can't predict.

The most obvious counterexample to the "AI" nonsense is to consider that, back around 1800 or any time earlier, it was obvious to anyone that the ability to count and do arithmetic was a sign of intelligence. Not even smart animals like dogs or monkeys could add or subtract; only we smart humans could do that.

Interestingly, in recent years, many animals have been found to be able to perform simple mathematical tasks.

I definitely agree with a lot of the items on that list. This time around, however, thin clients are definitely in the running because of all the amazing VDI and virtual app stuff and fast, cheap networks. However, anyone who tells you that you can replace every single PC or laptop in your company needs to calm down a little. Same goes for the people who explain thin clients in a way that makes it sound like client problems go away magically. They don't; you just roll them all up into the data center, where you had better have a crack operations staff who can keep everything going. Why? Because if the network fails, your users have a useless paperweight on their desk until you fix it.

I'm definitely surprised to not see cloud computing on that list. This is another rehashed technology, this time with the fast cheap network connectivity thrown in. The design principles are great -- build your app so it's abstracted from physical hardware, etc. but I've seen way too many cloud vendors downplay the whole data ownership and vendor lock-in problems. In my opinion, this makes sense for people's Facebook photos, not a company's annual budget numbers.

You've been pushing this Storage Virtualization on us storage admins for years now, and it's more trouble than it's worth. What is it? It's putting some sort of appliance (or in HDS's view a new disk array) in front of all of my other disk arrays, trying to commoditize my back end disk arrays, so that I can have capacity provided by any vendor I choose. You make claims like,

1. "You'll never have vendor lock-in with Storage virtualization!" However, now that I'm using your appliance to provide the intelligence (snapshots, sync/async replication, migration etc) I'm now locked into your solution.2. "This will be easy to manage." How many of these fucking appliances do I need for my new 100TB disk array? When I've got over 300 storage ports on my various arrays, and my appliance has 4 (IBM SVC I'm looking at you), how many nodes do I need? I'm now spending as much time trying to scale up your appliance solution that for every large array I deploy, I need 4 racks worth of appliances.3. "This will be homogeneous!" Bull fucking shit. You claimed that this stuff will work with any vendor's disk arrays so that I can purchase the cheapest $/GB arrays out there. No more DMX, just clariion, no more DS8000 now fastT. What a load. You only support other vendor's disk arrays during the initial migration and then I'm pretty much stuck with your arrays until the end of time. So much for your utopian view of any vendor. So now that I've got to standardize on your back end disk arrays, it's not like you're saving me the trouble of only having one loadbalancing software solutions (DMP, Powerpath, HDLM, SDD etc..). If I have DMX on the backend, I'm using Powerpath whether I like it or not. This would have been nice if I was willing to have four different vendor's selling me backend capacity, but since I don't want to deal with service contracts from four different vendors, that idea is a goner.

Besides, when I go to your large conferences down in Tampa, FL; even your own IT doesn't use it. Why? Because all you did is add another layer of complexity (troubleshooting, firmware updates, configuration) between my servers and their storage.

You can take this appliance (or switch based in EMC's case) based storage virtualization and Shove It!

Hell, almost all the cases should be considered successes now. The problem was that they were all massively over hyped back in the day.

Our massive move to web-applications and the newly-but-stupidly-coined "Cloud" is as much a thin client solution as it was back then.

To many, Google can be considered an AI. After all, it helps answer your questions. With more and more NLP being built into it (and other web applications), it is getting closer to directly answering your questions.