After all, there is a guitar tuner app on my Android phone and it works well enough. I can't plug it into my son's electric guitar and tune it whilst he is jamming out some Deep Purple covers at the local pub. (Actually he isn't doing that... he is only 11, but I think he wishes he was doing that...)

But that sort of expresses the problem. For most purposes the hardware (the guitar tuner) becomes "appified" - that is, it just becomes another app on your mobile phone. A software guitar tuner replaces a hardware one, except for specialist uses.

OK - for the uninitiated I will explain what this is. It's a virtual computer download. After all, you can run a computer on VirtualBox (or a VMware player) and it will virtualize a computer - that is, run a computer in software that looks, feels and operates like the box you purchased online from Dell. The computer you are virtualizing with this download is (and you need to breathe deeply to get the significance of this) a hotel administration system which will manage billing for telephones, wifi hotspots, the mini-bar and the porno movie you watched at 10.30pm because you were lonely and in a strange town.

And because it is a virtual machine you can house it in any nondescript Linux-running box, provided you run VMware on it and plug a few nondescript switches into the back of it.

The expensive hotel PABX has been virtualized: eaten by software. This "virtual machine" is the product of some French company that assembled it from publicly available bits of software - and all they want to sell and install is a few trivial phones.

Which neatly describes the problem for Cisco. Cisco, you see, makes complicated integrated hardware-software devices - and their product set, like most hardware-software devices, is being appified.

What has happened to guitar tuners is happening to Cisco - which is the real problem with the stock and the reason it trades at such low multiples.

And you can see this in their results: we are in the middle of one of the biggest routing booms you could imagine - as we go from a world where a few devices are connected to one where every device, every tablet is connected to the web. Cisco talk about 50 billion devices but you can't see it in their revenue line.

Instead, in the middle of the biggest imaginable boom, they had that dreadful conference call where they talked about government spending being weak. Hey, isn't this a private sector routing boom? Well it is, but the private sector is appifying really fast. The government sector is paranoid about Chinese government hacking and terrorist vulnerabilities and so wants the tried-and-proven hardware-software integrated device (which they think is harder to hack). Government paranoia is stopping the software eating the hardware, and them guys at the Department of Homeland Security are sitting there safely in their (antiquated Cisco) box...

But whilst the government holds back the tide of history, the lesson holds true: if you make hardware-software integrated devices, your risk is that you are going to get appified - and unless you do the appification yourself, someone will do it for you.

Marc Andreessen is right: software will eat the world or at least part of it.

If you are a pure hardware maker it's going to be really diabolical. I should illustrate by example. A financial firm I know (one office, two floors) runs about 70 desktop computers running Windows. They used to have a computer on each desk, a series of centralized servers and a backup of the servers (minute by minute) stored off-site, with the desktops backed up once a week. You were told not to store stuff on the client computer - only the server.

There were a bunch of security risks with this. For example the client computers all had USB ports so you could plug in a USB key and steal data. So the USB ports were disabled. The client computers still had hard-drives. A staff member could steal data by downloading stuff to their hard drive and then walking out with the hard-drive in their bag. I guess you could lock up the client computers and put alarms on them.

It ain't run that way anymore. The computers are now virtualized.

I need to explain that. With a Linux system you can run five computers on one box (you run Linux and on that box you run virtualization software like VMware or Citrix or VirtualBox). Each of those computers can be different. One could be Ubuntu (a flavor of Linux which I kind of like). Another could be Windows 7. Another two could be Vista. If you are prepared to stretch the law, one could even be Apple OSX. In other words one box makes five computers. Or sixty-five, provided the box is powerful enough. And they share the same processing power and the same RAM, which means if one person is not using it another person can.
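To see why the sharing matters, here is a toy sketch of the arithmetic. All the numbers are made up for illustration - the post doesn't give the firm's actual specs:

```python
# Why sharing works: the RAM "promised" to virtual machines can exceed
# the physical RAM, because not every VM is busy at once.
# All figures below are illustrative assumptions, not the firm's specs.
vms = 65
ram_per_vm_gb = 4            # what each VM is nominally given
physical_ram_gb = 128        # what the one box actually has

provisioned = vms * ram_per_vm_gb            # 260 GB "promised"
overcommit = provisioned / physical_ram_gb   # roughly 2x oversubscribed

# This is fine as long as concurrent demand stays under the physical limit:
busy_vms = 25                                # say, at the daily peak
peak_demand_gb = busy_vms * ram_per_vm_gb    # 100 GB actually in use

print(provisioned, peak_demand_gb <= physical_ram_gb)  # 260 True
```

The box sells the same gigabyte several times over and gets away with it, for the same reason banks do: everyone rarely shows up at once.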

But Linux boxes can also be used in a different way. You can run ten boxes linked together and pretend they are one computer - and from the perspective of the user they are one computer. Indeed you can run a million computers together that way and they will behave like a single integrated supercomputer. We have an example - it's the Googleplex, in which it looks like you are sending your request to a single super-computer but the whole thing is run on desktop computers racked in huge storage barns. The beauty of the linked computers is massive redundancy. If 1000 computers went out simultaneously in the Googleplex you would not even notice. The other computers - Borg-like - would just take up the slack.
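The redundancy arithmetic is worth spelling out, using the illustrative numbers from the paragraph above (a million-machine fleet is my assumption for Googleplex scale, not a disclosed figure):

```python
# Losing 1,000 machines out of a million-node cluster barely dents capacity.
fleet_size = 1_000_000        # assumed Googleplex-scale fleet
simultaneous_failures = 1_000

surviving = fleet_size - simultaneous_failures
capacity_lost = simultaneous_failures / fleet_size

print(surviving)                      # 999000
print(f"{capacity_lost:.1%} lost")    # 0.1% lost
```

A tenth of one percent of capacity vanishes - which is why nobody notices.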

Now you can do this in combo - you could run 70 Microsoft machines on two Linux boxes, each box being redundant. That is pretty much bombproof, because Linux machines barely crash and virtual machines barely crash (for reasons explained in this post).

And that is what this financial institution does. It has two largish Linux boxes linked together, running 70 virtualized Microsoft boxes.

And because it is a financial firm, the two Linux boxes are backed up second-by-second at a remote site 70km away.

The staff have their old computer sitting under their desk. They see it. They just have no idea that it is non-functional - a dumb terminal for their virtual machines with only the graphics card doing anything. (For some reason graphics cards don't yet virtualize well though that is changing...)

Given the boxes under the desk do nothing they are never going to be upgraded. The linux machines will be upgraded - but that is little more than throwing in another server blade.

It's OK for Microsoft: Microsoft is still renting 70 software licenses to run on 70 virtual machines. It is still renting Office and the whole suite of other Microsoft products. But it is diabolical for Hewlett-Packard who, like Dell, are highly dependent on the corporate computing business for their margin.

Andreessen is on the board of Hewlett-Packard. He thinks software will eat the world, but in this case it is his company that is being eaten. He desperately wants to salvage it, but salvaging it is expensive if you start from the platter Mr Hurd left him.

Anyway, if pure hardware businesses are stuffed (and I think they are) then what happens to hardware-software integrated devices? If they can be replaced by software only, they are stuffed (example: Cisco). But it is not always that clear. Andreessen's article gives the example of military drones - pilotless planes which can kill. The pilot is not being replaced by software but turned into a jockey with a joystick who may go to war somewhere in the continental USA - killing people with drones before going back to be with his wife and kids. But the plane still exists and it still needs guns. It is a software-hardware device and it is not getting eaten. It may be getting better, but the bullets are real bullets and they can't be virtualized. So drones look safe enough from being eaten by software - and it is the fact that they are lethal which makes them safe.

The point here is that software does not eat the world - it changes the world but the drone business doesn't get eaten in the way guitar tuners or Cisco routers get eaten.

So let's change the title of Andreessen's piece: software eats a good part of the world but supplements other parts - and as an investor in existing technology you need to know whether you are being eaten or not.

I think I have a ready answer for this: every time you look at a piece of kit (a hardware device) you have to ask yourself whether the output of your hardware device is information or the manipulation of information or whether it is something else.

If the output of your hardware is information or the manipulation of information then you are going to get eaten. If the output is something else then you are not.

So let's do the division.

Guitar tuner: information. What is the pitch of the guitar? Doomed.

Alarm clock: What is the time? Do I need to wake up? All information. Doomed.

Military drone: Output is violent death. That is not information, it is a brutal physical reality and hence it is not eaten by software.

Cisco router: manipulation of information. Doomed.

Other items in Andreessen's article

Telephone companies. That is pure information manipulation. Ultimately doomed except for linking all this together. Certainly the old analog phone network is problematic.

Walmart distribution systems. Well the output is shopping for physical goods. Not doomed. Information is just an input.

Oil and gas exploration where computers drive drills etc: not doomed - the output is oil and gas.

Additional ones:

Libraries. Doomed.

Traffic lights. Doomed - but it will take time to get the computer-controlled cars on the road.

You can go on.
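For what it's worth, the rule of thumb is simple enough to write down as code. A toy sketch - the devices and their labels are just the examples from the division above:

```python
def eaten_by_software(output: str) -> bool:
    """The rule of thumb from this post: if a device's output is
    information, or the manipulation of information, software eats it.
    Otherwise it survives."""
    return output in {"information", "manipulation of information"}

# The division above, restated as data:
devices = {
    "guitar tuner": "information",
    "alarm clock": "information",
    "Cisco router": "manipulation of information",
    "military drone": "violent death",
    "Walmart distribution": "shopping for physical goods",
}

doomed = sorted(d for d, out in devices.items() if eaten_by_software(out))
print(doomed)  # ['Cisco router', 'alarm clock', 'guitar tuner']
```

The drone and the distribution system survive: information is an input to them, not the output.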

For thought.

John

*We are - despite appearances on this blog - primarily long investors and we spend a lot of time thinking about our longs.

**Disclosure: I once shorted HP but made no money. The analysis was right. Our timing and execution left a lot to be desired.

35 comments:

At the end of the article, John brings up libraries as a place which in the long run is dead. While it saddens me to agree with him, I can only do so half-heartedly.

I'm still a great user of my local library, despite my ownership of kindles, smartphones, etc. Granted, I understand that people like me are in the minority (and becoming even more so), however, I am always seeing very studious people in there studying for the MCAT, GMAT, or other subjects. They are of course not there actually loaning out books but rather using the space to study, but there's real value to that. We can't all simply leave the library and go camp at Starbucks. So, will the world's libraries eventually turn into giant study halls? Somehow I have a difficult time envisioning that and would be curious for your thoughts, John.

1) Cisco has got a partnership going with vmware at least on the unified communications space, if not across the entire cloud computing spectrum. So, Cisco may yet have a fighting chance on this while turning the ship around

2) A bit confused on the 'appification' impact on HP - so if the financial firm still has those five dumb boxes, who will they buy them from? Can't HP assemble and sell them? HP is quitting the consumer PC business because of the tablet onslaught - the enterprise PC business might be more affected by cloud offerings through Chrome than virtualized terminals, right?

"Microsoft has been doing that for a while and is losing more than a bit... (iOS, OSX, Android, Linux) but at least that is a fair fight."

MSFT's balance sheet seems to suggest that you are correct, but for the life of me, I can't figure out why large companies are so happy to waste so much money on crappy software when they can use linux for free--and have more stability and fewer security risks, to boot.

In other words, this is perhaps a "fair fight", but I don't understand why or how the expensive, crappy software is even left standing after competing with superior, free software.

Actually, where does intel fit into this model? Are they, too, doomed due to software advancement? After all, the cost of simply building a state of the art plant is absolutely astronomical (on par with oil refineries, if memory serves).

It was only 20 years ago we saw the start of an explosion of special purpose computer hardware: sound cards, graphics cards, modems, network cards, 3d accelerators.

Now much of that functionality is done in software on general purpose hardware. Even 3d cards are becoming GPGPUs.

This is actually a cycle in technology which has existed for a very long time. Software eats the world only in the medium term. In the long term, there will always be another cycle where important things can be done faster and cheaper in hardware and software is simply not up to the task.

This happens because the increase in transistors from Moore's law is smooth, while the observed performance gains that enable new applications is not smooth.

There are periods such as 1982-1989 or 2003-2011 where performance increases slowly and established applications are the norm. In those periods software seems good enough.

Then there are periods like the 1970's or 1992-2002 where all the action is in hardware and the software can't keep up.

Your thesis is sound for now, but if you really have a long term view, you should be watching for the turn. It happens after hardware has stagnated (relatively speaking) and people start demanding computers do things that they simply can not do well.

On Cisco - If government is worried about router security, I would think that private sector would be equally concerned. Wouldn't Cisco's IP help Cisco in transforming its software to run on virtual Cisco devices? Thoughts?

The reason that graphics cards don't virtualise well is down to bandwidth. And that's something you need an awful lot of to drive a display.

A full HD stream (1920 x 1080 pixels with 8 bits of colour for each of its three channels) running at 24 frames per second requires around 150 MB / sec.

A lossless image compression algorithm could halve that, but I doubt you'd do much better.

I'm not sure if you could find an office with a fast enough network to handle that. :)
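The commenter's arithmetic checks out - a quick sketch of the numbers given above:

```python
# Uncompressed full-HD video bandwidth, per the commenter's figures.
width, height = 1920, 1080
bytes_per_pixel = 3          # 8 bits for each of three colour channels
fps = 24

bytes_per_frame = width * height * bytes_per_pixel   # 6,220,800 bytes
bytes_per_second = bytes_per_frame * fps             # 149,299,200 bytes

print(f"{bytes_per_second / 1e6:.0f} MB/sec")  # 149 MB/sec
```

So roughly 150 MB/sec per display, uncompressed - which is indeed a lot to push over an office LAN, and part of why remote-display protocols compress aggressively rather than shipping raw frames.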

I actually think that there's a subtle mistake in the thesis. Hardware isn't going away, it's just becoming more capable at a given level, and specialised devices are losing out to generalised devices.

The problem with a lot of the deployment scenarios and use cases you mentioned in your article is that they are running on generalised standard architectures. Take high frequency trading, for instance. Information (market data, order information) has to propagate through the same OS stacks. The next step up in performance will come from the silicon level (hardware), like FPGAs. So hardware is not all dead yet.

I agree with the article, but the wag in me can't help but note that a DCF model/EBITDA multiple and a few potted observations about whether this or that sector is growing is something that might be doable in software.

@Greg Hao and Alistair: Agreed on the libraries. They provide a multitude of uses beyond just the rental of books. I suppose all we can do is band together to save them, because I for one, need some place to take the kid so she isn't on front of the TV all day. Besides, I know plenty of people who can't read, but they also never went to the library. Cause/Correl?

Your comments are a lot more to the point than Andreessen's. Andreessen fails to distinguish between the mechanics of information handling (where software supported by generic hardware is indeed eating specialized hardware) and content or knowledge processing (where we are far from true breakthroughs to emulate human intelligence). Web 2.0 experiments showed that computer software is still far from being able to process human knowledge (the start-ups that aimed at this holy grail got nowhere) but is able to perform aggregation and distribution of information cheaply (start-ups that succeeded found a way to exploit the associated networking effect).

Seems to me that software has always been a better business than hardware in technology, so this is not a new paradigm or anything. I can't name many hardware businesses other than maybe Cisco that have thrived for long time periods. Great fortunes have been made in software using minimal capital, but can that be said about any hardware businesses? Apple is effectively a software business as you've pointed out on your blog several times.

Still, hardware is needed in the world to get to the software, but as usual tech hardware is a mediocre business unless there is integrated software involved (good example would be Checkpoint). And the tech hardware that is most relevant is changing - perhaps that is the paradigm shift. The end of the PC and the rise of tablets and smart phones. That gives rise to the fall of the evil empire, Microsoft, which stifled innovation, to open companies like Google that allow other companies to thrive on their platform.

I went through something like this about 10 years ago, when I thought my series of palmpilots could do anything, given the huge range of software that was available. After a long time, I decided that this wasn't practicable and went back to using specific devices for particular functions. More recently, I bought into the whole Apple marketing nonsense about "Apps for everything" and tried that for a while, only to change back again.

Let me list (briefly) the main reasons why the "App for everything" approach doesn't work for me:

1. Using just one device for everything means that you are subject to the inherent limitations of that device and this forces you into making compromises which may not make much sense. For example, for a while I tried using an HP12C emulator on an iPod Touch and it was kind of fun to do so. But I eventually decided this was not very satisfactory, due to the relatively large number of input errors I made in my attempts to press virtual buttons on a flat glass screen. I went back to using my old HP17BII, because it had real buttons with tactile feedback, and my input error problems disappeared.

2. Using one device for everything exposes you to what I call "upgrade hell". That is, if you replace your main device with the latest and greatest version, or even if you just upgrade the operating system, you face the task of reloading all your old apps. Sometimes, the apps have not been updated to cope with the latest device/system, which means that you have to either: (a) do without that App while you wait for the developer to upgrade it; or (b) get a similar app which does run properly. Approach (a) involves the problem that your App of choice might have been developed by someone who no longer has the time or inclination to update it, or the skill to eliminate the latest bugs which every update inevitably produces. Approach (b) exposes you to additional cost and inconvenience, as well as the risk that the new App may not accept all your existing data.

3. The App for everything approach raises the "all your eggs in one basket" risk. Put simply, if you drop your precious toy or leave it in the back of a cab, not only have you lost your phone, but you have also lost all the additional functionality too. (A variation on this theme relates to battery life, but this is less of a problem these days.)

There are some other problems too, but the above represents the main reasons why I don't try to do everything on my phone.

Having said all that, I must acknowledge that I am a complete neanderthal when it comes to technology (eg I think my cheap little Windows laptop is just fine) which means that I am usually wrong about such things.

More importantly, I know that most people will accept the "App for everything" idea due to all the marketing effort behind it. Likely this will become a self-fulfilling prophecy. That is, the market for specific devices (eg guitar tuners) will be much smaller than before, which will likely affect the availability of standalone devices, which may have the effect of forcing people into the "one device to rule them all" approach despite the latent problems of that approach.

As for me, I think I have wasted far too much time and money over the years fooling around with a succession of handheld devices which promised to do it all!!!

It is about competition and gross margin. There is no reason why Cisco couldn't compete by providing its own genericized hardware controlled by its own software. For all we know Cisco probably already does that for all but its most performance critical hardware. The question is when generic hardware is available, what kind of margin could a vendor of some hardware function expect?

Virtualisation of routing is an area I disagree with. Virtualisation generally occurs when the power of the general purpose CPU greatly surpasses the requirements of the task being virtualised. Take the humble MODEM of the early 90s. Early on they were relatively expensive pieces of hardware because all the functions of the MODEM were handled by dedicated chipsets. Once the general purpose CPU became powerful enough to handle this and other computing tasks, the MODEM was virtualised and handled in software. They became substantially cheaper as they effectively just became a port that you could plug into a phone line... all the logic was now handled in software.

The key here though is that the general purpose CPU has to become fast enough to handle these tasks, as it is many times less efficient at performing them. Routing is an extremely processor intensive task, so it makes sense to have dedicated hardware to run it on.

And just as an aside, Linux will not take over the world. They had their chance when there was a lot of momentum 5-10 years ago, but now the reasons to switch are much less apparent. There is an inherent problem with the model as well, and this permeates through the quality of the software - if you are working for free, wouldn't you want to work on the fun stuff (new features etc) rather than do the boring bug checking and code auditing? I find it hard to accept a world where a tool that is so important to me (my computer) is essentially provided to me without any payment. It's just a situation that can't really exist.

I agree that HP in its current form is stuffed, but I don't agree that the whole PC hardware business is stuffed.

People spend ages in front of their computing devices, be they smart phones, tablets, laptops or desktops. The consumer market for this hardware is huge.. and consumers like to have attractive/sexy/well built/etc devices and will pay more for the good ones.

Apple has built themselves into such a profitable company by making the best hardware you can get. Yes, their software is also pretty good but everyone thought webOS was pretty awesome when HP bought them too... in the end it was HP's inability to support good software with good hardware that was their undoing.

I bought a MacBook Pro 2yrs ago because there wasn't a PC manufacturer that made a piece of physical hardware that was as attractive, well built, fast, etc etc as the mac. If HP could build machines like this (and 2yrs later they still haven't) then they could grow the market and charge premium prices. But for whatever reason, neither HP or any other Windows OEM has managed to build market-leading hardware.. so Apple is dominating growth in the laptop market.

Same goes for smartphones, same goes for tablets.

At the moment the iPhone is 1yo and the Samsungs of the world have caught up with better hardware that runs Android and they are now outselling iPhone.. that will probably change when Apple releases the next iPhone and leaps ahead of the market again - but you see - a lot of the innovation is in hardware, it is not all software.

This article "looks" right on the surface, but it mixes concepts and trends:

-Consumerization trend: new generation of devices, powerful, mobile, SW-based functionalities instead of HW based ones. Key features do not lay on the "keys" but instead on the SW running on it.

-Virtualization trend: centralized, scalable, web based SW that can be consumed as services by anyone.

These two trends are leveraging each other, but are different ones.

They generated the market transition we are witnessing now: new type of devices are being used (i.e. a Tablet instead of a PC), new architectures created to support the delivery of these services (i.e. Cisco's Unified Computing System), etc etc.

The combination of pervasive use of applications on multiplicity of devices, consuming cloud services, just generates more traffic, no less.

So more infrastructure is needed, but instead of the traditional PCs, switches and routers in a corporate environment, more infrastructure is being added in huge datacenters like Amazon, SFDC, or Service Provider facilities, etc, etc.

There is a shift in how technology budgets are spent, and Cisco & HP are trying to adapt with new products.

So yes, both can be dead in 5 or 10 years, but according to the Cisco strategy, for example, they are adapting to this market shift.

I always thought routers had to be very fast, to switch packets with ever higher throughput at faster and faster speeds. Maybe not the Linksys wireless router in your house that lets you use your laptop while sitting on the toilet, mind you, but the big ones that are the backbone of the "plumbing" of the Internet.

I always thought the latter were where Cisco made the bulk of their revenue. Can those really run efficiently on generic off-the-shelf hardware? Surely cutting-edge high-bandwidth routers require purpose-built custom hardware, and the video-ification of Internet traffic (Netflix, etc.) will surely only increase demand for those?

To all those wondering why Cisco is not protected by the performance of their hardware implementation, you need to read up on merchant silicon and how generic, programmable chips are now a faster way to market at lower cost for switch and router vendors.

Even Cisco itself is starting to use merchant silicon where it can't justify the cost of its own in-house R&D and production fabrication. Slowly this will move up the value chain until Cisco's only advantage will be their software that runs on the same chips as everyone else.

Thanks for providing the networking industry term "merchant silicon" which I wasn't previously aware of. I used that as a starting point for extensive googling and I see a lot of industry experts agree with you.

Disclosure: I'm a former Cisco employee and now work at a large enterprise in the network space.

I agree that Cisco are in some trouble. With the advent of opensource virtualized routing software like Vyatta (http://www.vyatta.com/) and the increased performance of the base Intel gear and the merchant silicon I think it is possible to replace the large majority of standalone network devices such as firewalls, load balances, SSL termination, remote access, WAN optimization, proxies etc. with virtualized network stacks. See http://kellyherrell.wordpress.com/2011/04/15/x86-asics-and-relativity/ for details about x86 performance. More reading at http://vnetworkstack.com/. I don't have any relationship with Vyatta or Citrix and post these as FYI only.

There are a small number of cases where specific high end equipment is necessary based on the volume and throughput (Core WAN networks, market data firewalls etc.) but huge swathes of individual purpose built boxes can be replaced. The maintenance and lifecyle management benefits of virtualizing the network layer are huge.

Players like Arista Networks use merchant silicon in the low latency trading networks space for when you need better performance. Intel just bought one of these merchant silicon players in Fulcrum - http://gigaom.com/cloud/intel-buys-networking-chipmaker-because-the-data-center-is-now-the-computer/. Intel combined with Vyatta could take out large sectors of Cisco's install base.

Interesting research at https://encrypted.google.com/search?hl=en&q=%22Performance+Evaluation+of+Open+Virtual+Routers%22

James Hamilton's (Amazon) blog has some very good commentary on the commoditization of the network space.

Cisco may be in some trouble (as others have pointed out in the commentary above), but not for the reasons argued in this post. I agree with Graeme - hardware isn't going away, it is becoming generalized. This puts Cisco in a different position. Not gone, just different. Cisco's real value is in their software, comparative quality, the ability they provide to properly manage networks with their products etc.

bandwidth and cpu will continue to march on and you will continually be able to do more with both, but that's the point really...

General disclaimer

The content contained in this blog represents the opinions of Mr. Hempton. Mr. Hempton may hold either long or short positions in securities of various companies discussed in the blog based upon Mr. Hempton's recommendations. The commentary in this blog in no way constitutes a solicitation of business or investment advice. In fact, it should not be relied upon in making investment decisions, ever. It is intended solely for the entertainment of the reader, and the author. In particular this blog is not directed for investment purposes at US Persons.