Can software scale?

Do you know the story of the King and the toaster? Briefly, it recounts a
King asking two of his courtiers about the shiny box with two slots and how
they would go about designing one with an embedded computer on board. The
‘engineer’ offers a simple, straightforward answer based on a 4-bit
microcontroller with no software involved, while the ‘computer
scientist’ goes to great lengths to redefine the device as a multi-purpose
‘breakfast food cooker’, which needs a beefy Pentium-90 based system
to support it. We'll soon see versions of this joke asking how one would
connect the toaster to the Internet...

This month saw Telecom 99. Every four years the International
Telecommunication Union organises a huge gathering of the industry. At
Telecom 95, everyone was talking about mobile connectivity and IP-based
solutions for traditional telco applications. This year, demonstrations of
voice over IP and professional mobile connected devices were everywhere, and
the talks were about introducing mobile and connected devices into the home.
Expect this to happen by the next event in 2003.

This evolution raises two related, fundamental issues. First, in a
connected environment, where do you put the complexity? You've got the
choice of putting it just at the periphery (in our previous example, it would
be in the toaster), just in the network (in the electrical network), or
splitting it between the two. The different choices have an obvious impact on
the volume of communication that has to take place. The second issue is that if
every dumb or intelligent device gets an Internet address – a world where
there'll be more Internet addresses than human beings (ie more than 6
billion) – then we've seriously got to study the scalability and
behaviour of very large systems.
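As a back-of-envelope aside of my own (these figures are not from the talks): a 32-bit address space, as used by IPv4, cannot even give every person an address, let alone every device. A quick check:

```python
# Quick arithmetic: can 32-bit (IPv4) addresses cover 6 billion people?
# The population figure comes from the text; the rest is simple arithmetic.
ipv4_addresses = 2 ** 32           # about 4.29 billion addresses
population = 6_000_000_000         # roughly the 1999 world population

print(f"IPv4 address space: {ipv4_addresses:,}")
print(f"Shortfall against the population: {population - ipv4_addresses:,}")
```

The shortfall is over 1.7 billion before a single toaster gets plugged in, which is why address spaces much larger than 32 bits come into play once devices outnumber people.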

During a Q&A at Telecom 99, Larry Ellison was mocking the
Microsoft strategy of always creating the need for ever more powerful computers
for users. He's got a point. The time we spend configuring,
re-configuring, tweaking, and rebooting our PCs is quite ridiculous. The
complexity of Windows 2000 is rather mind-boggling when most users just need it
for writing letters, sending email, browsing the Web, calculating some figures,
and getting some data from a database. The few hiccups of websites like eBay
amount to much less time offline than the cumulative time PCs need to be sorted
out. Ellison characterised the Oracle strategy as: ‘We've been 100%
Internet pure for some time [...] the only way to get to an Oracle
application is through an Internet browser; [there is] no software on the
desktop PC’. So, we've got a clear case for putting the intelligence
in the network and having dumb devices at the periphery. In fact Ellison jested
that his company has reinvented the principle of economies of scale.

Taken at face value, this strategy looks rather attractive. However, when
asked what he thinks of micro payments, Ellison clearly stated that they just
can't work: ‘Micro payment is of mind-boggling complexity.’
He explained that no database today could handle the amount of data and
communication required by micro payment. He recommended a flat-fee solution. I
quite enjoy having most Internet content free, so this really sounds good;
however, that's where I started to have problems with his overall strategy.
If we add a vast number of dumb devices to the network, which all get their
services from network-based central resources (and Ellison stands by his
prediction that before the end of the year there will be more networked
computers than traditional PCs), then we'll need some rather beefy
software and hardware to handle it all. And if it can't work for micro
payment, how can it work for the rest? Unsurprisingly, Ellison avoided all
questions related to this point.
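Ellison's point about volume can be made concrete with a back-of-envelope sketch. The device count is the 6 billion figure from above; the number of payments per device per day is purely an assumption of mine, chosen for illustration:

```python
# Rough micro-payment load, under illustrative assumptions.
devices = 6_000_000_000            # the 'more addresses than people' scenario
payments_per_device_per_day = 10   # assumed, for illustration only
seconds_per_day = 24 * 60 * 60

tx_per_second = devices * payments_per_device_per_day / seconds_per_day
print(f"{tx_per_second:,.0f} transactions per second")  # roughly 694,444
```

Even at a modest ten payments per device per day, a central service faces hundreds of thousands of transactions per second, sustained, which makes ‘mind-boggling complexity’ sound like an understatement.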

Another company with a strong vested interest in a similar model is Sun
with its motto ‘The network is the computer’. Greg Papadopoulos, CTO of
Sun Microsystems, is one of Sun's visionaries who spends much of his time
analysing what's going to happen and trying to find some answers. At
Telecom 99 he shared his thoughts on how the increasing number of
‘things networked’ will impact us. He drew a parallel between (a) the
increase of networked computers being overtaken by the increase of networked
consumer devices and (b) the increase of computer power (Moore's law
states that the increase is roughly 2x every 12 months) being dwarfed by the
increase in network bandwidth available (roughly 2x every 6 to 9 months). In
both cases, after the crossover point, we've got an issue of scale. Even
though he stressed this view, he didn't offer any advice on how to get
there from where we are now. He estimates that the gap between bandwidth and
computer power will soon exceed a factor of 1,000. The
only way to deliver such bandwidth will be to scale networks of computers! But
just how you do that was left as an exercise...
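The crossover arithmetic can be sketched as follows; the doubling periods are the ones quoted above (computer power every 12 months, bandwidth every 6 to 9 months), and the 1,000x target is Papadopoulos's estimate:

```python
import math

CPU_DOUBLING = 12                  # months per doubling of computer power

def gap_after(months, bw_doubling):
    """Ratio of bandwidth growth to compute growth after `months` months."""
    return 2 ** (months / bw_doubling) / 2 ** (months / CPU_DOUBLING)

def months_to_gap(target, bw_doubling):
    """Months until the bandwidth/compute gap reaches `target`."""
    net_rate = 1 / bw_doubling - 1 / CPU_DOUBLING   # net doublings per month
    return math.log2(target) / net_rate

for bw in (6, 9):
    print(f"bandwidth doubling every {bw} months: "
          f"1,000x gap after about {months_to_gap(1000, bw):.0f} months")
```

With bandwidth doubling every 6 months the gap reaches 1,000x in about 120 months; at 9 months per doubling it takes roughly 359 — so ‘soon’ depends heavily on which end of the range the network actually tracks.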

An interesting side-point is his view that when we have more networked
consumer devices than computers we'll have to change our model for
software delivery, from the current shrinkwrap model to a service-oriented one.
Another factor will contribute to this explosion of networked elements: at some
point along the line the number of consumer devices will itself be overtaken by
the number of effectors and sensors.

If we start to rely on a general ‘webtone’ infrastructure
without tackling how to scale it, then as soon as consumer devices, effectors,
sensors, and all you can think of, begin to plug into this infrastructure it
will fall apart. Finding ways to create software capable of handling such a
large number of clients and such a volume of communication is a challenge that
will show whether Computer Science has reached maturity.

(By the way do you know how to decide if a field is a science or not?
Simple. If its name includes Science, it is not!)