Telirati Newsletter #43: Servers, Re-centralization, and the Promise of the Personal Computer

In this newsletter I take a shot at the server craze of the time. Over the years, server software has remained a very active area of the software industry, but Linux and other free and open source software has cooled the gold-rush mentality. Recently, tools like distcc have brought distributed compilation to Linux, confirming my prediction that software development would be one of the first targets for workgroup clustering. But the real importance of this is in striking a blow against the TV-izing of the Internet: the less distinction between a user's node and a server, the better. An interesting example of the virtualization of server software in use today is Skype's distributed directory. A continuing trend in this direction is part of the Internet's return to its network-of-servers roots, and it is a Very Good Thing.

A wave of servers is sweeping over the face of computing. If you want an Internet presence, you need a server. For many prospective customers, this fact finally answers the question: "Why do I need a server?" For some people, mainly those who were never wholly comfortable with personal computers, servers feel like home, and the term "re-centralization" has come into currency, meaning the return to importance of a central computing resource.

I don't like it! No, I am not suggesting that the server market is over-hyped. It is as real as can be, and will make everyone serving this market a lot of money. However, it is easy to take enthusiasm for servers, server technology, and server-based software too far. One reason why it is so easy is that one can charge thousands of dollars for server software, and the vision of every-business-an-e-business with a server makes people who sell server software positively tumid with anticipation. On the other hand, the personal computer hardware business is now an exercise in extreme brutality of competition. Software, outside of Microsoft's dominant spheres of operating systems and desktop productivity applications, and a few high-end specialties, is also suffering from reduced price expectations, and Web-based replacement technology. Servers and server software offer a comfortable refuge. So it is no surprise that servers have sucked up all the available mind share.

The problem is that, while the server business is booming, servers have a long way to go to catch up with the computing power being deployed on desktops. Not the computing power of individual desktops, of course, but the aggregate computing power of the millions of desktop PCs built every month, with unit volumes still growing explosively. This is true even as the revenue and, worse still, the profit picture for PC makers looks bleaker by the day. Nevertheless, desktop computing power dominates the computing landscape as never before. You can cluster servers, of course, and if you get several dozen of them together this way the results are very impressive, but that is still a tiny fraction of what could be accomplished by turning large groups of personal computers into a kind of "hive mind."

Before this develops into a speculative discussion, here is an application of workstation-level clustering, as you might call such a technology: software development. Compile time is a critical element of coding productivity. Typically, coders are given fast machines for software development, so that they spend less time waiting for a compiler to complete its work. The process of "doing a build," compiling all the code in a system and linking it, is also a key bottleneck in software development. On large projects, doing builds is a monumental effort. Still, I have not heard of one instance where the aggregate power of the idle or mostly-idle workstations in a software development organization is brought to bear on the task of compiling an individual programmer's modifications to a module, or on the task of building a complete software system.

Compilation is an ideal task for exploring the possibilities of this type of distributed computing: It can be implemented at the level of the software development applications. It can be implemented with existing distributed component technologies. It can be administered using the same group administration user interface and database as used for source code version control. Compilation is almost entirely risk-free. It does not impinge on the state of the PC except to use a modest amount of temporary storage. The results would be valuable. Even a two-fold improvement in compile time would be a boon to everyone developing applications software. If it proves useful, such facilities can be migrated into the OS and made more user-friendly so that they can be applied to all applications.
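To make the idea concrete, here is a minimal sketch of the scheduling pattern described above. Everything in it is illustrative: the "workstations" are threads standing in for remote machines, and compile_unit() is a hypothetical stand-in for shipping a source file to a peer and invoking a real compiler there.

```python
# Sketch: farming out independent compilation units to idle workstations.
# Threads simulate remote workers; compile_unit() stands in for a real
# remote compile (copy source over, run the compiler, copy the object back).
from concurrent.futures import ThreadPoolExecutor

def compile_unit(source_file: str, worker: str) -> str:
    # A real implementation would dispatch source_file to `worker` and
    # return the resulting object file; here we just name the output.
    return source_file.replace(".c", ".o")

def distributed_build(sources, workers):
    # Translation units are independent, so they parallelize cleanly:
    # round-robin each unit onto an available workstation.
    with ThreadPoolExecutor(max_workers=len(workers)) as pool:
        futures = [
            pool.submit(compile_unit, src, workers[i % len(workers)])
            for i, src in enumerate(sources)
        ]
        return [f.result() for f in futures]

objects = distributed_build(["main.c", "net.c", "ui.c"], ["ws1", "ws2"])
print(objects)  # ['main.o', 'net.o', 'ui.o']
```

The key property the sketch relies on is the one argued above: compilation has no side effects on the borrowed machine beyond temporary storage, which is what makes it nearly risk-free to distribute.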

There are some research implementations of this idea extant. Legion is one such system. The ideas in Legion are not too distant from what can be extrapolated from Microsoft's directions in the development of COM, intentional programming, and next-generation component software systems. Jini incorporates some of these ideas, too. Market drivers for this technology could come from such end-user categories as game software.

But I want it now! This opportunity, and others besides (a successful implementation of personal communication in the form of IP telephony and conferencing; true integration with consumer electronics; the lack of any credible commercial attempt at an interface beyond the desktop metaphor; the lack of commercial attempts to replace the notion of data stored in files; the lack of breakthroughs in speech processing) all indicate that anyone whining about a lack of opportunity to compete with Microsoft in operating systems is just not looking to see where the holes are.

Diversity in personal computer operating systems appears to have died with OS/2's larger ambitions. This need not be so. If there is a competitor out there that can put eight or nine zeroes in front of the decimal point of an operating system development budget, there are ample domains of product definition where superiority can be established. Compared with having to compete with Intel's ability to spend a few billion on each new generation of a fab, going up against Microsoft is less daunting. Would it be a thankless task? Only if you think half a trillion dollars in market value is not worth the effort.

What of current challengers? Linux is a worthy opponent. Linux could acquire capabilities like ad-hoc workstation clustering sooner than Windows. But bringing a real challenge to Windows' superiority means surpassing Windows in several areas of product formulation. Linux, with all its advantages, doesn't have product formulation. It is a remarkable phenomenon, but it cannot go in an inspired, risky, and uncharted direction, which is exactly what is needed in order to compete with Windows. BeOS is still out there, waiting to get enough zeroes behind its budget, and BeOS does have developers of a particular vision. Lucent's Inferno and Plan 9 have, at least potentially, more zeroes than Croesus could bring to bear on a problem, plus the singularly fine mind of Dennis M. Ritchie. But Ritchie has recently given interviews that indicate he has given up on challenging Microsoft.

The likely outcome will be that this concept will be pioneered in Linux, achieving some popularity in, for example, cracking a crypto key or analyzing vast amounts of planetary imaging data. The trouble is, to be well implemented, the concept has to incorporate the use of a distributed component technology, and CORBA is just not that intimately a part of Unix-like OSs, the way COM is part of Windows. So when Microsoft addresses this competitive threat, it will do so in a way that is more comprehensive and more interwoven with the way Windows software works. Unless someone else decides it's worth taking a run at Microsoft.

Copyright 1999 Zigurd Mednieks. May be reproduced and redistributed with attribution and this notice intact.
