Management Summary
Despite the confused and ill-defined way in which the term "information superhighway" has entered the vocabulary, it does signify a fairly precise ideal. As represented by politicians such as US Vice President Al Gore and Martin Bangemann of the European Commission, access to information and efficient electronic communication will be the foundation of future economic and cultural life. In the information society, the means by which information is distributed will be as vital as road, rail and air are today. Hence "superhighways" - high-speed networks capable of delivering data, messages and video images interactively and with imperceptible delays.

At present, we have different transmission technologies for different types of information service, and they work well enough. Information superhighways promise to integrate all these services and create new ones: interactive home shopping, real video-on-demand in which videos are delivered in digital form exactly when each individual orders them, cooperative working at a distance using virtual reality, and so on. Gore has compared his national information infrastructure proposals with the 1950s programme to construct interstate highways in the US, which did indeed have direct and profound effects on economic and social life, but the superhighway is really a flagship project for government, closer in significance and likely impact to the moon landings of the 1960s and 1970s than to any road-building programme. Behind the rhetoric, the immediate objective for most of the politicians who have focused on information superhighways is to stimulate economic activity in what are seen as critical industries - IT and telecommunications - rather than to provide a genuinely workable infrastructure for the whole of industry and society. That remains a secondary goal.

Commenting on the report of the Bangemann Group, Europe and the Global Information Society, which was presented to the European Council in June 1994, the Swedish Prime Minister, Carl Bildt, observed that it fails to address the universal applicability of IT and telecommunications. "There is a lingering tendency in the Bangemann recommendations," said Bildt, "to see information technologies as some sort of separate part of economic development that can be treated like the steel industry or agriculture." Much the same is true of the policies and programmes now being developed in the White House. Despite the administration's evident commitment to the use of new networking technology to support education, research and the democratic process itself, the real foundation of Gore's National Information Infrastructure programme is the belief that "information superhighways are critical to American competitiveness and economic strength". President Clinton's view, expressed at the G7 summit in Detroit last March, is succinct. "We must create those new markets," he said, "... [by] investing in job-creating technologies, from dual use military and civilian technologies ... to an information superhighway connecting every classroom and library in the country."

Establishing systems adequate to support the performance of superhighways and allow them to reach into classrooms, libraries, homes and offices, is hugely expensive. But it is not the expense alone that has encouraged politicians to think in terms of private sector development - it is, quite simply, the desire to support the private sector in developing new and profitable markets in IT&T. This has usually involved governments in faster or deeper deregulation of telecommunications and the stimulation of competition between telecommunications companies (telcos).

In this highly competitive market, a number of alternative technologies and approaches to the delivery of information services present themselves. Currently, there are only a handful of suitable technologies for integrating different data types at suitably high speed. There are technical considerations apart from the speed or capacity of the transmission medium, and most interested parties are explicitly committed to supporting ATM switching and SDH/SONET transmission technology on broadband optical fibre operating in the Gigabit range. These technologies are particularly suited to the transmission of mixed data types and multimedia, supposedly defining characteristics of the information superhighway, and certainly central to Broadband-ISDN and Integrated Broadband Communications, the somewhat less glamorous names for information superhighways.

ATM and fibre-optics promise to eliminate the distinction between local and wide area networks and bring high speed networks seamlessly to end-users. Unfortunately, ATM is an immature and only partly standardised solution. For local, campus and metropolitan area networks, Fibre Distributed Data Interface (FDDI), Frame Relay and Switched Multimegabit Data Service (SMDS) over Distributed Queue Dual Bus (DQDB) are introducing high-speed data transmission technologies which, like ATM, achieve high speed by the use of optical fibre, simplified routing, and advanced switching. Particularly important is the fact that today's technology has produced highly reliable routers and switches. Data packets can now be stripped of error-checking code and no longer have to be retransmitted to ensure error-free reception. But there are numerous contending technologies for the desktop and even more contending players with large amounts of capital invested in copper and coax delivering television pictures and telephone services. ATM's ability to handle a variety of data types and to interface with FDDI, Frame Relay, and DQDB will probably be more significant in the medium-term than its - as yet untried - ability to offer seamless connectivity across local and wide area networks.

TCP/IP and the Internet already achieve many of the aims of the information superhighway, and also offer similarly seamless connectivity from local to wide area, but remain inadequate for the provision of multimedia services. If multimedia is required, ATM will almost certainly have to be used as a base technology. But there is evidence that the US telecommunications companies will promote TCP/IP (or a form of it) as a higher level protocol. This may be an unbeatable practical combination for new multimedia-based wide area network applications.

Meanwhile, experimental projects are underway in many countries which seem designed to produce a number of different types of high-speed backbone, depending on the precise application. The idea that there will be one information superhighway seems implausible, while the feasibility or usefulness of connecting all the high-speed backbones in the world, or indeed in one country, is highly questionable. Change will be gradual and fragmented markets for different communications and network services will continue to exist for the foreseeable future. However, the recent alliances of telcos, cable TV operators, IT companies, and entertainment corporations in pursuit of integrated services, killer applications like video-on-demand, and ever-bigger markets, will probably produce a handful of giant conglomerates with global market power.

Introduction
The phrase "information superhighway" has been uttered with something approaching abandon since it first entered the vocabulary in the early 1990s. A search of US and UK news databases produced 1,594 articles published over the last year including the expression. It is not completely surprising that more of them were in the general press than in scientific and technical publications. The information superhighway seems to have grabbed the imagination of leader writers and pundits like electronic brains, Beatlemania, and mad cows before it.

With such a proliferation of news and opinion, one might be forgiven for believing that the information superhighway is common currency, an expression routinely used in homes and offices everywhere. In fact, relatively few people have any idea of what an information superhighway is or might be. Almost everybody understands the telephone, most people know about cable TV, and some even understand e-mail, but the information superhighway is perceived as little more than a cartoon image of encyclopaedias and telephone directories racing down an eight-lane freeway.

Government Views
Most public pronouncements about information superhighways avoid definitions or precise descriptions of technologies, preferring to list potential applications or, with even less precision, assumed benefits, such as "electronic correspondence between government departments and citizens", "hospital patient tracking facilities including images", "remote access for the public to electronic forms", home-shopping or video-on-demand. Part of the reason for this is that governments and supra-governmental agencies have promoted information superhighways, but will not build them. Practical development will be largely the responsibility of private sector companies and deregulated telecommunications operators, who will need to seek business opportunities beyond servicing public sector organizations and government agencies. The information superhighway will have to pay for itself, and as part of the commercial world it is outside the scope of public authorities to define.

Writing in Scientific American, September 1991, the then Senator Al Gore - credited with coining the phrase "information superhighway" - argued for the establishment of "a high-speed computer network" modelled on the US interstate highway system, started by an initial investment by federal government and inspired by "federal leadership", but financed "as a commercial enterprise".

The relationship between public and private sectors is inevitably uneasy - the one committed primarily and at least overtly to neutral promotion of the public good, the other prioritising profitability. The potential for conflict is a constant area of debate around the liberalisation of telecommunications, where the social goal of universal service provision (USP) would be at risk in a comprehensively deregulated market. This balancing act between public and private sector is perhaps the single most consistent theme in the whole information superhighways project.

"It is estimated that $100 billion of private investment would be needed to connect optical fibre to every home, office, factory, school, library, and hospital," wrote Gore in 1991, although more recent estimates suggest a cost closer to $400 billion. "Unfortunately, the onslaught of technology is causing fierce competition between once separate entities in telecommunications. The telephone companies want to transmit entertainment; the cable carriers are asking to get into the communications business. And businesses are turning to satellite communications carriers or installing private fibre-optic networks. All are regulated by vague, outdated, conflicting, constantly changing government telecommunications policies. The result is that the private sector hesitates to jump in and make the investment. We face the classic `chicken and egg' dilemma. Because there is no network, there is no apparent demand for the network; because there is no demand, there is no network.

"Consider the interstate highway system. It made sense for post-war America with lots of new automobiles clogging crooked two-lane roads. But who was going to pay for it? There was no private business that could afford it and no entrepreneur convince it was worth it. Similarly, the private sector cannot afford to build the high-speed computer network we need and may not even be convinced of its value. But like the interstate highway system, once this network is completed the demand for its use will skyrocket. We are already seeing private and public concerns building feeder networks that, just as state roads feed into the interstate highways, could make the network more effective.... For almost 15 years, I have been working to change federal policy so that as a nation we will invest in the critical infrastructure of information superhighways."

Today, as Vice President, Gore has achieved a position in which he believes he can make the superhighway happen. The publication of his white paper, The National Information Infrastructure: Agenda for Action, in September last year, and the administration's support for a number of measures introduced before Congress will help to liberalise the US telecommunications scene further and so open up the market to what will almost certainly be intense competition between cable TV companies, the Regional Bell Operating Companies (RBOCs), AT&T itself, and its existing long-distance competitors, MCI and Sprint.

But what is the information superhighway? First of all, it goes by a number of names: in the US it is also called the data superhighway, the data highway, and the information highway. On the whole, these are used interchangeably. They are also typically used to describe a national, not a global, network. Different information superhighways will exist in the US, Canada, Europe and Japan; there is no guarantee that they will interwork, nor do the politicians or the telcos promise global coverage.

Al Gore's own views about the nature of the superhighway are not altogether precise, and this lack of precision creates problems when technologically uneducated commentators and decision-makers adopt the rhetoric. Sometimes, there are many superhighways even within the US; sometimes only one. In one version, the superhighway is a single high-speed network, funded by the federal government, and providing a backbone for commercial and non-commercial feeder networks to use. In 1991, the single high-speed network may have been the National Research and Education Network (NREN), a Gigabit network proposed in Gore's High Performance Computing Act. In 1994, according to a White House briefing issued in January, it was "an all-optical network testbed operating 100 Gigabit ... per second by 1995 ... [which] will be the foundation of an information superhighway that can provide new commercial opportunities to US manufacturing and service firms". In practice, the real focus is on promoting the growth of commercial networks - many superhighways, not one. The critical role of the federally funded backbone is not just to improve research and education, but to impose standards in a deregulated and, therefore, competitive environment.

"Without federal funding for this national network," Gore has said, "we would end up with a balkanized system, consisting of dozens of incompatible parts. The strength of the national network is that it will not be controlled or run by a single entity. Hundreds of different players will be able to connect their own networks to this one." (SA)

By 1994, the White House had recognised that "private sector firms are already developing and deploying that infrastructure today. It is the private sector that will build and own the NII of tomorrow." The government's role is now seen as promotional, removing regulatory obstacles and encouraging competition, while preserving a - possibly transformed - concept of universal service provision.

Despite the evident power of the superhighways metaphor and the consequent high profile achieved by the Gore proposals, the substance of these proposals is far from unique. Similar concerns have been expressed by a number of European governments and by the European Commission for some years. From 1988 to 1994, the European Commission funded the RACE programme - Research and development into Advanced Communications in Europe - and subsequently made specific proposals for a number of projects, most notably the Europe-wide Integrated Broadband Communications (IBC) network, Metran, Hermes, and an inter-administration network called the European Nervous System. It is commonly believed - and widely repeated - that Europe lags behind the US in the development of high-speed networks. If so, it is not for lack of vision. The implementation of a pan-European IBC network has been part of European Community policy for at least six years. The 1989 PACE report from the CEC notes that:

"The case for an advanced communication system (IBC) in Europe is driven by the following forces:

strategic considerations and capabilities for the European telecommunications industry."

The report observed that technological developments now ensured that one network could carry a full range of services, and also noted the importance of standards, open access and a cost-related competitive tariff structure. The report's final recommendations foresaw initial implementation of an IBC network (delivering, among other services, experimental interactive video and digital HDTV) by 1995. The recommendations, said the report, "melt down into just one - efforts towards the early development of advanced European broadband telecommunications infrastructures should be accelerated."

Unfortunately, European progress has to contend with a fragmented telecommunications industry, extremely complex tariff structures, and nationally focused markets (only about 10% of the revenue of European telcos derives from cross-border traffic). European telecommunications has been and still is characterised by aggressive competition within largely protected markets - a recipe for stagnation. While the US has used government funded R&D programmes, military and civilian, to leverage the development of large-scale networks, and the post-MFJ long distance carriers were necessarily focused on such development, Europe as a whole had no such advantages. European programmes - up to and including the recommendations of the Bangemann Report on Europe and the Global Information Society (presented to the European Union summit in Greece, in June 1994) - have accordingly focused on attempts to encourage cooperation and standardisation, and increasingly on deregulating the market. The result has been relative failure across the board, and the encouragement of alliances between European and US telcos which have been defensive from the European point of view and aggressive from the American.

In 1992, the PACE report noted the US plans for Gigabit networks and observed the lack of any such plans in Europe:

"A major gap is thus developing, both in absolute terms between the processing power that is available to scientists and the lack of communication capabilities that would approach, if not match, their needs for interconnection, and in relative terms when compared with the US. Furthermore, the absence of such a powerful, if focused, communications infrastructure will carry much impact once technical and commercial needs for very high speed communications develop outside the scientific community."

The 1994 Bangemann Report, while underlining the European Union's commitment "to the extension of Euro-ISDN and to implementation of the European broadband infrastructure," addresses this problem only in terms of further liberalisation of the telecoms industry and, crucially, revising tariff structures and standards processes. The European Council's response to Bangemann (spelt out in a statement from the Greek Presidency) agreed that "it is primarily up to the private sector to respond to ... [the] challenge [of the information society], by evaluating what is at stake and taking the necessary initiatives, notably in the matter of financing." This is, roughly speaking, the American approach taken out of context. Individual governments may be more interventionist than the EU, but some - like the UK government - appear less so:

"An issue of interest to the government is whether a market will develop for sophisticated products such as financial and information services in the private sector, which will then attract investment in infrastructure which the public sector can exploit, or indeed whether the public sector opportunities are great enough to drive these developments."

As a result of such equivocation, it seems likely that European superhighways will be developed by global consortia, probably using US technology, operating within increasingly deregulated European markets. The alternative is for the pockets of high-speed networking which already exist in Europe (for example, the new Gigabit SuperJanet academic network) to remain relatively isolated and under-utilised.

Background
Whatever its precise scope, the technological foundations of the information superhighway are the high-speed, optical fibre, wide area networks known as backbones. The development of such networks is more advanced in the US than elsewhere for reasons that are largely political. The technology has always been widely available, but its adoption was stimulated in the US by early deregulation of the telecommunications industry as a result of anti-trust legislation.

The 1984 Modified Final Judgement (MFJ), under the terms of which AT&T's Bell telephone system was broken up, produced seven local telecoms operators (the Regional Bell Operating Companies, RBOCs, or Baby Bells) and one long-distance operator, the rump of AT&T. (It is called the Modified Final Judgement because it modifies the original 1982 decision to break up AT&T). Under the terms of the MFJ, the RBOCs were granted local monopolies but prohibited from owning value-added data or information services, although these restrictions are now being lifted. Long-distance or interexchange services were opened up to competition, and what was left of AT&T was soon joined by MCI and Sprint. These companies ran backbone networks and were allowed to offer value-added services.

From the outset, optical fibre and digital technology were seen as critical to the development of efficient backbone networks and value-added services such as virtual private networks and data communications. Early backbones, such as Arpanet, funded by the US government's Advanced Research Projects Agency and eventually to transmute into the Internet, were established partly to investigate networking technology itself and partly to enable widely dispersed research institutes (and military centres) to communicate quickly and cheaply. Distance evidently played an important part in the development of high-speed networks, but it was by no means the only concern.

Firstly, as telecommunications traffic grew there was a clear need for more capacity - particularly over the critical long-distance links. Secondly, the use of separate networks for voice and data became increasingly inefficient and expensive. Voice networks are sensitive to delays and are conventionally circuit switched. That is, when two parties speak to each other over the phone there is a unique physical link between the two that remains unbroken as long as the line is open. This arrangement is both the most obvious way of doing things and the best, since a conventional phone call is a conversation which takes place in real time and whose pace is determined by the interaction between the two parties. Data networks such as Arpanet, however, were developed with different priorities. While phone conversations may tolerate degradation of quality and occasional interruptions or interference, data communications must be effectively error-proof. Error correction algorithms can help, but they slow transmission speeds dramatically, particularly when dealing with lengthy streams of data. Some types of data communication are relatively insensitive to delay, but others - notably digitised sound and video - are not.

The original aim of Arpanet was to develop a method which would ensure that data and voice communications could proceed to a successful conclusion even when parts of the network were destroyed in mid-transmission - as, for example, by the unexpected arrival of a Soviet intercontinental ballistic missile. The solution was called packet switching. Packet switched networks essentially split up a data stream into small bundles which are transmitted by the best available route and reassembled at the other end into their original sequence. There are two basic forms of packet-switching - datagram transmission, in which packet routing decisions are made at every step of the journey, and virtual circuit switching, in which a single routing decision is made for each transmission before it begins.
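The split-and-reassemble principle can be sketched in a few lines of Python. This is a toy illustration rather than any real protocol: the packet size and the function names are invented for the example, and the sequence number stands in for the header fields a real network would carry.

```python
import random

def packetise(data: bytes, size: int = 4) -> list[tuple[int, bytes]]:
    # Split the stream into numbered packets of at most `size` bytes.
    return [(seq, data[i:i + size])
            for seq, i in enumerate(range(0, len(data), size))]

def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    # Sort by sequence number to restore the original order,
    # however the packets happened to arrive.
    return b"".join(payload for _, payload in sorted(packets))

message = b"INFORMATION SUPERHIGHWAY"
packets = packetise(message)
random.shuffle(packets)  # datagrams may take different routes and arrive out of order
assert reassemble(packets) == message
```

In datagram terms each packet would be routed independently (hence the shuffle); in virtual circuit terms the route would be chosen once and every packet would follow it, so the sort would be unnecessary.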

The former is very flexible and resilient to service disruption, but is best suited to relatively short transmissions. It is typically asynchronous, which means that timing is referenced to the beginning of the transmission of each packet. It is also described as a "connectionless" transmission mode - meaning that an end-to-end connection is not established when the transmission begins. In a connectionless system, transmission times may vary widely because of delays introduced in the routing of individual packets. Routing is probably the most complex procedure, and represents the most significant delay, in packet switching.

Virtual circuit switching improves on the routing problem. It involves a relatively lengthy set-up procedure for each transmission in order to make a connection. It is described as a "connection oriented" technology. Having made the connection, virtual circuit switching is relatively fast and reliable. It is synchronous, which means that timing is very precise, generated by either the signal itself or an accurate external clock. It is suited to long transmissions, and is not usually characterised by unacceptable delays or variations in transmission times. The advantage of virtual circuit switching over 'real' circuit switching is that it offers interleaved access to the same cable by packets from different sources going to different destinations and, of course, it is more resilient.

The two main existing packet-switching systems, TCP/IP and X.25, represent the datagram and virtual circuit approaches respectively. Both emerged at roughly the same time in the early to mid 1970s and have been seen by many as competitive protocols. TCP/IP is used by the Internet and has grown in popularity to the point at which it appears to have become the de facto standard for data communications networks. As a rule, telcos don't like TCP/IP - it is too unpredictable, and has no simple procedure for billing because packets can take such variable routes. As it turned out, it was also effectively useless for transmitting voice, the bread-and-butter of telco operations. However, TCP/IP was absorbed into the Unix operating system and became one of the most important protocols for local area networks and internetworks, particularly influential within the scientific and academic communities worldwide. X.25, on the other hand, found favour with telcos, particularly in Europe. Its biggest problem is its speed - usually between 9.6 Kbs and 64 Kbs, although it has been stretched to 2 Mbs (notably on the French Transpac system). X.25 requires elaborate error checking and collision management to avoid data corruption and delays due to packets simultaneously attempting to access the same link. It is, needless to say, very resilient and ideally suited to lengthy data transmissions or digitised voice, and has become the basis of a large number of public and private wide area networks.

None of this solved what was increasingly identified as a problem by network developers from both the computing and communications camps. These two have traditionally been divided by a culture gap which ensured that they failed to coordinate developments until coordination became inevitable, but both began to realise that the proliferation of cabling, bridges, repeaters, routers and concentrators was creating a logistical nightmare for network operations, while the demand for network services was expanding. Businesses, government, and research organizations saw a growing need to integrate voice and data, while increased competition between telcos, cable TV operators, and broadcasters pressed them into looking for new markets and even poaching customers.

Whatever the precise scope of information superhighways or their ultimate applications, the technical demands are the same: they must use high-speed, high-bandwidth media to carry mixed data types and formats over long distances, reliably and, whenever necessary, in continuous data streams. If any of these characteristics are absent, the network is not a superhighway. In other words, a superhighway is not just a high-speed network offering data rates of, say, between 1 and 100 Gigabits/second. It also supports a high traffic density, the simultaneous transmission of bursty data, continuous digitised speech and video, and the ability to interface with a wide range of different end-user terminal devices and local area networks. While it may be correct in an evolutionary sense to argue (as Andy Reinhardt has in Byte, March 1994) that:

"The data highway's backbone will use every wide-area communication technology now known, including fiber (sic), satellites and microwaves, and the on- and off-ramps connecting users to the backbone will be fiber, coaxial cable, copper and wireless",

it misses the essential point that the core of future integrated high-speed networks will actually be fixed links of the sort that now provide the MCI-managed NSFnet backbone for the Internet.

Two things have driven the development of superhighways - firstly, increased user demand for bandwidth because of increased data traffic and the growing importance of long distance networking, and secondly, supplier pressure to deliver mixed and multimedia services to users. To some extent, the superhighway is simply a rationalisation of the existing situation: it simplifies cabling, and presumes a medium capable of servicing the widest possible market, from home-shoppers to nuclear physicists. Its development is simultaneously encouraged by and supportive of the convergence, both technologically and commercially, of different communications media.

Today, information originating from a variety of sources may already arrive at its destination via a telephone socket, network connection, aerial, satellite dish or cable. Domestic users in particular may be intimidated by the need to introduce yet more sockets to receive interactive multimedia transmissions, while cable companies and telcos have seen the commercial logic of integrating their services, arguing that digital technology makes all forms of communication alike, regardless of historical boundaries and the regulatory frameworks which reflect them. As Nicholas Negroponte, head of MIT's Media Laboratory has put it, "Bits are bits".

Integration remains a difficult problem, however. It is possible to run data over the coaxial cable and analogue fibre used by cable companies, and also to send sound and highly compressed video over the 1.5 Mbs T1 backbones and 64 Kbs ISDN lines operated by telcos. But the results are not good: existing media and transmission technologies may introduce unacceptable delays in data transmission, and they are simply not adequate for intensive applications such as the transmission of digital high definition TV (HDTV), which may require up to 150 Mbs for compressed images. The result has been the pursuit of a medium which could handle any form of data thrown at it, with sufficient capacity to support an economically viable user base. Fibre-optic cable using broadband transmission (that is, high frequency carrier signals) is the answer.
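The arithmetic behind that inadequacy is straightforward. A rough comparison, using the 150 Mbs HDTV figure quoted above against the standard T1 rate of 1.544 Mbs and a single 64 Kbs ISDN channel:

```python
# Rough bandwidth comparison; the HDTV figure is the one quoted in
# the text, the T1 and ISDN figures are the standard line rates.
HDTV_BPS = 150_000_000  # compressed digital HDTV
T1_BPS = 1_544_000      # T1 line
ISDN_BPS = 64_000       # one ISDN channel

print(f"HDTV vs T1:   {HDTV_BPS / T1_BPS:.0f}x")    # ~97x a T1 backbone
print(f"HDTV vs ISDN: {HDTV_BPS / ISDN_BPS:.0f}x")  # ~2344x an ISDN channel
```

A single compressed HDTV stream would thus saturate roughly a hundred T1 lines, which is why the copper-era infrastructure cannot simply be pressed into service.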

With virtually unlimited bandwidth, broadband optical fibre has sufficient capacity to carry many thousands of communications channels simultaneously using the technique of time division multiplexing (TDM), which splits a transmission into fixed time slots, each one of which carries data from a different source. For example, 25,000 voice channels can be transmitted on a single 1.7 Gbs fibre, and the same fibre is capable of carrying bandwidth-hungry digitised HDTV signals or relatively leisurely data streams. Transmission using laser light is comparatively lossless, so repeaters to 'condition' the signals can be spaced at relatively great distances (30 Km on average for 500 Mbs fibre). Metre for metre, optical fibre is significantly more expensive than copper cable, even before considering the cost of laying it, but its cost per bit in long-distance, traffic-heavy networks is substantially lower. Of course, optical fibre requires the transformation of electrical into optical signals and vice-versa, but this is a relatively small price to pay and investment in optical fibre cabling can easily be justified if predicted demand is high enough. By the end of 1987, more than 3.2 million Km of optical fibre had been laid in the US. Today, almost all long distance telephone land-lines are fibre-optic.
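The 25,000-channel figure is easy to check, assuming the standard 64 Kbs rate for a digitised voice channel:

```python
# Capacity check for the figures quoted above. 64 Kbs per digitised
# voice channel is the standard rate; 1.7 Gbs is the fibre quoted.
VOICE_CHANNEL_BPS = 64_000
FIBRE_BPS = 1_700_000_000

channels = FIBRE_BPS // VOICE_CHANNEL_BPS
print(channels)  # 26562 - comfortably above the 25,000 quoted,
                 # leaving headroom for framing and signalling overhead
```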

But the integration of digital services requires a network capable of carrying different types of data at different rates. With enough capacity on a fibre-optic cable, handling diverse streams of data is no problem, but a mechanism is necessary to organise the process. This is SDH, the Synchronous Digital Hierarchy, a transmission standard issued by the International Consultative Committee on Telegraphy and Telephony (the CCITT) and based on the ANSI SONET (Synchronous Optical Network) standard. SDH/SONET describes a TDM method which specifies a hierarchy of different interface rates: data being transmitted at different bit rates can be inserted into or removed from a single multiplexed data stream, thus allowing the fibre to interface with a wide range of data sources.
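The `hierarchy of interface rates' can be illustrated with the CCITT's published STM levels, in which each level multiplexes four streams of the level below. The 155.52 Mbs STM-1 base rate is the standard figure; the text itself quotes only the rounded 155 and 622 Mbs values:

```python
# Sketch of the SDH hierarchy of interface rates: each STM level carries
# four multiplexed streams of the level below it.

STM1_MBS = 155.52  # STM-1 base rate (equivalent to SONET OC-3)

for level in (1, 4, 16):
    print(f"STM-{level}: {STM1_MBS * level:.2f} Mbs")
```

The loop prints the 155.52, 622.08 and 2488.32 Mbs rates, which correspond to the 155 Mbs, 622 Mbs and 2.4 Gbs figures quoted for the projects described later.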

In practice, routing data on an SDH/SONET fibre network would be achieved using an asynchronous transfer mode (ATM) switch. ATM is a technology developed by AT&T in 1980 specifically for the purpose of routing mixed voice and data signals. The switch sits between a number of incoming and outgoing fibre links operating at up to 622 Mbs, and routes data between them. The technique is based on packet switching but, unusually, uses short packets of fixed length, called cells - hence the description of ATM as cell relay technology. Data arriving at an ATM switch is first routed through a so-called ATM Adaptation Layer (AAL), which segments and encapsulates it into cells comprising five bytes of header and 48 bytes of data. A virtual circuit is established for any given input data stream; the cells are buffered to avoid contention for the same output link and are then streamed in the appropriate order down each virtual circuit. The hardware takes over many of the tasks which in previous packet switching routers were implemented in the packets themselves. Routing decisions are minimised by the use of virtual circuits, and error correction is virtually eliminated because optical fibre offers significantly better transmission quality, with less noise and virtually no cross-talk, than copper or coax. Most importantly, the ATM switch can be set up to prioritise cell streams and assign bandwidth on demand. Video data, for example, will get more bandwidth than LAN data, while voice data may be prioritised.
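The segmentation step can be sketched as follows. The five header bytes here are a placeholder (a real ATM header carries virtual path and channel identifiers plus control fields), so this illustrates the fixed 5+48-byte cell format rather than the actual AAL procedure:

```python
# Sketch of AAL-style segmentation: a data stream is cut into fixed-length
# 53-byte cells of 5 header bytes + 48 payload bytes.

HEADER_LEN, PAYLOAD_LEN = 5, 48

def segment(data, circuit_id):
    """Split data into 53-byte cells tagged with a placeholder header."""
    cells = []
    for i in range(0, len(data), PAYLOAD_LEN):
        payload = data[i:i + PAYLOAD_LEN].ljust(PAYLOAD_LEN, b"\x00")  # pad final cell
        header = circuit_id.to_bytes(HEADER_LEN, "big")  # placeholder, not a real header
        cells.append(header + payload)
    return cells

cells = segment(b"x" * 100, circuit_id=7)
print(len(cells), len(cells[0]))  # 3 cells of 53 bytes each
```

Note the fixed overhead this implies: five bytes in every 53, or roughly 9.4 per cent of the raw bandwidth, traded for the predictable timing described below.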

All this is achievable essentially because of improvements to the technology. Like all switches and routers, an ATM switch is essentially a specialised computer. Because it and the input and output links connected to it are faster and more reliable than those used by early TCP/IP and X.25 systems, the data requires less processing and can be handled much more quickly and flexibly. Because cells are of fixed length, timing and synchronisation problems are minimised, and any delays are predictable and can be accounted for.

ATM is still an immature and only partly standardised technology, and some critics have observed that its high-end performance at 622 Mbs suffers from inadequate flow control, resulting in loss of data. But that seems to be less of a problem than the question of whether service providers like the telcos and cable operators will be willing to employ the technology. The current orthodoxy is that ATM switching (or something very like it) will provide the key technology for B-ISDN and therefore for the information superhighway.

Of course, if the superhighway is to connect us all up, then its biggest problem is likely to be the `last mile' - the local loop or connections to local area networks. But here too, ATM is held up as a vital technology. Its simplified approach to routing could allow it to be used as a LAN switching technology, and some commentators have argued that future LANs will be based on optical fibre and ATM transmission control. Despite the growing popularity of technologies such as Frame Relay, FDDI and Distributed Queue Dual Bus (DQDB) on Metropolitan Area Networks (which are seen both as potential links between high-speed optical fibre WANs and conventional LANs and as intermediate stages in a long-term migration to ATM-based combined LAN/WANs), this seems far less certain than the success of SDH/SONET and ATM in high-speed backbones. Research in 1988 suggested that fibre-optic cabling would become economical in the local loop for users requiring at least ten channels at 64 Kbs. While fibre costs are decreasing, so too are the costs of twisted-pair or coax-based technology. This is also increasing in performance: a copper-based version of FDDI, for example, can now deliver the same 100 Mbs data rate as its fibre counterpart, admittedly over a shorter distance, while a standard for a 100 Mbs version of Ethernet on copper wire is already under consideration.

At the same time, ATM and fibre are not universally welcomed. While discussions continue about the possibility of running the Internet protocols over ATM, Internet pioneer Tony Rutkowski, the chief executive of the Internet Society and a director of technology assessment at Sprint, has observed that, "It's nonsense to think that everything will run over ATM". It also seems unlikely that the cable TV operators and RBOCs will gladly sacrifice their enormous investment in copper and coax, while the telcos may be unwilling to put fibre in the local loop for years to come. Digital Equipment, meanwhile, has recently announced ChannelWorks, a bridge allowing Ethernet LANs to interconnect over cable systems. The computer company is in partnership with some of the US's (and therefore the world's) biggest cable operators: TCI, Continental Cablevision, and Times Mirror Cable among them.

None of this is to deny the possibility that the information superhighway will be coming up your drive very soon. It just might not be quite as integrated or quite as global as the promise. As always in the world of telecommunications, the last mile will be the hardest.

The Internet
It makes sense for superhighway-type projects to focus on the Internet, which Al Gore has explicitly acknowledged as an important inspiration and a model of development. As a spin-off from Gore's NREN proposals, the current main US Internet backbone, the 45 Mbs NSFnet, will effectively cease to be government funded in 1996. Nevertheless, the Internet is expected to continue growing at a substantial rate. Figures of 20 to 30 million users are misleading, since they are derived by esoteric formulae which involve estimating the number of addresses attached to devices and hosts and multiplying by 7.5 or 10 (depending on who you speak to). In fact, many Internet addresses are used by unconnected corporate networks (so-called "enterprise networks"), many belong to inactive users (they are never withdrawn), and more are probably restricted to e-mail connectivity. Realistic estimates suggest something between 1 and 2 million active users are connected to the global network, which makes it comparable to other on-line bulletin-board and e-mail systems such as CompuServe.
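The kind of esoteric formula in question is easy to reproduce. The host count below is purely illustrative, not a measured figure; the point is that both inputs are guesses, which is why the resulting user estimates vary so widely:

```python
# Illustrative version of the user-estimation formula criticised above:
# count addressable hosts, then multiply by an assumed users-per-host factor.

hosts = 3_000_000  # illustrative host count (an assumption, not survey data)

for users_per_host in (7.5, 10):
    print(f"{hosts * users_per_host / 1e6:.1f} million users")
# 22.5 and 30.0 million - the spread behind the "20 to 30 million" claims
```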

NSFnet was built and is now managed on behalf of the US National Science Foundation by ANS, a consortium formed by MCI, IBM and Merit, and it is expected that as the Internet `goes commercial' these and other companies will attempt to impose distance-related usage fees for Internet access provided as a commercial service. BT and MCI, AT&T and Novell, and Sprint have already announced Internet services for corporate clients and may also offer dial-up access for occasional and domestic users. Meanwhile, the NSFnet backbone has already been restructured into four Network Access Points (NAPs) for service providers from different zones. Transmissions from NAP to NAP will require negotiated transit arrangements which may include financial settlements like those that operate today with telephone systems.

The Internet has the advantage of being a proven system, using protocols which, unlike ATM, offer end-to-end error checking. In practice, this means that Internet access requires less computing power at the end-user terminal than ATM systems would. Users - particularly domestic users - may prefer to adopt an Internet-based system for this reason. It will be cheaper for them, although possibly less easy to use, and probably slower. The real benefit of the Internet is that it is already some way down the road to the superhighway vision, offering cheap and reasonably reliable intercontinental connections. An experimental programme called the Multicast Backbone, or MBone, has even been used to send live video over the existing Internet of the meetings of the 800-strong Internet Engineering Task Force, and VBNS, a 155 Mbs backbone for connecting supercomputers, will be joined to the Internet later this year. There are problems - notably, the lack of adequate address space to satisfy the apparent demand for Internet connections, the difficulty of devising a suitable billing system for a connectionless packet switching system, the non-commercial culture of existing Internet users, and the overhead due to error correction and retransmission of data packets using TCP/IP. The technical problems are all capable of solution: IPng (IP next generation), for example, should solve the address space issue, while a proposed slimmed-down protocol, the Xpress Transfer Protocol (XTP), may improve the performance of packet switching. It is probable that the leading US telcos, AT&T, MCI and Sprint, will attempt to reach an accommodation with the ATM Forum so that TCP/IP, or a slimmed-down form of it, can be run over ATM networks. If this happens, the Internet or something like it will dictate the direction of the US (and probably the world's) superhighway projects.

Berkom (Berlin Kommunikation): Berkom is an early Gigabit project involving Deutsche Telekom and the Berlin Senate which undertook basic research into high-speed networking and now runs pilot applications including telepublishing, telemedicine and city information systems.

Betel (Broadband Exchange over Trans-European Links): European Commission funded project involving France Telecom and the Swiss PTT which has established 34 Mbs ATM links between FDDI LANs in Geneva, Lyons and Sophia-Antipolis. Pilot applications include distance learning using videoconferencing and collaborative supercomputing.

Blanca: US government funded, and going back to the 1986 Experimental University Network (XUNET), Blanca involves AT&T, Lawrence Berkeley Lab, NCSA (developers of the Internet Mosaic user interface), the Universities of Illinois, Wisconsin and California (at Berkeley), Astronautics, and the RBOCs Ameritech, Bell Atlantic and Pacific Bell. Blanca tests local area network interfaces to SDH/SONET networks using FDDI and ATM. It is studying the flows of different data types, congestion control and multiplexing.

Energis: A collaboration between the 12 Regional Electricity Companies in England and Wales, intended to link all major towns and cities by 1995 with a fibre-optic network running along existing power lines. Services will be targeted at business and residential users.

Gen/Metran (The Global European Network/Managed European Transmission Network): Supported by the European Commission, and involving France Telecom, Deutsche Telekom, British Telecom, Telefonica (Spain), STET/ASST (Italy), and Telia (Sweden), Gen is a project to build a Europe-wide high-speed fibre network as a precursor to a full ATM network. The latter is likely to be Metran, which will support data rates of up to 155 Mbs across Europe.

HPC-Vision: A European Commission co-funded collaboration between France Telecom and Deutsche Telekom to develop a 34 Mbs link between existing networks in Strasbourg and Karlsruhe.

Isabel: European Commission funded project to link two national broadband networks - RIA in Portugal and RECIBA in Spain, initially using 2 Mbs links but migrating to 34 Mbs ATM on a circuit switched network.

Magic: US government funded, involving Earth Resources Observation System Data Center, Lawrence Berkeley Lab, Minnesota Supercomputer Center, SRI International, MITRE, the US Army High Performance Computing Center, Army Battle Command Battle Laboratory, Digital Equipment, Northern Telecom, Splitrock Telecom, Sprint, and RBOCs US West and Southwestern Bell. Based on a military terrain visualisation application, this project will test real- time interactive data exchange among geographically distributed systems. The project will use a 2.4 Gbs SDH/SONET backbone with ATM switching to connect high-speed LANs (including three based on ATM technology and one using an experimental high performance parallel interface) accessing the backbone at 622 Mbs.

MultiG: Part of the Swedish national gigabit research programme, launched in 1990. MultiG focuses on computer supported cooperative work (CSCW) and the protocols to support it, including areas such as multimedia conferencing, video servers and virtual reality.

Nectar: US government funded, involving Carnegie Mellon University, Bellcore, the Pittsburgh Supercomputing Center and RBOC Bell Atlantic. This project will test network scalability by examining the use of ATM to add individual computers and LANs directly to high-speed SDH/SONET backbones.

NIIT (the National Information Infrastructure Test-bed): Commercially funded and involving AT&T, the US Department of Energy, Digital Equipment, Ellery Systems, Essential Communications, Hewlett-Packard, Network Systems, Novell, Ohio and Oregon State Universities, the Universities of California (at Berkeley) and New Hampshire, Sandia Labs, Sprint, the Smithsonian Institution, Sun Microsystems, SynOptics, and RBOC Pacific Bell. This project is designed to test real-world demonstrator applications using the Internet, FDDI, frame relay and ATM. The first was a multimedia application for distributing environmental data called Earth Data Sciences.

NTT: The Japanese telco, which remains highly regulated, lags behind its US and European competitors, despite widespread use of basic and primary rate ISDN at 64 Kbs and 1.5 Mbs, and a 65% fibre-optic backbone. NTT has a multimedia project office which plans to test an ATM-based WAN towards the end of 1994.

PEAN (Pan European ATM Network): Pilot project involving 18 European telcos and AT&T, intended to offer cross-border video and image data transmission using agreed CCITT and ETSI standards. Scheduled to come on stream in late 1994.

VBN: Deutsche Telekom network launched in 1989 which now supports 140 Mbs data transfers for videoconferencing. Intended to be the basis of German fibre-optic network for domestic users.

Vistanet: US government funded, involving the University of North Carolina, North Carolina State University, GTE, MCNC, and RBOC Bell South. This project will examine medical applications for high-speed networks by testing the performance of ATM, SDH/SONET, broadband circuit switching, and high performance parallel interfaces when handling medical images such as CAT scans.

XIWT (the Cross-Industry Working Team): Commercially funded and involving Apple, AT&T, Bellcore, CableLabs, Citicorp, Digital Equipment, GTE, Hewlett-Packard, IBM, Intel, MCI, McCaw Cellular, Motorola, Silicon Graphics, Sun Microsystems, and the RBOCs Bell South, Nynex, Pacific Bell, and Southwestern Bell. XIWT is working on the technical issues involved in delivering Gigabit data to desktops and homes. Its four working groups are examining architecture, services, portability, and applications, with the overall goals of universal and affordable access, flexibility, and ease of use.

SuperJanet/NISTAR: Implemented by BT for the UK higher education sector, SuperJanet is intended to be a successor to the existing `joint academic network', Janet. The pilot network serves the Universities of Cambridge, Edinburgh, and Manchester, together with University College London, the Rutherford-Appleton Laboratory, and Imperial College. Initially, the network used 140 Mbs conventional digital links and also ran 34 Mbs ATM and Internet networks in parallel. It will migrate to Internet protocols on SMDS and ATM on SDH/SONET, and is intended to provide 10 Gbs in the near future. SuperJanet connects to the first SDH network launched in Europe, BT's Northern Ireland Star Network, NISTAR, via SMDS at 10-34 Mbs. Current SuperJanet pilot applications include collaborative molecular modelling, document delivery and advanced distance-based learning.

Conclusion
While there is clearly enthusiasm, and not a little money, for the investigation and implementation of high-speed fibre-optic networks, the combination of deregulation of the telecommunications market and the determination of the authorities to leave the commercial development of information superhighways to the private sector will undoubtedly see a wide range of alternative technologies on offer. At present, ATM combined with SDH/SONET seems to satisfy the technical requirements for a system which will support mixed data types and interactive multimedia. A number of problems stand in the way. Neither ATM nor SDH/SONET is a completely global standard or a fully implemented technology, and it seems likely that different approaches will continue to co-exist, requiring further investment in interfacing technology. The applications that are supposed to drive the market for large-scale superhighways (notably, video-on-demand and home shopping) conflict with both the interests and the technologies of other players, and it is difficult to believe that most consumers will be all that eager to buy the computing power necessary to connect themselves to the superhighway. Finally, pressure from the Internet and from those companies with a vested interest in the survival of TCP/IP (notably MCI and Sprint) may slow the universal adoption of what is, after all, AT&T's technology. The result will probably be many superhighways, not one.