Market pressures have forced marginal computer firms out of business. Looking to compatibles, wary customers are helping create de facto standards. With today's multitiered, overlapping set of programmable computer classes, where and how computing can be done and how much it will cost can vary considerably. Computing costs can be anywhere from $100 to $10 million (Figure 1). In addition, computing devices can include electronic typewriters with built-in communication capability, further increasing the choices to be made and the complexity of the information processing market.

What is happening to mini and mainframe companies as the micro continues to pervade the industry? One thing is that several traditional mainframe suppliers, Burroughs, Univac, NCR, CDC, and Honeywell (or BUNCH, for brevity's sake), are experiencing a declining market share as mainframe customers select IBM-compatible hardware as a standard and turn to other forms of computing. Fujitsu, Hitachi, Mitsubishi, and NEC supply commodity mainframes, which are distributed through Amdahl, National, Univac, and Honeywell.

Microprocessor-based systems are the newest alternative for distributed computation. New companies are forming to develop these products; Burroughs and NCR have distribution agreements with new microprocessor suppliers such as Convergent Technology. As microprocessor technology continues to be substituted for that of traditional minicomputers, these suppliers find themselves in a situation similar to BUNCH's dilemma. For example, SEL and Prime, minicomputer manufacturers, have marketing/distribution agreements with Convergent Technology, but mini companies must compete with systems built from high-performance, commodity-oriented, 32-bit MOS-based microprocessors, which provide the same performance as the traditional TTL-based processors at a small fraction of the cost. In short, the forecast could be gloomy for mini companies.
Just as the mainframe companies were unable to respond to the mini, the mini companies will have difficulty moving to meet the micro challenge because of their large installed bases, proprietary standards, and large functional organizations.

The minicomputer generation

In the beginning of the minicomputer industry, a product took two years to reach the market. This period began with the start of hardware design and went through writing an assembler, a mini operating system, and utility routines for the sophisticated users. A relatively wide range of technology (Table 1) was required to design logic, core memories, and power supplies; to interface peripherals and do packaging; and to write system software, such as operating systems, compilers, and assemblers, and all types of applications software, such as message switching. Clearly, this industry was high-tech.

Table 1. Minicomputer technology circa 1970.

  Power supplies ................ Optional
  Packaging ..................... Essential
  Core memory ................... Optional
  Semiconductors: CPU and memories (MSI)
  Disks and tapes
  Peripheral controllers
  Terminals
  Operating systems
  Languages
  Applications
  System integration

The early minicomputer, characterized by a 16-bit word length and 4K-word memory, sold for about $10,000. It was small and could be embedded in larger systems (for example, electronic circuit testers and machine tools); it could be evolved to large system configurations; and it was used for departmental timesharing. Applications varied from factory control to laboratory data collection and analysis, and from communications to computing in the office and small business. The original equipment manufacturer, or OEM, concept was established so that hardware-plus-software and software-only applications could be designed and marketed, thereby increasing the market for what was basically a general-purpose computer.
Many more markets were created than could be reached by a single organization with a limited view of applications.

From 1968 to 1972, about 100 minicomputer efforts were started by four different kinds of organizations (see box on following page). At least 50 new companies were formed by individuals who came from established companies or research laboratories. Some of these later merged with other companies. Established small and mainframe computer companies such as Scientific Data Systems and CDC attempted to develop a line of minis, and other electronics-related companies looked at the opportunity to enter the computer business.

No significant minicomputer companies were established after 1972. In the late 1970's, IBM decided that distributed departmental computing, using multichannel distribution (OEM/end user), was not a fad and introduced the Series 1. Several companies, Floating Point Systems, for one, were started up to build special signal- and image-processing "niche" machines or to supply high-availability and cluster-expandable minicomputer systems.

Figure 1. 1984 system price versus machine class. The dots on the ends of the lines signify the uncertainty of price range. Because these classes are relatively new, prices are changing rapidly. Also, the class has a broad definition; that is, a number of products of varying complexity can go by the same name. Products within a class can be anything from boards to complete systems.

We can make several conclusions from the data on the minicomputer companies:

- Seven successful minicomputer companies, or eight percent of all tries, survived to enter and defend themselves in the microprocessor market.
- Another 16 companies succeeded to a lesser degree and still exist in either diminished or niche segments of the market.
- Of all organizations, 23 (25%) were successful.
- While virtually all companies built working computers, 75 percent did not build organizations with any longevity for a variety of reasons, including failure in engineering, failure in marketing, faulty manufacturing, or insufficient product depth or breadth.
- Only two of 50 (4%) start-ups succeeded and remained independent, although nine of 50 (18%) continued in some fashion.
- For start-ups, merging increased the chance of survival; four of 60 (7%) could be considered winners.
- The probability of a successful merger was 50-50.
- An organization that is part of a larger body in some other business is pretty likely to fail; only HP, one of 23, really made it.
- A start-up within a large existing company may as well be a stand-alone start-up.
- Companies selling in a different market or price band were unable to make the transition. Only DEC made it, but we can argue that DEC was already in the mini business and simply maintained its market when everyone else started making minis.
- IBM eventually started making traditional minis in the late 1970's with the Series 1 and began claiming a significant market share. The System 3 (circa 1972) was the most successful "business minicomputer."
- Companies that differentiated their products by using specialized hardware and software were prone to failure. Vendors that made special computers for an application such as communications or testing (real-time control) always failed to make successful minis and often failed or fell behind in developing their main product. Specialized hardware limited the market instead of broadening it; although specialized software could sometimes leverage sales, it was typically inadequate when used with limited hardware for a single market.
- In the mini generation, having a high-performance, low-cost, general-purpose minicomputer suitable for broad application ensured getting the largest market share.
DEC, for example, had a variety of operating systems aimed at the real-time, single user (which laid the foundation for the CP/M operating system for personal computers) and provided communications, real-time control, and timesharing. The real-time system was ultimately extended for transaction processing. Minis became especially useful for business applications because they were designed for high throughput. Although business computers weren't useful for real time, minis designed for real time were very good for business and timesharing uses.

DG and Prime: the first market successes. The initial Data General and Prime products were unique and had a relatively long time to find a place before the established leader, DEC, reacted to the threat. DG was established by engineers who had built successful products at DEC (in contrast to many start-ups that had little or no experience in designing products). DG had a simple-to-build, yet modern, 16-bit minicomputer based on integrated circuits that enabled it to be priced below all existing products despite its late entrance into the market. In fact, the late entry was a benefit, since more modern parts could be used and the experience of others could be taken into account. The simplicity of the DG product allowed rapid understanding, production, and distribution, especially to OEMs. The OEM form of distribution is particularly suited to start-up companies because a product is not used in any volume until one to two years after the first shipment.

Prime, another successful start-up minicomputer company, arose under different circumstances. Before the company was established, Bill Poduska, its founder, had built the breadboard of a large, virtual memory in a NASA laboratory. Prime was thus able to introduce the first of the "32-bit (address) minis" in 1973. With this new technology, programs such as CAD could be run.
DEC didn't provide a large, virtual memory capability until 1978, when it introduced Vax.

The start-ups of both DG and Prime were characterized by superb marketing followed by the establishment of a large organization to build and service in accordance with demand.

DEC: a steady force in the mini market. After several false starts, DEC was able to compete with DG and other start-ups because of its momentum in three other basically mini product lines. Its fundamental business from its inception in 1957 was small computers, and while it produced the first large-scale timesharing computer in 1966, it also produced the first mini, the PDP-8, in 1965.

With the onslaught of minicomputer start-ups (including DG, which you will recall was formed by former DEC engineers in 1968), DEC finally responded with a competitive 16-bit minicomputer, the PDP-11, in 1970. The 11, which was comparatively complex, sold as a premium product and allowed DEC to quickly regain the market. With the PDP-11's Unibus, interconnection of OEM products was easy, and extensive hardware facilitated the construction of complex software. By 1975, several different operating systems were available for the various market segments.

DEC converted the PDP-11 to a multichip set relatively early and entered the board market to compete with microprocessors to some degree. Until just recently, it led the 16-bit micro market, but now chip-based micros are commodity parts, and the assembly of personal computers has become trivial. DEC failed to license the PDP-11 chips or make them available for broad use, including the transition to personal computers, so unfortunately the PDP-11 today is merely another interesting machine that failed to make the generation transition.

DEC introduced the Vax-11, a 32-bit mini, about six years after Prime introduced its model, but at a time when physical memories were large enough to support virtual memories and provide optimum cost and performance.
Because it had much larger manufacturing and marketing divisions, DEC quickly regained the market it had lost to smaller manufacturers, including Prime.

IBM: a consistent winner. IBM always responds to mainline computing styles and needs, even though it sometimes enters the market late; for example, it didn't realize early on that the minicomputer had broad market appeal.

IBM sometimes innovates with radical new technology such as the disk, the chain printer, and Fortran, but often follows pioneers in computing styles, as evidenced by its development of the minicomputer, timesharing, the PC, local area networks, and home computers.

Some of its low-cost computers admittedly were nearly minis: the 1130 (1965) for technical computing, the 1800 (1966) for real-time and process control, and the System 3 (1971) for business. In fact, while the minicomputer market was forming, IBM was preoccupied with introducing the 360. However, we should remember that the antitrust suit against IBM started in January 1969 and may account for its lack of aggressiveness during this time.

IBM waited until PCs were established before it entered the market and established the standard. Now, only two years after entering the market, it has the largest market share. Today, IBM is tackling the difficult problems presented by home computing. Thus, because of its size, IBM can dominate any (and perhaps all) market segments of information processing in just a few years.

If we look at computing in the simplest way, that is, in terms of substituting alternative price and performance levels, we can say that a low cost means more people can decide to buy a product, whether they are small company presidents or department heads in a large company. The cost per user, then, determines the product's attractiveness when weighed against other forms of computation.
By both measures, IBM missed the minicomputer market until it introduced the Series 1 in 1977. In short, IBM will consistently win, not only because of its size, but also because it aggressively views all forms of computing, and possibly communication, as part of its market.

HP: the only established company to succeed. Hewlett-Packard purchased a small start-up called Dymec to enter the minicomputer business, and thus might be considered a merger, even though it integrated the product into its organization right from the start. HP's fundamental business was to produce information from instrumentation equipment, and it regarded computing as fundamental. For most companies outside the computer field, computers were too much of a diversion from what they understood and could manage.

The success of HP alone only underlines a concept that usually holds: leaders in a market segment of an industry usually remain leaders, unless too much evolutionary change is required. Technology transition, which typifies the generations, requires much change, including a new computer, a new market, and a new way of computing. Since existing companies are unlikely to address a new market, new companies are required.

The microprocessor generation

The micro-based information-processing industry is composed of thousands of independent, entrepreneurial-oriented companies that are stratified by levels of integration and segmented by product function (whether microprocessor, memory, floppy, monitor, or keyboard) within a level.

The first computer companies built the whole system, from circuits to tape drives through end-user applications, in a totally vertically integrated fashion. A stratified industry, on the other hand, is a set of industries within an industry, each building on successive product layers. Each company designs and builds only a single product within each level.
Systems companies then integrate collections of the segmented products to produce a system for final use.

Three factors have caused this industry structure: (1) entrepreneurial energy released by venture capital; (2) standards, which become constraints for the products and create product divisions, or strata; and (3) the establishment of clearly defined target product segments, so many in fact that we are forced to ask, "What part of the industry is high-tech?"

Entrepreneurial energy. Companies form in an entrepreneurial fashion and are able to participate in every level of integration in a single product or through the integration of products into a complete system. The amount of energy released to build products through entrepreneurial self-determinism is truly incredible; improvements in productivity by a factor of several hundred have been observed relative to a single, large, monolithic functional organization.

The industry formation process, expressed in a style similar to a Pascal language dialect, is shown below.

  procedure Entrepreneur-Venture-Cycle;
  begin
    while Frustration > Reward        {push from Old-Co}
      and Greed > Fear                {pull to New-Company}
    do begin
      get(PC, spreadsheet);
      if System-Company
        then write(Beat-Vax-Plan)
        else write(Plan);             {New-Company}
      get(Venture-Capital);           {from Old-Venture-Co}
      exit(job);
      start(New-Company);
      get(Vax, development-tools);
      build(product); sell(product);
      sell(New-Company);              {at up to 100 x sales}
      venture-funds := Co-Sale;
      start(New-Venture-Co);
    end
  end

The "push and pull" concept. The WHILE clause in the above (the start-up) is evoked by two conditions: the "push" of an old company and the "pull" of a new company or product idea. Throughout each generation, we've seen the "push." Bill Norris led a group (including Seymour Cray) from Remington Rand's Minneapolis group (originally Engineering Research Associates) to form CDC in 1957. Cray left CDC in the early 1970's to form Cray Research. Gene Amdahl could not build high-performance 360's within the IBM environment, so he left to form Amdahl Corporation.
Later, he left Amdahl to form Trilogy for similar reasons. Bill Poduska, who founded Prime in the early 1970's, came from a NASA laboratory where he had built a prototype of a minicomputer with a virtual memory. Later, he left Prime to found Apollo Corporation and build clustered workstations. Bob Noyce left the Shockley Transistor Company to form Fairchild (where he was a major inventor of the IC) and then left Fairchild with Grove and Moore to form Intel to develop the first MOS memories and microprocessors. By most accounts, all these transitions were made with at least 50 percent push from the parent company.

Two business plans, separated by the IF clause in the entrepreneur-venture capital cycle, are (1) a component plan to enter and address one segment of the market, such as a new spreadsheet package, and (2) a plan to build a computing system that will win against Vax or some part of the IBM PC market.

Money is secured from one or more venture capital companies. The founders leave their jobs and start the New-Company in almost a single step. In some instances, "seed" financing is acquired whereby founders actually leave their jobs before the first business plan for the new company is written.

Building and selling the company. The company proceeds to get a Vax for use as a development computer. They develop and sell a product. After the first profitable quarter, the company goes public and the valuation is placed at multiples of up to 100 times the annualized sales of the company. (A multiple of slightly over one is not uncommon for mature but still profitable companies.) With the funds from the public sale, New-Venture-Co. can be formed to invest in new high-tech companies.

The start-up and two alternatives. A PC running Lotus 1-2-3 is required to write the plan and address the financial aspects (i.e., profit and loss and balance sheet).
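The valuation arithmetic described above is easy to make concrete. Below is a minimal sketch, not from the article: the $5 million sales figure and the function name are invented, while the 100x and roughly 1x multiples come from the text.

```python
def valuation(annualized_sales: float, multiple: float) -> float:
    """Public-offering valuation as a multiple of annualized sales."""
    return annualized_sales * multiple

sales = 5_000_000  # hypothetical annualized sales, in dollars

# A hot start-up priced at up to 100 times sales...
startup_value = valuation(sales, 100)   # 500_000_000
# ...versus a mature, still-profitable company at slightly over 1x.
mature_value = valuation(sales, 1.2)    # 6_000_000.0

print(startup_value, mature_value)
```

The two-orders-of-magnitude gap between the multiples is the engine of the venture cycle: the public sale funds the next New-Venture-Co.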
Poduska's elements in a successful business plan, which must be less than 10 pages, include:

- summary: one page;
- market brief: a synopsis of who will buy and why;
- product brief: the what, why, and how of product building;
- people: the rule being use only Grade A, experienced people; and
- financial projection: characterized by the desire for a practical strategy that would yield high yet realizable returns and that could be used as an operational "yardstick."

Standards. Formal standards developed by international standards groups established many of the standards (constraints) observed by today's designers. These restrictions have gradually caused industrial layers with clearly defined limits to form. The following eight levels of integration form the industrial strata, the bottom four being hardware and the top four being software and applications.

- Discipline- and profession-specific vertical applications: CAD for logic design and circuit design, and small business accounting.
- Generic applications: word processing, electronic mail, spreadsheets.
- Third-generation programming languages and databases: Fortran, Basic → Pascal (evolution).
- Operating system: base systems, communication gateways, databases/integrated; Basic → CP/M → MS/DOS → Unix (evolution).
- Electromechanical: disks, monitors, power supplies, enclosures; 8" → 5" → 3"(?) floppy; 5" Winchester (evolution).
- Printed circuit board: buses synchronized to micro and memory introductions; S100 → PC bus, Multibus → Multibus II and VME.
- Standard chip: micros, micro peripherals, and memories; the evolution of Intel and Motorola architectures synchronized to the evolution of memory chip sizes: 8080 [S100] (4K) → Z80, 6502 (16K) → 8086 [Multibus, PC bus] and 68000 [VME] (64K) → 286 [Multibus II], 68020, and NS32032 (256K).
- Silicon wafer: bipolar and evolving CMOS technologies (proprietary, corporate process standards
require formalization to realize a silicon-foundry-based industry).

Signal transmission, physical environment, communications links, and language standards have played a key role in defining these strata. The most important standards, however, are de facto standards set by various manufacturers: microprocessor architectures, buses, peripherals, operating systems, and application software file formats. Regrettably, we often misunderstand and underestimate the importance of these and other standards.

Product segmentation. The number of clear product segments in the industry is a major determinant of its present structure. To understand that structure, we need to isolate which products are worthy of the title "high tech." Advanced technology is characterized by significant investment, highly skilled personnel who understand the technology, and often high project risk. Products evolve at a rapid rate and demonstrate continued performance and price improvements, together with innovative structures. The resulting products demand a premium.
High-density semiconductor and magnetic recording products fit the definition, but most systems assembled from these components, such as IBM-compatible PCs, are clearly not high-tech, because they are simply systems formed from high-tech components.

The barriers for entering an end-user, OEM, or system-level business with a generic product are not very imposing (Table 2 shows the technology requirements), especially when they are compared with the complexity of the engineering needed to produce early mainframes and minis (Table 1). A micro-based system company can be formed by a part-time president, someone with a PC and Lotus 1-2-3 to do the business plan, someone who can buy and assemble the various circuit boards into a Multibus backplane, a programmer to buy and load a version of Unix, and one or two helpers.

Table 2. Microcomputer-based technology circa 1978.

  Power supplies ................ Optional
  Packaging ..................... Optional
  Semiconductors (micros, memory, peripherals)
  CRTs and terminals
  Disks and tapes
  Board options (displays) ...... Optional
  Unix and diagnostics .......... Optional
  Languages and databases ....... Optional
  LANs and communication ........ Optional
  Applications .................. Optional
  System integration

The point I am making here is that the single most important measure of the high-tech portion of the micro industry is semiconductor improvement. That is, semiconductor technology mainly determines the computer class (see box on previous page).
Clearly, many more issues are involved in accounting for performance, price, and relative performance/price, including machine age; hardwired versus microprogrammed control and associated instruction times; memory speed; Vax's cache performance (neither the Cray nor the IBM PC uses a cache); floating-point speed; degree of parallelism for both vectors and scalars; the relative goodness of the Fortran compilers; and actual use versus a single benchmark to typify a computer's workload.

Micro and mini computing structures

Hundreds more products can be built from the micro than can be built from the mini because of the micro's low cost, small size, and ease of programming. Personal computers, terminals, typewriters, and computing PABXs are all lower cost alternatives that provide relatively the same performance as their larger computer ancestors. In addition, micro-based products can be interconnected in a vast array, forming a much larger range than ever before. The most important structure to emerge is the local area network, because it permits the formation of a much larger, potentially single system.

LAN-based computing. The information processing structure within a large organization is driven by newly emerging computer structures, computing nodes, and local area networks, or communication links (Figure 2). The LAN is critical to computer evolution during the next few years, and the lack of standards is greatly impeding progress.

The multiprogrammable operating systems introduced in the mid-1960's allowed a machine to be shared by a number of users, if each had a "virtual" computer (Figure 2a). Since overloading is common in shared systems, users enjoyed having their own personal computers when reasonably powerful, reasonably cheap models were introduced in 1978 by Apple and then in 1981 by IBM (Figure 2b). PCs proliferated in large organizations.

Figure 2. Evolution from timeshared central computers to LAN-based clustered workstations and personal computers.
The need to obtain data from the shared computers meant that programs had to be developed that would allow PCs to emulate dumb terminals. Increased PC usage, coupled with greater expectation of response time, provided a demand for increased shared computation at minis and mainframes. Because users wanted access to specialized and central data, the demand for mainframes has resurged, and this trend is likely to continue until a fully distributed, LAN-based system (Figure 2c) is built.

Xerox Palo Alto Research Center invented the LAN-based cluster concept in the mid-1970's using Ethernet, the basis of IEEE 802.3, the LAN standard. For powerful workstations such as the Xerox Star or Apollo Domain, the LAN must permit the sharing of files and intercommunication of work. Functional services such as filing and printing of the shared system (Figure 2a) are decomposed into specialized "servers" (Figure 2c) and connected along a LAN. A LAN, then, must address several needs:

- Large, shared systems must be "decomposed" for improved locality, lower cost, physical security, communication with a single resource, and incremental evolution.
- Personal computers or workstations must be "aggregated" into a single system to share resources such as printers and files and to intercommunicate.
- Networks of minis and mainframes, which have relied on poor wide-area data communications facilities for local communications, require high-speed intercommunication.
- The connection of minis and mainframes to terminals must be completely flexible, and incremental upgrades must be possible.
- Gateways must be done once for a network or protocol instead of for each system, thereby limiting the number of communications protocols.

The computing nodes. Figure 3 is a taxonomy of common mini- and micro-based computer structures, which illustrates the plethora of new computer structures made possible by the micro.
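The "server" decomposition described here can be illustrated in a few lines of code. The sketch below is not from the article: it is a hypothetical, toy "print server" and a workstation client running on one machine, with an invented one-message protocol, to show a shared function split out as a network service that any node on a LAN could call.

```python
import socket
import threading

def print_server(host="127.0.0.1", port=0):
    """A toy 'print server': accepts one job and acknowledges it."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)

    def serve():
        conn, _ = srv.accept()
        with conn:
            job = conn.recv(1024).decode()
            # A real server would spool `job` to a printer; we just ack.
            conn.sendall(f"printed {len(job)} bytes".encode())
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]  # the port the OS actually assigned

def workstation_submit(port, text):
    """A workstation 'aggregates' onto the network by calling the server."""
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(text.encode())
        return c.recv(1024).decode()

port = print_server()
print(workstation_submit(port, "hello, LAN"))  # prints "printed 10 bytes"
```

Filing, printing, and gateway services in Figure 2c all follow this shape: decompose a function of the shared system into a small server, then let the aggregated PCs and workstations reach it over the network.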
(For more details on specific structures, see the appendix to this article, "Specific Microcomputer and Minicomputer Structures.") These range from the simple PC to the LAN, omitting the wide-area network. (A WAN is usually not used as a single system, but as a communication network among several systems, including LANs.)

The combination of micros, higher level performance, wide-scale use, and higher reliability can be offered for the price of a mini or supermini. Complete new structures have emerged, including functional multiprocessors, symmetric multiprocessors for performance and high availability, fault-tolerant computers, and multicomputer clusters. In addition, microcomputers are combined in fixed structures to provide high-performance, close-area-network computer clusters. If a method can be found to use a large number of essentially zero-cost microprocessors in various multiple-processor structures to work on a single job stream, then micros can potentially compete with all forms of computers, including mainframes. Fox4 has used an array of 64 Intel 8086/8087-based computers for particular theoretical physics calculations to show that this structure can approach supercomputer performance.

Figure 3. Taxonomy of common mini- and micro-based computer structures. C = computer; P = processor; K = controller; Cluster = collection of C's acting as a single C (interprocessor communication times determine parallel processing grain size); and function = arithmetic, array processor, signal processor, communication (front end), database (back end), display, simulation.

Figure 4 illustrates the variation in processor types for common computer types.
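The single-job-stream idea behind Fox's 64-node array can be sketched briefly. The code below is an illustration, not Fox's program: summing squares stands in for the physics calculation, Python threads stand in for the cheap 8086/8087 nodes, and all names are invented.

```python
from concurrent.futures import ThreadPoolExecutor

def node_work(chunk):
    """The piece of the job one processor node computes locally."""
    return sum(x * x for x in chunk)

def array_compute(data, n_nodes=64):
    """Divide one job stream across n_nodes workers, then recombine."""
    size = (len(data) + n_nodes - 1) // n_nodes
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_nodes) as pool:
        partials = pool.map(node_work, chunks)  # each node works in parallel
    return sum(partials)  # combine partial results, as a host would

data = list(range(100_000))
print(array_compute(data) == sum(x * x for x in data))  # prints True
```

The grain size matters, as the Figure 3 caption notes: the cheaper the internode communication relative to the work done in each node, the finer the chunks can be before communication swamps computation.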
Micros have followed the traditional mini evolution and are today microprogrammed, with the exceptions of the MIPS chip at Stanford5 and the RISC chip at the University of California, Berkeley.6 Given the current speed of logic relative to memory, it is again time to return to direct (versus microprogrammed) execution of the instruction set when performance is a consideration.

The systems industry

Virtually all microprocessor-based systems supply a single information processing market. Micros allowed the PC to form but also to attack the traditional minicomputer, the high-availability mini, and possibly the mainframe. Now, with the standard operating system, complete product segmentation may occur to eliminate vanity architectures at all levels of integration.

If minicomputer history is a good indicator, fallout in the micro-based industry will be even more legendary. For example, of the 100+ workstation companies, we can expect fewer than 10 to survive, let alone prosper. A similar statement can be made about the PC market. The following criteria will determine success:

- Economy of scale in distribution and service is most important.
- Economy of scale in manufacturing is critical for a few focused products such as the PC but less important for larger products. Here, systems integration costs dominate.
  For example, the Japanese are likely to dominate the PC market in much the same way they dominate consumer electronics.
- Time to market is far more important than economy of scale in engineering or manufacturing.
- Since there are few technological challenges in a start-up, companies will form if they get venture capital; later entrants will be less successful.
- Specialized, or niche, products are rarely "sacred" enough or large enough to serve as main products very long.
- Generic and unique software applications like CAD that run on a few standardized structures (PCs, workstations, and supermicros) will fuel this generation.
- Truly unique structures like home robots are rarely sufficiently protected by patents, processes, or practice to avoid becoming displaced by an established supplier entering the market. Remember how quickly IBM became a dominant force in the PC market?

Figure 4. Taxonomy of common processor types.

The applications challenge

Now that we have examined the bewildering number of products and services available, we need to look at ways to supply them. A number of strategies are possible, from selling a purely general-purpose base system to offering customized hardware and software. In the latter case, however, the resulting function may scarcely resemble a computer. Economy of scale may occur in the widespread sales, distribution, installation, and service of hardware products.

An OEM approach usually requires a product range, not just a point product. An OEM customer often requires service and always requires high-level applications and field support. An end-user approach requires both a wide product range and complete sales/service.

A new application software company, such as one offering CAD or typesetting, that has to invent its own hardware system is likely either to become obsolete because of its hardware or to fall behind in its software development. The company is limited because investment has to be divided between its unique vanity hardware and its specialty, added-value software.
In most cases, large hardware vendors, such as AT&T, DEC, and IBM, can surpass the small hardware/software supplier by using packaged software from the applications software industry.

Supplying the basic computer. Figure 5a shows the simplest form of distribution for what is fundamentally a computer sold with some general-purpose software. A base system would typically include generic software such as languages, utilities, editors, communications interfaces, and database programs. The system is built by a hardware manufacturer or system integrator; it is sold (S) directly or through another distribution channel of some sort; and eventually, the system is installed (I), the user is trained (T), and the system is serviced (S).

Supplying the basic computer with applications software. As users require more specialized applications for particular professional environments, such as the computer-aided design of electrical circuits, various industries will supply these programs, creating a product development and distribution structure (Figures 5b, 5c, and 5d).

The base-system manufacturer and an independent software industry can coordinate the introduction of applications programs into the distribution network (Figure 5b). Special software can be integrated with the base system by the hardware supplier, the application supplier, the distribution channel (store or systems installer), or the final user.

A system manufacturer can acquire a variety of packages and transform what is a general-purpose system into a variety of special-purpose systems. The software suppliers are likely to be the best obtainable for the application selected because they have focused on the particular, vertical professional application, be it mechanical or electrical CAD, architectural drawing, office automation, or actuarial or statistical analysis. The software suppliers have the largest market because a program can be transformed to run on many different base systems.
Mentor is an example of a CAD company with a flexible approach to systems integration. A total system can be purchased from Mentor (and Apollo, the hardware supplier of the workstation), or it can be bought a la carte and integrated by the customer.

Supplying applications software as part of a system.
Since the perceived (and often actual) price of software is low, a company marketing a software product and wishing to enhance its sales volume may buy hardware for resale as a complete system (Figure 5c). In effect, the company potentially competes with the hardware's main manufacturer by supplying a similar, but greatly enhanced, product. While the gross sales are up, the costs can easily outrun the sales, since the once-software-only company must now support hardware too. In addition, the software company doesn't usually market the range of products of a mainline hardware supplier. Offering a total system, therefore, is likely to be less profitable, when measured by return on investment, than offering software only, even though the total revenue of the company would be much larger in the former case. Furthermore, the supplier is cut off from the large number of distribution channels possible when a basic software package is tailored for operation on many different base systems. Computer Vision is an example of a company that now buys products on an OEM basis from Apple and IBM, integrates them, and supplies them as turnkey products.

Figure 5. Alternative industry structures for supplying base, application, and hardware-embedded computer systems (S/I/T/S = sell, install, train, service).

October 1984

Figure A-1. Common micro- and minicomputer structures. Pc = central processor; Pio = I/O processor; K = control; Mp = primary memory; Mc = cache; Ms = secondary memory; and T = transducer (terminal).
Computer Vision formerly manufactured its own base system.

Supplying unique hardware and application programs.
The traditional approach of catering to OEMs, which DEC established with the minicomputer, is shown in Figure 5e. A company skilled in a particular technology (computed axial tomography is a good example) or in logic-board testing can build a highly complex instrument. A computer may constitute up to half the cost of the system. Products of this nature are not basic, general-purpose computers, and as such, the customer will not require other software beyond the control of the device. A specialized field organization is required to sell, install, and service the system and to train users. This support is hardly possible within a conventional computer company.

A final word about applications.
Applications that involved minicomputers are likely to be a good history lesson. Companies that tried to integrate backward and build their own minicomputer, such as Cincinnati Milling, failed in the market, often neglecting their mainline business. The applications-system winners combined the use of a general-purpose mini with their expertise in the application. Companies that use high-cost, vanity hardware or that distribute someone else's hardware will be at a disadvantage because the value of the product is completely in the software.

New professional software application products will come from those in existing companies and institutions such as universities who have expertise in particular problem domains.
Applications industries will form and evolve through the strata model discussed earlier in "Standards" to software-only companies that create the professional application (a form of "expert" system) and use standard systems supplied by hardware vendors such as IBM. Thus, we have an opportunity not available in industry: to build generic, basic hardware systems in a crowded field, resulting in an almost unlimited set of professional application products as experts encode their "knowledge" into programs for machine interpretation and personal use. These will constitute the real expert systems of the fifth generation, as they run on evolutionary microprocessor-based computers and clusters of computers connected by local area networks.

New technology, especially VLSI, has provided powerful, low-cost microprocessors and memory, which, in turn, have acted as standards and permitted a new industrial structure to emerge. The structure, which is typical of a cottage industry and is almost the antithesis of a vertically integrated industry, is stratified by eight hardware and software levels of integration and segmented by a vast array of component products. Companies are funded by a vast array of venture capital companies formed from the profits of selling previous companies. The resulting products are integrated into an equally large array of system products by traditional system suppliers, such as IBM; companies that add value by distribution, service, and training; conventional, retail distribution channels; and even the final user.

The micro industry offers a much wider range of computing products at a lower cost ($500 to $500,000) than the mini ($20,000 to $500,000) or mainframe ($250,000 to $5,000,000) industries can afford, and the micro offers comparable performance. The results? A continued shakeout of all types of products and companies and changing roles for all parts of the industry, including the users.

References

1. H. Hecht, "Computer Standards," Computer, Vol. 17, No. 10, Oct. 1984.
2. W. D. Poduska, The Formation of Apollo Corporation, IEEE Engineering Management Chapter, Dec. 12.
3. C. G. Bell, "Standards Can Help Us," Computer, Vol. 17, No. 6, June 1984, pp. 71-77.
4. C. G. Fox, "Concurrent Processing for Scientific Calculations," Proc. Compcon Spring 84, IEEE-CS Press, Los Alamitos, Calif., pp. 70-73.
5. J. L. Hennessy et al., "The MIPS Machine," Proc. IEEE Compcon Spring 82, pp. 2-7.
6. D. A. Patterson and C. H. Sequin, "RISC I: A Reduced Instruction Set VLSI Computer," Eighth Annual Symp. Computer Architecture, May 1981, pp. 443-458.

Appendix: Specific Mini- and Microcomputer Structures

Figure A-1 illustrates a wide range of microcomputers, from the common, single-processor, "Unibus" structure (Figure A-1a) to computer clusters for high availability (Figure A-1h). Since microprocessors require memory access at a higher rate than the first DEC Unibus and Intel Multibus can provide (2M and 4M bytes per second), the common adaptation is to provide a direct connection between the processor and primary memory (Figure A-1b). Performance can be increased for these systems by having functional multiprocessors serve disks and terminals, including the migration of software for file access.

A completely symmetrical multiprocessor can be made using more recent buses such as the Multibus II or VME bus (Figure A-1c) if a cache is used to reduce processor/memory traffic. The Arete quad processor, which uses this principle, is shown in Figure A-1d.

A variety of approaches are used to increase system availability. Parallel computers (Figure A-1e) use the Multibus for intercommunication among distinct, redundant computers (Pc-Mp) and among redundant controllers for secondary memory (Ms) and terminals. Of course, much software is required to provide true high-availability computing with this structure. The structure is a vastly scaled-down version of the Tandem (Figure A-1h).

Stratus provides a fault-tolerant system (Figure A-1f) that is completely transparent to its software. Any hardware component can fail, and the system will continue to operate without affecting the basic software. The single point of failure is system and application software. Stratus systems require four processors and two memories to provide a single, effective processor.

Synapse N+1 (Figure A-1g) uses a second bus for both performance and redundancy in a true symmetric multiprocessor version of the single-bus system. By having all resources in a single pool, users can trade off performance and reliability. Since work can be run on any processor, load leveling is automatic.

Tandem (Figure A-1h) pioneered high-availability computing when it introduced its multicomputer system in the mid-1970s using minicomputer technology. Sixteen computers are connected in a cluster via a dual, high-speed, message-passing bus. Complete redundancy is provided, including computers, control units, and mass storage. Operating systems and applications are run in two computers in a backup fashion. Information is forwarded to the backup process using the intercomputer bus. A key use of the Tandem structure is to permit incremental addition of performance. Since processes and files are assigned to specific processors, load balancing is less dynamic than that in the multiprocessor. Several microprocessor versions of the Tandem structure have been introduced, including models by Auragen and Computer Consoles Inc.

The price range of micros, from $500 for a lap PC to nearly $500,000 for a fully configured multimicroprocessor, is much greater than that for any previous generation or computer class (see Figure 1 in main article).
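The bandwidth arithmetic behind the cache argument above can be sketched in a few lines. The 4M-byte-per-second Multibus figure comes from the article; the per-processor memory demand and the cache hit ratios below are illustrative assumptions only, not measurements from any of the machines discussed.

```python
# Back-of-the-envelope model of why a per-processor cache makes a
# single-bus symmetric multiprocessor feasible.  Bus bandwidth (4M
# bytes/s for Multibus) is taken from the article; the 2M bytes/s of
# memory demand per processor and the hit ratios are assumed values.

def processors_supported(bus_bw_bytes, demand_bytes, hit_ratio):
    """Return how many processors a shared bus can carry when each
    processor demands `demand_bytes`/s of memory traffic and a local
    cache satisfies `hit_ratio` of the references without the bus."""
    if not 0.0 <= hit_ratio < 1.0:
        raise ValueError("hit ratio must be in [0, 1)")
    per_proc_bus_traffic = demand_bytes * (1.0 - hit_ratio)
    return int(bus_bw_bytes // per_proc_bus_traffic)

MULTIBUS = 4_000_000   # bytes/s, from the article
DEMAND   = 2_000_000   # assumed bytes/s of references per processor

# Without a cache the bus saturates at two processors...
print(processors_supported(MULTIBUS, DEMAND, 0.0))   # -> 2
# ...while a 90-percent hit ratio leaves room for twenty.
print(processors_supported(MULTIBUS, DEMAND, 0.9))   # -> 20
```

The same arithmetic explains the cheaper fix for a single processor, the direct Pc-to-Mp connection of Figure A-1b: with only one processor, removing its traffic from the shared bus is enough, and no cache is needed.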
Table A-1 illustrates the range of several Motorola 68000/Unix-based computers that compete with the minicomputer.¹

Winners and losers in products, organization, and marketing may already be established.² However, many micro-based products are still to be invented outside the computer classes previously described. The box on the following page contains questions about each structure in terms of competitiveness, long-term stability, and substitution with other structures. In addition to these questions about word processing, workstations, supermicros, and clusters of micros and high-availability computers is the most important question: that of standards, especially Unix.

Unix.
For a while, Unix appeared to be suitable only for a particular class of experimental uses, but now it promises to be a constraint for the whole market. Interactive computing with Unix is the product constraint future users are all hastening to demand, or at least specify. Just as the PC market has standardized on the IBM PC (8086, MS/DOS, PC bus, graphics interface, file formats, etc.), the market for systems larger than a PC appears to be standardizing on Unix. IBM has shown its flexibility in adopting industry standards, especially when the time to market is crucial and the market demands it. If customers want a product, IBM will likely supply it. IBM has already announced Unix on the PC and will probably respond with Unix on its 4300 and mainframes.

In a similar fashion, every minicomputer and microcomputer supplier appears to be offering Unix in a commodity-like fashion. While the combined market is large, the fundamental market has not been expanded, but merely made more accessible by every manufacturer.
The result will be that more small manufacturers who have inadequate marketing and manufacturing organizations will fail to compete with mainframe and mini suppliers.

Unix has been an opiate that hundreds of companies have used as an excuse to form and assemble, quite trivially, a product from boards, Unix ports, or general-purpose software. Perhaps the entry cost for computer systems should be higher.

Office and word-processing systems.
Historically, general-purpose computers have won in the marketplace over equivalent special-purpose machines. The IBM PC standard is the unique structure to watch as conventional word-processing software becomes available and replaces simple editors. Terminals, including typewriters with built-in modems or computing telephones, can be connected to desktop and pedestal-sized, shared micros running Unix or to large systems for the casual users. Professionals who already have large workstations use them for text processing.

Workstations.
Over 100 workstation vendors value themselves at up to $100 billion for a commodity-like product with a limited market of engineers, scientists, and business analysts. All have enough organizational overhead to start, but few have the critical mass or ability to raise the next round of capital to gain a significant market share, except those well on the way (Apollo, Apple with the Macintosh, Convergent Technology, and Sun) or those with unique high-performance products such as Silicon Graphics.³

Workstation design consists of "assembling" the following:

• boards with microprocessors, disk, CRT, and communication controllers that use one of several standard buses, such as Multibus, Qbus, or VME/Versabus;
• appropriate disks and CRTs;
• standard or custom enclosures;
• a licensed version of Unix available from myriad suppliers; and
• generic software, including word processing and spreadsheets.

Each start-up company believes its product and business plan will beat Apollo, the first entrant into the high-performance, clustered workstation market.
In fall 1983, just after going public, Apollo was valued at $1 billion with annualized sales of less than $100 million and fewer than 1000 employees. At the same time, Digital had a valuation of about $4 billion with sales of $4 billion and a work force of over 70,000.

A typical workstation start-up company compares itself with Apollo on two points: the start-up date (usually one to two years after Apollo, when systems were easier to build) and the current month's annualized shipments. In this context, within two years, each of 100+ companies will be valued at $1 billion, giving a valuation of workstation companies of $10 to $100 billion... at least one order of magnitude greater than any optimistic projection of the market. This valuation doesn't include established companies. The workstation is a mainline product for large suppliers such as AT&T (via new teletype computing terminals), DEC, HP, and IBM. Also, the 32-bit personal computers circa 1984-85, led by IBM using 256K chips and the Intel 286, will provide the power of emerging 68000-based Unix workstations at a lower price.

Table A-1.
Selected 68000/Unix computer systems.†

SYSTEM            BUS                           STRUCTURE
Apple Macintosh   Ext. serial                   Micro, LAN served
Corvus Uniplex    Backplane                     Shared micro
Altos 586-10      Multibus                      PC, shared
Wicat 150WS       Backplane                     Shared
NCR Tower 1632    Multibus + Pc-Mp bus          Supermicro
Plexus P/60       Multibus + Pc-Mp bus
SUN Workstation   Multibus                      Workstation
ONYX C8002        None                          Shared micro
Arete 1000        Single prop. bus              Symmetric mP
Synapse*          Dual                          Symmetric high-avail mP
Stratus/32*       Dual voting bus               Fault-tolerant multiprocessor
Auragen 4000      Modified VME, dual inter-CPC  Multicomputer, Tandem type

* Operating system kernel is not Unix-based.
† Unix Review, June/July 1983.
‡ Degree of range for a multiprocessor.

Supermicro and clustered supermicro systems.
Basically, this structure competes with old-line mini and mainframe makers, both of which are beginning to distribute supermicros (the Convergent Technology distribution model, for example). CT supplies hardware to traditional manufacturers who use only their distribution capability. Neither group will let its base erode without resistance, and both are ultimately capable of backward integration of OEM hardware.

High-availability computer systems.
High-availability computing, pioneered by Tandem, may no longer be treated as a niche, but rather as something a user should be able to trade off. Tandem's product line is based on mini technology and as such now has about 20 companies targeting its base using microprocessors. DEC has introduced the VAXclusters in the "Tandem-price" market, but VLSI will reduce the cost. An IBM product is long overdue.

Because a somewhat different structure is involved in building high-availability computers, especially with respect to software, there is a clear market. As the overall reliability of computers increases, the demand and price premium for high-reliability or high-availability computing is unclear. There is still interest in making a self-diagnosable, self-repairing computer that never fails, however.
While this feat is possible for the CPU portion of a system, the peripherals and software will not permit the ultimate machine to be built for some time.

The most important aspect of high-availability computers is that they can be designed for incremental upgrades using both the multiprocessor and multicomputer structures. This capability is why many computers are sold, regardless of their availability. With much lower priced machines, a broader range, and the introduction of fully distributed computing in LAN clusters, the need for high-availability computers for incremental expansion may decline.

References

1. R. Baily, R. Scott, and K. Roberts, "Buyer's Guide to Hardware," Unix Review, Vol. 1, No. 1, June/July 1983, pp. 48-73.
2. S. T. McClellan, The Coming Computer Industry Shakeout, John Wiley & Sons, New York, 1984.
3. C. Machover and W. Myers, "Interactive Computer Graphics," Computer, Vol. 17, No. 10, Oct. 1984.

C. Gordon Bell is chief technical officer for Encore Computer Corporation, where he is responsible for the overall product strategy. Before joining Encore Computer, he was vice president of engineering for Digital Equipment Corporation, responsible for R&D activities in computer hardware, software, and systems. He was also manager of computer design at DEC, responsible for the PDP-4, -5, and -6 computers, and served on the faculty of Carnegie-Mellon University from 1966 to 1972.

Bell led the team that conceived the VAX architecture, established Digital Computing Architecture, and was one of the principal architects of C.mmp (16 processors) and Cm* (50 processors) at Carnegie-Mellon. He is widely published in computer architecture and computer design.

Bell earned his BS and MS in electrical engineering at the Massachusetts Institute of Technology and holds several patents in computer and logical design. His address is Encore Computer Corp., 15 Walnut St., Wellesley Hills, MA 02181.