Continued research into Intel's i960 architecture has resulted in the development of performance improvements beyond those implemented in the i960CA microprocessor. These improvements allow additional superscalar dispatch opportunities, reduce memory access delays, and enhance the performance of specific instructions. The i960MM microprocessor is an implementation of these performance enhancements...

The next-generation MIPS CMOS microprocessor, the R4000, uses a technique called superpipelining to achieve a high level of performance. The authors discuss the evolution of the R4000 pipeline from the R3000 pipeline and the reasons why a superpipelined microarchitecture is chosen. First, there are no instruction issue restrictions with the R4000 superpipeline, as there would have been in a supers...

The Lightning SPARC superscalar microprocessor chipset is the first processor to implement the Metaflow architecture. This architecture exploits instruction-level parallelism present in conventional sequential programs by hardware means, without relying on sophisticated optimizing compilers. The Lightning processor is capable of executing instructions out of order and speculatively. Lightning is ba...

The author describes all the functions of the Am286ZX integrated processor and shows how to design a PC-AT motherboard using the Am286ZX integrated processor. The Am286ZX integrates the 80C286 CPU along with all the other logic functions into one piece of silicon, which makes the Am286ZX a PC-AT motherboard on a chip.

Many commercial users are faced with replacing their DOS PCs with next-generation PCs or powerful desktop workstations. With this in mind, Opus Systems has developed the 500 Personal Mainframe, a PC add-in card and SunOS Unix software that transforms a PC into a fully SPARCstation-compatible unit capable of running native DOS and SunOS simultaneously. The hardware and software features of the 500 Per...

A system that is a hybrid of a PC and a workstation is described. It gives users a superset of both systems without loss of the compatibility, performance, or capability of existing PC systems. The system consists of a SPARC-based workstation with an optional 80386-based DOS coprocessor module. The RISC motherboard and the DOS module have their own cache, memory, and graphics subsystems, which ope...

An attempt was made to build a SPARC-based system with more features and higher performance than the existing SPARC system at a competitive price. The Solarix/4 Personal Workstation Plus (PW+), which initially offered 18 MIPS at 25 MHz, was augmented with a floating-point unit delivering more than 3 MFLOPS and with 64 kbytes of cache. With eight SIMM slots on the motherboard, RAM can be expanded to 32 M...

The BBN TC2000 is a scalable general-purpose parallel architecture capable of efficiently supporting both shared memory and message passing programming paradigms. The TC2000 machine architecture and the programming models that have been implemented on it are described. In particular, the split-join model, its memory model, and the message passing model are described. Specifics on how the implement...

The authors describe the deterministic solution of the neutron transport equation and the computation of the effective criticality of three-dimensional assemblies using the BBN TC2000 killer micros. They observe that the performance of the research code PTRAN running on 48 processors of the TC2000 is competitive with the partially vectorizable version running on a single Cray Y-MP processor. This ...

The authors report their experiences with the Gauss elimination algorithm on several parallel machines. Several different software designs are demonstrated, ranging from a simple shared memory implementation to the use of a message passing programming model. It is found that the efficient use of local memory is critical to obtaining good performance on scalable machines. Machines with large cohere...
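The abstract's point about local memory can be illustrated with the classic distributed-memory layout for this algorithm. The following is a minimal sketch, not the authors' code: rows are distributed cyclically across processors so work stays balanced as the active submatrix shrinks, and each elimination step needs only a broadcast of the pivot row; every other update touches purely local rows. The processor loop here is a sequential simulation of the parallel update.

```python
# Sketch: Gaussian elimination with a cyclic row distribution.
# Row i is "owned" by processor i % nprocs; each step broadcasts
# the pivot row, then every processor updates only its own rows.

def cyclic_gauss(A, b, nprocs=4):
    """Forward elimination and back substitution on dense A, b."""
    n = len(A)
    for k in range(n):
        # Owner of row k broadcasts the pivot row (here: a plain copy).
        pivot = A[k][:]
        pk = b[k]
        # Sequential simulation of the per-processor local updates.
        for p in range(nprocs):
            for i in range(k + 1, n):
                if i % nprocs != p:      # not local to processor p
                    continue
                m = A[i][k] / pivot[k]
                for j in range(k, n):
                    A[i][j] -= m * pivot[j]
                b[i] -= m * pk
    # Back substitution (kept sequential for brevity).
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```

A cyclic (rather than block) distribution keeps every processor busy in the later elimination steps, which is one reason it was the common choice on scalable machines.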

Caltech uses parallel computers for a variety of large-scale scientific applications. It has acquired commercial parallel computers, some of which have performance that rivals or exceeds that of conventional, vector-oriented supercomputers. A new project has been started that builds on experience with concurrent computers and attempts to apply Caltech methods to the simultaneous use of parallel an...

A programming environment for unstructured triangular meshes has been written. The resulting software, DIME (distributed irregular mesh environment), is responsible for the mesh structure, and a separate application code runs a particular type of simulation on the mesh. DIME keeps track of the mesh structure, allowing mesh creation, reading and writing meshes to disk, and graphics. Adaptive refine...
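The adaptive refinement the abstract mentions can be illustrated in the abstract; the sketch below is not DIME's API but a minimal stand-in, using longest-edge bisection of triangles (a standard refinement rule) and an invented `needs_refining` predicate supplied by the application code.

```python
# Sketch of adaptive mesh refinement by longest-edge bisection.
# A triangle is a tuple of three 2D points; a mesh is a list of triangles.
import math

def bisect_longest_edge(tri):
    """Split a triangle across the midpoint of its longest edge."""
    a, b, c = tri
    edges = [((a, b), c), ((b, c), a), ((c, a), b)]
    (p, q), opposite = max(edges, key=lambda e: math.dist(e[0][0], e[0][1]))
    mid = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    return [(p, mid, opposite), (mid, q, opposite)]

def refine(mesh, needs_refining, max_passes=10):
    """Repeatedly bisect every triangle flagged by the predicate."""
    for _ in range(max_passes):
        flagged = [t for t in mesh if needs_refining(t)]
        if not flagged:
            break
        kept = [t for t in mesh if not needs_refining(t)]
        mesh = kept + [half for t in flagged
                       for half in bisect_longest_edge(t)]
    return mesh
```

A real environment like DIME must also keep neighbouring triangles conforming after a split and rebalance the distributed mesh, which this sketch omits.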

A parallel version of the Barnes-Hut N-body algorithm is described. The algorithm first assembles a tree data structure which represents the distribution of bodies at all length scales. A domain decomposition is used to assign regions of space and hence bodies to processors. An adaptive load balancing technique is used to ensure that processors are assigned equal amounts of work. A tree is built i...
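The tree structure the abstract describes can be sketched in miniature. This is not the authors' parallel code, only a minimal sequential 2D illustration of the Barnes-Hut idea: build a quadtree over the bodies, store each cell's total mass and centre of mass, and approximate the force on a body by a whole cell whenever the cell is small relative to its distance (the opening-angle test with parameter `theta`).

```python
# Minimal sequential sketch of a Barnes-Hut quadtree (G = 1).
import math

class Cell:
    def __init__(self, cx, cy, half):
        self.cx, self.cy, self.half = cx, cy, half  # square cell geometry
        self.mass = 0.0
        self.mx = self.my = 0.0                     # mass-weighted position
        self.children = None                        # None => leaf
        self.body = None                            # single body in a leaf

def _child(cell, x, y):
    i = (2 if x >= cell.cx else 0) + (1 if y >= cell.cy else 0)
    return cell.children[i]

def insert(cell, x, y, m):
    if cell.children is None and cell.body is None and cell.mass == 0.0:
        cell.body = (x, y, m)                       # empty leaf: store body
    else:
        if cell.children is None:                   # occupied leaf: split it
            h = cell.half / 2
            cell.children = [Cell(cell.cx + dx * h, cell.cy + dy * h, h)
                             for dx in (-1, 1) for dy in (-1, 1)]
            bx, by, bm = cell.body
            cell.body = None
            insert(_child(cell, bx, by), bx, by, bm)
        insert(_child(cell, x, y), x, y, m)
    cell.mass += m
    cell.mx += m * x
    cell.my += m * y

def accel(cell, x, y, theta=0.5, eps=1e-9):
    """Approximate gravitational acceleration at (x, y) from the tree."""
    if cell.mass == 0.0:
        return (0.0, 0.0)
    cmx, cmy = cell.mx / cell.mass, cell.my / cell.mass
    dx, dy = cmx - x, cmy - y
    d = math.hypot(dx, dy)
    # Open the cell unless it is a leaf or passes the size/distance test.
    if cell.children is None or (2 * cell.half) / (d + eps) < theta:
        if d < eps:                                 # skip self-interaction
            return (0.0, 0.0)
        f = cell.mass / (d ** 3)
        return (f * dx, f * dy)
    ax = ay = 0.0
    for ch in cell.children:
        cax, cay = accel(ch, x, y, theta, eps)
        ax += cax
        ay += cay
    return (ax, ay)
```

The parallel version described in the abstract distributes regions of this tree across processors; the sequential sketch shows only the data structure being distributed.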

The supercomputing field is witnessing new processor technologies and architectural approaches that provide more cost-effective computing. The design of FPS Computing's System 500 supercomputer is discussed. It provides the ability to integrate in a modular manner the heterogeneous processing capabilities of scalar, vector, parallel/matrix, and application-specific processors, all in a single, integr...

A network of Sun workstations implemented in the SPARC architecture provides a basis for convenient sharing of scientific computing results, both as procedures and data, through strict adherence to accepted standards. The purpose of the Star 910/VP is to provide a compatible extension of the SPARC architecture so that large-scale scientific computing can be made available as a facility of such net...

The Hewlett-Packard DN 10000TX scales the original DN 10000 processor design to twice the performance through the use of more aggressive semiconductor technologies. The original eleven-chip VLSI CPU design has been recast onto eight VLSI chips including a 1.0-µm structured-custom chip, five submicron gate arrays, and one bipolar floating-point chip. Continuing to exploit the PRISM architecture ...

The author examines a method of applying parallelism to data-moving operations to enhance performance so that they may fit into today's maintenance windows. She specifically discusses converting the algorithm used to load an alternate key file (index) from serial to parallel using Tandem's Non-Stop SQL. It is shown that using disks as the unit of parallelism allows data movement to be parallelized...
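The approach the abstract describes can be caricatured in a few lines. This is a hypothetical sketch, not Tandem's NonStop SQL: each disk partition is the unit of parallelism, one worker scans its partition and sorts a partial index locally, and the sorted runs are then merged into the final alternate-key file. Threads stand in for processes pinned to separate disks, and the record layout (`alt_key`, `pk`) is invented.

```python
# Sketch: parallel alternate-key index load, one worker per disk partition.
import heapq
from concurrent.futures import ThreadPoolExecutor

def build_partial_index(partition):
    """Scan one partition; emit sorted (alternate_key, primary_key) pairs."""
    return sorted((row["alt_key"], row["pk"]) for row in partition)

def parallel_index_load(partitions):
    # One worker per partition: the disk is the unit of parallelism.
    with ThreadPoolExecutor(max_workers=len(partitions)) as pool:
        partials = list(pool.map(build_partial_index, partitions))
    # Merge the sorted runs into the final alternate-key file.
    return list(heapq.merge(*partials))
```

Because each run is already sorted, the final merge is sequential but cheap; the expensive scan-and-sort phase is what runs in parallel across disks.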

It is noted that Tandem's loosely coupled multiprocessor machine is a natural environment for parallel database operations. Horizontally partitioned tables are supported by the Tandem system. It is shown how this environment is utilized in Tandem's NonStop SQL implementation. The Tandem architecture, parallel index maintenance, parallel use of partitioned tables, and repartitioning are discussed. ...

It is pointed out that resource-intensive batch query processing can adversely affect the performance of concurrently executing response time-critical interactive workloads such as online transaction processing. This is especially true in loosely coupled client-server multiprocessor architectures without centralized scheduling or load-balancing mechanisms such as the Tandem NonStop system. In this...

It is noted that, even though SQL (structured query language) was invented in 1974, it did not become a standard until late 1986. Between 1974 and 1986, many vendors developed relational database systems based on the SQL language. Thus, even though many vendors support the SQL language, the implementations are not necessarily compatible because they were developed before an official standard was a...

The SQL (structured query language) Access Group, an open industry consortium that includes many major SQL database vendors and a number of database tool developers, is developing a common embedded SQL language interface for client-server interoperability in OSI (open systems interconnection) environments. This specification covers most of the features in the current ANSI/ISO SQL standard, include...

It is pointed out that, in a client-server environment, use of standard SQL (structured query language) itself is not sufficient. If the client and the server reside on heterogeneous systems connected through a network, then they must communicate using a common protocol in order to interoperate. The author describes one such protocol being developed by ANSI and ISO and also discusses the effor...

Pegasus, a heterogeneous multidatabase management system that is under development, is described. The goal of the system is to provide facilities for applications to access and manipulate multiple autonomous heterogeneous object-oriented, relational, and other information systems. Pegasus defines a common object model for unifying the data models of the underlying systems. The data language of Peg...

The authors discuss approaches that can be used to enforce global serializability and provide recovery in a multidatabase environment. They present solutions to the problem of transaction management in a multidatabase environment. Rather than developing a new global mechanism that duplicates the functionality of local systems, they attempt to take full advantage of the existence of their local con...

A deadlock detection algorithm and a deadlock prevention algorithm in a multidatabase environment are introduced. The deadlock detection algorithm is based on the potential conflict graph (PCG) introduced by Y. Breitbart et al. (1990). The deadlock prevention algorithm is based on the value data protocol discussed. The correctness of both algorithms is proved, and their performance is discussed.
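The construction of the potential conflict graph is specific to Breitbart et al.'s work, but the detection step itself reduces to finding a cycle in a directed graph over transactions. The following is a generic sketch of that final step, not the paper's algorithm: an iterative depth-first search over invented transaction names, where a back edge to a grey (in-progress) node signals a cycle and hence a potential deadlock.

```python
# Sketch: cycle detection in a directed waits-for / conflict graph.
def has_deadlock(edges):
    """edges: dict mapping a transaction to the transactions it waits for."""
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on stack / finished
    color = {t: WHITE for t in edges}
    for start in edges:
        if color[start] != WHITE:
            continue
        stack = [(start, iter(edges[start]))]
        color[start] = GREY
        while stack:
            node, it = stack[-1]
            advanced = False
            for nxt in it:
                if color.get(nxt, BLACK) == GREY:
                    return True           # back edge => cycle => deadlock
                if color.get(nxt, BLACK) == WHITE:
                    color[nxt] = GREY
                    stack.append((nxt, iter(edges[nxt])))
                    advanced = True
                    break
            if not advanced:              # all successors explored
                color[node] = BLACK
                stack.pop()
    return False
```

In the multidatabase setting the hard part, which this sketch omits, is building the edge set correctly from autonomous local systems; the PCG conservatively includes edges that *may* arise, which is what makes detection on it sound.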