PC Servers Get Big Iron Boost

Bidding to regain market share lost to other server vendors, IBM Corp. this week outlined plans to bring mainframe-like throughput and high-end clustering capabilities to its Netfinity server line.

IBM says it will replace Netfinity's PCI I/O bus architecture with what it calls Future I/O. The new I/O scheme will more than double the throughput of existing high-end Intel servers, says Tom Bradicich, director of server architecture and technology at IBM's personal computer group.

Big Blue's plan to bolster the power, throughput and reliability of its Netfinity server family could make it easier for users to justify running large-scale distributed applications, databases and Web sites on the boxes rather than on rival PC and Unix servers from Compaq, Hewlett-Packard and Dell.

In the long term, Netfinity servers implementing Future I/O will be capable of handling "dozens" more adapters than current servers, Bradicich says.

In a nutshell, Future I/O will take mainframe channel connectivity technology and make it part of Netfinity's external adapter and internal I/O design.

Bradicich says the mainframe channel I/O technology uses an internal switching fabric to route around bottlenecks, so there is no single point of congestion. By contrast, PCI bus technology offers a single pipeline for data flowing to the network; if that pipe becomes congested, server performance quickly suffers.
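The contrast Bradicich draws can be illustrated with a toy model. The bandwidth figures below are illustrative only, not IBM benchmarks: a shared bus serializes every transfer through one pipe, while a switched fabric spreads independent transfers across parallel links.

```python
# Toy model: time to complete a batch of transfers over a shared bus
# versus a switched fabric. All numbers are illustrative, not IBM specs.

def shared_bus_time(transfers, bus_bandwidth):
    # One pipe: every transfer queues behind all the others.
    return sum(size / bus_bandwidth for size in transfers)

def switched_fabric_time(transfers, link_bandwidth, num_links):
    # Transfers are routed across independent links; completion time is
    # set by the most loaded link, not by a single shared pipe.
    links = [0.0] * num_links
    for size in sorted(transfers, reverse=True):
        # Send each transfer down the least loaded link -- no single
        # point of congestion.
        idx = links.index(min(links))
        links[idx] += size / link_bandwidth
    return max(links)

transfers = [100, 100, 100, 100]  # transfer sizes in MB
print(shared_bus_time(transfers, bus_bandwidth=133))      # one 133MB/s pipe
print(switched_fabric_time(transfers, 133, num_links=4))  # four 133MB/s links
```

With four equal transfers and four links, the fabric finishes in a quarter of the bus's time; congestion on one path never stalls the others.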

With the emergence of larger applications and faster LAN technology, such as Gigabit Ethernet, today's servers will increasingly become bottlenecks, Bradicich says. With Future I/O technology, Netfinity servers will be able to support more users and run bigger applications.

The drawback is that Future I/O -- which will be implemented via cards that fit into Netfinity slots -- won't be available until sometime between late 2000 and early 2001, Bradicich says. Adapters conforming to the latest IBM PCI bus architecture -- PCI-X -- will be compatible with the Future I/O technology. But most PCI-X products won't be available until late 1999.

Still, any improvement in I/O performance is good news to at least one user, Todd Dion, vice president of technology at Tutor Time Learning Systems, a chain of child-care facilities in Boca Raton, Florida. Bottlenecks in the server are "always an issue," he says. His data center supports about 150 users on eight Netfinity servers, which run corporate accounting applications.

"Our corporate center users can benefit from an increase in I/O performance," he says. He also hopes IBM will win acceptance for the channel I/O technology from standards bodies. "I like to keep the system as open as possible so I can plug in as many applications as possible and keep running multiple vendors' products," he says.

In the near term, Bradicich says IBM will roll out a Netfinity SP Switch, a dedicated, high-bandwidth server-clustering device. The SP Switch, which IBM offers today for its RS/6000 family, will tie together between eight and 16 servers as a clustered system. Currently, PC servers are generally limited to two- to four-way clusters.

While exact details of the SP Switch are sketchy, the device should support Ethernet, Fast Ethernet and ATM links. The SP Switch also has load-balancing features that will let it direct traffic to the least busy Netfinity server in the cluster. Included with the SP Switch will be management software that will help users set up and control the data in the cluster.
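IBM hasn't detailed how the SP Switch picks the "least busy" server, but a least-connections scheme is one common way a cluster switch makes that decision. A minimal sketch, with hypothetical server names and an assumed algorithm:

```python
# Sketch of least-connections load balancing, one common way a cluster
# switch can direct traffic to the least busy server. The algorithm is
# an assumption; IBM has not published the SP Switch's internals.

class LeastBusyBalancer:
    def __init__(self, servers):
        # Track in-flight requests per server.
        self.active = {name: 0 for name in servers}

    def assign(self):
        # Route the new request to the server with the fewest
        # in-flight requests.
        server = min(self.active, key=self.active.get)
        self.active[server] += 1
        return server

    def finish(self, server):
        # A request completed; the server is now less busy.
        self.active[server] -= 1

lb = LeastBusyBalancer(["netfinity1", "netfinity2", "netfinity3"])
print(lb.assign())        # netfinity1
print(lb.assign())        # netfinity2
lb.finish("netfinity1")
print(lb.assign())        # netfinity1 -- it is the least busy again
```

The same bookkeeping works whether the cluster holds eight nodes or 16; only the dictionary of servers grows.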

The idea is that by strapping together Netfinity servers, users would gain greater system uptime and server failover capabilities. Because Netfinity servers run Windows NT, users would have the option of building large NT clusters and supporting large NT applications as that operating system becomes more popular.
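The failover benefit boils down to a simple idea: if one node dies, the cluster routes around it. A toy illustration (node names and health checks are hypothetical, not part of IBM's design):

```python
# Minimal failover sketch: send work to the first healthy node in the
# cluster. The node names and health map are illustrative only.

def route(nodes, health):
    # Walk the cluster in order and pick the first node reported healthy.
    for node in nodes:
        if health.get(node, False):
            return node
    raise RuntimeError("no healthy nodes in cluster")

cluster = ["nodeA", "nodeB"]
health = {"nodeA": True, "nodeB": True}
print(route(cluster, health))  # nodeA handles requests

health["nodeA"] = False        # nodeA fails...
print(route(cluster, health))  # ...and nodeB takes over
```

Real cluster software adds heartbeats, shared storage and state handoff, but the routing decision at the core is this small.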

IBM says its DB2 database, Lotus' Domino and Oracle applications will be able to run on SP Switch clusters.

Copyright 2018 IDG Communications. ABN 14 001 592 650. All rights reserved. Reproduction in whole or in part in any form or medium without express written permission of IDG Communications is prohibited.