Posted on July 01, 2002

Much has been said and written about Serial ATA and its potential impact on storage markets. For the past year, analysts have been forecasting big numbers for Serial ATA not only on the desktop, but also in multi-user environments where cost, not reliability or performance, primarily drives purchase decisions (see figure).

Finalized in August 2001, the Serial ATA specification was designed to replace Ultra ATA/100 (parallel ATA). Its cost (expected to be significantly lower than SCSI alternatives), data rate (150MBps), compact cable design, and low pin count make it a viable alternative to SCSI in desktop and server/networked storage markets, according to analysts (see "ATA puts the squeeze on SCSI," InfoStor, January 2002, p. 1).

First-generation drives are expected from Fujitsu and Seagate as early as this summer. Maxtor plans to ship Serial ATA drives by year-end and will integrate the drives into its MaxAttach line of network-attached storage (NAS) devices in an undisclosed time frame.

As an indicator of market acceptance, vendors such as Dell, EMC, Hewlett-Packard, IBM, Network Appliance, Quantum, and Sun are all shipping parallel ATA-based products.

"I don't think anyone [believes] that Serial ATA is going to be the common interface for all drives in all situations," says Dave Reinsel, research manager at International Data Corp., in Framingham, MA. "There are high-end, 24x7, OLTP-type applications that just aren't appropriate for Serial ATA." For these applications, SCSI and Fibre Channel will continue to be the drive of choice, he says.

John Monroe, a vice president at Gartner Inc., believes that budget constraints will help accelerate the acceptance of "lowest-possible-cost" storage solutions. "For many low-end and medium-range multi-user requirements, Serial ATA can provide significant cost advantages with minimal, if any, sacrifice in performance," he says.

Monroe expects the total number of multi-user ATA/Serial ATA hard disk drives to increase from 2.4 million drives this year to about 9.2 million drives by 2006. This compares to 18.3 million and 15.7 million, respectively, for SCSI (all types) and 2.5 million and 12.4 million, respectively, for Fibre Channel (1Gbps and 2Gbps).

IDC's Reinsel is less bullish on Fibre Channel, expecting drive shipments to increase to 5 million units by 2005 (see figure). As for SCSI and ATA unit shipments, he predicts a decrease in total SCSI shipments from 16 million to 12.5 million units over the forecast period, and a steep ramp in ATA shipments to nearly 10 million units. He also expects Serial ATA to account for nearly 100% and Serial-attached SCSI about 50%, respectively, of total ATA and SCSI unit shipments by 2005.

Reinsel says his projections assume several things:

That leveraging Serial ATA drives will significantly lower the cost of enterprise storage (ATA drives currently cost about one-third of SCSI drives);

That the reliability of Serial ATA drives is good enough for networked storage applications (the MTBF of an ATA drive is 300,000 to 500,000 hours versus 1.2 million hours for SCSI and Fibre Channel);

That Serial ATA will meet the performance requirements of these applications (will vendors come out with 10,000rpm drives?); and

That the interface's command set will be sophisticated enough to compete against SCSI.

At the Intel Developer Forum in February, the promoters of Serial ATA 1.0 announced the formation of a second working group to develop a second-generation Serial ATA specification that meets the requirements of users in the server and networked storage markets.

"Serial ATA 1.0 will work today in server and network storage [devices]," says Larry Leszczynski, Intel technology initiatives manager, "but 2.0 will improve their use in these markets."

The proposed Serial-attached SCSI specification is expected to go to ballot by year-end. The SCSI Trade Association (STA) is positioning the standard as a mainstream replacement for the large installed base of parallel SCSI products.

The key to the proposed standard is its flexibility and extensibility, says Marty Czekalski, an interface specialist at Maxtor and STA treasurer. Like Serial ATA and Fibre Channel, the proposed SCSI standard has a point-to-point architecture, which he says makes configuration easier, increases device support (more than 128 devices), and improves potential per-pin bandwidth.

The interface leverages the SCSI protocol set, the Serial ATA physical layer (with additional extenders), and the Fibre Channel packet structure. According to Czekalski, by taking pieces of existing technologies, STA will be able to bring products to market faster, improve interoperability, and eventually scale the technology to 6Gbps.

"Parallel SCSI faces a lot of challenges as it moves forward," say Czekalski. "640Mbps is doable, but with some constraints."

The current Ultra320 SCSI interface has a 320MBps data rate. This rate gives the interface ample throughput to handle up to five hard disk drives (without saturating the bus) and a lifespan beyond 2003. Initial Serial-attached SCSI products are expected in 2004.

Maxtor says that the general rule of thumb is that SCSI bus bandwidth should be at least four times a hard disk drive's maximum sustained data rate. (For more information about Ultra320, see "A technical look at Ultra320 SCSI," p. 26.)
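
As a back-of-the-envelope check, the rule of thumb squares with the five-drive figure cited earlier (a sketch using the 75MBps sustained rate quoted for current 15,000rpm drives; real workloads rarely have every drive streaming at peak):

```python
bus_mbps = 320               # Ultra320 SCSI bus bandwidth
drive_sustained_mbps = 75    # max sustained rate of a current 15,000rpm drive

# Rule of thumb: bus bandwidth should be at least 4x one drive's sustained rate.
assert bus_mbps >= 4 * drive_sustained_mbps   # 320 >= 300, so the rule holds

# Raw saturation point if every drive streamed flat-out:
print(bus_mbps // drive_sustained_mbps)       # 4 drives at full rate
```

Since drives rarely all transfer at their maximum sustained rate simultaneously, about five drives per bus is achievable in practice without saturation.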

Performance aside, perhaps the most interesting feature from an end-user and vendor perspective about the proposed Serial-attached SCSI standard is its interoperability with Serial ATA. Because Serial-attached SCSI uses the same physical layer as Serial ATA drives, vendors will be able to design systems that support both Serial ATA and Serial-attached SCSI drives.

The shared backplane will cut down on the number of system configurations vendors will need to design and manufacture and will give users more flexibility in configuring systems.

"You can plug in 10,000rpm or 15,000rpm Serial-attached SCSI drives for high-performance applications or you can plug in a lower-cost Serial ATA drive where cost[not performance]is the key metric," says Czekalski.

IDC's Reinsel believes that this feature may actually benefit Serial ATA more than it does Serial-attached SCSI since users will be able to "try out" lower-cost drives in new applications.

"The fact that the Serial-attached SCSI and Serial ATA shared the same hardware interconnect will allow users to be creative and to try to leverage Serial ATA in new ways," he says.

While some vendors may opt to build systems that only support Serial ATA or only Serial-attached SCSI, initial vendor feedback suggests that most products will be built to support both interfaces, says Czekalski.

The Serial-attached SCSI interface also includes support for more than 128 devices and peer-to-peer connectivity; has a full-duplex architecture, which allows for rate matching and mixing within loops (i.e., drives run at their full rates, not at the lowest-common-denominator as is the case with Fibre Channel); and provides near-instantaneous load balancing via "link aggregation."

STA officials are positioning ATA for entry-level applications, Serial-attached SCSI for the bulk of enterprise applications, and Fibre Channel for very large, high-end configurations with long-distance requirements.

Product spotlight

While the spotlight recently has been on ATA markets, nearly all of the product activity has been in traditional SCSI and Fibre Channel markets. Following is a sampling of new product announcements from drive vendors.

Fujitsu packs 147GB in 1-inch drive

Fujitsu last month announced that it is expanding its hard-drive family to include two higher-capacity, higher-speed drive series. The MAP and MAS series have spindle speeds of 10,000rpm and 15,000rpm, respectively.

The MAP series is available in 36GB, 73GB, and 147GB capacities; the 15,000rpm MAS series in 18GB, 36GB, and 73GB. Both drives come in a 1-inch-high, 3.5-inch form factor, include an 8MB multi-segmented data buffer and a 32-bit-wide internal data path, and support Ultra320 SCSI. (The MAP series also supports 2Gbps Fibre Channel.)

The company claims an internal data-transfer rate of 114MBps for the MAS series and 107MBps for the MAP drives.

Maxtor hits 15,000rpm

Maxtor last month announced its first 15,000rpm drive, the Atlas 15K. The drive has an Ultra320 SCSI interface, a 3.4ms seek time, an 8MB cache buffer, and a 75MBps sustained data-transfer rate. It is available in 18GB, 36GB, and 73GB capacities.

The drive's 15,000rpm spin rate allows for faster access to data than with 10,000rpm drives, which means system administrators can use fewer drives to meet performance requirements. The Atlas 15K is being positioned for data-intensive applications/systems (e.g., SANs, high-end servers, and workstations). Evaluation units are expected next month, with general availability slated for Q4.

Also slated for volume shipment in the fourth quarter is the Atlas 10K III-U320 drive. This second-generation Ultra320 drive has a 72MBps data-transfer rate and a 4.4ms average seek time. It will be available in 36GB, 73GB, and 146GB capacities.

Seagate drives down storage costs

Seagate says it plans to begin volume shipments this quarter of its Cheetah 15K.3 and 10K.6 series. The 15K.3, Seagate's third-generation 15,000rpm drive, is priced from $289 to $939 (depending on capacity), and the 10K.6 series is priced from $289 to $1,289.

The 15K.3 drive is available in 18GB, 36GB, and 73GB capacities, has a 3.6ms average seek time, and a maximum 75MBps sustained transfer rate. The 10K.6 comes in 36GB, 73GB, and 146GB capacities, has a 4.7ms average seek time, and a maximum 68.5MBps sustained transfer rate. Both drives support Ultra320 SCSI or 2Gbps Fibre Channel.

A technical look at Ultra320 SCSI

By KK Rao

Over its 20 years of development, each generation of SCSI has doubled performance. The latest generation, Ultra320 SCSI, continues that tradition by doubling bandwidth from the 160MBps of Ultra160 SCSI to 320MBps.

Ultra320 SCSI requires protocol changes to reduce command/status overhead to maximize the advantage offered by the high bandwidth. Accordingly, Ultra320 SCSI introduces several new features to support high speed and reliable data transfer. These new features include physical/signaling enhancements and protocol enhancements.

Double transfer speed

While doubling the data-transfer rate to 320MBps, Ultra320 SCSI continues to use the dual-edge clocking mechanism introduced in Ultra160 SCSI, but the clock speed is doubled to 80MHz. The increased performance is particularly noticeable in large block data transfers or in systems with several devices on a single bus.
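
The 320MBps figure falls out of the clocking arithmetic. The sketch below assumes the standard 16-bit-wide parallel SCSI data path:

```python
clock_hz = 80_000_000    # 80MHz REQ/ACK clock in Ultra320 SCSI
edges_per_cycle = 2      # dual-edge clocking: data latched on both clock edges
bus_width_bytes = 2      # 16-bit-wide parallel SCSI data path

bandwidth_bytes_per_sec = clock_hz * edges_per_cycle * bus_width_bytes
print(bandwidth_bytes_per_sec // 1_000_000)  # 320 (MBps)
```

The same arithmetic with a 40MHz clock yields the 160MBps of Ultra160 SCSI, which is why doubling only the clock doubles the transfer rate.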

Paced transfers

Earlier versions of SCSI supported two transfer modes: asynchronous and synchronous. In asynchronous mode, every element of data transfer requires a complete handshake between the initiator and target. In synchronous mode, data elements up to the negotiated synchronous offset can be transferred at the negotiated transfer period before a handshake is received. Ultra320 SCSI introduces paced transfers, where the handshake's request (REQ) or acknowledge (ACK) signal (depending on whether the target or the initiator is sourcing the data) acts like a free-running clock. The P1 parity line indicates when data is active. This simplifies data clocking logic, enabling the high transfer rates of Ultra320 SCSI.

Training patterns

At high speeds, delays between individual signals through slightly varying cable lengths are significant enough to cause data to be incorrectly interpreted. Training patterns help overcome such problems. With Ultra320 SCSI, a predetermined pattern is sent on all data signals at the start of the first data phase between pairs of devices. The recipient uses it to compensate for signal timing during the actual data transfer. The protocol is defined to retain training information so that training needs to be performed only when conditions change.

Driver pre-compensation

Ultra320 SCSI specifies twice the data-transfer rate of Ultra160 SCSI over cables with the same characteristics. When a data signal has a sequence of bits of the same value (0 or 1) followed by a single bit of the opposite value, the resulting voltage level transition may be too small to be accurately detected. Ultra320 SCSI specifies strong and weak driver strength levels. With pre-compensation enabled, the driver uses a strong drive for data bits that change state and a weak drive for data bits that remain the same. This improves detection in the receiver, providing for reliable data transfers.
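
The strong/weak selection rule can be sketched in a few lines (illustrative only; the actual drive-strength levels and enabling conditions are defined by the Ultra320 specification):

```python
def precompensate(bits):
    """Choose the drive strength for each bit on one data line:
    strong for bits that change state, weak for bits that repeat."""
    levels = []
    prev = None  # no prior bit: the first bit is treated as a transition
    for b in bits:
        levels.append("strong" if b != prev else "weak")
        prev = b
    return levels

# A run of 0s followed by a lone 1 gets a strong drive on the transitions:
print(precompensate([0, 0, 0, 1, 1, 0]))
# ['strong', 'weak', 'weak', 'strong', 'weak', 'strong']
```

The strongly driven transition bits produce a larger voltage swing exactly where the receiver would otherwise struggle to detect the change.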

Protocol enhancements

The increase in data-transfer rate offered by Ultra320 SCSI will show up as increased bandwidth, mainly for large block transfers. For small transfers (less than 8KB), a large portion of the transfer time is taken up by overhead. To reduce overhead and maximize utilization of bus bandwidth, Ultra320 SCSI offers two key features: information unit transfers, and quick arbitration and selection.

Information unit transfers (also known as "packetized" transfers)

Commands and status, in addition to data, are transferred in packets using paced or synchronous modes. (In previous SCSI versions, commands and status were always transferred in 8-bit asynchronous mode.) Further, multiple commands can be transferred in a single connection. Streaming transfers can also be performed with information unit transfers, enabling multiple data streams to be transferred with a single control (L_Q or LUN-tag) packet.

All information units are CRC protected for data integrity.
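
The append-and-verify pattern behind CRC protection looks like the sketch below. It uses Python's zlib.crc32 purely for illustration; the exact polynomial and seed are defined by the Ultra320 SCSI standard and may differ:

```python
import zlib

def protect(payload: bytes) -> bytes:
    """Append a 32-bit CRC to an information unit (illustrative sketch)."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def verify(frame: bytes) -> bool:
    """Recompute the CRC over the payload and compare with the trailer."""
    payload, crc = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == crc

frame = protect(b"example command IU")
assert verify(frame)                              # intact frame passes
assert not verify(bytes([frame[0] ^ 1]) + frame[1:])  # a single bit flip is caught
```

The receiver recomputes the CRC as the packet arrives, so corruption anywhere in the information unit is detected before the command or data is acted on.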

Quick arbitration and selection

QAS enables transfer of ownership of the SCSI bus from one target to another without a transition to the bus-free phase. Thus, data transfers are virtually back-to-back, improving bus utilization considerably.

Ultra320 SCSI disk drives and PCI-X RAID controllers are expected this summer and will be deployed in servers in direct-attached and networked storage environments, as well as in high-end workstations.

RAID implementations include components such as RAID tables that define the configuration of RAID arrays, data structures to store the descriptors for cached data, engines for calculating parity, and logic for handling I/Os to and from RAID arrays. These components may be implemented in software, typically in kernel mode, or embedded in controllers.
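
For example, the parity engine in a RAID-5 style implementation reduces to a byte-wise XOR across the data blocks of a stripe, which also lets a lost block be rebuilt from the survivors (a minimal sketch):

```python
def xor_parity(blocks):
    """Compute the parity block as the byte-wise XOR of equal-sized blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

data = [b"AAAA", b"BBBB", b"CCCC"]       # one stripe of data blocks
parity = xor_parity(data)                # written alongside the stripe

# If the drive holding data[1] fails, XOR of the survivors plus parity
# reproduces the missing block:
recovered = xor_parity([data[0], data[2], parity])
assert recovered == data[1]
```

Because XOR is associative and self-inverse, the same routine serves both the write path (compute parity) and the rebuild path (recover a block), which is why dedicating hardware to it pays off.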

Software RAID

System processors continue to evolve rapidly, which has enticed developers to place greater loads on system CPUs with an array of applications, including software-based RAID. However, there are drawbacks to implementing RAID in software. Software implementations are less portable because they include OS-specific components that must be rewritten for each operating system. Moreover, kernel-mode code must be flawless: unlike user-mode applications, its ability to execute privileged instructions and manipulate the contents of any virtual address leaves the system vulnerable to crashes.

Kernel-mode software implementations avoid spawning threads to avoid the costly overhead of context switching, but they are still at the mercy of the scheduler that pre-empts their operation as soon as their time quantum expires or a higher priority task is scheduled. Thus, even under the best circumstances, a kernel-mode RAID engine is forced to share processor time with other kernel-mode components and the applications that use them. This is not critical if those applications have modest processing requirements. However, some applications can overwhelm the CPU.

In addition, the effect of network traffic on servers is significant because network interface cards (NICs) rely heavily on the system CPU for protocol processing and data transfers to and from physical memory.

NIC drivers perform functions such as handling interrupts, receiving and sending packets to and from the network, and providing an interface to set or query operational characteristics of the NIC. A NIC driver typically interfaces with a transport driver above it that implements the stacks for network protocols such as TCP/IP. The transport driver successively strips and interprets the network-protocol layers of the packets it receives from the NIC driver and transfers the data in the "stripped" packets to system memory.

Conversely, the driver wraps data it receives from the application with layers required by the network protocol and hands it off to the NIC driver for transmission. These drivers handle the bulk of the tasks involved in processing network packets, and since these drivers are executed in the system's CPU, the CPU bears the associated processing burden.

Obviously, applications also affect CPU resources. While file and print servers have little effect on the CPU, OLTP and other applications requiring high availability and performance can heavily impact the CPU. Anyone familiar with relational databases is aware of the computational expense of performing operations such as inner joins. Such operations cannot be preprocessed since the record sets for most database applications are dynamic, placing an enormous demand on computing resources.

The architecture of the operating system can also affect CPU load. While a high degree of modularity ensures robustness and eases OS component maintenance, it also introduces performance latency at inter-module interfaces.

Clearly, many environments place a heavy burden on system CPUs, and auxiliary processingwith hardware RAIDcan be a significant benefit in such environments.

Hardware RAID

Hardware RAID can provide several advantages over software RAID. First, RAID firmware is executed on a dedicated processor and therefore does not share the system's CPU(s) with other software components. Second, it is portable across operating systems, and in the event of a malfunction in the RAID hardware or firmware, the server can continue to operate. If the server crashes, hardware RAID generally offers better survivability. Many hardware RAID implementations have battery backup modules that allow them to maintain the coherency of their caches and complete outstanding operations without loss of integrity. Finally, hardware RAID often incorporates specialized features for optimizing performance, including the following:

Auxiliary processors dedicated to calculating the parity for data blocks that are to be written to disk while the main embedded processor is concurrently fetching or executing the next instruction in the RAID firmware code; and

Dedicated cache on the controller that allows the host to transparently complete "write" commands even while the read-write heads on the disk are seeking the appropriate sectors for writing the data. This prevents host interruptions. Also, it allows the controller to coalesce contiguous "dirty" data blocks that have accumulated over time and write them out in a consolidated chunk, speeding the process of finding the appropriate disk sectors where individual blocks are to be written.
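
The coalescing step can be sketched as merging contiguous dirty block numbers into longer write runs (a simplified illustration; real controllers track dirty ranges in cache metadata rather than plain block sets):

```python
def coalesce(dirty_blocks):
    """Merge contiguous dirty block numbers into (start, length) write runs,
    so each run can be flushed to disk as one consolidated write."""
    runs = []
    for block in sorted(dirty_blocks):
        if runs and block == runs[-1][0] + runs[-1][1]:
            # Extends the current run: grow it by one block.
            runs[-1] = (runs[-1][0], runs[-1][1] + 1)
        else:
            # Gap in block numbers: start a new run.
            runs.append((block, 1))
    return runs

# Six scattered dirty blocks collapse into two consolidated writes:
print(coalesce({7, 3, 4, 5, 9, 8}))  # [(3, 3), (7, 3)]
```

Flushing two runs instead of six individual blocks means fewer seeks, which is exactly the speedup the controller cache buys.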

A midrange SCSI RAID controller with 64MB of RAM was compared against the native software RAID utility in Windows 2000 used in conjunction with a plain SCSI host adapter.

As shown in the test results, hardware RAID provides better performance than software RAID in a networked environment. Its benefits are even more significant when running applications with high CPU utilization.

Sanjeeb Nanda is a product marketing engineer at Adaptec (www.adaptec.com) in Milpitas, CA.
