Most of us are used to working with a technology that has dominance in the market.

For many companies, Systems Network Architecture (SNA) and data link control (DLC) were the primary network protocols running on Token Ring. Then came all that IPX and SPX stuff, not to mention NetBEUI. A great many things had to happen for TCP/IP to get to the position it is in today, including but not limited to the growth of the Internet.

There have been times when Ethernet had serious challengers, and companies that went the other way later ripped out the equipment they had installed to replace it. Even within the Ethernet community, it was only a few years ago that 3Com pulled out of the corporate market, and some companies that had deployed 3Com kit extensively before Y2K ripped it all out again.

In other words, history repeats itself, particularly in the IT world. And it's interesting that most of us do not notice this or, if we do, cannot learn from what has happened before.

Storage protocols

SCSI

Most of us are working with SCSI, or at least SCSI commands. Most servers use a parallel SCSI bus, which carries SCSI commands from the operating system to the disk. Those working with Fibre Channel are mostly using FCP (SCSI over Fibre Channel). Many others still are discussing iSCSI (SCSI over IP).

The SCSI command set is not the be-all, end-all of storage. In the mainframe world, we have ESCON and FICON (sort of ESCON over FC). Many companies still run mainframes as a critical part of their business. There is also a lot of IDE (Integrated Drive Electronics) around -- including servers, RAID controllers and even external disk subsystems.
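The common thread in all of this is the command set, not the wire. As a minimal sketch (the function name and helper are mine, not from any particular driver), here is how a SCSI READ(10) Command Descriptor Block is laid out; these same ten bytes travel over a parallel SCSI bus, inside an FCP frame on Fibre Channel, or inside an iSCSI PDU:

```python
import struct

def read10_cdb(lba: int, blocks: int) -> bytes:
    """Build a SCSI READ(10) Command Descriptor Block (CDB).

    Only the transport wrapper changes between parallel SCSI,
    FCP and iSCSI; the CDB itself is identical.
    """
    return struct.pack(
        ">BBIBHB",
        0x28,    # operation code: READ(10)
        0x00,    # flags byte (RDPROTECT/DPO/FUA), all clear here
        lba,     # 4-byte logical block address, big-endian
        0x00,    # group number
        blocks,  # 2-byte transfer length, in blocks
        0x00,    # control byte
    )

cdb = read10_cdb(lba=2048, blocks=8)
print(len(cdb), hex(cdb[0]))  # 10 bytes, opcode 0x28
```

Whether that CDB ends up on copper, in a Fibre Channel frame, or in a TCP segment is the transport's business, which is exactly why SCSI has been able to outlive so many physical layers.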

iSCSI

Today, most servers use SCSI commands over a parallel SCSI bus to talk to disk, SCSI commands over Fibre Channel (usually switched SANs), or iSCSI. But given the millions of ports of Fibre Channel installed, why would anyone move to iSCSI?

Some will say it's because there are billions of Ethernet/IP ports. Maybe. But there are only millions of Gigabit Ethernet ports available, not many more than there are Fibre Channel ports. The real point is that not all servers need the bandwidth or the latency that Fibre Channel gives them. Some servers are now so cheap that a Fibre Channel HBA costs as much as the server itself, so it's cheaper to use an Ethernet NIC instead. Fibre Channel people will tell you that a NIC designed for iSCSI will be as expensive as an HBA (as you need an offload engine). Maybe, maybe not. It's not that simple, and if Ethernet can drive quantity, then special iSCSI NICs could be quite cheap.

Last year, there was talk about bladed servers alongside non-bladed 1U and 2U servers on the market. This year will likely see a lot of sales of small 1U and 2U servers, as well as rack-dense bladed servers. All these super-small servers need storage. If you can get 40-odd 1U servers in a standard rack, or even more per rack using bladed servers, where will these guys get their storage? Some of them have small on-board hard drives, but then we have the captive storage problem. There was a time when InfiniBand was thought to be the answer; there was even talk of storage protocols over InfiniBand, but that seems less likely now. I believe that iSCSI will find a very large number of ports coming very quickly from this super-small server market.

We also have the stranded server problem: odd servers too far away from the corporate storage. Put in an iSCSI card and you have some connectivity. With so many small Wintel and Linux servers, this may be a larger market than most people realize.

If the price and performance are right, people will replace their SANs with iSCSI. Sorry, they will. It has happened before in the network world. People replaced Token Ring even though some would argue that Token Ring was the better technology. People put in ATM and then took it out. People put in 3Com and then took it out. People replaced all their NetWare servers with Microsoft NT servers. It can happen, particularly if it makes business sense and is not just a technical decision.

Will iSCSI be used to extend the reach of your SAN into these areas through router boxes? Yes. Will iSCSI replace your SAN? Only time will tell.

Fibre Channel

Fibre Channel can carry IP just as easily as it can carry SCSI, and many Fibre Channel switches and HBAs support this. Several customers use IPFC (IP over Fibre Channel) as well. The most common use of IPFC is for in-band management: rather than connect every Fibre Channel switch to the LAN for management access, connect just one or two and then route your IP traffic within the SAN to manage the rest. Perhaps less common is the use of IPFC as a low-latency server interconnect, particularly for clustering and the like.

The reverse also works: FCIP (Fibre Channel over IP) can connect two SAN islands together. In this case, you may not care what is in the Fibre Channel frame, so the protocol simply wraps the entire frame and sends it over a normal IP network. Connect a box to each SAN and you join the islands into a single enterprise SAN over IP. This is great for disaster recovery and business continuity. It is an alternative to DWDM, with its own advantages and disadvantages compared with DWDM.
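The idea behind this kind of tunnelling is simple: prepend a small encapsulation header to the complete Fibre Channel frame and carry the result over TCP/IP. The sketch below uses an invented header layout purely for illustration; the real wire format is defined by the FCIP and FC Frame Encapsulation standards, not by this code:

```python
import struct

# Toy illustration of frame-in-IP tunnelling, the idea behind FCIP.
# The 4-byte header here (protocol id, version, payload length) is
# hypothetical; real FCIP headers are larger and standardized.

def encapsulate(fc_frame: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame for transport over TCP/IP."""
    header = struct.pack(">BBH", 0x01, 0x01, len(fc_frame))
    return header + fc_frame

def decapsulate(packet: bytes) -> bytes:
    """Recover the original Fibre Channel frame at the far end."""
    proto, version, length = struct.unpack(">BBH", packet[:4])
    return packet[4 : 4 + length]

frame = bytes(range(24))  # stand-in for a raw FC frame
assert decapsulate(encapsulate(frame)) == frame
```

Because the tunnel never interprets the frame contents, the two SAN islands behave as one fabric; the IP network in the middle only has to deliver the wrapped frames reliably and in order, which is why FCIP runs over TCP.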

Of course, some boxes with both Fibre Channel and Ethernet ports can do both of these tricks, giving you a lot of flexibility.

Summary

This is an important year for both Fibre Channel and iSCSI. Fibre Channel will see new manufacturers, new standards and a maturing set of management products. iSCSI will see ratified standards, more products and business coming from new approaches in the server market, like bladed servers.

As we have just enjoyed the Chinese new year, I would say we are certainly living in interesting times.

About the author:

Simon Gordon is a senior solution architect for McDATA based in the UK. Simon has been working as a European expert in storage networking technology for more than 5 years. He specializes in distance solutions and business continuity. Simon has been working in the IT industry for more than 20 years in a variety of technologies and business sectors, including software development, systems integration, Unix and open systems, Microsoft infrastructure design and storage networking. He is also a contributor to and presenter for the SNIA IP-Storage Forum in Europe.
