
Dealing with distance

Something to keep in mind as we delve deeper into this solution space is the distance between the sites supporting your applications. The latency associated with the speed of light over optical cable works out to roughly one millisecond per 100 miles for a single data or management frame. To compensate for this latency, switch vendors recommend allocating more frame buffers at both endpoints of the long-distance storage area network (SAN) link. Increasing the frame buffers at both ends allows the transmitting E_Port to keep more frames in flight without waiting for the receiving E_Port to acknowledge previously sent frames.

How many buffers should your E_Port support? That depends on two things. First, it depends on whether the application in question is synchronous or asynchronous in nature; the more synchronous the traffic, the more buffers you will need. Second, it depends on the speed and distance between the connecting E_Ports: the greater the speed and distance, the more buffer credits you will need to keep frames flowing in both directions without stalling.
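As a rough planning aid, the relationship between distance, line rate and buffer credits can be sketched in a few lines of Python. The fiber speed, encoding overhead and frame size below are approximations for illustration, not vendor-published figures:

```python
import math

def required_bb_credits(distance_km, line_rate_gbps, frame_bytes=2148):
    """Estimate the buffer-to-buffer credits needed to keep a
    long-distance FC link fully utilized with full-size frames."""
    fiber_km_per_s = 200_000.0                 # light in glass: roughly 2/3 of c
    rtt_s = 2 * distance_km / fiber_km_per_s   # frame out, credit back
    data_bps = line_rate_gbps * 1e9 * 0.8      # ~20% lost to 8b/10b coding
    frame_time_s = frame_bytes * 8 / data_bps  # serialization time per frame
    # One credit allows one frame in flight; cover the whole round trip.
    return math.ceil(rtt_s / frame_time_s)

# At 2 Gbps, this works out to roughly one credit per kilometer of fiber.
print(required_bb_credits(100, 2))
```

The rule of thumb that falls out--about one credit per kilometer of fiber at 2 Gbps--matches common vendor guidance; in practice, round up and add headroom.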

It's been two years now since the attacks on the World Trade Center and the Pentagon, and the disaster recovery considerations for storage area networks (SANs) have changed tremendously--for the better.

Back then, SAN islands were expensive, yet gaining in popularity because of the many benefits they yielded in scalability and performance, and because of the distances that could be placed between attached storage enclosures and their respective application servers. Still, only the wealthiest companies--such as large financial and insurance institutions--could afford to put space between redundant pieces of their storage infrastructures.

Since then, hardware vendors have slashed prices, new protocols have been developed and standardized and the intricacies of long-distance SAN links are better understood. Now you have the opportunity to determine which solution will best ensure that when your storage islands are bridged together (for enhanced management and business continuity), you don't compromise day-to-day results for something (disaster recovery) we all agree is an exception to our normal business processes.

This article looks at how to do that analysis. We'll also begin looking at solutions by exploring dense wavelength division multiplexing (DWDM), the most comprehensive and flexible solution. In the October issue of Storage, we'll look at more focused and less-costly alternatives.

The effect that the Sept. 11 attacks and the recent accounting scandals have had on the U.S. economy is staggering, to say the least. As a result, capital expenditures in many shops were almost nonexistent, and IT managers have been walking around with their hands in their pockets to signify how money-conscious they have become. For example, at a recent presales technical engagement I participated in, the person in charge of spending IT budget dollars walked into the meeting with the lining of her pockets turned inside out, dumped her pocketbook upside down and rolled out a quarter. At that moment, I remembered why I stay on the technical side of the solution. My job--both before and after the budget director's presentation--was simply to present the facts.

Yet while most IT organizations still haven't rolled out a significant amount of SAN hardware, some larger organizations haven't stopped scaling their SANs since their initial implementation. Department after department has heard of the gains experienced by application owners down the hall and across the country who moved their applications onto a SAN. Realizing the benefit, those on the sidelines quickly scraped up the capital to do the same for their applications.

At the same time, many IT managers are being asked to stretch the resiliency of their data center applications beyond the scope of human and natural disasters. Most organizations need to coordinate their strategies for growth and redundancy when it comes to aggregating the management of many regional or national SAN islands or extending the data center. Selecting the right long-distance SAN link between E_Ports will ultimately determine the success of the project. If you're conducting a SAN assessment, include this issue in your considerations.

[Figure: DWDM can transport multiple protocols]

Application assessment

The most important information to come out of a SAN assessment where a long-distance SAN link is a goal is the real-world data characteristics of all applications slated to throw data to the remote location, both now and in the foreseeable future. A few questions you should ask your application owners are:

How important is the data?

How much data is involved?

How many server connections (present and future) are anticipated?

What kind of data access (small asynchronous or large synchronous blocks) needs to be provided?

How will data be transported (Fibre Channel and/or IP)?

When will the data be accessed?

What are the availability requirements of that access?

These are all important questions that you must pursue and answer as honestly as you can, without just filling in the blanks with guesses. Involve senior management to remedy conflicts, such as a case of two or more application owners believing that their data adds the most value to the corporation.

By answering the above questions, you will be systematically eliminating or including specific vendors' offerings. For example, by determining that there are in fact varying levels of importance regarding the data that will flow over the long-distance SAN link, you may discover that quality of service (QoS) functionality is required within your connecting equipment. Continuing down this path, the answer to how much and what type of protocol data could very well steer you toward a DWDM solution and away from point-to-point leased dark fiber.

And by analyzing the size and frequency of data access evident in your applications, you will discover whether Fibre Channel (FC) or IP is best suited to provide you with the bandwidth and distance that your applications need and at the best possible price. Armed with this information and your application availability requirements, your staff will be in the best possible position to narrow choices between vendors, and thus reduce the possibility of being sold extraneous equipment.

In provisioning a long-distance SAN link, home in on the scalability of that link in terms of capacity and performance. Laying optical cable is costly, so be prepared to spend more time crunching the numbers than it takes to actually lay the cable. This extra effort may save you time and money by minimizing the possibility of fiber exhaust.
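The number crunching can be as simple as projecting traffic growth against provisioned capacity. This sketch estimates how many years remain before a link is exhausted; the 30% compound annual growth default is an assumed placeholder, not a figure from the article:

```python
import math

def years_until_exhaust(current_gbps, capacity_gbps, annual_growth=0.30):
    """Years before compound traffic growth fills the provisioned link."""
    if current_gbps >= capacity_gbps:
        return 0.0  # already out of headroom
    return math.log(capacity_gbps / current_gbps) / math.log(1 + annual_growth)

# A link running at 1 Gbps today against 4 Gbps of capacity, growing
# 30% per year, has roughly five years of headroom left.
print(round(years_until_exhaust(1, 4), 1))
```

Running this projection across several growth scenarios before laying cable is exactly the kind of cheap analysis that can prevent an expensive second trench.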

Possibly somewhat less important than scalability is the ability to support QoS and multiple protocols. Not all organizations require QoS or support for multiple protocols (such as FC, ESCON, IP and ATM). However, if during your application assessments you discover a real need and not simply a want for these features, you may want to start your ROI justification as soon as you finish reading this article. Although there are many benefits to be had--consolidation being one of them--the hardware products that support these features also require the deepest pockets. Specifically, if you fit into this category, you'll want to look at DWDM.

The case for DWDM

DWDM hardware sits between your existing networking equipment and the fiber optic cable(s) extending to your remote location (see "DWDM can transport multiple protocols" on this page). Without a doubt, the greatest benefit of this technology is its optical multiplexer, which gathers incoming light signals from your voice and data networks and provisions each onto a different wavelength of a single-mode fiber optic cable. At the far end, the DWDM equipment identifies and splits the wavelengths and places each onto the appropriate protocol interface. Signal integrity and performance are maintained because light signals are gathered at their source, then multiplexed, amplified and sent across the fiber optic cable--at more than 80 wavelengths with current product offerings.

What this ultimately means is that instead of each application requiring a separate fiber optic cable to support each extended E_Port, each application can be provisioned a different wavelength on the same fiber optic cable. If you need more capacity or performance on the long-distance SAN link, you simply increase the number of wavelengths sent over the glass.
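The arithmetic behind that scaling is straightforward: capacity grows linearly with the wavelength count. A minimal sketch (the 2.5 Gbps per-wavelength rate is illustrative, not tied to any particular vendor's product):

```python
def dwdm_capacity_gbps(wavelengths, per_lambda_gbps=2.5):
    """Aggregate raw capacity of one DWDM fiber: each wavelength
    carries its full line rate independently of the others."""
    return wavelengths * per_lambda_gbps

# Eight applications that would otherwise need eight separate fibers
# share one strand at 20 Gbps aggregate; a fully lit 80-wavelength
# system reaches 200 Gbps on that same strand.
print(dwdm_capacity_gbps(8), dwdm_capacity_gbps(80))
```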

And if the ability to add capacity and performance to the long-distance SAN link by increasing the number of wavelengths isn't enough, some DWDM hardware solutions come with the ability to provide enhanced resource management or QoS to applications by allowing the network engineer to prioritize traffic by protocol or by endpoints. With this QoS feature, not only will you be able to assign multiple applications to the same physical cable, you will also be able to control which wavelength will emerge from the other end of the cable first. That way, both critical and noncritical applications can share the same long-distance optical medium.

Not only can DWDM systems accommodate multiple protocols within a single box, they can do so over the same fiber cable as well. By giving the user the ability to send IP, FC, ESCON and ATM, for example, over the same cable without sacrificing performance and without propagating error conditions across frequencies, hardware engineers have built the ultimate bridge for connecting voice and data networks across distances. However, like many new technologies that solve a multitude of problems, DWDM systems are within reach of only the deepest pockets. But as time passes and more solutions come to market, prices are likely to continue to fall.

What's best for you?

Choosing a long-distance SAN link solution isn't a simple decision. More than just a messaging network, this extended SAN link will transport many times more data than a network placed simply for remote management or e-mail access from some central repository. For this reason alone, a sufficient amount of time must be allotted to gather trending information about the data that your production applications will read and write across the link.

With DWDM, scalability in both capacity and performance is achieved by increasing the number of wavelengths sent over the fiber optic strand. With an IP optical pipe, however, bandwidth can only be increased by your carrier, and only in the steps defined by the SONET "OC" levels (OC-3, OC-48 and so on). Depending on the carrier, that may happen with the click of a button or take days. And although an IP optical pipe is sufficient for the majority of long-distance SAN solutions, DWDM's ability to place up to 80 or more different wavelengths on a fiber strand can't be matched when fiber exhaust is a consideration.
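The coarseness of those carrier steps is easy to quantify: SONET OC-n line rates are multiples of the 51.84 Mbps STS-1 base rate, so each upgrade is a large jump rather than the single-wavelength increment DWDM allows:

```python
OC_BASE_MBPS = 51.84  # SONET STS-1 base rate

def oc_rate_gbps(n):
    """Line rate of an OC-n circuit in Gbps."""
    return n * OC_BASE_MBPS / 1000

# The jump from OC-3 (~0.16 Gbps) to OC-48 (~2.5 Gbps) is a 16x step;
# DWDM instead grows capacity one wavelength at a time.
for n in (3, 12, 48, 192):
    print(f"OC-{n}: {oc_rate_gbps(n):.2f} Gbps")
```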

For SAN links shorter than 120 kilometers that require support for transporting multiple protocols, DWDM technology should again be given top consideration. However, for longer links in which an optical repeater is not a viable solution, an expandable optical WAN pipe supporting IP will have to do the job. Next month, we'll look at your options for that scenario.
