Network Security Essentials by William Stallings

Not for the faint of heart, Network Security Essentials delves into the black art of electronic security, including cryptography, e-mail security, Internet Protocol (IP) security, and web and network security. Written in textbook format, the book organizes its concepts in a fashion that allows even an IT novice to explore relatively complex topics.

Much of the book is dedicated to the explanation and exploration of various encryption algorithms and authentication standards. The author, Stallings, remains impartial, declining to recommend any one standard over another; instead he provides a detailed discussion of the strengths and weaknesses of each against various types of malicious attacks, including estimates of how long it would take an attacker to break each algorithm by brute-force cryptanalysis. He details different types of attacks and provides practical, eye-opening examples of how opponents might use these tactics.

At times the detail provided by Stallings is fascinating, such as the various applications of asymmetric encryption in web and e-mail security and message authentication. However, on a few occasions the depth at which some of the mathematical models and algorithms are covered may cause the reader to lose sight of the big picture.

Stallings concludes the book with a brief exploration of malicious software including viruses, worms, and distributed denial of service attacks; he also offers defensive approaches to combat each of these threats.

The book is not reserved for practitioners, as security is an important issue to every manager. The concepts discussed should be enough to inspire readers to pay more attention to their own security and control policies, as well as arm them with enough knowledge to intelligently converse with their IT staff to put these policies into action.

Sun Tzu wrote, “If you know the enemy and know yourself, you need not fear the result of a hundred battles.” Stallings’ Network Security Essentials provides a reasonable foundation of knowledge to engage in the ongoing battle of security and control that every business must face.

Business Survival Skills

In the wake of the extraordinarily destructive 2005 Gulf Coast hurricane season (roughly from June to November), U.S. businesses are now more aware of the wide variety of perils they may be exposed to on a daily basis. These risks can be related to economic, technological, political, or even biological forces. Some such risks may be controllable, while others are not. Consider the impact that events as unrelated as a global epidemic, a spike in exchange rates, a war, labor strikes, a computer virus, crime waves, or an earthquake could have on any given business or industry. Not only do such perils affect companies directly, but the increased speed of economic activity and response to such events also makes the risks borne by business partners and strategic allies ever more relevant.

The proliferation and variety of worldwide digital media in use has changed the definition of a “local disaster,” if there truly is such a phenomenon.[1] As managers witness the unexpected, they become more aware of the extent of their vulnerability to factors outside their control. Awareness is a good thing, as long as it does not lead to overreaction or paralysis in decision making. We know that potential disasters receive far more attention after they occur even though the probability that the event will happen again is relatively remote.[2]

In a business setting, it is essential for an organization to prepare and plan for the range of unexpected events it may face. Three important steps in the preparedness process are assessing potential risks (see table at the end of this article), developing survival mechanisms, and responding quickly to a given disaster.

Some estimates suggest that 80 percent of companies worldwide are not well prepared for a pandemic or a natural disaster.[3] Businesses can learn lessons from those who have proven their abilities to prepare for and respond to events out of their control. This article draws on wisdom from Gulf Coast survivors, businesses and individuals alike. Their foresight and insight provide a framework that can be used to better prepare for the unknown. Below are some suggestions that map the homeowner’s preparations onto preparing a business to survive a disaster.

Developing Survival Mechanisms

Board up windows [Secure adequate insurance]. Managers can learn from homeowners who build an effective first line of defense. Many coastal families have perfected the ability to board up their windows at a moment’s notice. Knowing the danger of hurricane force winds, they store a set of boards uniquely fitted to their windows. The purpose of this exercise is to shield against the possibility of broken glass and the vulnerabilities of a home or business exposed to the elements. Businesses should carefully consider the protection that they have in place, particularly insurance, to guard against such unforeseen circumstances.

Statistics reveal that 20 percent of businesses that close after a catastrophe never reopen.[4] Insurance is an absolute necessity for protecting a business from exposure to financial ruin. Business interruption insurance is particularly important: it can cover expenses such as renting a temporary location and paying employees during a shutdown. While property insurance covers the physical losses a company suffers as a result of an unforeseen event, business interruption insurance can cover the financial loss of being out of business for an extended period of time. As with all insurance, policies should be examined carefully for the extent of coverage and, even more importantly, for exclusions.

Secure a water supply [Maintain, update inventories and lists of suppliers]. Business owners must consider how their daily operations are hydrated, and once again, they can learn from coastal residents. During the summer months in the Gulf Coast, one can often find a stockpile of empty containers such as milk cartons or empty 2-liter bottles stashed away in hurricane-ready homes. Although a homeowner plans to dispose of the containers at the conclusion of whatever “disaster season” is typical in a given area, if a natural disaster does pose a threat to the integrity of the water supply, such a container stockpile will be invaluable for storing as much fresh water as possible.

Just as individuals often assume that water will always be available at the turn of a faucet, it is also easy for businesses to take supplier and distribution networks for granted. As the trend toward just-in-time inventories continues to be more prevalent, any disruption in the replenishment pipeline could result in large costs to a business in lost sales and idle capacity. Small businesses may take an even harder hit if a single supplier, distributor, or customer is crucial to their continued existence. The analogy breaks down somewhat because businesses often cannot “stock up” on these emergency relationships for future use. Nevertheless, it is important to know what alternatives exist and to be prepared to access them quickly. Accordingly, for many businesses it makes financial sense to maintain a diverse portfolio of suppliers. This is a way to reduce the power that suppliers may have over a company’s financial future.[5]

Plan for evacuation [Prepare a business continuity plan]. Gulf Coast residents know that when an approaching storm makes it absolutely necessary to leave town, it is crucial to have a plan of action that enables them to vacate intelligently and without delay. Seasoned evacuees know whom they are responsible for, where they are going, and what supplies are necessary to sustain them for a given period of time. They know how to secure their property and to make provisions for their eventual return in case of evacuation. Businesses must also seek ways to maintain calm among staff and customers in the face of disaster by having a sound disaster plan that is ready for execution. Such a plan is critical, regardless of how small or large the number of people and resources a company is responsible for protecting.

The plan should be communicated and updated regularly to ensure that it addresses each of the most important functions of the respective business. Employees should be aware of the appropriate actions to take and must know their individual responsibilities. It is equally important to be self-sustaining in circumstances that imperil valuables beyond the physical facilities of the company. The plan should basically do the following:

Identify the resources and information that are most important;

Specify how such information will be accessed;

Determine how communication will take place between the organization and its employees, customers, and key stakeholders after an evacuation takes place.

Any number of forces may cause the abandonment of business as usual, but a well thought-out plan will help business personnel escape with the resources necessary to ensure the survival of the business until its normal routine can be resumed.[6]

Stock up on batteries [Maintain sustainable communication and power access]. Businesses may find themselves in great peril if they have not taken the necessary precautions to maintain sustainable access to communication and power. Months after Hurricane Katrina, an evening drive through some of the most popular neighborhoods in the city of New Orleans reveals streets that are eerily dark. While Hurricane Katrina was an extraordinarily destructive event, it is important to plan for the possibility of a complete infrastructure failure in which conveniences we have grown to rely on such as Internet, television, power and phone lines become unavailable, perhaps for an extended period of time. In order to sustain a basic level of information and safety, residents in disaster-prone regions know that the simple presence of battery-powered radios and flashlights can go a long way to calm fears in times of uncertainty.

This survival skill becomes particularly crucial when one considers the growing percentage of business processes that rely on communication-based transactions. The Internet can provide a safety net of sorts, since it is not location dependent, but data on servers is still vulnerable. Along with customer data, accounts receivable and payable data are especially valuable and should be backed up daily.

Given the high value that businesses now place on their data, data loss incidents can often result in bankruptcy.[7] In order to be prepared, it is necessary to have a full understanding of how critical data moves in and out of the organization and what options are available to sustain this flow. In addition to making an onsite back-up with an external hard drive or other media, more businesses are protecting their crucial processes by diversifying access to data and maintaining off-site records.[8] Any organization will benefit by determining alternative ways to continue operations in the absence of phone, power, transportation, database functionality, e-mail capabilities, or access to physical facilities.

Keep a full tank of gas [Plan for financial viability]. A business must prepare a financial reserve for use in a given disaster. Once again, an analogy can be made with hurricane victims. Television images captured the least prepared evacuees waiting in line for gasoline. For the prepared, fueling up is a priority when evacuation conditions become a possibility. Even if residents intend to weather the disaster at home, they should make a plan to have on hand an available supply of gasoline after the emergency stages of the disaster pass.

Businesses worldwide had to bear the financial burden of the Gulf Coast hurricanes manifested in increased energy and raw material costs, decreased retail sales, construction shortages, and decreased industrial production and foreign trade. In the aggregate, the impact of Hurricane Katrina reduced national economic growth in the second half of 2005 by between 0.5 and 1 percent.[9] It is important to have a financial and economic survival plan that will fund the running of a business even when unexpected costs arise. Having sufficient working capital and even developing a contingency plan to liquidate assets quickly, in addition to having some sort of “rainy day fund,” can provide a business a “financial lifeboat” in times of trouble.

Don’t overreact to weather reports [Avoid mismanagement of risk]. Another way to state this survival skill: don’t buy a boat to park in the front yard or your business’ parking lot. Developing a set of survival skills is critical, but it is also important to realize the human tendency to mismanage risk.[10] Often in the aftermath of an unusual catastrophic event, the risk-awareness gauge can swing to the extreme. Too much attention to risk avoidance may cause more harm than it prevents. Some unintended consequences include unnecessary fear, irrational responses, overreacting to low risks, distraction from primary tasks, undue burden on resources, accumulating too much working capital, emotional biases in decision making, and too much dependence on risk-hedging instruments. An astute manager finds and maintains a healthy balance between conducting daily operations and preparing for unusual circumstances.

Prepare to respond quickly [Develop a quick disaster response plan]. Responding quickly is the final step in the process of surviving a disaster. Events in the United States over the past few years—perhaps starting with the 9/11 attacks—have stimulated a sharp rise in the national focus on contingency planning in contrast to the previous long period when the geo-political environment appeared to be calmer. Some countries have reacted by requiring risk assessments in financial statements, including detailed analyses of potential risks, potential costs and assigned responsibility.[11] A growing awareness now exists regarding the possibility of compounded impacts from multiple or layered disasters. It appears that prior to 9/11, many companies had shelved their contingency plans for too long and needed motivation for updating their disaster preparedness plans.

Home Depot’s response to Hurricane Katrina provides a good example of how a company’s disaster plan can help not only a company, but also the customers it serves. Home Depot was among the first businesses to reopen in the post-hurricane days. The company’s risk management plan had been determined well ahead of time and helped Home Depot to get the personnel, supplies, and infrastructure in place to service threatened stores before and after the hurricane hit. Four days before the storm arrived, procedures were underway to ensure that each store was adequately secured. While a regional center in Atlanta coordinated Home Depot’s efforts, electrical engineers and maintenance crews were ready to deploy to the affected areas as soon as the zone was open. The result was that 23 stores reopened the day following the storm, and all but four were operating within a week. This illustrates how a business that prepares well and responds quickly can recover from a bad situation.[12]

IBM serves as another example of a company that has planned well. On any given day, nearly 40 percent of IBM workers do not report to an IBM facility, and through a web-based collaborative environment that IBM has built, business activities would not miss a beat if everyone worked from home.[13] This forward thinking on the part of IBM could be contrasted with numerous companies that continue policies prohibiting telecommuting.

Conclusion

Businesses must develop plans that are feasible and that protect their most valuable assets. It is important to recognize that any disaster will be costly and that it can be difficult to allocate money in preparation for infrequent events. However, a cost-benefit analysis that takes into account the expected probability of an event will shed some light on the extent to which a company should prepare. Companies should consider location-specific risks and those that may affect key partnerships. Many disasters are foreseeable, and organizations can benefit from the collective wisdom of other organizations that have survived disasters.

Managers should be realistic about the costs of their emergency plans and ensure that finances, staffing and other resources will be available to enact the plan. Companies that follow these steps will be prepared to act in the midst of chaos and to help lead the community in recovery. It is important to remember that a catastrophe by its very nature is unexpected and to realize that it can also be overwhelming. The best preparation for a cataclysmic event is to develop a disaster preparedness plan such as the one we have outlined. Remember to include in it a commitment to lending your fellow human beings a helping hand. Some disasters are too big to handle alone.

The Cost of Lost Data

The cost of lost data from computers is substantial. Businesses must be proactive in protecting this important resource.

The Nature of the Problem

All computer users are familiar with the problem of lost data. Fortunately, most such incidents are relatively inconsequential, representing only a few minutes of lost work or the deletion of unnecessary files. However, sometimes the nature of the lost data is critical, and the cost of lost data is substantial. As reliance on information and data as economic drivers for businesses continues to increase, owners and managers are subject to new risks. One study reports that a company that experiences a computer outage lasting for more than 10 days will never fully recover financially and that 50 percent of companies suffering such a predicament will be out of business within 5 years.[1]

Levels of Risk

Of course, the value of lost data varies depending on their application, as well as the potential value that can be captured from use of the data. The loss of computer code, for example, represents a significant loss of value because computer code must be rewritten by highly skilled and highly paid software developers. In contrast, the loss of a customer history database would represent a less significant episode of data loss, assuming original source copies of the information are available. In this case, although the data would need to be re-keyed, it could be done by lower skilled and lower paid data entry personnel.

Using available data sources, this paper attempts to quantify the costs associated with episodes of data loss in the aggregate for the US economy. Implications of these findings for business owners and managers will also be discussed.

PCs in Use

Companies increasingly rely on data in a distributed environment. Therefore, the examination of data loss here will focus on the level of the personal computer.[2] US businesses use an estimated 76.2 million PCs to aid in the production of goods and services. Laptops are relied upon more and more, with a current installed base of 15.2 million units, or about 20 percent of all business PCs. The number of desktops in use currently totals approximately 61.0 million units.[3]

Episodes of Data Loss

Statistics on data loss are sparse. Data loss incidents can be hardware- or software-related. Consequently, a consideration of both is necessary to estimate the magnitude of data loss. Thus, this study combines two data sources to estimate the magnitude of data loss in the US: (1) claims data from an insurance company that insures computer hardware; (2) survey data from a company that specializes in data recovery.[4] Estimates from this combination suggest that the most common cause of data loss is hardware failure, accounting for 40 percent of data loss incidents. These include losses due to hard drive failure and power surges. Human error accounts for 30 percent of data loss episodes, which include the accidental deletion of data as well as accidental damage done to the hardware, such as damage caused by dropping a laptop. Software corruption, which might include damage caused by a software diagnostic program, accounts for 13 percent of data loss incidents. Computer viruses–including boot sector and file infecting viruses–account for 6 percent of data loss episodes. Theft of hardware, especially prevalent with laptops, accounts for 9 percent of data loss incidents. Finally, hardware destruction, which includes damage caused by floods, lightning and fire, accounts for 3 percent of all data loss episodes. The relative magnitudes of the different types of data loss are illustrated in Figure 1.

These data may be mapped to census (“installed base”) data on computers to estimate the number of severe data loss episodes that occur each year. Table 1 reports the results of this mapping, estimating 4.6 million episodes of severe data loss per year. Reflected in these data are significant differences in the incidence of data loss between laptops and desktops. While less than two percent of desktops are likely to experience an episode of data loss each year, the corresponding rate for laptops is greater than ten percent.

The Cost of a Data Loss Incident

An episode of severe data loss will result in one of two outcomes: either the data are recoverable with the assistance of a technical support person, or the data are permanently lost and must be rekeyed.[5] A calculation of the average cost of each data loss incident must take into account both possibilities. The ability to recover data depends on the cause of the data loss episode. The permanent loss or theft of a laptop whose data have no tape backup will result in permanently lost data. In addition, fire or flood damage can also make the possibility of data recovery very remote. For other causes of data loss, data recovery specialists are becoming more adept at restoring inaccessible data.[6] Taking into account all causes of data loss, evidence suggests that in 83 percent of the cases, data may be recovered.[7]

The first cost of data recovery to be considered is that associated with hiring a computer support specialist in the recovery effort. If there is a computer support specialist employed within the company, both the number of hours needed to recover the data and the cost of employing this individual must be taken into account. The most recent information from the Bureau of Labor Statistics indicates that the average computer support specialist earns an estimated $28.10 an hour, including both salary and benefits.[8] The time needed to recover data may vary greatly. If a data backup exists and is readily accessible, the time needed to recover data may be very short. At the other end of the spectrum, if the data are corrupted on the hard drive, several days may be required to retrieve the data.

If the average time needed to recover lost data is approximately six hours, the cost of using an employee to recover lost data is approximately $170. However, if a firm does not employ a specialist who is able to retrieve lost data, the company must go to an outside firm to attempt data recovery. Outside data recovery specialists can be much more expensive than in-house sources, sometimes exceeding two to three times the cost of an in-house specialist. Thus, taking into account that an outside specialist must often be used in data recovery attempts, one can conservatively estimate the minimum cost of outside technical support to recover lost data to be around $340.

During the time in which the attempt to recover data is underway, an individual is unable to access his or her PC, thereby reducing productivity, which in turn impacts company sales and profitability. This opportunity cost–lost productivity due to computer downtime–impacts a company’s income statement just as do other more common and explicit costs. Lost productivity represents missed opportunities for income generation. Some employees are directly involved in sales and revenue production; others are involved in more supportive or indirect roles. Economics teaches that each employee’s productivity, or contribution to firm revenue, can be approximated using the individual’s compensation.[9] Available data sources suggest that individuals who use computers at work earn an average of $36.20 an hour in wages and benefits.[10] Thus, $36.20 an hour for six hours totals approximately $217.[11]

The final cost to be accounted for in a data loss episode is the value of the lost data if the data cannot be retrieved. As noted earlier, this outcome occurs in approximately 17 percent of data loss incidents. The value of the lost data varies widely depending on the incident and, most critically, on the amount of data lost. In some cases the data may be re-keyed in a short period of time, a result that would translate to a relatively low cost of the lost data. In other cases, the lost data may take hundreds of man-hours over several weeks to recover or reconstruct. Such prolonged effort could cost a company thousands, even potentially millions, of dollars.[12] Although it is difficult to precisely measure the intrinsic value of data, and the value of different types of data varies, several sources in the computer literature suggest that 100 megabytes of data are worth approximately $1 million, translating to $10,000 for each megabyte of lost data.[13] Using this figure, and assuming the average data loss incident results in 2 megabytes of lost data, one can calculate that such a loss would cost $20,000. Factoring in the 17 percent probability that the incident would result in permanent data loss, one can further predict that each such data loss would result in a $3,400 expected cost.

Added together, the costs due to technical services, lost productivity, and the value of lost data bring the expected cost for each data loss incident to $3,957. (See Figure 2.) It should be noted that most data loss incidents (approximately 83 percent) result in much lower average costs ($557), but in the smaller portion of cases in which the data are permanently lost, the average costs are estimated to be much higher ($20,557). In addition to highlighting the significant costs involved in re-keying data, these figures reflect the importance that data play in creating value for businesses. Once data are lost, those value-creating opportunities are also lost. These losses are multiplied in a networked environment. A survey conducted in 2001 by Contingency Planning Research reports that the majority of companies estimate the average cost of computer network downtime to exceed $50,000 an hour, and for some companies that figure rises to over $1,000,000 per hour.[14]
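The per-incident arithmetic above can be sketched as follows. This is an illustrative calculation using only figures given in the article; the constant and function names are ours.

```python
# Figures from the article's per-incident cost analysis.
TECH_SUPPORT_COST = 340.0       # outside specialist for ~6 hours of recovery work
HOURLY_COMPENSATION = 36.20     # average wage + benefits of a computer user
DOWNTIME_HOURS = 6              # assumed average recovery time
DATA_VALUE_PER_MB = 10_000.0    # ~$1 million per 100 megabytes
AVG_MB_LOST = 2                 # assumed average loss per incident
P_PERMANENT_LOSS = 0.17         # data unrecoverable in ~17% of cases

def expected_incident_cost() -> float:
    """Expected cost of one severe data loss incident."""
    # Costs incurred whether or not the data are recovered (~$557).
    recoverable = TECH_SUPPORT_COST + HOURLY_COMPENSATION * DOWNTIME_HOURS
    # Value of the data itself, lost only in the permanent-loss cases.
    permanent_penalty = AVG_MB_LOST * DATA_VALUE_PER_MB   # $20,000
    return recoverable + P_PERMANENT_LOSS * permanent_penalty

print(round(expected_incident_cost()))  # -> 3957
```

The split quoted in the text falls out of the same numbers: the 83 percent of recoverable incidents cost roughly $557 each, while the 17 percent with permanent loss cost roughly $557 + $20,000 = $20,557.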

Total Annual US Data Loss Costs

When information on data loss episodes is mapped along with the cost data, an estimate of aggregate data loss may be obtained. This calculation, reported in Table 2, estimates that annual data losses to PCs cost US businesses $18.2 billion.[15] This estimate represents an increase from a 1999 study that estimated the annual cost of lost data to be $11.8 billion.[16] Although it is difficult to measure with precision the cost of lost data, and the analysis is sensitive to the assumptions that underlie its calculations, there are several reasons to believe that $18.2 billion is a conservative estimate. First, that figure does not take into account costs that are difficult to quantify, such as lost sales and reputation damage a firm may experience during an extended period of computer downtime. In addition, research in the field of network economics suggests that extra costs would be incurred if a data loss incident occurs to two or more PCs on a network. Such additional cost is due to the interdependence and reliance that each computer user experiences when working with other computer users. As noted earlier, research on incidents of server downtime suggests that such costs can be significant. Finally, it is important to note that these figures do not include any collateral costs that may be incurred in some instances of data loss, such as when damaged hardware must be replaced.
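The aggregate figure is simply the estimated number of severe episodes multiplied by the expected cost per episode, both taken from the article:

```python
# Reproducing the article's aggregate estimate of annual US data loss costs.
EPISODES_PER_YEAR = 4.6e6   # severe data loss episodes per year (Table 1)
COST_PER_EPISODE = 3_957    # expected cost per incident (Figure 2)

annual_cost = EPISODES_PER_YEAR * COST_PER_EPISODE
print(f"${annual_cost / 1e9:.1f} billion")  # -> $18.2 billion
```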

Trends and Implications

What trends are likely to impact the potential for data loss in the future? Available evidence suggests that PC users are more likely now than ever before to use power surge protectors and virus protection software.[17] In addition, Safeware, a company that specializes in insuring PCs, reports that computer thefts appear to be declining as a percentage of computer loss incidents.[18] This is positive news. However, it is the opinion here that two trends will drive the annual amount of data loss upward: (1) increased reliance on laptops, which are much more likely to suffer episodes of data loss than are PCs, particularly from accidental damage due to dropping; and (2) more data stored in smaller spaces, since hard drive capacity continually increases. Conservative estimates place the rate of data growth at 80 percent per year.[19] Not only is the amount of data increasing, but business reliance on data is also rising.[20]

Implications from this research are clear. Business managers should invest in technologies that can reduce the possibility of data loss. Examples include the use of computer virus software and back-up systems. PCs should be password protected, to reduce the value of a stolen PC to a potential thief. Also serving as a theft deterrent are computer-tracking services which serve as a sort of “LoJack for laptops.”

However, even in the face of strong protection measures, some episodes of data loss will inevitably occur. Plans to deal with such episodes can shorten recovery times. And although back-up protocols are common for server-located data, plans to protect data in a distributed environment are less commonplace.[21] Since the technologies available to back up data are often reasonably priced, cost does not necessarily present a stumbling block to preventing permanent data loss. A simple and essentially zero-cost back-up procedure is to copy data to writable CDs using the software pre-loaded on most PCs. IT staff should hold training sessions on such procedures, because their effectiveness depends on individual users following through. A saying that precedes the advent of the computer is appropriate here: an ounce of prevention is worth a pound of cure.
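A minimal sketch of the kind of individual back-up routine described above might look like the following. This is an illustration, not the article's procedure; the paths in the usage comment are hypothetical, and a local folder stands in for whatever removable media is used.

```python
import shutil
from datetime import date
from pathlib import Path

def backup(src: str, dest_root: str) -> Path:
    """Copy the entire src directory tree into a date-stamped
    folder under dest_root, and return the folder's path."""
    dest = Path(dest_root) / f"backup-{date.today():%Y-%m-%d}"
    # Overwrites a same-day backup if one exists (Python 3.8+).
    shutil.copytree(src, dest, dirs_exist_ok=True)
    return dest

# Hypothetical usage, with a CD/DVD burner mounted at D:/
# backup("C:/Users/alice/Documents", "D:/")
```

The point of the date stamp is that each day's run lands in its own folder, so a user can recover yesterday's version of a file even after today's copy has been made.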

[2] Data, as defined here, are the bytes that reside on personal computers. Data could be more broadly defined as all digital media, but such a consideration is beyond the scope of this study. See Simon Forge, JP Morgenthal, and Richard Ptak, Manager’s Guide to Distributed Environments: From Legacy to Living Systems, (New York: John Wiley & Sons, 1998).

[5] This assumes that the vast majority of computer users are unable to recover from a severe data loss incident without the assistance of a technical support individual. The household equivalent of this assumption would be that the vast majority of households could not recover from a severe plumbing problem without a plumber. This also assumes that the original source information is available in hard copy or some other form from which it can be rekeyed.

[6] The Data Recovery Group reports that they are able to recover inaccessible data in 95% of incidents. See http://www.datarecoverygroup.com, (2003). It is noted that data recovery firms may have an incentive to overestimate their success rates.

[7] An 83 percent recovery rate is reported by Denise Deveau, “Lost all your data? Time to Call the Experts,” The Globe and Mail, February 25, (2000).

[9] The reasoning is fairly intuitive: a company will pay an employee as long as the individual adds to firm profits. The company will stop paying an employee when the revenue generated from that individual is exactly equal to the compensation paid.

[11] It may be claimed that an individual could move on to other tasks, which in turn would only reduce their productivity by a fraction of this amount. However, it is common that the PC user must work closely with the data recovery specialist in the recovery effort. In addition, productivity could be hampered for several days if the computer must be sent to an outside specialist.

[12] A National Computer Security Association (now Trusecure Corporation) survey reported that for an average engineering department it would cost $100,000 to rebuild 20 megabytes of data.

[15] This could be considered the annual data loss estimate for the year 2003. However, although this study aims to utilize the most up-to-date data sources available, some data are from years prior to 2003. Thus, $18.2 billion represents the best estimate of annual data loss, based on the most recent sources available.

IT MATTERS: Digital Indemnity

Identity management systems help create an electronic trail of users and applications for regulators to follow, but better processes must also be instituted.

Regulations such as Sarbanes-Oxley and the Health Insurance Portability and Accountability Act (HIPAA) have significant implications for the CIO, as well as for the CFO and the company’s General Counsel, because they require IT managers to provide internal controls, specifically an audit trail. Certainly Sarbanes-Oxley is primarily financial legislation, enacted in part as a reaction to corporate scandals such as Enron. However, the emphasis on internal controls extends beyond policies, procedures and external audits. In fact, even though the SEC is still defining ‘internal controls,’ it is clear that any public company that uses IT as part of its financial and accounting processes will have to deal with the IT controls that inevitably will be included in these regulatory requirements.

Therefore, for CIOs, the heat is on and is getting warmer! The good news is that there are several immediate steps that can be taken to ensure that business teams and technologies will pass muster. First, custom-developed financial systems are often fraught with potential data-integrity vulnerabilities, so good practice is to ensure that IT control processes include a segregation of duties within the systems development staff. That is, the people who code program changes are not the same people who test them, and both are separate from the team responsible for production change control.

Packaged financial systems are also vulnerable. Many Enterprise Resource Planning (ERP) systems offer audit-trail functionality. However, customizing these systems can impact the built-in IT controls. Therefore, it is critical to make sure that all such customizations do not create audit problems.

Just as important as the technical and process controls are the project management methodologies that IT teams might employ, especially since poor project management is the leading cause of system implementation failure and degradation. Therefore, one way to help ensure that systems meet requirements is to have a sound and successful selection and implementation process for all new or upgraded systems.

Finally, the ways in which a firm stores and transmits electronic documents and the determination of when this data is deleted have significant legal consequences. For this reason, the CIO must work closely with the CFO and legal counsel to create appropriate policies regarding document retention and destruction.

The real complexity comes with legislation such as HIPAA, which requires IT managers to provide an audit trail of access records that usually stretches across multiple users and systems. Complying with the terms of the act may become a real challenge in light of the complexity of modern web services and corporate portals. Corporate portals currently deal primarily with managing access and control for unique users. However, the process of identity management is becoming more complicated as web services and service-oriented architecture become more common. In such an environment, both people and applications are seeking access to controlled data.

For example, in the era of HIPAA, one might find a web service application accessing insurance or health data rather than a specific individual. In this model, the application obtains access and then provides that access to a variety of users. In such an instance, auditing can become a problem. Certainly the application is acting on the user’s behalf, yet it nonetheless adds a layer, or sometimes several layers, of complexity. In some such instances, the user may even be four or five layers removed from the actual transaction, with specific access and control information carried in a domain of trust.
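The auditing difficulty can be made concrete with a small sketch (all names are hypothetical; this is not any particular identity-management product): if each intermediary appends itself to the request’s chain, the audit trail can still tie the access back to the originating user, even several layers removed.

```python
from dataclasses import dataclass, field

@dataclass
class Request:
    """A data request carrying its originating user and the chain
    of intermediaries (portal, web services) it has passed through."""
    user: str
    chain: list = field(default_factory=list)

    def delegate(self, service: str) -> "Request":
        """Return a new request with one more intermediary recorded."""
        return Request(self.user, self.chain + [service])

def audit_record(request: Request, resource: str) -> str:
    """Render one audit-trail line: who touched what, via which layers."""
    path = " -> ".join([request.user] + request.chain)
    return f"{path} accessed {resource}"

# A user four layers removed from the actual data access:
req = (Request("dr_smith")
       .delegate("portal")
       .delegate("claims_service")
       .delegate("records_service")
       .delegate("storage_api"))
print(audit_record(req, "patient_record_123"))
```

Without such a chain, the log at the data store would show only the last application as the actor, which is precisely the compliance gap described here.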

While such access to controlled data may not be a significant issue yet, things are moving in this direction, and the new federated architecture carries with it great potential for complexity. Consequently, standards-based software using the Security Assertion Markup Language (SAML) and the Service Provisioning Markup Language (SPML) has been developed to deal with this complex issue. At present, such software allows increased control over identity management in the web services arena. It should also be noted that this type of software comes at a cost. According to the Gartner Group, it currently costs about $5 to $25 per user to license.1 Furthermore, International Data Corporation (IDC) projects that sales of identity management software will grow at an annual rate of 52%, increasing from $550 million in 2001 to $2 billion in 2006.2

Now that the Securities and Exchange Commission (SEC) has postponed implementing certain sections of the Sarbanes-Oxley Act that pertain to internal auditing, a window of opportunity exists for firms to prepare now for what will follow. In fact, companies have about nine months not only to ensure that the proper processes and procedures are in place, but also to ensure that their systems are ready for the appropriate identity management requirements, both now and in the future.

The European Directive On Data Privacy

This paper was prepared with the assistance of Theodore Borromeo, the Director of Employment Law at Sun Microsystems, Inc. in Palo Alto, California and Stefan Lechler, a Sun attorney located in Munich, Germany. Clicking on the studies referenced at the end of this paper will open a new browser and take you to that source.

Use of the Internet has skyrocketed in recent years. Internet2, one thousand times faster than the existing Internet, is scheduled to be ready for everyday consumer use within five to ten years. With this “Brave New World,” we have entered the age of seamless communications and commerce between individuals and institutions across national boundaries.

Or have we? The first major speed bump on the international Information Superhighway may occur as early as October 25 of this year, the date the European Directive on Data Privacy[1] (the “Directive”) is scheduled to take effect.

As of that date, it will become illegal for businesses within the European Union (EU) to “export” personal data that may be used for commercial purposes to countries that do not adequately protect this information. Even at this late date, it is not clear whether American law “adequately protects” personal information within the meaning of the EU Directive. Significant differences still exist between America and the EU on this issue, differences that could be “a potentially serious barrier to [the] creation of a global free market for commerce on the Internet.”[2] Should the EU determine that America’s protections are insufficient, and should it attempt to enforce its Directive, there could be major repercussions for businesses on both sides.

Examples already exist indicating how Europe might respond. In 1994 Citibank reached a co-branding agreement with the German National Railway for the biggest credit card project in Germany. However, because personal data on millions of German citizens would be processed in the U.S., there was a public outcry, and the German data-protection authorities threatened to prohibit the arrangement unless the two companies could create an acceptable method of protecting the privacy of the German cardholders. It took six months of intense negotiations for the two companies to agree on a contractual arrangement creating a broad array of privacy protections. The result was that Citibank had to significantly change the way it managed customer information.

Last year Sweden’s privacy watchdog required American Airlines to delete health and medical information on Swedish passengers after each flight unless American had obtained “explicit consent” from the Swedish passengers.[3] Under the Swedish order, American was not allowed to send this information to its SABRE central reservation system in the U.S. American has lost twice in the Swedish courts, and the case is now on appeal to Sweden’s Supreme Administrative Court. In the meantime, American cannot export the medical data of Swedish passengers to American’s reservation system in the U.S.[4]

Background on the Directive

The Directive is a comprehensive law enacted to establish common rules for the use of personal data. Adopted by the European Parliament and Council in October 1995, it gave member states of the EU three years to enact national legislation in harmony with its requirements or face having their citizens and businesses excluded from participating in electronic commerce across national boundaries. All EU member states have complied, although admittedly there is still work to be done to harmonize their different approaches.

More to the point for Americans, after October 25th the Directive also restricts any international data flows from Europe to a country outside the EU if that country does not provide an “adequate level of protection” for privacy of data about individuals. According to some analysts, this could mean that such things as the exchange of information from marketing databases that include personal information on customers would be barred, even between subsidiaries of the same international company. Market analysts could no longer send unlimited data about key European individuals to the U.S., and American consultants in the U.S. might not be able to receive a client’s records if those records contain personal information about European customers or employees.

Potentially, human resource records of transnational companies could no longer be centralized, and American auditors might not be able to examine records from Europe if they contain private data about any individual.[5]

While many Americans might, in fact, applaud such privacy protections, these restrictions would definitely impact how American and transnational companies would otherwise do business with Europeans.

General Provisions of the EU Directive

The Directive is a comprehensive law that sets very specific criteria for the use and transfer of personal data, with the objective of protecting the right to privacy in the processing of such data. It requires that any person or organization responsible for processing personal data implement appropriate technical and organizational measures to protect the information from unauthorized disclosure, especially when that processing involves transmission over a network. It further establishes data protection principles and conditions that must be met before personal data may be processed.

To meet the Directive’s requirements, EU member states must ensure that personal data are:

processed fairly and lawfully,

collected for specified, explicit and legitimate purposes,

adequate, relevant and not excessive in relation to the purposes for which they are collected,

accurate and, where necessary, kept up to date, with every reasonable step taken to ensure that data which are inaccurate or incomplete are erased or corrected, and

kept in a form that permits identification of data subjects for no longer than necessary for the purposes for which the data were collected.

There are even more stringent conditions for the processing of “sensitive” data, such as information about racial or ethnic origin.

Furthermore, individuals about whom data are collected must be provided with certain information about the purpose of the processing and given the right to access their personal files and have inaccurate data amended. In fact, individuals may object even to the lawful processing of this information and to its being used for direct marketing purposes.

Those who suffer damages as a result of unlawful processing of personal data are entitled to seek compensation, and sanctions are established for those who do not comply with the Directive’s terms. Any of Europe’s 350 million-plus citizens will be able to file a claim over abuse of personal data, and that claim may be taken all the way to the European Court of Human Rights, one of the highest courts in Europe. During this process, business contracts can be suspended and injunctions can stop the transfer of data to non-EU countries.[6]

Transfers of Personal Data to Non-EU Countries

Although the EU has not yet developed specific criteria as to what “adequacy of protection” means with regard to personal data transferred outside the EU, it has delivered a policy paper to American officials that sets out tentative meanings of the phrase. The EU would determine which countries would be on an approved list for data transfers by considering both the privacy rules in place in a country and the means for enforcing those rules.[7] If a country does not ensure an adequate level of protection, the member states of the EU are required to prevent any transfer of data to the country in question. Under that provision, there can be no free Internet or other data-highway access to personal data in any system within the territory of the EU.

There is a provision, however, whereby data may be transferred to a country which does not ensure an adequate level of protection if certain conditions are met. First, the person must unambiguously give his or her consent to the proposed transfer. Then, the transfer must be necessary for the performance of a contract, or legally required on important public interest grounds, or necessary to protect the vital interests of that person. Thus, transactions such as transferring money to a foreign bank, making hotel reservations or travel arrangements, or concluding an employment or insurance contract with an employer or insurer in a third country would probably be allowed under the new Directive.

The American Response

American law regarding information privacy is sporadic, piecemeal and sector-specific.[8] To the extent that privacy rights exist in the U.S., they are created by a myriad of constitutional doctrines and narrow-purpose federal and state laws.[9] There is no national law comparable to the EU Directive. Even under a liberal interpretation of the EU Directive, it is not clear that the American regulatory scheme qualifies.

In March of 1998, Ira Magaziner, the White House advisor on Internet matters, stated that he is optimistic that the U.S. and the EU will be able to agree on a data privacy policy before the Fall deadline.[10] However, neither the administration nor Congress seems to want national legislation, fearing that such legislation would be a negative influence on the development of Internet commerce.

Instead, Mr. Magaziner has urged American businesses to accelerate their self-policing efforts on privacy issues. In his view, EU officials have indicated that self-regulation would be acceptable, as long as an established system was put in place.

The Clinton administration has taken a hard line on the question of appointing a government privacy watchdog, stating that it does not recognize the validity of that approach. Instead, the Administration prefers to meet European demands through a combination of self-regulation schemes, privacy-friendly business-to-business contracts, and technology-based privacy-protection systems.[11] Vice-President Al Gore recently proposed a broad consumer privacy plan, including a “privacy czar” to coordinate policy, but adoption of such a plan is deemed unlikely.[12]

Self-Regulation Efforts of American Businesses

U.S. businesses are trying to find non-legislative solutions. For example, in December of 1997, a self-regulatory code of conduct for individual reference services, such as Metromail, CDB Infotek, and Lexis-Nexis’s P-Trak, was announced. This code of conduct limits the use and collection of personal information, while relying on independent auditors to monitor compliance.[13]

Another attempt at self-regulation in the private sector occurred in July of 1998 when the Online Privacy Alliance proposed an electronic seal of approval to enforce information collection and use policies.[14] Under this proposal, member companies would post a privacy seal on their Web sites clearly disclosing how they gather and use the marketing data of their online consumers. In addition, member companies would agree to work with the manager of the seal of approval program to resolve complaints. This alliance includes approximately 50 companies doing business on the Internet, including Xerox, Yahoo, AT&T, Microsoft and America Online.[15]

The problem with industry self-regulation is that it is difficult to get universal adherence to privacy protection policies and hard to craft an effective enforcement mechanism.

Possible Regulation by the Federal Trade Commission (FTC)

Although the FTC is still hopeful that self-regulation might work, the head of the agency said that additional government regulation may be necessary if private industry is not able to demonstrate that it has implemented an effective self-regulatory program by the end of the year.[16]

If self-regulation fails, the FTC legislative model to protect data privacy would require all commercial Web sites that collect personal information from or about consumers to: (a) provide consumers notice of their information practices, (b) offer consumers choices as to how their personal information is used beyond the purpose for which the information was provided, (c) offer consumers reasonable access to their information and an opportunity to correct inaccuracies, and (d) take reasonable steps to protect the security and integrity of personal information.[17]

The Reaction of Congress

Congress does not yet appear to take the issue of data privacy seriously. Lawmakers have consistently refused to pass legislation limiting the use of personal data. Even a bill to limit the use of Social Security numbers for identification purposes failed to pass this year.

Currently, there are about eighty privacy bills scheduled to be considered by Congress in 1998, but it appears Congress will not take action until it determines what action the private sector intends to take regarding data privacy.[18]

It is unclear what will happen this October. Both sides of the Atlantic seem ready to fight. The Clinton Administration has said that the United States will take its case to the World Trade Organization if necessary.

The Europeans, on the other hand, appear to be determined to pursue the privacy directive’s goals, and they have suggested that America must take the issue of data privacy much more seriously in order to have electronic access to the European Union’s consumers. The EU has warned that if the United States does not take adequate measures to provide similar data privacy protections as are contained in the European Directive, it would prevent any U.S.-based company from conducting electronic commerce in its member states via the Internet.[19]

It is unlikely that the United States will adopt a comprehensive data protection law like the EU Directive in the next few years. Changes in American policy may well depend on how serious the Europeans are about blocking data transfers to the U.S. If they truly intend to do so, that may focus attention on how private data are handled in the U.S. and provoke a change in approach.

In the meantime, the United States appears to be on a collision course with the EU Directive. Operations could be disrupted, lawsuits could be filed and markets could be lost. Unless a way forward is found quickly, a huge chunk of business between the world’s two biggest economic blocs may hit a major roadblock. At stake is the future of banking, travel, credit card transactions, electronic commerce, and maybe even government business.

Since there has been some activity towards compliance on the American side, no one in Europe wants to talk openly about a trade war, but it appears that America has a long way to go towards protecting data privacy before the EU will be satisfied. The Europeans want America to adopt comprehensive legislation regarding data privacy, complete with a separate governing body to hear and investigate complaints. The Americans so far have responded that they prefer piecemeal legislation and self-regulation because they fear that privacy rules that are too heavy-handed will stifle trade. They believe that voluntary codes of conduct and a “seal of approval” will be more effective, since industry will have an incentive to comply.

After all, American consumers also want to know their data are secure and will refuse to do business with companies that do not adequately protect them. From the American perspective, Europe would be unwise to isolate itself from worldwide Internet commerce by the strict enforcement of their data privacy policy.

“The European Union is launching the biggest privacy gambit in history. If the European plan succeeds, every country on Earth will soon adhere to a global privacy code. If it fails, the United States and Europe could end up in the throes of an ugly trade war over the international transfer of personal information.”[20]

[1] The official name is the “Directive on the Protection of Individuals with Regard to the Processing of Personal Data and on the Free Movement of Such Data,” Official Journal No. L 281, 23.11.95, p. 31. See, generally, Paul M. Schwartz and Joel Reidenberg, Data Privacy Law (MICHIE 1996, ISBN 1-55834-377-6); Susan E. Gindin, As The Cyber-World Turns: The European Union’s Data Protection Directive and Trans-Border Flows of Personal Data, at http://www.info-law.com/eupriv.html; European Union Data Protection Directive (No. 95/46/EC), at http://www.open.gov.uk/dpr/insnet2.htm

[2] James Packard Love, Data Protection Laws Spark U.S. and E.U. Tussle, at http://gbr.pepperdine.edu/983/ (no longer accessible). Another commentator has also noted that the open structure of the Internet appears to be incompatible with these requirements. Frank A. Koch, European Data Protection – Against the Internet at http://www.privacy.org/pi/conference/copenhagen/koch.html (no longer accessible).

[6] Simon Davies, Europe to us: No Privacy, No Trade, on the Phil Agre egroups website under the title “Wired Story on EU Privacy” at http://www.egroups.com/list/rre/ Once at the website, click “date” in the navigation bar, then “previous” to get back to the 5/05 to 7/10 date, then scroll down to “Wired Story.”

[8] Joel R. Reidenberg, The Movement Toward Obligatory Standards for Fair Information Practices in the United States, VISIONS FOR PRIVACY IN THE 21ST CENTURY (ed. Colin Bennet & Rebecca Grant, forthcoming Univ. of Toronto Press), at http://home.sprynet.com/sprynet/reidenberg/oblig_dp.htm. See generally, Data Privacy Law, supra note 2, for a full discussion of the American approach to protecting privacy.

[15] Id. It is instructive to note that this plan came about only after the FTC revealed that only 14% of the Web sites it reviewed made “even a passable attempt” at publishing their information collection practices online.
