The first requisite for success is the ability to apply your physical and mental energies to one problem incessantly without
growing weary.

Thomas Edison (18471931)

Systems architecture can best be thought of as both a process and a discipline to produce efficient and effective information
systems.

It is a process because a set of steps is followed to produce or change the architecture of a system.

It is a discipline because a body of knowledge informs people as to the most effective way to design.

A system is an interconnected set of machines, applications, and network resources. Systems architecture unifies that set
by imposing structure on the system. More importantly, this structure aligns the functionality of the system with the goals
of the business.

The basic purpose of systems architecture is to support the higher layers of the enterprise architecture. In many companies,
the software and hardware represent a significant portion of the enterprise's total assets. It is important that enterprise
architects do not equate their duties with the objects, the applications, or the machines that comprise their domain. The
fundamental purpose is to support and further the business objectives of the enterprise. Hardware and software objects are
fundamentally transient and exist only to further the purposes of the business.

Systems architecture is also used as part of the process of keeping the enterprise architecture aligned with the business
goals and processes of the organization. It is important not only to understand the technical details of the infrastructure and the applications running within it but also to have the knowledge to participate in the process of architectural change with the
enterprise architectural team. That involves the following:

Defining the structure, relationships, views, assumptions, and rationales for the existing systems architecture and the changes
in relationships, views, assumptions, and rationales that are involved in any changes required for moving from what is to
what is desired.

Creation of models, guides, templates, and design standards for use in developing the systems architecture.

Canaxia Brings an Architect on Board

Let us set the stage for further discussion by focusing on Canaxia. As part of the engineering of Canaxia's enterprise architecture, Kello James has brought on board an architect, the first one in Canaxia's history.

We are sitting in on a seminal meeting with some of Canaxia's C-level executives and Myles Standish, the new architect. He
is being introduced to a semi-skeptical audience. It will be Myles's task in this meeting to establish the value to Canaxia
of systems architecture and, by extension, a systems architect. As part of his charter, Myles was given oversight responsibility
for recommended changes in the physical infrastructure and for the application structure of the enterprise. That means he
will be part of the architecture team as Canaxia evolves.

Myles began his remarks by giving a quick rundown of what his research had uncovered about the state of Canaxia's systems architecture. The discussion proceeded as follows:

Myles noted that a large part of Canaxia's fixed assets, like those of most large corporations, is in the applications and
machines that comprise Canaxia's systems architecture. The corporations in Canaxia's business environment that best manage
their systems architectures will thereby gain a competitive advantage.

He noted that the majority of the budget of the information technology (IT) department is swallowed by costs related to Canaxia's
infrastructure. At the present time, most of these costs are fixed. If Canaxia wishes to gain any agility in managing its
information technology resources, it will have to put into its systems architecture the necessary time, effort, and attention
to sharply reduce those costs.

Major business information technology functionality at Canaxia, Myles pointed out, is divided among silo-applications that
were, in some cases, created three decades ago. Here, Myles was interrupted by an executive vice president who asked, "You
are not going to tell us that we have to rewrite or replace these applications, are you? We have tried that." Myles firmly
replied that he had no plans to replace any of the legacy applications. Rather than replace, he was going to break down the
silos and get the applications talking to each other.

The terrorist attack that completely destroyed the World Trade Center in New York City has put disaster recovery on the front
burner for Canaxia. Systems architecture issues are crucial to a disaster recovery plan (DRP). (The systems architecture aspects of disaster recovery planning are discussed later in this chapter.) As part of his presentation,
Myles briefed the business side on the systems architecture section of a disaster recovery plan that had been developed by
the architecture group.

Myles then discussed the issues surrounding storage. He asked the executives if they realized that, at the rate that Canaxia's
data storage costs were rising, by 2006 the cost of storage would equal the current IT budget. The stunned silence from the
executives in the room told Myles that this was new information for the executive team. Myles then said he would also ask
them for their support in controlling those costs and thereby gaining a further competitive advantage for Canaxia.

Data storage and the issues surrounding it are discussed later on in this chapter.

The recent flurry of bankruptcies among large, high-profile companies has led to the imposition of stringent reporting rules
on the top tier of publicly owned companies in the United States. These rules demand almost real-time financial reporting
capabilities from IT departments. Canaxia is one of the corporations subject to the new reporting stringencies. Myles bluntly
told the gathered managers that the current financial system was too inflexible to provide anything more than the current
set of reports and that the fastest those reports can be turned out is monthly. Myles next stated that accelerated financial
reporting would be a high priority on his agenda. This brought nods from the executives.

Myles also brought up the issue that visibility into the state of Canaxia's infrastructure is an important component of a
complete view of the business. Myles intended to build a reporting infrastructure that allowed line managers and upper-level
management to have business intelligence at their disposal so they could ascertain if the system's infrastructure was negatively
impacting important business metrics, such as order fulfillment. In the back of his mind, Myles was certain that Canaxia's
antiquated systems architecture was the root cause of some of the dissatisfaction customers were feeling about the auto manufacturer.
He knew that bringing metrics of the company's infrastructure performance into the existing business intelligence (BI) system was an important component in restoring customer satisfaction levels.

The cumulative impact of thousands of low-level decisions has encumbered Canaxia with an infrastructure that doesn't serve
business process needs and yet devours the majority of the IT budget. In many cases, purchase decisions were made just because
the technology was "neat" and "the latest thing." "Under my tenure that will stop," Myles stated firmly. "We are talking about
far too much money to allow that. Moving forward, how well a project matches the underlying business process and its return on investment (ROI) will determine where the system infrastructure budget goes."

Then the subject of money, the budget issue, came up. "You are not expecting a big increase in budget to accomplish all this,
are you? No money is available for budget increases of any size," Myles was informed. He responded by pointing out that the
system infrastructure of an enterprise should be looked at as an investment: a major investment, an investment whose value should increase with time, not as an object whose value depreciates. The underperforming
areas need to be revamped or replaced. The areas that represent income, growth, or cost-reduction opportunities need to be
the focus.

After the meeting, both Kello and Myles felt greatly encouraged. Both had been involved in enterprise architecture efforts
that had faltered and failed. They both knew that the prime reason for the failure was an inability to engage the C-level
managers in the architecture process and keep them involved. This meeting made it clear that, as far as Canaxia's top management
was concerned, the issues of increased financial reporting, disaster recovery, and cost containment were on the short list
of concerns. Myles and Kello knew that they would have the support they needed to build their architecture. They also knew
that that architecture would have to translate into fulfillment of Canaxia's business concerns.

Architectural Approach to Infrastructure

It is important to take a holistic view of all the parts in a system and all their interactions.

We suggest that the most effective path to understanding a system is to build a catalogue of the interfaces the system exposes or the contracts the system fulfills. Focusing on the contract or the interface makes it much easier to move away from a preoccupation with
machines and programs and move to a focus on the enterprise functionality that the systems must provide. Taken higher up the
architecture stack, the contract should map directly to the business process that the system objects are supposed to enable.
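The contract-first focus can be made concrete with a short sketch. Everything here (the contract name, its methods, and the adapter) is hypothetical, invented for illustration; in Python, a `typing.Protocol` can serve as the catalogued interface:

```python
from typing import Protocol

# A hypothetical contract for one enterprise function. The method
# signatures -- not the machine or program behind them -- are what
# the architecture catalogue records.
class OrderFulfillment(Protocol):
    def place_order(self, customer_id: str, part_no: str, qty: int) -> str: ...
    def order_status(self, order_id: str) -> str: ...

# The catalogue maps each business process to the contract a system
# must fulfill, independent of which application implements it today.
interface_catalogue = {
    "order fulfillment": OrderFulfillment,
}

# Any implementation -- a legacy wrapper or a new service -- satisfies
# the contract as long as it exposes the same methods.
class LegacyMainframeAdapter:
    def place_order(self, customer_id: str, part_no: str, qty: int) -> str:
        return f"ORD-{customer_id}-{part_no}-{qty}"

    def order_status(self, order_id: str) -> str:
        return "SHIPPED"

impl: OrderFulfillment = LegacyMainframeAdapter()
print(impl.place_order("C1", "P9", 2))   # the caller sees only the contract
```

Because the catalogue records only contracts, a legacy adapter and a future replacement service are interchangeable from the caller's point of view.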

When done properly, creating and implementing a new systems architecture or changing an existing one involves managing the
process by which the organization achieves the new architecture as well as managing the assembly of the components that make
up that architecture. It involves managing the disruption to the stakeholders, as well as managing the pieces and parts. The
goal is to improve the process of creating and/or selecting new applications and the process of integrating them into existing
systems. The payoff is to reduce IT operating costs by improving the efficiency of the existing infrastructure.

Additional Systems Architecture Concerns

The following are some considerations to take into account when building a systems architecture to support the enterprise:

The business processes that the applications, machines, and network are supposed to support. This is the primary concern of
the systems architecture for an enterprise. Hence, an architect needs to map all the company's business processes to the applications
and infrastructure that are supposed to support them. This mapping should be taken down to the lowest level of business requirements.
In all likelihood, a 100 percent fit between the business requirements and the implementation will not be possible.
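A hedged sketch of what such a mapping might look like in practice; the process names and application entries below are invented for illustration, and the gaps are exactly the places where the fit falls short of 100 percent:

```python
# Hypothetical mapping of business processes to the applications
# believed to support them. A None entry marks a process with no
# supporting system -- a candidate for architectural attention.
process_map = {
    "take customer order":     "order-entry (mainframe, 1985)",
    "schedule production run": "mfg-control (mainframe)",
    "weekly financial close":  None,   # no system supports this yet
    "track dealer inventory":  "inventory (mainframe, 1985)",
}

# The gaps are where the business requirements and the
# implementation fail to meet.
gaps = [process for process, app in process_map.items() if app is None]
print(gaps)
```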

Any part of the overall architecture that is being compromised. Areas that often suffer are the data and security architectures
and data quality. The data architecture suffers because users don't have the knowledge and expertise to understand the effects
of their changes to procedure. Security architecture suffers because security is often viewed by employees as an unnecessary
nuisance. Data integrity suffers because people are pressured to produce more in less time and the proper cross-checks on
the data are not performed.

The stakeholders in the systems architecture. The people involved, including the following:

The individuals who have created the existing architecture

The people tasked with managing and enhancing that architecture

The people in the business who will be affected positively and negatively by any changes in the current systems architecture

The business's trading partners and other enterprises that have a stake in the effective functioning and continued existence
of this corporation

Any customers dealt with directly over the Internet

The needs and concerns of the stakeholders will have a large impact on what can be attempted in a systems architecture change
and how successful any new architecture will be.

The context that the new system architecture encompasses. In this text, context refers to the global enterprise realities in which systems architects find themselves. If you're a company that is limited
to a single country, language, and currency, your context will probably be relatively simple. Canaxia is a global corporation,
and its stakeholders speak many different languages and live in a variety of time zones; thus its corporate culture is a complex
mix of many regional cultures. This diversity is, on balance, a positive factor that increases Canaxia's competitiveness.

The data with which the systems architecture deals.

See Chapter 11, Data Architecture, for full details about data architecture considerations.

Working with Existing Systems Architectures

Not all companies have had a single person or group with consistent oversight of the business's systems architecture.
This is the situation that Myles found himself in. Canaxia's systems architecture wasn't planned. The company had gone through
years of growth without spending any serious time or effort on the architecture of its applications, network, or machines.
Companies that have not had a systems architecture consistently applied during the growth of the company will probably have
what is known as a stovepipe architecture.

Suboptimal architectures are characterized by a hodgepodge of equipment and software scattered throughout the company's physical
plant. This equipment and software were obtained for short-term, tactical solutions to the corporate problem of the moment.
Interconnectivity was an afterthought at best. Stovepipe architecture is usually the result of large, well-planned projects
that were designed to fill a specific functionality for the enterprise. These systems will quite often be mission critical,
in that the effective functioning of the company is dependent upon them. Normally these systems involve both an application
and the hardware that runs it. The replacement of these systems is usually considered prohibitively expensive. A number of
issues accompany stovepipe architecture:

The systems don't fit well with the business process that they are tasked with assisting. This can be due to any of the following:

The software design process used to build the application imperfectly capturing the business requirements during the design
phase

The inevitable change of business requirements as the business's competitive landscape changes

The monolithic design of most stovepipe applications making it difficult to rationalize discrepancies between business processes
and application functionality.

The data not integrating with the enterprise data model, usually due to vocabulary, data format, or data dictionary issues.

See Chapter 11, Data Architecture, for a more complete discussion of the data architecture problems that can arise among stovepipe applications.

Stovepipe applications definitely not being designed to integrate into a larger system, whether through enterprise application integration (EAI) or supply-chain management.

The difficulty of obtaining the information necessary for business intelligence or business process management.

The marriage of application and hardware/operating system that is part of some stovepipe applications creating a maintenance
and upgrade inflexibility that results in stovepipe applications having a high total cost of ownership (TCO).

Many times it is useful to treat the creation of enterprise systems architecture as a domain-definition problem. In the beginning,
you should divide the problem set so as to exclude the less-critical areas. This will allow you to concentrate your efforts
on critical areas. This can yield additional benefits because smaller problems can sometimes solve themselves or become moot
while you are landing the really big fish. In the modern information services world, the only thing that is constant is change.
In such a world, time can be your ally and patience the tool to reap its benefits.

Systems Architecture Types

Few architects will be given a clean slate and told to create a new systems architecture from scratch. Like most architects,
Myles inherited an existing architecture, as well as the people who are charged with supporting and enhancing it and the full
set of dependencies among the existing systems and the parts of the company that depended on them. It must be emphasized that
the existing personnel infrastructure is an extremely valuable resource.

Following is an enumeration of the systems architecture types, their strengths, their weaknesses, and how to best work with
them.

Legacy Applications

Legacy applications with the following characteristics can be extremely problematic:

A monolithic design. Applications that consist of a series of processes connected in an illogical manner will not play well
with others.

Fixed and inflexible user interfaces. A character-based "green screen" interface is a common example of a legacy user interface
(UI). Interfaces such as these are difficult to replace with browser-based interfaces or to integrate into workflow applications.

Internal, hard-coded data definitions. These data definitions are often specific to the application and don't conform to an
enterprise data model approach. Furthermore, changing them can involve refactoring all downstream applications.

Business rules that are internal and hard-coded. In such a situation, updates to business rules caused by changes in business
processes require inventorying applications to locate all the relevant business rules and refactoring affected components.

Applications that store their own set of user credentials. Application-specific credential stores can block efforts to integrate
the application with technologies to enable single sign-on and identity management.

While many legacy applications have been discarded, quite a few have been carried forward to the present time. The surviving
legacy applications will usually be both vital to the enterprise and considered impossibly difficult and expensive to replace.

This was Canaxia's situation. Much of its IT department involved legacy applications. Several mainframe applications controlled
the car manufacturing process. The enterprise resource planning (ERP) system was from the early 1990s. The inventory and order system, a mainframe system, was first created in 1985. The customer relationship
management (CRM) application was recently deployed and built on modular principles. The legacy financials application for Canaxia was of
more immediate concern.

Canaxia was one of the 945 companies that fell under Securities and Exchange Commission (SEC) Order 4460, so the architecture team knew it had to produce financial reports that were much more detailed than the system
currently produced.

The existing financials would not support these requirements. The architecture team knew that it could take one of the following
approaches:

It could replace the legacy financial application.

It could perform extractions from the legacy application into a data warehouse of the data necessary to satisfy the new reporting
requirements.

Replacing the existing financial application was looked at long and hard. Considerable risk was associated with that course.
The team knew that if it decided to go the replace route, it would have to reduce the risk of failure by costing it out at
a high level. However, the team had other areas that were going to require substantial resources, and it did not want to spend
all its capital on the financial reporting situation.

The data extraction solution would allow the legacy application to keep doing what it has always done, on the schedule it has always followed. The data warehouse approach buys Canaxia the following:

Much greater reporting responsiveness. Using a data warehouse, it is possible to produce an accurate view of the company's
financials on a weekly basis.

More granular level of detail. Granular access to data is one of the prime functions of data warehouses. Reconciliation of
the chart of accounts according to the accounting principles of the enterprise is the prime function of legacy financial applications,
not granular access to data.

Ad hoc querying. The data in the warehouse will be available in multidimensional cubes that users can drill into according
to their needs. This greatly lightens the burden of producing reports.

Accordance with agile principles of only doing what is necessary. You only have to worry about implementing the data warehouse
and creating the applications necessary to produce the desired reports. You can rest assured that the financial reports that
the user community is accustomed to will appear when needed and in the form to which they are accustomed.
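As a rough illustration of the extraction approach, here is a sketch with an in-memory SQLite database standing in for both the legacy financials and the warehouse; every table name, column, and figure is invented:

```python
import sqlite3

# One database stands in for the legacy financial system, another for
# the warehouse. The legacy application itself is never modified.
legacy = sqlite3.connect(":memory:")
legacy.execute("CREATE TABLE ledger (acct TEXT, region TEXT, week INT, amount REAL)")
legacy.executemany("INSERT INTO ledger VALUES (?,?,?,?)", [
    ("4000", "NA", 1, 120.0),
    ("4000", "EU", 1, 80.0),
    ("4000", "NA", 2, 150.0),
])

warehouse = sqlite3.connect(":memory:")
warehouse.execute("CREATE TABLE fact_sales (acct TEXT, region TEXT, week INT, amount REAL)")

# Periodic extraction: the warehouse receives a copy of the data at a
# finer grain than the legacy monthly reports expose.
rows = legacy.execute("SELECT acct, region, week, amount FROM ledger").fetchall()
warehouse.executemany("INSERT INTO fact_sales VALUES (?,?,?,?)", rows)

# Ad hoc query: users drill into the data along any dimension without
# waiting for a scheduled mainframe report run.
weekly = warehouse.execute(
    "SELECT week, SUM(amount) FROM fact_sales GROUP BY week ORDER BY week"
).fetchall()
print(weekly)   # [(1, 200.0), (2, 150.0)]
```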

In terms of the legacy applications that Myles found at Canaxia, he concurred with the decision to leave them intact. In particular,
he was 100 percent behind the plan to leave the legacy financial application intact, and he moved to implement a data warehouse
for Canaxia's financial information.

The data warehouse was not a popular option with management. It was clearly a defensive move and had a low, if not negative,
ROI. At one point in the discussion of the cost of the data warehouse project, the architecture team in general and Myles in
particular were accused of "being gullible and biased toward new technology." This is a valid area to explore any time new
technology is being brought on board. However, the architecture team had done a good job of due diligence and could state
the costs of the project with a high degree of confidence. The scope of the project was kept as small as possible, thereby
substantially decreasing the risk of failure. The alternative move, replacing the legacy application, was substantially more
expensive and carried a higher degree of risk. Also, it was true that bringing the financial data into the enterprise data
model opened the possibility of future synergies that could substantially increase the ROI of the data warehouse.

In general terms, the good news with legacy applications is that they are extremely stable and they
do what they do very well. Physical and data security are usually excellent. Less satisfying is that they are extremely inflexible.
They expect a very specific input and produce a very specific output. Modifications to their behavior usually are a major
task. In many cases, the burden of legacy application support is one of the reasons that so little discretionary money is
available in most IT budgets.

Most architects will have to work with legacy applications. They must concentrate on ways to exploit their strengths by modifying
their behavior and focus on the contracts they fulfill and the interfaces they can expose.

Changes in any one application can have unexpected and unintended consequences to the other applications in an organization.
Such situations mandate that you isolate legacy application functionality to a high degree. At some point, a function will
have been decomposed to the point where you will understand all its inputs and outputs. This is the level at which to externalize
legacy functionality.
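The decomposition idea can be sketched as follows. The legacy routine, its record format, and the facade are all hypothetical, but the pattern is the point: a function whose inputs and outputs are fully understood, wrapped behind a typed interface so callers never see the legacy format:

```python
# Hypothetical legacy routine: its only contact with the world is one
# fixed-width input record and one status code out, so it can be
# isolated cleanly at this boundary.
def legacy_credit_check(record: str) -> str:
    return "OK" if record[0:4] == "GOOD" else "REJ"

# Facade externalizing the decomposed function through a clean
# interface. The legacy record format stays hidden inside.
class CreditCheckFacade:
    def approve(self, customer_rating: str) -> bool:
        raw = f"{customer_rating:<4}"[:4].upper()   # build the legacy record
        return legacy_credit_check(raw) == "OK"

facade = CreditCheckFacade()
print(facade.approve("good"))   # True
```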

Client/Server Architecture

Client/server architecture is based on dividing effort into a client application, which requests data or a service and a server
application, which fulfills those requests. The client and the server can be on the same or different machines. Client/server
architecture filled a definite hole and became popular for a variety of reasons.
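A minimal sketch of this division of labor, assuming nothing about any particular product: the server owns the data and fulfills requests; the client issues them. Both sides run in one process on one machine here purely for illustration:

```python
import socket
import threading

# The server side: owns the data and fulfills requests.
def serve(listener: socket.socket) -> None:
    conn, _ = listener.accept()
    request = conn.recv(1024).decode()
    data = {"price:sedan": "21000", "price:coupe": "24000"}  # a trivial "database"
    conn.sendall(data.get(request, "unknown").encode())
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))        # ephemeral port; could be another host
listener.listen(1)
threading.Thread(target=serve, args=(listener,), daemon=True).start()

# The client side: requests data over the network.
client = socket.socket()
client.connect(listener.getsockname())
client.sendall(b"price:sedan")
reply = client.recv(1024).decode()
client.close()
print(reply)
```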

Access to knowledge was the big driver in the rise of client/server. On the macro level, with mainframe applications, users
were given the data, but they wanted knowledge. On the micro level, it came down to reports, which essentially are a particular
view on paper of data. It was easy to get the standard set of reports from the mainframe. To get a different view of corporate
data could take months for the preparation of the report. Also, reports were run on a standard schedule, not on the user's
schedule.

Cost savings was another big factor in the initial popularity of client/server applications. Mainframe computers cost in the six- to seven-figure range, as do the applications that run on them. It is much cheaper to build or buy something that runs
on the client's desktop.

The rise of fast, cheap network technology also was a factor. Normally the client and the server applications are on separate
machines, connected by some sort of a network. Hence, to be useful, client/server requires a good network. Originally, businesses
constructed local area networks (LANs) to enable file sharing. Client/server architecture was able to utilize these networks.
Enterprise networking has grown to include wide area networks (WANs) and the Internet. Bandwidth has grown from 10 Mbps Ethernet and 4 Mbps token ring networks to 100 Mbps Ethernet, with Gigabit Ethernet starting to make an appearance. While bandwidth
has grown, so has the number of applications chatting on a given network. Network congestion is an ever-present issue for
architects.

Client/server architecture led to the development and marketing of some very powerful desktop applications that have become
an integral part of the corporate world. Spreadsheets and personal databases are two desktop applications that have become
ubiquitous in the modern enterprise.

Client/server application development tapped the contributions of a large number of people who were not programmers. Corporate
employees at all levels developed some rather sophisticated applications without burdening an IT staff.

The initial hype that surrounded the client/server revolution has pretty much died down. It is now a very mature architecture.
The following facts about it have emerged:

The support costs involved in client/server architecture have turned out to be considerable. A large number of PCs had to
be bought and supported. The cost of keeping client/server applications current and of physically distributing new releases
or new applications to all the corporate desktops that required them came as a shock to many IT departments. Client/server
applications are another one of the reasons that such a large portion of most IT budgets is devoted to fixed costs.

Many of the user-built client/server applications were not well designed. In particular, the data models used in the personal
databases were often completely unnormalized. In addition, in a significant number of cases, the data in the desktop data
store were not 100 percent clean.

The huge numbers of these applications and the past trend of decentralization made it extremely difficult for an architect
to locate and inventory them.

The rapid penetration of client/server applications into corporations has been expedited by client/server helper applications,
especially spreadsheets. At all levels of the corporate hierarchy, the spreadsheet is the premier data analysis tool. Client/server
development tools allow the direct integration of spreadsheet functionality into the desktop client. The importance of spreadsheet
functionality must be taken into account when architecting replacements for client/server applications. In most cases, if
the user is expecting spreadsheet capabilities in an application, it will have to be in any replacement.

In regard to client/server applications, the Canaxia architecture team was in the same position as most architects of large
corporations in the following ways:

Canaxia had a large number of client/server applications to be supported. Furthermore, the applications would have to be supported
for many years into the future.

Canaxia was still doing significant amounts of client/server application development. Some were just enhancements to existing
applications, but several proposals focused on developing entirely new client/server applications.

One of the problems that Myles has with client/server architecture is that it is basically one-to-one: a particular application
on a particular desktop talking to a particular server, usually a database server. This architecture does not provide for
groups of corporate resources to access the same application at the same time. The architecture team wanted to move the corporation
in the direction of distributed applications to gain the cost savings and scalability that they provided.

The architecture team knew that it had to get some sort of grip on Canaxia's client/server applications. It knew that several
approaches could be taken:

Ignore them. This is the recommended approach for those applications that have the following characteristics:

Are complex and would be difficult and/or expensive to move to thin-client architecture.

Have a small user base, usually meaning the application is not an IT priority.

Are stable, that is, don't require much maintenance or a steady stream of updates, usually meaning that support of this application
is not a noticeable drain on the budget.

Involve spreadsheet functionality. Attempting to produce even a small subset of a spreadsheet's power in a thin client would
be a very difficult and expensive project. In this case, we suggest accepting the fact that the rich graphical user interface
(GUI) benefits that the spreadsheet brings to the table are the optimal solution.

Put the application on a server and turn the users' PCs into dumb terminals. Microsoft Windows Server, version 2000 and later,
is capable of operating in multi-user mode. This allows the IT department to run Windows applications on the server by just
providing screen updates to individual machines attached to it. Putting the application on a server makes the job of managing
client/server applications several orders of magnitude easier.

Extract the functionality of the application into a series of interfaces. Turn the client/server application into a set of
components. Some components can be incorporated into other applications. If the client/server application has been broken
into pieces, it can be replaced a piece at a time, or the bits that are the biggest drain on your budget can be replaced,
leaving the rest still functional.
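One hedged sketch of this piecewise replacement: each function of the old application becomes a named component behind a simple registry, so a single piece can be swapped out while the rest stays on the legacy code path. All names and prices below are invented:

```python
# Each function of the old client/server application is exposed as a
# named component. Callers go through the registry, not the monolith.
def legacy_quote(model: str) -> float:   # the old code path, still intact
    return {"sedan": 21000.0}.get(model, 0.0)

def new_quote(model: str) -> float:      # the rewritten piece
    return {"sedan": 20500.0}.get(model, 0.0)

registry = {
    "quote":        new_quote,      # this piece has been replaced
    "quote_legacy": legacy_quote,   # kept until all callers migrate
}

def call(component: str, *args):
    return registry[component](*args)

print(call("quote", "sedan"))
```

Because callers depend only on component names, replacing the biggest budget drains one piece at a time leaves everything else functional.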

Service-Oriented Architecture (SOA) is a formalized method of integrating applications into an enterprise architecture. For more information on SOA, see Chapter 3, Service-Oriented Architecture.

Client/server applications are still the preferred solution in many situations. If the application requires a complex GUI, such as that produced by Visual Basic, PowerBuilder, or a Java Swing application or applet, many things can be done that
are impossible to duplicate in a browser application, and they are faster and easier to produce than in HTML. Applications
that perform intensive computation are excellent candidates. You do not want to burden your server with calculations or use
up your network bandwidth pumping images to a client. It is far better to give access to the data and let the CPU on the desktop
do the work. Applications that involve spreadsheet functionality are best done in client/server architecture. Excel exposes
all its functionality in the form of Component Object Model (COM) components. These components can be transparently and easily incorporated into VB or Microsoft Foundation Classes (MFC) applications. In any case, all new client/server development should be done using interfaces to the functionality rather
than monolithic applications.

Move them to thin-client architecture. At this time, this usually means creating browser-based applications to replace them.
This is the ideal approach to those client/server applications that have an important impact on the corporate data model.
It has the important side effect of helping to centralize the data resources contained in the applications.

The Canaxia architecture team decided to take a multistepped approach to its library of client/server applications.

About a third of the applications clearly fell in the "leave them be" category. They would be supported but not enhanced.

Between 10 and 15 percent of the applications would no longer be supported.

The remaining applications were candidates for replacement with a thin-client architecture.

One matter that is often overlooked in client/server discussions is the enterprise data sequestered in the desktop databases
scattered throughout a business. One can choose among several approaches when dealing with the important corporate data contained
in user data stores:

Extract it from the desktop and move it into one of the corporate databases. Users then obtain access via applications supported by Open Database Connectivity (ODBC) or Java Database Connectivity (JDBC).

For the client/server applications that are moved into the browser, extraction may be necessary.

While client/server architecture is not a good fit for many current distributed applications, it still has its place in the modern
corporate IT world. We are staunch advocates of using interfaces rather than monolithic applications. As we have recommended
before, concentrate on the biggest and most immediate of your problems.

Thin-Client Architecture

Thin-client architecture is one popular approach to decoupling presentation from business logic and data. Originally, thin clients were a rehash of time-share computing, with a browser replacing the dumb terminal. As the architecture has matured, the client has taken on more responsibility in some cases.

As noted, client/server applications have substantial hidden costs in the form of the effort involved to maintain and upgrade
them. Dissatisfaction with these hidden costs was one of the motivating factors that led to the concept of a "thin client."
With thin-client systems architecture, all work is performed on a server.

One of the really attractive parts of thin-client architectures is that the client software component is provided by a third
party and requires no expense or effort on the part of the business. The server, in this case a Web server, serves up content
to users' browsers and gathers responses from them. All applications run on either the Web server or on dedicated application
servers. All the data are on machines on the business's network.

Thin-client architecture has several problematic areas:

A thin-client architecture can produce a lot of network traffic. When the connection is over the Internet, particularly via
a modem, round-trips from the client to the server can become unacceptably long.

The architecture of the Web was initially designed to support static pages over a stateless protocol. When the next request
comes in, the server retains no memory of the client's browsing history, so any application state must be managed explicitly.
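One common way servers work around this statelessness is to issue each client a session token (typically carried in a cookie) and keep per-client state on the server keyed by that token. The sketch below simulates the idea with an in-memory map; the names and the "cart" data are illustrative, not from the text.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Illustrative sketch: because the server retains nothing between requests,
// it re-associates each request with its client via a session token.
public class SessionDemo {
    // server-side session store: token -> per-client state
    static Map<String, Map<String, String>> sessions = new HashMap<>();

    static String newSession() {
        String token = UUID.randomUUID().toString();
        sessions.put(token, new HashMap<>());
        return token;
    }

    public static void main(String[] args) {
        String token = newSession();                 // issued on the first request
        sessions.get(token).put("cart", "3 items");  // state saved between requests

        // a later request presents the same token, restoring the client's state
        System.out.println(sessions.get(token).get("cart"));
    }
}
```

This is also why the text later notes that session objects consume server resources: every active client holds an entry in a store like this one.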

Control of which browser is used to access the thin-client application is sometimes outside the control of the team developing
the application. If the application is only used within a corporation, the browser to be used can be mandated by the IT department.
If the application is accessed by outside parties or over the Internet, a wide spectrum of browsers may have to be supported.
Your choices in this situation are to give up functionality by programming to the lowest common denominator or to give up
audience by developing only for modern browsers.

Users demand rich, interactive applications, but HTML alone is insufficient to build rich GUIs. When developing thin-client applications, the development team must strive to provide just enough functionality
for users to accomplish what they want and no more. Agile methods take the correct approach to this problem and do no more
than what clients request.

Data must be validated. The data input by the user can be validated on the client and, if there are problems with the page,
the user is alerted by a pop-up. The data also can be sent to the server and validated there. If problems arise, the page
is resent to the user with a message identifying where the problems are.
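Server-side validation of the kind described above can be sketched as a routine that checks the submitted fields and collects messages for the re-sent page. The field names and rules here are hypothetical, chosen only for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical server-side validation sketch: submitted form fields are
// checked and, on failure, a list of messages is produced so the page can
// be re-sent to the user with the problems identified.
public class OrderValidator {
    static List<String> validate(String email, String quantity) {
        List<String> errors = new ArrayList<>();
        if (!email.contains("@")) {
            errors.add("email: not a valid address");
        }
        try {
            if (Integer.parseInt(quantity) <= 0) {
                errors.add("quantity: must be positive");
            }
        } catch (NumberFormatException e) {
            errors.add("quantity: not a number");
        }
        return errors;
    }

    public static void main(String[] args) {
        // two bad fields -> two messages for the re-sent page
        System.out.println(validate("user#example.com", "abc"));
    }
}
```

The same rules are often duplicated in client-side JavaScript to save a round-trip, but the server-side check remains authoritative because client input can never be trusted.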

The most common Web development tools are little more than enhanced text editors. More advanced Web rapid application development (RAD) tools exist, such as DreamWeaver and FrontPage; however, they all fall short on some level, usually because
they only automate the production of HTML and give little or no help with JavaScript or with the development of server-side
HTML generators such as JavaServer Pages (JSPs) and Java servlets.

Server resources can be stretched thin. A single Web server can handle a surprising number of connections when all it is serving
up are static Web pages. When the session becomes interactive, the resources consumed by a client can rise dramatically.

The keys to strong thin-client systems architecture lie in application design. For example, application issues such as the
amount of information carried in session objects, the cost of opening database connections, and the time spent in SQL queries can
have a big impact on the performance of a thin-client architecture. If the Web browser is running on a desktop machine, it
may be possible to greatly speed up performance by enlisting the computing power of the client machine.

Data analysis, especially the type that involves graphing, can heavily burden the network and the server. Each new request
for analysis involves a network trip to send the request to the server. Then the server may have to get the data, usually
from the machine hosting the database. Once the data are in hand, CPU-intensive processing steps are required to create the
graph. All these operations take lots of server time and resources. Then the new graph has to be sent over the network, usually
as a fairly bulky object, such as a GIF or JPEG. If the data and an application to analyze and display the data can be sent
over the wire, the user can play with the data to his or her heart's content and the server can be left to service other client
requests. This can be thought of as a "semithin-client" architecture.

Mobile phones and personal digital assistants (PDAs) can also be considered thin clients. They have browsers that allow interaction between these mobile devices and systems within the enterprise.

The rise of the browser client has created the ability and the expectation that businesses expose a single, unified face to
external and internal users. Inevitably, this will mean bringing the business's legacy applications into the thin-client architecture.
Integrating legacy applications into thin-client architecture can be easy or it can be very difficult. Those legacy applications
that exhibit the problematic areas discussed previously can be quite challenging. Those that are fully transactional and are
built on relational databases can often be an easier port than dealing with a client/server application.

Exposing the functionality of the legacy systems via interfaces and object wrappering is the essential first step. Adapting
the asynchronous, batch-processing model of the mainframe to the attention span of the average Internet user can be difficult.
We discuss this in greater detail later in this chapter.

Using Systems Architecture to Enhance System Value

As software and hardware systems age, they evolve as maintenance and enhancement activities change them. Maintenance and enhancement
can be used as an opportunity to increase the value of a system to the enterprise. The key to enhancing the value of systems
architecture via maintenance and enhancement is to use the changes as an opportunity to modify stovepipe applications into
reusable applications and components. This will make your system more agile because you will then be free to modify and/or
replace just a few components rather than the entire application. In addition, it will tend to "future proof" the application
because the greater flexibility offered by components will greatly increase the chances that existing software will be able
to fulfill business requirements as they change.

The best place to begin the process is by understanding all the contracts that the stovepipe fulfills. By contracts, we mean all the agreements that the stovepipe makes with applications that wish to use it. Then, as part of the enhancement
or maintenance, the stovepipe externalizes the contract via an interface. As this process is repeated with a monolithic legacy
application, it begins to function as a set of components. For example, some tools consume COBOL copybooks and produce an
"object wrapper" for the modules they describe.

Messaging is a useful technology for integrating legacy functions into your systems architecture. With messaging,
the information required for the function call is put on the wire, and the calling application either waits for a reply (pseudosynchronous)
or goes about its business (asynchronous). Messaging is very powerful in that it completely abstracts the caller from the called function. The language in which the server program is written, the actual data types
it uses, and even the physical machine on which it runs are immaterial to the client. With asynchronous messaging calls,
the server program can even be offline when the request is made; it will fulfill the request when it returns to service. Asynchronous
function calls are very attractive when accessing legacy applications because they allow the legacy application to fulfill
requests in the mode with which it is most comfortable: batch mode.
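The asynchronous pattern can be sketched with an in-process queue standing in for real messaging middleware. This is a minimal illustration under that assumption; the message format and thread-based "server" are hypothetical.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal asynchronous-messaging sketch: the caller puts a request on the
// "wire" and goes about its business; the server drains the queue later,
// much as a legacy batch application would work through its backlog.
public class MessagingDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        // caller enqueues the request and does not wait for a reply
        queue.put("POST-INVOICE:42");

        // later, the "legacy server" comes online and works the backlog
        Thread server = new Thread(() -> {
            String request = queue.poll();
            System.out.println("processed " + request);
        });
        server.start();
        server.join();
    }
}
```

Real message-oriented middleware adds persistence and delivery guarantees on top of this, so the request survives even if the server is offline for hours.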

To be of the greatest utility, object wrappers should expose a logical set of the underlying functions. We think you will find that exposing every function that a monolithic application has is a waste
of time and effort. The functionality to be exposed should be grouped into logical units of work, and that unit of work is
what should be exposed.

What is the best way to expose this functionality? When creating an object wrapper, the major consideration should be the
manner in which your systems architecture plan indicates that this interface or contract will be utilized. The effort should
be made to minimize the number of network trips.

Wrappering depends on distributed object technology for its effectiveness. Distributed objects are available under Common
Object Request Broker Architecture (CORBA), Java, .Net, and Message-Oriented Middleware (MOM) technology. Distributed objects rely upon an effective network (Internet, intranet, or WAN) to operate effectively.

In addition to increasing the value of a stovepipe application to other segments of the enterprise, exposing the components
of that application provides the following opportunities:

The possibility of sunsetting stagnant or obsolete routines and incrementally replacing them with components that make
more economic sense for the enterprise.

The possibility of plugging in an existing component that is cheaper to maintain in place of the legacy application routine.

Outsourcing functionality to an application services provider (ASP).

If the interface is properly designed, any application accessing it will be kept totally ignorant of how the business
contract is being carried out. It is therefore extremely important to design these interfaces properly. The Canaxia architecture group
has an ongoing project to define interfaces for legacy applications and important client/server applications that were written
in-house.

The best place to start when seeking an understanding of the interfaces that can be derived from an application is not the
source code but the user manuals. Another good source of knowledge of a legacy application's contracts is the application's
customers, its users, who usually know exactly what the application is supposed to do and the business rules surrounding its
operation. Useful documentation for the application is occasionally available. Examining the source code has to be done at
some point, but the more information gathered beforehand, the faster and easier the job of deciphering the source will be.