EA Deliverable: Architecture Strategy: Portability (sample)

To enable ABCModel2 Company to purchase packaged software that meets its business needs, the architecture must address portability, interoperability, and integration.

Portability is often understood simply as the ability to run applications on different platforms, ideally without change. That is important, but portability is equally about data, people, and skills. Portability is valuable to the business in order:

to have the freedom to change hardware and software and yet protect the investment in application software (see the business model and the business pressures);

to have flexibility in “right-sizing”, “up-sizing”, or adjusting all parts of the system to meet the changing business needs;

to allow the same application to be used across several operating companies, each of which may have chosen different systems;

to be able to buy and use “shrink-wrapped” application packages;

to allow copies of data to be readily transferred and reused; and

to shorten the learning curve for developers and users when deploying new applications.

These items are all in support of the business needs and the IS drivers documented in the Business Model.

To achieve portability, interfaces must be standard across systems. To port applications, IS will require standard interfaces in addition to standard languages:

User Interfaces -- the architecture adopts the Microsoft Visual Design guide and Microsoft Windows. In the future, it may be necessary to provide for UNIX workstations. Human factors design (not addressed in the EWAS study) will increase productivity and allow users to move between functions more easily without having to learn multiple navigation techniques.

Network Independence -- to access multiple platforms, TCP/IP is the accepted standard. However, HLLAPI and CPI-C will be incorporated into the architecture to support access to the mainframe. The Wide Area Network (WAN) will support many protocols including all the protocols identified in the Enterprise Networking project.

Platform Independence -- to provide applications that run on multiple hardware devices, a common system development environment is required. However, the Technical Architecture (TA) will be simplified by using NT for all development servers, and MS Windows 95 for all clients. (It is important to understand that the server hardware and software must be sized when the application is designed.)

Database Independence -- to access multiple relational and flat-file systems, data interface standards are required. To port data, the architecture addresses transformation of data from incompatible data structures. SAG’s CLI and Microsoft’s ODBC will also be used. In addition, for applications that require complex or high-volume transactions, XA-compliant transaction monitors will be employed. Messaging, through OLE and COM, will be supported, along with Extended MAPI (the Messaging Application Programming Interface) for mail-enabled applications.
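The buffering role that ODBC and the SAG CLI play can be illustrated with a modern analog: an application written against a generic driver interface rather than a vendor dialect. The sketch below uses Python's DB-API with sqlite3 standing in for an ODBC driver; the table and query are hypothetical, not from the EWAS deliverables.

```python
import sqlite3

def high_value_accounts(conn, threshold):
    """Query through a generic driver interface, never a vendor dialect.

    Any DB-API connection can be passed in (sqlite3 here; a pyodbc
    connection to another RDBMS would look identical) -- the driver does
    the translation, which is the buffering role ODBC plays in the TA.
    """
    cur = conn.cursor()
    cur.execute(
        "SELECT name FROM accounts WHERE balance >= ? ORDER BY name",
        (threshold,),
    )
    return [row[0] for row in cur.fetchall()]

# Demonstration against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance REAL)")
conn.executemany(
    "INSERT INTO accounts VALUES (?, ?)",
    [("Acme", 1200.0), ("Globex", 80.0), ("Initech", 500.0)],
)
names = high_value_accounts(conn, 100.0)
```

Because the function never mentions a specific RDBMS, swapping the data source means swapping the connection object, not the application code.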

Standardization is rarely complete, and portability is never perfect. The TA will aim high, and the team will pragmatically implement proprietary standards where necessary. The level of standardization at each layer of the ISO model determines how much portability can be achieved and how easily; unfortunately, the higher the layer, the less standardization exists. The architecture will therefore concentrate on Enterprise-Wide Application Architecture standards at the higher layers of the ISO model.

The System Development Environment (SDE) must support open interfaces that let its data and services interoperate with other software systems. ABCModel2 Company will need to interface with:

Legacy systems

Bar-code readers and scanners

Application programs -- electronic mail, spreadsheets, etc.

The TA will use an object-oriented approach to facilitate integration. This approach allows the SDE project team to create methods and events that are used to access other applications, even when the target application system is not object-oriented.
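The idea of wrapping a non-object-oriented target behind methods and events can be sketched as follows; every class, function, and message format here is hypothetical, chosen only to illustrate the pattern.

```python
def legacy_submit_order(item, qty):
    """Stand-in for a procedural legacy-system call (hypothetical)."""
    return f"OK:{item}:{qty}"

class OrderService:
    """Object wrapper exposing methods and events over the legacy call."""

    def __init__(self):
        self._listeners = []

    def on_confirmed(self, callback):
        """Register an event handler, fired after a successful submit."""
        self._listeners.append(callback)

    def place_order(self, item, qty):
        """Method that delegates to the non-OO legacy interface."""
        reply = legacy_submit_order(item, qty)
        if reply.startswith("OK"):
            for listener in self._listeners:
                listener(item, qty)
        return reply

# Usage: client code sees only objects, methods, and events.
confirmations = []
svc = OrderService()
svc.on_confirmed(lambda item, qty: confirmations.append((item, qty)))
reply = svc.place_order("widget", 3)
```

Client applications program against the wrapper's methods and events; the legacy call behind it can later be replaced without touching them.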

Object Approach

Object-oriented programming languages have not fulfilled the promise of a world of reusable business objects to aid software development and system deployment. NextStep, Forte, Dynasty, and IBM provide environments that extend the programming languages, while Microsoft’s strategy is to use components.

Microsoft suggests that there are two areas of object technology:

Object-oriented programming languages:

“useful for building self-contained, custom applications by creating object definitions in the form of source code.” In theory, these language-based object definitions can be shared and reused in other applications.

Languages do not provide the means to separate applications and integrate them with other custom applications. Language objects are often language-dependent, and it may be difficult to use objects from another vendor or language. Some vendors provide reusable objects, but many more provide only components.

Object-enabling system software:

Object-enabling technology is incorporated into the system software; developers then design applications to use the extended capability.

The figure above shows how two of the Enterprise Applications would be accessed using an object-oriented approach. A key feature of the COM model is its ability to facilitate communication across the network. All OLE2 applications developed will be able to take advantage of distributed objects in the future with minimal modification.

The EWAS team’s long-term strategy is to use Microsoft’s COM and OLE2. Therefore, what is designed today must be movable to COM and OLE2 with minimal effort. Microsoft, Digital, and Candle are working on COM/CORBA interoperability.

By using these strategies, developers will be able to build and deploy strategic applications rapidly.

ABCModel2 Company may plan and develop its own system development environment using Powerbuilder, VBX or OCX controls, and its own team-development enterprise environment.

Departmental Systems Approach

Simple departmental systems may use Visual Basic and a two-tier architecture with SQL Server. Again, Powerbuilder may be substituted for Visual Basic; however, the rationale is that Visual Basic is becoming pervasive at the user level. Once users become accustomed to VB across the office products, it will take less expertise to build sophisticated GUI programs in VB than in Powerbuilder. In addition, if IS uses VB as well, IS can set up advanced training for users of Visual Basic. The theory is that IS becomes a service provider and partner to the users. (It is important to remember that this requires a very robust infrastructure, as described previously.)

The TA standards create very favorable conditions for developing portable applications. They also enable greater compatibility between application systems.

ODBC provides a portability layer that buffers the application from the database. The standards discourage accessing the database directly. The application formulates queries in ODBC SQL, and the driver converts this dialect into statements that the target RDBMS understands. ODBC 2.0 reduces the burden on overall system performance by providing caching capabilities. ODBC provides two ways of submitting an SQL statement:

Direct execution -- used for statements that are executed a single time. The application formulates the SQL statement and sends it to the RDBMS.

Prepared execution -- useful if the application will be executed several times or the application needs information about a result set prior to the execution of the statement. Under prepared execution, upon receiving the SQLPrepare function, the data source will compile the statement, produce an access plan, and return the access plan to the driver. The data source will then use this plan when it receives an SQLExecute statement.
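The two submission styles have close analogs in most driver interfaces. As a sketch only (this is Python's DB-API with sqlite3 standing in for the data source, not ODBC itself), direct execution maps to sending a fully formulated statement once, while prepared execution maps to a parameterized statement compiled once and reused; the table is hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parts (sku TEXT, qty INTEGER)")

# Direct execution: the full statement is formulated and sent once.
conn.execute("INSERT INTO parts VALUES ('A-100', 5)")

# Prepared execution: one parameterized statement, compiled once by the
# data source and reused for every parameter set (cf. SQLPrepare
# followed by repeated SQLExecute calls).
conn.executemany(
    "INSERT INTO parts VALUES (?, ?)",
    [("B-200", 2), ("C-300", 9)],
)

total = conn.execute("SELECT SUM(qty) FROM parts").fetchone()[0]
```

For statements run many times, the prepared form saves the data source from re-parsing and re-planning on every call.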

Stored procedures in the database can be accessed, or Call Level Interface (CLI) can be programmed adding to the functionality (see the database workshop to understand how this is accomplished).

The application must ensure the integrity of mission-critical data in a transactional system. Integrity, in terms of transaction processing, means that transactions observe the ACID properties:

Atomicity -- all components of a transaction must succeed or fail as a unit.

Consistency -- the actions performed by a transaction must take data from one consistent state to another.

Isolation -- transactions performed simultaneously must not interfere with each other.

Durability -- once a transaction is committed, it is permanent; subsequent failures of the system must not result in the loss of data.
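Atomicity, the first of these properties, can be shown in a minimal sketch: a funds transfer either fully commits or fully rolls back. The table and account names are hypothetical, and sqlite3 stands in for the mission-critical RDBMS.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE balances (acct TEXT PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO balances VALUES (?, ?)",
                 [("checking", 100.0), ("savings", 50.0)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Both the debit and the credit succeed, or neither does."""
    try:
        conn.execute(
            "UPDATE balances SET amount = amount - ? WHERE acct = ?",
            (amount, src))
        row = conn.execute(
            "SELECT amount FROM balances WHERE acct = ?", (src,)).fetchone()
        if row[0] < 0:
            raise ValueError("insufficient funds")
        conn.execute(
            "UPDATE balances SET amount = amount + ? WHERE acct = ?",
            (amount, dst))
        conn.commit()
        return True
    except Exception:
        conn.rollback()  # undo the partial debit: atomicity preserved
        return False

ok = transfer(conn, "checking", "savings", 30.0)    # succeeds, commits
bad = transfer(conn, "checking", "savings", 500.0)  # fails, rolls back
checking = conn.execute(
    "SELECT amount FROM balances WHERE acct = 'checking'").fetchone()[0]
```

After the failed transfer the debit has been rolled back, so the checking balance reflects only the committed transaction.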

In a distributed environment, the ACID properties are difficult to achieve. Multiple RDBMSs and recoverable queues may all be required to act on behalf of a single transaction, and these resource managers must act in concert when participating in a distributed transaction.

Consistency is the only property that remains the responsibility of the programmer; everything else is under the control of the TP monitor. Most transaction monitors use a two-phase commit protocol to coordinate the resource managers, although asynchronous transactions through a queuing mechanism are also used.

When a transaction aborts, the monitor must roll back both the database information and the data in the program, which allows for reliable, atomic operations with simplified error and restart handling. For complex applications, the TA recommends using an XA-compliant transaction monitor, discussed in further detail later.
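The coordination a TP monitor performs can be sketched as a toy two-phase commit: the coordinator asks every resource manager to prepare (vote), and commits only if all vote yes; any no vote aborts the transaction everywhere. All class and method names here are illustrative, not a real TP-monitor API.

```python
class ResourceManager:
    """Toy participant (e.g. an RDBMS or a recoverable queue)."""

    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy
        self.state = "idle"

    def prepare(self):
        """Phase 1: vote yes only if the work can be made durable."""
        self.state = "prepared" if self.healthy else "aborted"
        return self.healthy

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "rolled-back"

def two_phase_commit(managers):
    """Phase 1 collects votes; phase 2 commits all or rolls back all."""
    if all(rm.prepare() for rm in managers):
        for rm in managers:
            rm.commit()
        return "committed"
    for rm in managers:
        rm.rollback()
    return "rolled-back"

ok = two_phase_commit([ResourceManager("db1"), ResourceManager("queue")])
bad = two_phase_commit([ResourceManager("db1"),
                        ResourceManager("db2", healthy=False)])
```

A real monitor also logs each phase durably so it can resolve in-doubt participants after a crash; that bookkeeping is omitted here.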

Load Balancing and Failover

The services must stay up and running. In a client/server environment, there is rarely a time when 100% of all the elements are available. The entire system, from end-to-end, must be highly reliable.

Load Balance

Load balancing enables application resources to be used efficiently. Application servers may be installed where needed in the network, and the environment should allow replicated services. The load balancing must be highly flexible, allowing powerful servers to take on more of the load.
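One simple way to let a powerful server take on proportionally more of the load is weighted round-robin dispatch; the sketch below is illustrative only, and the server names and weights are hypothetical.

```python
import itertools

def weighted_round_robin(servers):
    """servers: list of (name, weight) pairs.

    Expands each server into `weight` slots and cycles through them, so
    a server with weight 2 receives twice the requests of a weight-1
    server.
    """
    slots = [name for name, weight in servers for _ in range(weight)]
    return itertools.cycle(slots)

dispatch = weighted_round_robin([("big-server", 2), ("small-server", 1)])
first_six = [next(dispatch) for _ in range(6)]
```

In a replicated-service environment the weights would come from server sizing, and the dispatcher would sit in the middleware rather than in the client.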

Failover

Replication should be used to have physical nodes as backup servers. If a primary partition fails, the requests must be re-routed to the available server.

The same physical node should be able to have replicated services as well, thereby taking care of software failures.
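The re-routing rule above can be sketched as a client-side failover loop (all names are hypothetical): try the primary, then each replica in order, until one responds.

```python
def route(request, servers):
    """Send the request to the first available replica.

    `servers` is an ordered list of (name, handler) pairs, primary
    first; a handler raises ConnectionError when its node is down.
    """
    for name, handler in servers:
        try:
            return name, handler(request)
        except ConnectionError:
            continue  # this node failed -- re-route to the next replica
    raise ConnectionError("no replica available")

def down(request):
    raise ConnectionError("node unreachable")

def up(request):
    return f"handled:{request}"

served_by, reply = route("get-balance", [("primary", down), ("replica", up)])
```

The same loop covers both cases described above: a failed physical node and a failed service replicated on the same node.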

Flexibility: An overriding design principle

According to Stage 1, ABCModel2 Company is under increasing pressure to adapt and improve. The desire of operating companies and end-users to develop their own localized solutions carries with it the danger that the architectural coherence will be lost. This will result in future constraints on the business.

Where the future flexibility of the architecture is of vital importance, this can best be achieved by concentrating on standardization of the interfaces rather than the products.

A development and execution environment which makes it easy to build flexibly is desired. This kind of environment will allow applications to be:

built rapidly,

modified easily, and

connected to other systems.

This environment will also:

allow new technical architectures to be evolved, and

involve users directly in the design and development in order to ensure maximum usability.

Business Messages and Queuing Middleware

Business messages must be designed to provide flexibility while minimizing network traffic. Scalability should be built into all applications. Queuing middleware will allow asynchronous transactions with ACID capability: when a link is down, a transaction fails, or other problems arise, the queues will retain the data.

Messages and Queuing allow business components to be shared when the interfaces are well defined.
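The store-and-forward behavior described above can be sketched as a toy queue (class and message names are illustrative, not a real middleware API): while the link is down, messages stay in the queue; once it is restored, they drain in order.

```python
from collections import deque

class StoreAndForwardQueue:
    """Retains business messages until the downstream link accepts them."""

    def __init__(self, send):
        self.send = send      # callable; raises ConnectionError if link down
        self.pending = deque()

    def put(self, message):
        self.pending.append(message)
        self.flush()

    def flush(self):
        while self.pending:
            try:
                self.send(self.pending[0])
            except ConnectionError:
                return              # link down: retain the data, retry later
            self.pending.popleft()  # delivered: safe to discard

delivered = []
link_up = False

def send(message):
    if not link_up:
        raise ConnectionError("link down")
    delivered.append(message)

q = StoreAndForwardQueue(send)
q.put("order-1")
q.put("order-2")     # both retained while the link is down
link_up = True
q.flush()            # link restored: messages drain in order
```

A production queue would also persist `pending` to disk so that retained messages survive a process failure, which is what gives queued transactions their durability.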

Flexibility is about allowing the developers to construct applications with a variety of tools, while at the same time performing the development in a team environment. The developers create classes (libraries) and projects. Reusing classes and components reduces the overall effort and creates consistency.

System generation allows flexibility by allowing the application to be partitioned. Partitioning means determining where each part of the application will run and apportioning it for the specified environment. If code runs on a physical server, the developer must be able to compile it for performance and scalability reasons.

Deployment places each application partition on the appropriate hardware and enables performance monitoring and system management.
