Migrating to a service-oriented architecture, Part 2

by Kishore Channabasavaiah, Kerrie Holley and Edward M. Tuggle, Jr.

This is the second part of an introduction, in a series of articles, intended to help you better understand the value of a service-oriented architecture (SOA) and to develop a realistic plan for evaluating your current infrastructure and migrating it to a true service-oriented architecture. It will help you understand why a SOA is claimed to be the best platform for carrying existing assets into the future, as well as for enabling the rapid and correct development of future applications. Further, you should come away with a better understanding of the major considerations in planning such a migration.

Part one of this paper described some of the forces driving consideration of a SOA, and the requirements that might be placed on the architecture. Part 2 continues with the discussion of services and interfaces.

The nature of a service

What then is a service? As stated in part one of this series, within a business environment a service typically means a business function, a business transaction, or a system service. Examples of business functions might be getStockQuote, getCustomerAddress, or checkCreditRating. Examples of business transactions might be commitInventory, sellCoveredOption, or scheduleDelivery. Examples of system services might be logMessageIn, getTimeStamp, or openFile. Note the difference in the types of services. Business functions are, from the application's perspective, non-system functions that are effectively atomic. Business transactions might seem like simple functions to the invoking application, but they might be implemented as composite functions covered by their own transactional context. They might involve multiple lower-level functions, transparent to the caller. System services are generalized functions that can be abstracted away from the particular platform, for instance, Windows or Linux. A generic function such as openFile might be provided by the application framework to effectively virtualize the data source and be used regardless of the type and location of the real source of the data.
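To make the three categories concrete, here is a minimal sketch in Java using the example names above. The classes, signatures, and stub logic are invented for illustration; they belong to no real framework.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: the method names (getStockQuote, commitInventory,
// logMessageIn) come from the article's examples; everything around them is
// hypothetical stub code.
class ServiceTypes {

    // Business function: effectively atomic from the caller's perspective.
    static double getStockQuote(String symbol) {
        return 101.25; // stubbed quote
    }

    // Business transaction: looks like one call to the invoker, but is a
    // composite of lower-level functions under one transactional context.
    static List<String> commitInventory(String sku, int qty) {
        List<String> steps = new ArrayList<>();
        steps.add("reserveStock");   // lower-level functions, hidden from caller
        steps.add("updateLedger");
        steps.add("confirmOrder");
        return steps;
    }

    // System service: generalized, abstracted away from the platform.
    static String logMessageIn(String msg) {
        return "LOG: " + msg;
    }

    public static void main(String[] args) {
        System.out.println(getStockQuote("IBM"));
        System.out.println(commitInventory("sku-1", 5));
        System.out.println(logMessageIn("started"));
    }
}
```

The point of the sketch is only the shape of the contracts: the caller of commitInventory never sees the three internal steps.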

This might seem like an artificial distinction; you could assert that from the application's perspective, all the services are atomic, and it is irrelevant whether they are business or system services. The distinction is made merely to introduce the important concept of granularity. The decomposition of business applications into services is not just an abstract process; it has very real practical implications. Services might be low-level (fine-grained) or complex, high-level (coarse-grained) functions, and there are very real trade-offs in terms of performance, flexibility, maintainability, and reuse, based on their definition. This process of defining services is normally accomplished within a larger scope -- that of the Application Framework. This is the actual work that must be done: the development of a component-based Application Framework, wherein the services are defined as a set of reusable components that can in turn be used to build new applications or integrate existing software assets.
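The granularity trade-off can be sketched as follows; assume hypothetical fine-grained services composed into one coarse-grained placeOrder service (all names are invented for illustration).

```java
// Hypothetical sketch of the granularity trade-off. The fine-grained services
// are flexible and reusable but chatty to call remotely; the coarse-grained
// service is one call with a simple contract, at the cost of flexibility.
class Granularity {
    // Fine-grained services, reused across many processes.
    static boolean checkCreditRating(String customer) { return true; }
    static String getCustomerAddress(String customer) { return "500 Main St"; }
    static String scheduleDelivery(String address)    { return "scheduled:" + address; }

    // Coarse-grained service: aggregates the fine-grained ones behind one
    // interface, reducing round trips for the caller.
    static String placeOrder(String customer) {
        if (!checkCreditRating(customer)) return "rejected";
        return scheduleDelivery(getCustomerAddress(customer));
    }

    public static void main(String[] args) {
        System.out.println(placeOrder("acme")); // scheduled:500 Main St
    }
}
```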

There are many such frameworks available today; within IBM, several frameworks such as EWA, JADE, and Struts (from Jakarta) are being used in customer integration scenarios. Take EWA (pronounced "Eva"), from the IBM Software Group Advanced Technology Solutions team, as an example: at a very high level, the framework looks like Figure 1. Within this framework, a configuration defines an application, describing the components of the application as well as the sequence and method of their invocation. Input is received and passed to the application in a source-neutral way, so, for instance, the addition of an Internet connection to a bank application with existing ATM access is transparent to the application logic. The front-end device and protocol handlers make that possible. The core provides system-level services, and special-purpose access components enable connection to backend enterprise applications, so that they may remain in place or be migrated over time. While EWA is fully J2EE-compliant, it can connect to external DCOM or CORBA component-based systems.

Today, EWA contains over 1500 general and special-purpose components, thus greatly reducing the amount of code that must be written for a new application. Another paper in this series will examine Application Frameworks in detail, along with what a user might expect in the process of developing one.

Addressing the old problems

Let's return now to the first integration scenario discussed and the search for a scheme that minimizes the number of required interfaces, as drawn in Figure 2.

This might look like an overly simplistic view, but it should now be clear that within a framework such as EWA, this view is the starting point. Now add the architectural concept of the Service Bus, represented in Figure 3 by the heavy center line, and a service or flow manager to connect the services and provide a path for service requests. The flow manager processes a defined execution sequence, or service flow, that will invoke the required services in the proper sequence to produce the final result. The Business Process Execution Language, or BPEL, is an example of such a technology for defining a process as a set of service invocations.
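In miniature, the flow manager described above can be sketched as an ordered sequence of named service invocations, each result feeding the next. This mimics the spirit of what a BPEL engine does with a process definition; none of the names below are real BPEL or EWA APIs.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.UnaryOperator;

// Hypothetical flow manager: a defined execution sequence (a list of service
// names) is processed step by step to produce the final result.
class FlowManager {
    private final Map<String, UnaryOperator<String>> services = new LinkedHashMap<>();

    // Make a service available on the bus under a name.
    void register(String name, UnaryOperator<String> svc) {
        services.put(name, svc);
    }

    // Process the service flow: invoke each named service in sequence,
    // passing each result to the next service.
    String run(List<String> flow, String input) {
        String result = input;
        for (String step : flow) {
            result = services.get(step).apply(result);
        }
        return result;
    }

    public static void main(String[] args) {
        FlowManager bus = new FlowManager();
        bus.register("validate", s -> s.trim());
        bus.register("enrich",   s -> s + ":enriched");
        bus.register("deliver",  s -> "delivered(" + s + ")");
        System.out.println(bus.run(List.of("validate", "enrich", "deliver"), " order-42 "));
        // -> delivered(order-42:enriched)
    }
}
```

The key property is that the flow itself is data: changing the process means changing the list, not the services.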

From here, you would need to determine how to call the services, so you would add application configuration. Next, virtualize the inputs and outputs. Finally, provide connectivity to backend processes, thus allowing them to run as-is and migrate in the future. Now the high-level picture is at least structurally complete, and looks like Figure 4.

It should not be at all surprising that this picture bears some resemblance to a block diagram of EWA; at the highest level, any robust application framework must provide these functions. From here, however, the real work begins -- building the 1500 components that put flesh on this skeleton. This is why many IT architects choose to implement within an existing framework; the process of decomposing the existing applications into components for the framework is work enough, without reinventing all the other general-purpose and system components known to be needed. However you approach it, you can implement the architecture using technologies and frameworks that exist today, and so you come full circle, back to the beginning, where the process starts with an analysis of the business problems that must be solved. You can do this now, confident in the knowledge that your architecture will be, in fact, implementable.

Integration requirements within the architecture

So far in this discussion, integration has been confined to application integration via component-based services, but integration is a much broader topic than this. When assessing the requirements for an architecture, you must consider several types, or "styles," of integration:

Application integration

Integration at the end-user interface

Application connectivity

Process integration

Information integration

A build-to-integrate development model

Integration at the end-user interface is concerned with how the complete set of applications and services a given user accesses is integrated to provide a usable, efficient, and consistent interface. This is an evolving topic, and near-term developments will be dominated by advances in the use of portal servers. While portlets can already invoke local service components via Web services, new technologies such as Web Services for Remote Portlets (WSRP) will enable content and application providers to create interactive services that plug and play with portals via the Internet, and thereby open up many new integration possibilities.

Application connectivity is an integration style concerned with all types of connectivity that the architecture must support. At one level, this means things such as synchronous and asynchronous communications, routing, transformation, high speed distribution of data, and gateways and protocol converters. On another level, it also relates to the virtualization of input and output, or sources and sinks, as you saw in EWA's Channel and Protocol Handlers. Here the problem is the fundamental way data moves in and out of, and within, the framework that implements the architecture.
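The virtualization of sources and sinks mentioned here can be sketched as follows, echoing the earlier bank example: each channel's handler reduces channel-specific input to one neutral request form, so the application logic never sees where input came from. The handler and request formats below are invented, not EWA's actual classes.

```java
// Hypothetical sketch of input virtualization across channels.
interface ChannelHandler {
    // Normalize channel-specific input into a source-neutral request.
    String toNeutralRequest(String rawInput);
}

class AtmHandler implements ChannelHandler {
    public String toNeutralRequest(String raw) {
        // e.g. raw = "ATM|withdraw|100" (invented wire format)
        String[] parts = raw.split("\\|");
        return parts[1] + ":" + parts[2];
    }
}

class InternetHandler implements ChannelHandler {
    public String toNeutralRequest(String raw) {
        // e.g. raw = "op=withdraw&amt=100" (invented wire format)
        String op  = raw.split("&")[0].split("=")[1];
        String amt = raw.split("&")[1].split("=")[1];
        return op + ":" + amt;
    }
}

class BankApp {
    // The application logic is identical regardless of channel.
    static String handle(ChannelHandler h, String raw) {
        return "executed " + h.toNeutralRequest(raw);
    }

    public static void main(String[] args) {
        System.out.println(handle(new AtmHandler(), "ATM|withdraw|100"));
        System.out.println(handle(new InternetHandler(), "op=withdraw&amt=100"));
        // both print: executed withdraw:100
    }
}
```

Adding a new channel means adding a handler, not touching the application.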

Process integration is concerned with the development of computing processes that map to and provide solutions for business processes, integration of applications into processes, and integrating processes with other processes. The first requirement might seem trivial, that is, that the architecture allow for an environment within which the basic business problems can be modeled, but insufficient analysis at this level will spell doom for any implementation of the architecture, regardless of its technical elegance. Integration of applications into processes might include applications within the enterprise, or might involve invocation of applications or services in remote systems, perhaps those of a business partner. Likewise, process-level integration might involve the integration of whole processes, not just individual services, from external sources, such as supply chain management or financial services that span multiple institutions. For such application and process integration needs, technologies such as BPEL4WS can be used, or the application framework can use a program configuration scheme such as the one seen in EWA. In fact, a higher-level configuration scheme can be constructed using BPEL4WS at a lower level, and then driven by an engine that provides more function than just flow management. Before any of this is built, however, you must first understand the architectural requirements, and then build the appropriate infrastructure.
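The idea of integrating whole processes, not just individual services, can be sketched by making a process itself invocable as a step, so that a partner's entire process composes into a larger one, in the spirit of what BPEL4WS enables. All names below are illustrative.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of process-level integration.
interface Step {
    String invoke(String input);
}

// A process is a chain of steps -- and is itself a Step, so whole
// processes plug into other processes.
class BusinessProcess implements Step {
    private final List<Step> steps;
    BusinessProcess(Step... steps) { this.steps = Arrays.asList(steps); }

    public String invoke(String input) {
        String result = input;
        for (Step s : steps) result = s.invoke(result);
        return result;
    }

    public static void main(String[] args) {
        // A partner's process, treated as a single step in our own.
        BusinessProcess partnerCreditCheck = new BusinessProcess(
            s -> s + "|scored",
            s -> s + "|approved");
        BusinessProcess orderProcess = new BusinessProcess(
            s -> s + "|validated",
            partnerCreditCheck,          // whole process integrated as one step
            s -> s + "|shipped");
        System.out.println(orderProcess.invoke("order-7"));
        // -> order-7|validated|scored|approved|shipped
    }
}
```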

Information integration is the process of providing consistent access to all the data in the enterprise, by all the applications that need it, in whatever form they need it, without being restricted by the format, source, or location of the data. This requirement, when implemented, might involve adapters and a transformation engine, but typically it is more complex than that. Often the key concept is the virtualization of the data, which might involve the development of a data bus from which all applications within the enterprise request data using standard services or interfaces. Thus the data can be presented to the application regardless of whether it came from a spreadsheet, a native file, an SQL or DL/I database, or an in-memory data store. The format of the data in its permanent store might also be unknown to the application. The application is further unaware of the operating system that manages the data, so native files on an AIX or Linux system are accessed the same way they would be on Windows, OS/2, z/OS, or any other system. The location of the data is likewise transparent: since the data is provided by a common service, it is the responsibility of the access service, not the application, to retrieve the data, locally or remotely, and present it in the requested format.
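A minimal sketch of such a data bus, assuming hypothetical adapter and service names: applications ask the bus for data by logical name, and an adapter per source hides the source's type, format, and location.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of data virtualization behind a common access service.
interface DataAdapter {
    String fetch(String key);
}

class DataBus {
    private final Map<String, DataAdapter> sources = new HashMap<>();

    // Bind a logical data name to whatever adapter actually holds the data.
    void bind(String logicalName, DataAdapter adapter) {
        sources.put(logicalName, adapter);
    }

    // Applications see only the logical name and the returned format,
    // never the source's type or location.
    String get(String logicalName, String key) {
        return sources.get(logicalName).fetch(key);
    }

    public static void main(String[] args) {
        DataBus bus = new DataBus();
        // One adapter might wrap a spreadsheet, another an SQL database;
        // here both are simple stubs.
        bus.bind("customers", key -> "cust:" + key);
        bus.bind("inventory", key -> "item:" + key);
        System.out.println(bus.get("customers", "42"));  // cust:42
        System.out.println(bus.get("inventory", "42"));  // item:42
    }
}
```

Swapping a source (say, moving the customer data from a file to a database) means rebinding the adapter; no application changes.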

Lastly, one of the requirements for the application development environment must be that it take into account all the styles and levels of integration that might be implemented within the enterprise, and provide for their development and deployment. To be truly robust, the development environment must include (and enforce) a methodology that clearly prescribes how services and components are designed and built in order to facilitate reuse, eliminate redundancy, and simplify testing, deployment, and maintenance.

All of the styles of integration listed above will have some incarnation within any enterprise, even though in some cases they might be simplified or not clearly defined; thus you must consider them all when embarking on a new architectural framework. A given IT environment might have only a small number of data source types, so information integration might be straightforward. Likewise, the scope of application connectivity might be limited. Even so, the integrating functions within the framework must still be provided by services, rather than being performed ad hoc by the applications, if the framework is to successfully endure the growth and changes over time that all enterprises experience.

Benefits of deploying a service-oriented architecture

A SOA can be evolved from existing system investments rather than requiring a full-scale system rewrite. Organizations that focus their development efforts on the creation of services, using existing technologies combined with a component-based approach to software development, will realize several benefits:

Leverage existing assets -- This was the first, and most important, of the requirements. A business service can be constructed as an aggregation of existing components, using a suitable SOA framework, and made available to the enterprise. Using this new service requires knowing only its name and interface. The service's internals, as well as the complexities of the data flow through the components that make up the service, are hidden from the outside world. This component anonymity lets organizations leverage current investments, building services from a conglomeration of components built on different machines, running different operating systems, and developed in different programming languages. Legacy systems can be encapsulated and accessed via Web service interfaces.
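Encapsulating a legacy system can be sketched as a simple adapter; the legacy class, its method, and its return-code convention below are all invented for illustration.

```java
// Hypothetical sketch of wrapping an existing asset behind a service
// interface: callers know only the service's name and interface.
class LegacyCreditSystem {
    // Imagine this is decades-old code we cannot change.
    // Legacy convention (invented): return code 0 means OK.
    int CRDTCHK(String acct) {
        return acct.length() > 3 ? 0 : 8;
    }
}

interface CreditService {
    boolean checkCreditRating(String account);
}

class LegacyCreditAdapter implements CreditService {
    private final LegacyCreditSystem legacy = new LegacyCreditSystem();

    // Translate the clean service contract to the legacy call and back;
    // the legacy internals stay hidden from every caller.
    public boolean checkCreditRating(String account) {
        return legacy.CRDTCHK(account) == 0;
    }

    public static void main(String[] args) {
        CreditService svc = new LegacyCreditAdapter();
        System.out.println(svc.checkCreditRating("ACCT-1001"));  // true
    }
}
```

In a real deployment the adapter would sit behind a Web service interface, so the legacy system can remain in place or be migrated later without affecting callers.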

Infrastructure, a commodity -- Infrastructure development and deployment will become more consistent across all the different enterprise applications. Existing components, newly developed components, and components purchased from vendors can be consolidated within a well-defined SOA framework. Such an aggregation of components will be deployed as services on the existing infrastructure, with the result that the underlying infrastructure increasingly comes to be treated as a commodity element.

Faster time-to-market -- Organizational Web services libraries will become the core asset of organizations adopting the SOA framework. Building and deploying services from these Web services libraries will reduce time-to-market dramatically, as new initiatives reuse existing services and components, thus reducing design, development, testing, and deployment time.

Reduced cost -- As business demands evolve and new requirements are introduced, the cost of enhancing existing services and creating new ones, by building on the SOA framework and the services library, is greatly reduced for both existing and new applications. The learning curve for the development team is reduced as well, as they might already be familiar with the existing components.

Risk mitigation -- Reusing existing components reduces the risk of introducing new failures into the process of enhancing or creating new business services. As mentioned earlier, there is a reduced risk in the maintenance and management of the infrastructure supporting the services, as well.

Continuous business process improvement -- A SOA allows a clear representation of process flows, identified by the order of the components used in a particular business service. This provides business users with an ideal environment for monitoring business operations. Process modeling is reflected in the business service, and process manipulation is achieved by reorganizing the pieces of the pattern (the components that constitute a business service). This further allows process flows to be changed while their effects are monitored, and thus facilitates continuous improvement.

Process-centric architecture -- The existing architecture models and practices tend to be program-centric. Applications are developed for the programmer's convenience. Often, process knowledge is spread between components. The application is much like a black box, with no granularity available outside it. Reuse requires copying code, incorporating shared libraries, or inheriting objects. In a process-centric architecture, the application is developed for the process. The process is decomposed into a series of steps, each representing a business service. In effect, each service or component functions as a sub-application. These sub-applications are chained together to create a process flow capable of satisfying the business need. This granularity lets processes leverage and reuse each sub-application throughout the organization.
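The process-centric style above can be sketched by treating each sub-application as one reusable step and each process as a different chain over the same steps; all names are illustrative.

```java
import java.util.List;
import java.util.function.UnaryOperator;

// Hypothetical sketch of process-centric reuse: the same sub-applications
// are chained into different process flows.
class ProcessCentric {
    // Reusable sub-applications, each representing one business service.
    static final UnaryOperator<String> CHECK_CREDIT  = s -> s + ">credit";
    static final UnaryOperator<String> RESERVE_STOCK = s -> s + ">stock";
    static final UnaryOperator<String> INVOICE       = s -> s + ">invoice";

    // Chain steps into a process flow satisfying one business need.
    static String run(String input, List<UnaryOperator<String>> flow) {
        String result = input;
        for (UnaryOperator<String> step : flow) result = step.apply(result);
        return result;
    }

    public static void main(String[] args) {
        // Two different processes reuse the same sub-applications.
        System.out.println(run("newOrder", List.of(CHECK_CREDIT, RESERVE_STOCK, INVOICE)));
        // -> newOrder>credit>stock>invoice
        System.out.println(run("reOrder", List.of(RESERVE_STOCK, INVOICE)));
        // -> reOrder>stock>invoice
    }
}
```

Contrast this with the program-centric black box: here the granularity of each step is visible and reusable outside any single application.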

And the future -- new models, new requirements

So far, this discussion has centered on concepts related to meeting existing business requirements, better utilization and reuse of resources, and integration of existing and new applications. But what if a completely new model for application development emerges? Will the notion of a service-oriented architecture still be meaningful or required? Actually, two new concepts are already beginning to be implemented: Grid computing and on-demand computing. While these models are distinct and have developed separately, they are closely related, and each makes the evolution to a SOA even more imperative.

Grid computing

An in-depth discussion of Grid computing is beyond the scope of this introduction, but a couple of points are worth mentioning. First of all, Grid computing is much more than just the application of large numbers of MIPS to effect a computing solution to a complex problem. It involves the virtualization of all the system resources, including hardware, applications, and data, so that they can be utilized wherever and however they are needed within the grid. Secondly, previous sections have already discussed the importance of virtualizing data sources and decomposing applications into component-based services, so it should be clear that a true SOA better enables maximum resource utilization in a Grid environment.

On-demand computing

On-demand computing is likewise beyond the scope of this discussion, but again we would be remiss not to draw a brief connection between on-demand and SOA. Web services is an enabling technology for SOA, and SOA is an enabling architecture for on-demand applications. Applications must operate in a SOA framework in order to realize the benefits of on-demand.

Web services on-demand is a subset of the on-demand message, which covers a wide spectrum. At one end of this spectrum is a focus on the application environment; at the other, a focus on the operating environment, which includes items like infrastructure and autonomic computing. Business transformation leverages both the application and operating environments to create an on-demand business. At the heart of this on-demand business will be Web services on-demand, where application-level services can be discovered, reconfigured, assembled, and delivered on demand with "just-in-time" integration capabilities.

The promise of Web services as an enabling technology is that it will enhance business value by providing capabilities such as services on-demand and, over time, will transform the way IT organizations develop software. It quite possibly might even transform the way business is conducted and products and services are offered over the Web, in communities of interest that include trading partners, customers, and other types of business partnerships. What if all of your applications shared the same transport protocol? What if they all understood the same interfaces? What if they could participate in, and understand, the same transaction model? What if this were true of your partners? Then you would have applications and an infrastructure to support an ever-changing business landscape; you would have achieved on-demand. Web services and SOA make this possible for applications.

Summary

Service-oriented architecture is the next wave of application development. Services and SOA are all about designing and building systems using heterogeneous, network-addressable software components. SOA is an architecture with special properties, composed of components and interconnections that stress interoperability and location transparency. It can often be evolved from existing system investments rather than requiring a full-scale system rewrite; it leverages an organization's existing investment by taking advantage of current resources, including developers, software languages, hardware platforms, databases, and applications, and will thus reduce costs and risks while boosting productivity. This adaptable, flexible style of architecture provides the foundation for shorter time-to-market and reduced costs and risks in development and maintenance. Web services is a set of enabling technologies for SOA, and SOA is becoming the architecture of choice for development of responsive, adaptive new applications.

About the authors

Kishore Channabasavaiah received a Bachelor's degree in Mechanical Engineering from Bangalore University, India. He is currently an Executive Architect in the Chicago Innovation Center of IBM Global Services, where he provides thought leadership for e-business integration solutions with a focus on Web services and end-to-end solutions. His current focus is on Web application solutions, conducting technical solution reviews, Web services, service-oriented architecture, and pervasive computing. You can contact Kishore at kishorec at us.ibm.com.

Kerrie Holley received a Bachelor of Arts degree in Mathematics and a Juris Doctor degree from DePaul University. He is currently a Distinguished Engineer in IBM Global Services and a Chief Architect in e-business Integration Solutions, where he provides thought leadership for the Web services practice. His current focus is on software engineering best practices, end-to-end advanced Web development, adaptive enterprise architecture, conducting architecture reviews, Web services, and service-oriented architecture. You can contact Kerrie at klholley at us.ibm.com.

Edward M. Tuggle, Jr. received a Bachelor of Science degree in Mathematics from the University of Oklahoma, and is currently a Senior Software Engineer on the IBM Software Group jStart Emerging Technology Solutions team. He worked at IBM in operating systems design, development, and maintenance for 23 years, has spent the past 6 years in Java technology and other emerging technologies, and is now specializing in Web services and service-oriented architecture. You can contact Edward at b391747 at us.ibm.com.
