• What does an autonomic computing system mean to you and your organization and research? How do you sell this idea to your customers or immediate research community?

• What key technologies are necessary to establish multivendor autonomic computing systems? How far away are we from realizing these technologies—or do they already exist?

• What key barriers—political, economic, or social—might prevent the use of autonomic computing systems in real-world business or scientific applications? How do you think these can be overcome?

The panelists provided a variety of viewpoints on these topics from both industry and academic perspectives.

Do multivendor autonomic computing systems exist?

White suggested that enterprise IT systems are already multivendor, especially if you consider both desktop and server-based applications. Furthermore, there's a growing trend toward multivendor outsourcing, whereby different companies providing outsourcing support must independently handle networking, databases, servers, storage, and so on. According to White, many of these individual subsystems have requirements for autonomic behavior. However, current systems fall far short of being autonomic: their subsystems aren't interoperable across system tiers and vendors. White indicated that it will therefore be necessary to deploy autonomic system ideas in the context of such existing IT ecosystems.

Why autonomic systems?

According to White, addressing the "complexity" of managing such system components (networking, databases, servers, and so on) is the strongest motivation for using an autonomic system. Using autonomic systems to handle such complexity will entail (among other things) extending Web Services Distributed Management (WSDM) to endow self-managing resources with quality-of-service-based interfaces, and focusing on both automating and eliminating the steps of end-to-end IT processes.

The other panelists shared this vision. McCann indicated that an autonomic system shouldn't be just a static resource manager or optimizer; it should take a much wider perspective, addressing the various aspects of an autonomic computing (or self-adapting) system architecture. She also mentioned that interoperability would be easier to achieve if such an architecture decoupled autonomic components from functional components.

Goswami agreed, advocating a service-oriented architecture with self-describing services as the basic unit of modularity. In his view, formal models, complete with descriptions of intermodule dependencies, are an essential component of autonomic systems. Yousif added that such systems must also describe other aspects of the environment, such as the business, workloads, and workflows. Also key to autonomic computing's success, in Goswami's view, is developing the full life cycle of real-time monitoring, analytics that process raw monitored data into higher-level system-state descriptions, and automated algorithms that determine and execute appropriate actions as needed. Machine-learning techniques would play an important role in the analytic and decision-making phases.
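The life cycle Goswami describes amounts to a closed control loop over the managed system. The following is a minimal Python sketch of that loop; the metric names, thresholds, and actions are illustrative assumptions, not part of any system the panelists described.

```python
import statistics

def monitor():
    """Monitoring phase: collect raw metrics.

    Stubbed here with fixed CPU-load samples; a real system would
    read probes on the managed resources.
    """
    return [0.91, 0.87, 0.95, 0.89]

def analyze(samples):
    """Analytics phase: turn raw monitored data into a higher-level
    system-state description (a simple threshold stands in for the
    machine-learning techniques Goswami mentions)."""
    load = statistics.mean(samples)
    return "overloaded" if load > 0.8 else "healthy"

def plan(state):
    """Decision phase: map the derived state to an action.

    The action names are hypothetical."""
    return "provision_server" if state == "overloaded" else "no_op"

def autonomic_step():
    """One pass through the monitor -> analyze -> act life cycle."""
    return plan(analyze(monitor()))
```

In practice each phase would be a separate, possibly vendor-supplied component, which is why the interoperability and standards issues discussed below matter.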

All panelists emphasized the business benefits of developing autonomic systems. For example, streamlining data-center management by making use of autonomic computing concepts could increase the number of servers a single system administrator could maintain. Yousif and Goswami both stressed such business needs, suggesting that autonomic computing should aim to avoid the need for system administrators to perform repetitive, boring tasks so that they can focus on work that demands more cognitive skill.

Enabling adoption of autonomic systems

Managing user expectations remains an important objective to enable wider adoption of autonomic computing systems. McCann discussed another aspect of trust: not only must people trust autonomic systems, but autonomic services must trust (and a fortiori understand) one another's capabilities. Moreover, they must tolerate fuzziness across different semantic mappings because there's unlikely to be any one lingua franca. Coping with such fuzziness (and resolving disagreements between what's requested and what's feasible) will require negotiation between service requesters and providers.

Yousif focused on autonomic computing's value for data centers, indicating that it's already an important component of dynamic data centers currently being planned. A key reason for this adoption is the recognition that autonomic computing can improve management of service-level agreements and simplify administration of existing systems. In this context, all panelists agreed on the need to develop suitable policies—especially to better synchronize business and IT metrics.

All panelists recognized the importance of standards for autonomic computing. They also recognized that standards could be considered at different levels of the software stack: Web services standards (such as WSDM and WS-Transfer), standards for defining policies (WS-Policy, for instance), and standards for capturing resource information (Common Information Model and Job Submission Description Language). All these standards play an important part in supporting multivendor interaction between autonomic components. White argued that to achieve self-managing resources, we should strive to extend existing standards (such as WSDM) rather than attempt to create new ones. McCann proposed a wider perspective, identifying possible areas where standardization is necessary:

• the types of probes needed for monitoring (what data, when, and how often),

• the types of events generated in the system (which ones and when), and

• actions the system performs (what they would be and how to define them).
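To make McCann's three standardization areas concrete, one could imagine a common declarative form for probes, events, and actions. The sketch below uses Python dataclasses; every field name is an assumption for illustration, not drawn from any existing standard.

```python
from dataclasses import dataclass

@dataclass
class Probe:
    """A monitoring probe: what data to collect, and how often."""
    metric: str
    period_s: float

@dataclass
class Event:
    """An event the system generates: which one, and when it fires."""
    name: str
    condition: str  # e.g. "cpu_load > 0.8"

@dataclass
class Action:
    """An action the system performs, and the resource it acts on."""
    name: str
    target: str

# Example declarations a component developer might supply.
cpu_probe = Probe(metric="cpu_load", period_s=5.0)
overload = Event(name="overload", condition="cpu_load > 0.8")
scale_up = Action(name="provision_server", target="web_tier")
```

Agreeing on such shared forms is precisely what would let probes, events, and actions from different vendors be wired together.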

The panelists also identified the need for tools to let application and component developers define autonomic capability.

However, Yousif commented that the Web services stack now has a very large number of specifications—and it's important to consider precisely which standards are relevant for implementing multivendor autonomic systems.

All the panelists also recognized the importance of virtualization. You can view virtualization as

• a means to aggregate computational and data resources and enable multiple viewpoints on these resources to coexist or

• an abstraction for a more uniform access mechanism to resources.

In the second scenario, the intention would be to let multiple vendors share the same abstraction but implement that abstraction in different ways (perhaps with different specializations). Fortes emphasized that such virtualization could benefit from current related research in the Grid community.
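The second scenario is essentially a shared interface with vendor-specific implementations. A minimal Python sketch follows; the `Storage` abstraction and the two vendor classes are hypothetical.

```python
from abc import ABC, abstractmethod
import zlib

class Storage(ABC):
    """The abstraction all vendors share: a uniform access mechanism."""

    @abstractmethod
    def write(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def read(self, key: str) -> bytes: ...

class VendorAStorage(Storage):
    """One vendor's implementation: plain in-memory storage."""

    def __init__(self):
        self._blobs = {}

    def write(self, key, data):
        self._blobs[key] = data

    def read(self, key):
        return self._blobs[key]

class VendorBStorage(Storage):
    """Another vendor's implementation of the same abstraction,
    with a different specialization: data compressed at rest."""

    def __init__(self):
        self._blobs = {}

    def write(self, key, data):
        self._blobs[key] = zlib.compress(data)

    def read(self, key):
        return zlib.decompress(self._blobs[key])

def store_and_fetch(backend: Storage) -> bytes:
    """Client code sees only the abstraction, never the vendor."""
    backend.write("k", b"autonomic")
    return backend.read("k")
```

Because clients program against `Storage` alone, either vendor's implementation can be swapped in without changing client code, which is the interoperability property the panelists were after.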

Barriers to deployment

For autonomic capabilities to be effectively deployed in real systems, Goswami indicated the importance of trust issues in automatic system administration. He mentioned the "people equation" that could act as a barrier to the adoption of autonomic systems in real environments. Factors such as taking away control from system administrators, being able to automatically manage interactions between components, and trusting the system to automatically self-adapt must be considered. Fear of automation replacing your job can also motivate administrators to eschew automation, no matter how effective it might prove in practice.

Yousif identified lack of interoperability between systems as a key barrier to deploying autonomic systems. For instance, autonomic system components from different vendors still can't seamlessly exchange information models and schemas. He identified this as the semantic stack: essentially, the sets of information models that can exist at different levels of an autonomic system, ranging from low-level components such as routers and servers, through intermediate-level operating environments, to information associated with applications. McCann indicated that achieving interoperability at the different levels of the semantic stack is a difficult challenge, one that remains unsolved in a number of other computing communities as well.

Audience response

A lively Q&A session followed the panelists' statements. The panelists agreed that we must attack problems from both short-term and long-term perspectives. A lot of low-hanging fruit remains: we can automate many obvious things in the near term. One customer requirement that's still important, alongside high reliability and performance, is making components easier to manage. We must also think about the long term and about how to solve problems more holistically than we tend to when solving them tactically.

Karsten Schwan from the audience challenged the panelists to define what they meant by end-to-end, as each panelist used this term repeatedly. White defined it in terms of the full life cycle of IT processes, such as change management. Yousif agreed, giving further examples, such as the problem management life cycle. McCann and Fortes pointed out that many valid perspectives exist because many different system users exist, each with his or her own role. This topic engendered further discussion about the need for an application scenario that would demonstrate autonomic computing ideas' effectiveness. Data centers and Grid computing applications provide useful candidates for such a scenario.

Overall, the panelists concurred on these points:

• Autonomic systems are already in place—to a limited extent, and mainly at the subsystems level. To achieve wider adoption by customers and researchers, we need to identify what autonomic computing means to such individuals and to build better trust in autonomic systems.

• Interoperability and virtualization are key requirements. Interoperability is required at different levels, and the various approaches to virtualization must be reconciled.

• Following infrastructure standards and extending existing standards (rather than creating new ones) would also lead to further adoption of multivendor autonomic systems.

• We must identify key application scenarios, such as data centers and Grid-computing applications, that would enable wider adoption of multivendor autonomic systems.

Autonomic computing concepts are already making a significant impact on the future of large-scale IT systems administration, especially with the increasing complexity of the software and hardware components that make up such systems. A need now exists for software engineering approaches specifically aimed at designing such systems.

Omer F. Rana is a reader at the Cardiff School of Computer Science and the deputy director at the Welsh eScience Center. Contact him at o.f.rana@cs.cardiff.ac.uk.

Jeffrey O. Kephart is a manager of the Agents and Emergent Phenomena group at the IBM T.J. Watson Research Center. Contact him at kephart@us.ibm.com.