We are here live at the TUCON 2008 conference, TIBCO Software’s user event in San Francisco, to look into the issues around SOA integrity, particularly in the context of widespread enterprise use, the myriad demands that are going to be put on services, and how infrastructure is going to need to adapt and perform in a way that probably has not been the case for infrastructure up until now.

Helping us to weed through service performance management and how it relates to SOA governance and other issues of total architecture, we are joined by a panel of industry analysts, experts and representatives from TIBCO.

Let's start by introducing our panel. We are joined by Joe McKendrick, an independent analyst and SOA blogger. Welcome to the show, Joe.

Joe McKendrick: Hi, Dana, happy to be here.

Gardner: We are also joined by Sandy Rogers, the program director for SOA, Web services and integration at IDC. Welcome, Sandy.

Gardner: We saw and listened to some presentations this morning at the TIBCO conference. One of the things that struck me is this notion of pulling together not only what had been disparate technology silos, but really joining what had been functional and organizational silos -- particularly the design and process-creation phases that we have heard so much about with SOA.

And then how that relates to the secondary aspect of functional activities, which is the operations -- keeping the trains running on time, and making sure that service-level agreements (SLAs) are met. That means making sure that users get very fine-grained services coming through uninterrupted in aggregated applications -- without hiccups, without slowdowns.

These processes are moving to mission-critical, and so there needs to be more opportunity for these two aspects of SOA, design and operations, to work together. Performance management of services gives more insight into what takes place beneath those services, and is, therefore, becoming essential.

First, let's take a look at this landscape of what's going on in SOA, and why, as we move toward enterprise-wide deployment, service performance management becomes so important.

Sandy at IDC, what do you see in terms of enterprises that are early adopters of SOA? How concerned are they that, when they throw the switch, so to speak, with these composite business processes -- made up of services from a variety of different sources with a variety of different support infrastructure -- that they really feel confident that this is going to hold up in real world situations?

Rogers: What I find interesting is that even if you have deployed only one service, you need as much information as possible about how it is being used and how its consumption is trending across different applications and different processes.

So, most organizations need to present an environment where individuals and stakeholders in the company feel more comfortable in relying on services and also allowing others to potentially handle the operational dynamics of those services, once they are in production.

They need a lot more visibility and an understanding of the strains that are happening on the system, and they need to really build up a level of trust. Once more individuals have that visibility, that trust starts to develop, more reuse starts to happen, and it starts to take off.

Eventually they get to a stage where they are concerned about scalability and how far they can push the limits of these deployments. It could be the way they've designed it architecturally, or it could be just that they are still getting familiar with the new technologies that support SOA infrastructure.

Gardner: It seems that at the very time when SOA is putting more emphasis on a diversified portfolio of services -- repurposing those services, extending visibility -- that, at the same time, IT as an organization is being tasked with behaving more maturely as a business within a business.

Joe, just now that we are looking at the need for IT to perform like a mature business, is there a risk here of finger-pointing -- that when something goes wrong and so many constituents are involved with the support of a service, that no one will really be able to take responsibility?

McKendrick: Yes, that’s been a problem all along. There is always a lot of finger-pointing, and IT tends to get blamed for everything. Sandy made an excellent point that the foundation of SOA is trust.

The business units are being asked to sign on to an SOA to provide support, and perhaps some of them even to provide funding, and they are looking for the services they consume to be scalable and to be available, perhaps 24x7. If that trust is not there, the whole foundation of the SOA breaks down, and IT will get the blame again.

It very much hinges on IT and performance management. We're actually talking about two levels here, governance and performance management. They are integrated, and they need each other. But governance deals more with how the business addresses SOA. Performance management is an IT challenge and is rightly put into the IT "sphere of influence."

Gardner: Anthony, I was struck, when I heard your presentation this morning, by your example of what things used to be like, where you would get 40 people on a conference call when something went wrong, and you would be yelling out URLs in order to find the right server to either shut down or replace.

What's the issue from your perspective now on solving this issue when things go wrong? Is this something that we can rely on people to solve, or do we need to move more toward a systems-based approach?

Abbattista: First, I'd like to wind back to an earlier question you asked. When we went to SOA, when we put in our enterprise service bus (ESB), and when we chose TIBCO for our bus, a lot of people thought of SOA as, "Well, I am just going to construct some WSDL and call some SOAP or HTTP, and that’s SOA."

But the first thing we did is talk through the governance part of why we want to "get on the toll road" and "pay a toll for the bus," and really that became the consistency in measurement and governance, and lets us operate the things once we have created them.

So the first thing we had to do was get through the whole idea of that. It was worth it, and it wasn’t a matter of whether the bus would work or not. For the first year-and-a-half that we put our ESB in and we started to market services on it, we would hear the words, "TIBCO is down."

It didn’t matter whether the back-end service was down. It didn’t matter whether the mainframe was broken; they would say "TIBCO is down." We finally started to get to the root cause, saying, "No, so-and-so service is down." The basis for us having good measurement of performance is helping to "pay the toll" of getting on the bus and actually having measurement points that are well understood.

I also don’t agree exactly that governance is a business-unit thing. Governance for us is also a lot about the SLAs around the services -- having good expectations up front about how they will behave and how they will be called. That way, we have a benchmark or baseline to compare ourselves against. All of a sudden, if we get 100,000 calls a day to something that was designed for, or expected to handle, 1,000, we at least understand what to be looking for.
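Abbattista's 1,000-versus-100,000 example boils down to a simple comparison of observed load against the design-time expectation. A minimal sketch of that idea -- the service name, expected volume, and alert multiplier are all invented for illustration, and this is not Allstate's actual tooling:

```python
# Hypothetical SLA baseline check; "claims-lookup", the expected volume,
# and the alert multiplier are invented for this sketch.
EXPECTED_DAILY_CALLS = {"claims-lookup": 1_000}  # design-time expectation
ALERT_MULTIPLIER = 10                            # flag at 10x designed load

def check_baseline(service, observed_calls):
    """Compare observed daily call volume against the SLA design baseline."""
    expected = EXPECTED_DAILY_CALLS[service]
    if observed_calls > expected * ALERT_MULTIPLIER:
        return f"{service}: {observed_calls} calls/day against {expected} designed -- investigate"
    return f"{service}: within expected range"

print(check_baseline("claims-lookup", 100_000))
```

The point of the baseline is exactly what Abbattista describes: without a design-time expectation recorded somewhere, there is nothing to compare the 100,000 calls against.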

Gardner: Let's provide a level-set for our listeners. You are representing Allstate, which is a very large organization, with 17 million customers and $156 billion in assets. Give us a sense of the scale we are talking about in terms of your IT organization.

Abbattista: Our claims organization, for example, has an IT shop of about 400 people who are employees, and that's not counting offshore or other people who support them. Each of our business units is a substantial IT shop in and of itself, each with 500 to 1,000 people.

Then, what we choose to federate becomes an issue, because they need to talk to each other. They need to talk to themselves. They need to talk to the outside world. So what my area layers in is an infrastructure of components for doing those tasks.

The massive part is how you measure and monitor all of that, so you get end-to-end composite services that we really can monitor and supply a good customer experience from. The scale is amazing. We have about 5,000 servers -- UNIX, Windows, mainframes, AS/400s -- we have them all at this point.

Gardner: How many services do you have that have to "pay their toll" on the service bus, so to speak?

Abbattista: About 750.

Gardner: Wow! That’s pretty good.

Abbattista: We actually front our document management services and collapse all that into Oracle, but we fronted that with TIBCO. We did that so that we would have the measurement from day one, and it’s worked amazingly well.

People argued it would be just as easy to shove the document into the database and make an HTTP-SOAP call, but this governed ESB approach has paid off a thousand times over, because we now predictively know when something is going awry.

Gardner: All right, now let's go to Rourke. We understand that enterprises are hesitant about going toward SOA on a holistic basis if they haven’t got performance backstops in place. We are a little bit wary of finger-pointing, because there is such a complex stew of components and services that it's very difficult after the fact to point and say who is responsible.

And we're dealing with organizations like Allstate, which have massive size and scale, with 750 services. What do people need to be considering as we move into yet more complexity with virtualization, cloud computing, and utility grids? Give us a bit of a level-set on what's important to consider when moving toward a solution before the fact.

McNamara: SOA, virtualization, and governance -- all of these technologies have pluses and minuses. And, on the whole, when you finish computing out the equation, you are definitely on the plus side.

But, you need to make sure that, as you move from the older ways of doing things -- from the siloed applications, the siloed business unit way of doing things -- to the SOA, services-based way of doing things, you don’t ignore the new complexities you are introducing.

Don’t ignore the new problems that you are introducing. Have a strategy in place to mitigate those issues. Make sure you address that, so that you really do get the advantage, the benefits of SOA.

What I mean by that is with SOA you are reusing services. You are making services available, so that that functionality, that code, doesn’t need to be rewritten time and time again. In doing so you reduce the amount of work, you reduce the cost of building new applications, of building new functionality for your business organization.

You increase agility, because you have reduced the amount of time it takes to build new functionality for your business organization. But, in so doing, you have taken what was one large application, or three large applications, and broken them down into dozens of separate smaller units that all need to intercommunicate, play nice with each other, and talk the same language.

Even once you have that in production, you now have a greater possibility for finger-pointing, because, if the business functionality goes down, you can’t simply say that the application we just put in is down.

The big question now is what part of that application is down? Whose service is it? Your service, or someone else’s service? Is it the actual servers that support that? Is it the infrastructure that supports that? If you are using virtualization technology, is it the hardware that’s down, or is it the virtualization layer? Is it the software that runs on top of that?

You have this added complexity, and you need to make sure that doesn’t prevent you from seeing the real benefit of doing SOA.

Gardner: So after a failure, in trying to do forensics and root-cause analysis and putting more agents and agentless systems in place, if it's all telling you what's wrong only after the fact, it’s probably too late.

McNamara: Absolutely.

Gardner: How do we get to this vision of proactive, anticipatory systems awareness via service performance management? Let me first take this to Sandy. How important is it for us to get to this sense that something isn't quite right, in advance of it failing?

Rogers: Obviously, there are different use cases and different companies that are really interested in that dynamic, autonomic type of environment, where you can adjust to the demands of the environment, but we are also becoming much more Web-based.

What we are seeing is that, as services are exposed externally to customers, partners, and other systems, it affects the ability to fail-over, to have redundant services deployed out, to be able to track the trends, and be able to plan, going forward, what needs to be supported in the infrastructure, and to even go back to issues of funding. How are you going to prove what's being used by whom to understand what's happening?

So, first, yes, it is visibility. But, from there, it has to be about receiving the information as it is happening, and being able to adjust the behavior of the services and of the infrastructure that is supporting them. That starts to become very important. Right now, different services, and the infrastructure supporting them, carry different levels of criticality.

But, the way we want to move toward being able to deploy anywhere and leverage virtualization technologies is to break away from the static configuration of the hardware, the databases, and where all this is being stored now, and to have more dynamic resourcing. To leverage services that are deployed external to an organization, you need to have more real-time communication.

Gardner: So, the proposition remains, how do you do that? It’s clear that you want to get out in front of these problems, but with so many interdependencies, the large scale in number of services, different environments, probably inside and outside the organization, it raises questions. How do we move up in abstraction toward understanding the context of an entire business process, in order to go back and look for the signals that will tell us when something is approaching a breakdown, or when we need to provision more hardware and software resources?

Let me take this to you, Anthony. Where do you think that abstraction needs to be in order to forecast appropriately issues of SOA integrity?

Abbattista: I'll go back to the point of having some expectations or benchmarks of how the service should run when it’s designed and deployed in the first place. Then, you can understand whether your baseline is correct and, over time, you can look for behavior that deviates from it. But I do think you need some level of end-to-end view of the process and of who the customer is on the end.

Ultimately, where these things show up en masse is at the end-points, and typically that’s in the consumer space, as we are frustrating an employee or someone on a website with a bad client experience. Those are unforgivable.

So, starting with the customer at the end-point of that business process and looking at some of those interactions, is part and parcel of deploying the service in the first place. If you don’t do that, you will be chasing your tail for the rest of your life in operations, until you go back and do that mapping. So I think it pays to do it upfront.

Gardner: You mentioned in your presentation that the "Walls must come down" between IT operations and development-deployment-requirements-test functions. It sounds like you're also saying it needs to go from end-to-end, beyond just that wall, but also across the entire event-processing landscape.

Abbattista: In that respect, I view our function in running the applications and supplying the applications as a utility. It's our job to point back to the groups that deploy the stuff. If I let them deploy junk, I am as complicit in that junk being delivered as anybody else. That’s a responsibility we take seriously. If you're going to put it in the shop and expect us to run it, I won't take junk.

Gardner: Right. So there is the adage of, "Garbage in, garbage out." Now, if garbage appears anywhere in the context of a complex process, it's garbage out. That’s even more difficult.

Let's go to Joe McKendrick. Tell us about the concept of complex event processing (CEP). How do you get any handle on a process? Do you look for the description of the process from a modeling perspective, through what's been done on the ESB, all of the above?

McKendrick: Rourke and I were talking about that a little bit earlier. You need to be able to predict what's going to happen, not only in the business, but in the systems. TIBCO is making some progress in this area in terms of being able to predict when a system may go down or when there will be spikes in demand. Predictive analytics, which is a subset of business intelligence (BI), is now moving into the systems management space.

Gardner: We're actually moving above the systems management space by an abstraction level or two. Let's go back to Rourke. You had a couple of product enhancement announcements today here at the TIBCO conference. You are getting out in front of service performance management, and your interest is to accomplish some of the things we have been describing, provide what the market is demanding for SOA in order to be trusted.

Tell us about CEP and why that is an important part of this predictive solution.

McNamara: One of our customers said it best last night over dinner, when I introduced the concept of the product I am going to mention in just a second. They saw immediately what problem it solved for them.

They said that their biggest fear is that their SOA initiative will be a victim of its own success. A service will be reused so many times so rapidly that the hardware it's deployed on, the manner in which it was deployed, won't be able to handle the load. That service, which is now used in a dozen different business applications, or exposed in a dozen different business applications, will go down or will degrade in its performance level.

That could make SOA a victim of its own success. They will have successfully sold the service, had it reused over and over and over and over again. But, then, because of that reuse, because they were successful in achieving the SOA dream, they now are going to suffer. All that business users will see from that is that "SOA is bad," it makes my applications more fragile, it makes my applications slow down because so many people are using the same stuff.

McNamara: The key is that we can’t simply wait for the problem to develop or to happen, because it will happen very quickly. We won't have a week’s warning, a month’s warning, or even necessarily a few hours’ warning. And we won't understand, when we deploy that service, all the places or all the manners in which it will be used. So, we need to be able to predict these problems before they occur and do something to prevent them from occurring.

TIBCO is taking our CEP technology, the business-events technology that we have, and applying it to our own software and infrastructure, the same way our customers apply it to their business problems.

We are using business events to monitor what's going on with service load and performance -- what the load profiles look like in a given organization -- allowing it to understand some of the programs and marketing efforts that are going on within that company. Then, when it sees that a service's load is approaching a dangerous level -- when it sees, based on the events that are occurring, that the service will become overloaded and violate its SLAs -- it’s able to tell other parts of the infrastructure to take action to prevent that problem.
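The predictive rule McNamara describes -- acting while load is still approaching the SLA ceiling, rather than after it is breached -- can be sketched roughly as follows. The window size, thresholds, and warning text are assumptions for illustration only; actual TIBCO CEP rules are not written in Python.

```python
from collections import deque

# Invented thresholds for one hypothetical service.
SLA_CALLS_PER_MIN = 600   # assumed SLA ceiling
WARN_FRACTION = 0.8       # act at 80% of the ceiling, before any violation

class LoadWatcher:
    """Toy CEP-style rule: track recent call events and warn predictively."""

    def __init__(self):
        self.events = deque()  # timestamps of recent service calls

    def on_call(self, now):
        """Record one call event; return a warning if load nears the SLA."""
        self.events.append(now)
        # Keep only the sliding one-minute window of events.
        while self.events and now - self.events[0] > 60:
            self.events.popleft()
        rate = len(self.events)
        if rate >= SLA_CALLS_PER_MIN * WARN_FRACTION:
            return f"load {rate}/min nearing SLA of {SLA_CALLS_PER_MIN}/min -- scale out"
        return None
```

The design point is the warning margin: by firing at a fraction of the ceiling, the infrastructure gets time to provision capacity or throttle callers before the SLA is actually violated.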

Gardner: Let me see if I understand this. This sounds like a schematic about a business process and, by reverse engineering from that process down to the constituent ingredients to support it, you can predict where the loads will be building or will become erratic. Therefore, you can also detect what's going on within that system, put the two together, and come up with a heads-up?

McNamara: That’s exactly right. You need to understand the interdependencies between your services and the load characteristics of the different component parts in that dependency graph. Then, based on that, you need to understand what sorts of events in your business or your IT infrastructure will cause performance problems or overload conditions.
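One way to picture that dependency reasoning, as a hypothetical sketch: propagate a load spike on an entry-point service down an invented dependency graph, multiplying by each edge's fan-out, and compare the implied load on each downstream component against an assumed capacity. All service names, fan-out factors, and capacities here are made up.

```python
# Hypothetical dependency graph: caller -> [(callee, calls per caller call)].
DEPENDENCIES = {
    "quote-page":   [("rating-svc", 2), ("customer-svc", 1)],
    "rating-svc":   [("mainframe-gw", 1)],
    "customer-svc": [],
    "mainframe-gw": [],
}
# Assumed calls-per-minute capacity of each backing service.
CAPACITY = {"rating-svc": 500, "customer-svc": 400, "mainframe-gw": 300}

def propagate(entry, load, totals=None):
    """Accumulate the per-service load implied by `load` calls to `entry`."""
    if totals is None:
        totals = {}
    totals[entry] = totals.get(entry, 0) + load
    for callee, fanout in DEPENDENCIES.get(entry, []):
        propagate(callee, load * fanout, totals)
    return totals

def overloaded(entry, load):
    """Return the downstream services pushed past their assumed capacity."""
    totals = propagate(entry, load)
    return [s for s, t in totals.items() if s in CAPACITY and t > CAPACITY[s]]
```

With these invented numbers, 200 calls to quote-page implies 400 calls each to rating-svc and mainframe-gw; only mainframe-gw (assumed capacity 300) is flagged, which is exactly the kind of derived warning McNamara is describing.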

Gardner: Let's go back to Sandy. You mentioned earlier about how to automate toward these goals. It sounds like it’s going to be a bit of journey to get to full automation. On the other hand, having 40 people on a conference call to try to manually bear-wrestle these problems down doesn’t work either. How do we find a balance between too much automation, automation that can’t be attained, and purely manual, after-the-fact approaches?

Rogers: Everyone has to walk before they run with any type of new technology implementation. But, we are finding that most organizations are keying in on the services that are most important and making sure they are instrumented appropriately, with management technologies in place to define what those thresholds are.

Being able to correlate those thresholds to real business needs and business value -- that’s one of the interesting things about working at a service level. We can start to identify which services are most relevant and where they are going to have the most impact.

We can make sure that the information contained either in the payload or from the service itself is provided, so you have that insight. I think organizations are starting to realize that, in order to prove the value of the services, and the value of having this level of coordination around management, they need to be able to make that association.

From an eventing point of view, what’s interesting is that there is a lot of parallel processing going on in this environment. Rather than wait until something happens in some linear, straight-through process, we're seeing the ability to watch and correlate some of those events vis-à-vis the thresholds, understand which thresholds are the most important, and start automating how to define the behavior -- how the system is going to react to those conditions -- and do it from a cost-benefit perspective going forward.

Gardner: Okay, so companies can take this approach, use a moderate pace, learn as they go, and use complex event processing to offer insights into the context of what’s going on. But, if human nature is any indication, people usually react to whatever the rules are about their job, and for IT this is going to be the view from the SLAs.

It strikes me that a lot of these organizations are going to reverse-engineer from the SLA, and that the rules and the models in the SLA become extremely important. Am I going out on a limb here, Anthony, or do you think it will pan out that the SLAs will be the rules that service performance management then needs to line up around?

Abbattista: That’s right. Again, it's back to what you expect, and whether you are living up to it. You talk about failure not coming from the SOA itself, but we could have a case where a service got deployed, people learned about it, and, before you know it, we are taking 100,000 hits a day on a service that nobody ever gave any design thought to.

I would have to reach in there and get some agent information once in a while. And all of a sudden, the supplier of the service, who did us a favor, put this on the bus, and then did a point-to-point interface, calls up and says, "Help!"

Someone might publish this thing with no modeling, because they thought it was some low-volume thing that wasn't important. All of a sudden, it becomes important, because everybody found it. So, as we get to composite services, SOA performance is about the service expectation.

Gardner: And the governance?

Abbattista: The governance, and do you let them do it? Do you have governors? Do you have a cost model that burdens the caller, rather than the supplier? These are real questions we'll get into, and they are why I was talking about breaking down the walls. If it's truly a valuable service, then it's my job to figure out and pay for upgrades -- or to help you redesign it.

We take very much an advocate approach to, "Okay, if you come on the bus, we will help you with being successful." And the SLA is the baseline for that. But it also sets up that, "Hey, did you do a good enough job? And what if you are wildly successful?"

Gardner: Right. Let's throw this back to TIBCO. There's clearly a need in the market for a full lifecycle approach, feedback loops, many moving parts. What is it that you can do from a product perspective that helps get to that level of automation? That, in a sense, fills in the cracks about who and what performs some of these necessary communications between the operations side and those associated with the ongoing requirements?

McNamara: Taking a step back, TIBCO offers a single user interface from the business analyst all the way through to the operational administrators who are running our applications. The idea is that, when you sit down to build out your services, when you sit down to build out your business processes, you use one tool to define what the business processes look like, what the touch points are between folks. Then, that diagram gets handed off from the business analyst to the implementer, who sits down and actually builds the services or builds the business process management (BPM) process that meets those requirements.

There is a direct link between the two. There is a round-tripping built into that tool, largely because it's a single data model and a single user interface with different views for people with different roles in your enterprise. That’s one major thing we do to help facilitate that communication, and that’s part of what we call the TIBCO ONE initiative. The product in question is the TIBCO Business Studio product, which forms that single user interface.

Gardner: And you’ve got hooks in a lot of the other parts of the SOA infrastructure for service enablement and delivery. How do you pull these parts together in a concerted effort?

McNamara: The other side of things is that, even once you’ve built things out and deployed them to production, you need to make sure you can keep track of exactly what's going on, as a number of the folks on this panel have said. Ideally, you want to identify early on, as Sandy and Anthony said, which services are important to your enterprise and which services will have heavy load.

Unfortunately, you can’t always do that. Sometimes a little service, as Anthony said, where you think it's just helpful turns out to be a service used in 60 percent of the applications you are deploying. All of a sudden, you’ve got an issue.

You need to understand what the usage characteristics are on your services, not just the designed usage characteristics on your services. We’ve embedded both policy and performance management capabilities in our underlying service infrastructure. All the TIBCO ActiveMatrix products, all of our SOA enablement products, will transparently monitor for performance and usage of the services deployed in that environment.

So, even if you build that little service and you don’t think it's important, and you don’t want to go to the extra trouble to build some governance into it, it's there. It's already embedded in that infrastructure. When you need it, you can just turn it on and make use of it, and you will automatically have some information about how people are using it, with a fairly nice visual dashboard.

The key here is not just the ability to see some numbers in a report, because people miss that. You can have a report on, as Anthony said, more than 750 services running. If you just go through the performance numbers on each of those services on a regular basis, things get lost in the numbers. You need a very good visualization tool, so you can see in "living color" what's going on with those services and how that relates to the SLAs and the rules you’ve set -- the expectations you’ve set for those services.

Gardner: All right, let's go back to Allstate. Anthony, you’ve heard the announcements today, you’ve understood this vision, and you understand the need very well. Do you think we are getting close to realizing a more automated approach to service performance management in an SOA environment?

Abbattista: Yes, we are getting closer and making rapid strides. We need to be careful though. We are being careful to manage the service deployment, the service bus grid, and the parts about how to operate it. What makes me a little nervous or restless is the idea that we start taking all that back into the system parameters and the Java environments and Oracle databases, and that sort of thing. I would hate to see us not solve this first.

I really don’t think we’re at a stage where I want to automatically be adjusting heap sizes in Java virtual machines, or Oracle database parameters, which could be a next logical extension. I did see a little twinkle in people's eyes today, when they looked at products like the BMC Suite and Matrix. I don’t know that I want to have system programmer types around, trying to debug the debugging environment. I think it could become very complicated, very quickly.

Gardner: So we need to keep this at that higher abstraction in order to appreciate the whole and not get down into the weeds?

Abbattista: That’s my belief. I would say that if this service is not performing, then maybe we get the three people on the phone, the database administrator, the platform person, and the network person -- and we take a look at it. But I don’t think we should drill too far into that, until we solve the other layer.

Gardner: I suppose the good news and bad news about all of this is that the metrics for success or failure will be quite evident. You are not going to be able to cover this up across a service-support environment and the business processes that those contribute to, if it doesn’t work. Any failures are going to be readily apparent, not just to a systems administrator, but also to the entire organization that’s affected.

Joe, let's go to you on this whole notion of metrics of success. We have seen some caution, but we also see great promise around SOA. If we got into an economic environment where the pressure becomes higher for better productivity -- of doing more with less -- it's likely we are going to see more companies look to virtualization, outsourced services, software as a service. When do you think the switch on wider SOA use will get thrown, and to what degree does service performance management contribute to that?

McKendrick: Wow, that’s the $64-billion question. It's interesting, I was speaking with the enterprise architect for a major distribution company a little bit earlier. She pointed out to me that, when they started out their service enablement years ago, even before Web services came on the scene and evolved over the past 10 years, they built their infrastructure to be service-enabled from the get-go.

There was no effort to identify what could be service-enabled, build a service around it, and try to get acceptance of it. And I asked her, "Well, what do you consider to be success in terms of adoption of the SOA, and in terms of reuse -- and do you even measure reuse success?"

Basically, to that company, if a service gets reused, fine. If it doesn’t get reused at all, that’s fine too. It doesn’t matter. The reason I'm bringing that up is that reuse is often held up as the ultimate metric for SOA success -- as the most tangible metric, I should say. But I think the best approach is to design applications, or pieces of applications, from the start to be service-enabled and to employ the latest standards.

Gardner: Okay, so the risks are high, the rewards are high. It sounds like we are getting closer to a less manual, more automated approach, something that has visibility and hooks up and down, deep and wide. Let's wrap up with some last thoughts on this subject.

Sandy, if you are a CIO, a decision maker in the enterprise, and you are listening to this, what do you think that you want to hear that’s going to make you confident, given that you’ve already made a lot of investments in service enablement? You have to recognize that this is the way for the future, but what are you going to want to put in place in order to start protecting yourself when it comes to your performance management?

Rogers: What we are seeing with IT executives today is a real interest in leveraging what you already have -- being able to deploy quickly, not having to worry about all of the issues, and having people on board who understand all of the technical dynamics of how everything needs to be implemented from an infrastructure point of view.

So, they need to be able to support fast time to market without just throwing something out there. When you are deploying, you have to step back and make sure all of the resources you need are lined up to make that happen. You want an automated way to handle deployment, governance, and all of these different issues.

There is also the self-service nature that’s starting to happen -- the ability to create services and allow anyone in the enterprise to be able to get at the information they need as quickly as possible, not have to have a whole army of developers out there. That means you need to feel comfortable that you are creating an infrastructure that could be consumed by multiple parties.

Setting up that infrastructure is really going to save cost. It's going to save time to market, and you need that level of assurance, so that you don’t need to babysit every single service. There is also the issue of being able to outsource to different parties. You want to be able to leverage that cost-effectively.

You need to set up an infrastructure, with all the processes and rules that everyone needs to follow. By doing that, you can leverage whatever resources you want to develop and create what's necessary, without worrying about everyone falling in line, building their own infrastructure, and putting that reference architecture together at each different resource.

It’s really that whole concept of creating a centralized type of platform and a framework to consume all these services. It’s going to be very, very important going forward. Everyone is talking about the issues of the economy, and it’s really the trade-offs of what do you need to do in order to move forward and think about things in more of a total cost of ownership (TCO) manner versus that of direct return on investment (ROI) -- that immediate cost-per-service type of measurement.

Gardner: It sounds like we are describing what could be thought of as insurance. You’ve already gone on the journey of SOA. It’s like going on a plane ride. Are you going to spend the extra few dollars and get insurance? And wouldn't you want to do that before you get into the plane, rather than afterward? Is that how you look at this? Is service performance management insurance for SOA? I am throwing that out to Anthony at Allstate.

Abbattista: It’s interesting to think of it as insurance. I think it’s a necessary operational device, for lack of better words.

Gardner: Service performance management -- not an option?

Abbattista: I don’t think it’s an option, because how much it hurts when you fall down has been proven over and over again. As the guy who has to run an SOA that an insurance business now runs on -- it’s not an option not to do it.

Gardner: Last words from you, Rourke? Do you view this as an insurance policy? I guess you have the choice of different insurers, right?

McNamara: I do. I actually do look at service performance management as insurance -- but along the lines of medical insurance. Anthony said people fall down and people get hurt. You want to have medical insurance. It shouldn't be something you consider optional.

It’s something that you need to have, and something that people should look at from the beginning when they go on this SOA journey. But it is insurance, Dana. That’s exactly what it does. It prevents you from running into problems. You could theoretically go down this SOA path, build out your services, deploy them, and just get lucky -- nothing will ever happen. But how many of us go through life without ever needing to see a doctor?

Gardner: Okay, now we are going to take some questions from the audience.

Tony Baer: This is Tony Baer with OnStrategies. I want to seize on something that Anthony Abbattista from Allstate had mentioned before, which is that you hope that service performance management doesn’t degrade into getting down to "Java heap sizes." I surely don’t blame you on that one, but what I am wondering is, at what point does this become an IT service management issue?

Abbattista: Because we have gathered that responsibility together, it all falls under one roof in our particular organization. The exception would be external services: one thing we are doing is measuring some of our providers outside the organization. But I guess it’s sort of the same phone call. You are calling yourself, or you are calling the person who is responsible and holding him accountable. So, I don’t know that it changes much.

McNamara: I would like to add something to that. With something like a Tivoli or a BMC solution -- a business service management technology -- your operational administrators are monitoring your infrastructure.

They are monitoring the application at the application layer, and, based on those things, they understand when something is wrong. The problem is that that’s the wrong level of granularity to automatically fix problems. And it’s the wrong level of granularity to know where to point the finger -- to know whom to call to resolve the problem.

It’s right, if what's wrong is a piece of your infrastructure or an entire application. But if it’s a service that’s causing the problem, you need to understand which service -- and those products and that sort of technology won’t do that for you. So, the level of granularity required is at the service level. That’s really where you need to look.

Rogers: What I find is that it’s inevitable that we are going to go down that path, but standards between the systems that do IT management traditionally and this level of detail really haven’t been fleshed out. Most organizations are looking for a single, unified type of dashboard on some of the key indicators. They might want to have that for the operations team that has traditionally run IT service management.

A lot of the initiatives around ITIL Version 3.0 are starting to get some of those teams thinking in terms of how to associate the business requirements for how services are being supported by the infrastructure, and how they are supported by the utility of the team itself. But, we're a long way away from having everything all lined up, and then having it automatically amend itself. People are very nervous about relinquishing control to an automated system.

So, it is going to be step-by-step, and the first step is getting that familiarity, getting those integrations to start happening, and then starting to let loose. What's interesting is some of the areas of virtualization technologies, where you might have some level of management that’s abstracted from the physical infrastructure, and then you have this level of abstracted management of services and how they come together. It hasn't really been defined in the industry, but down the road -- two, three, four, five years from now -- I think you will see a lot more around that.

McKendrick: Let me add that we're still in the very early stages of SOA. In fact, a lot of companies out there think they have SOA, when they actually have just a bunch of Web services, a JBoss architecture, and point-to-point types of interfaces and implementations. A lot of companies are just starting to get their arms around exactly what SOA is and what it isn't.

Gardner: Very good. We have been discussing the issues around service performance management for SOA environments. We are talking with a panel of industry analysts and practitioners. I want to thank our panelists, Joe McKendrick, Sandy Rogers, Anthony Abbattista, and Rourke McNamara. Thanks.

This is Dana Gardner, principal analyst at Interarbor Solutions, and you have been listening to a sponsored BriefingsDirect podcast. Thanks and come back next time.

Transcript of BriefingsDirect podcast on service performance management recorded live at TUCON 2008 in San Francisco on April 30, 2008. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.

Today, a sponsored podcast discussion about Apache CXF, an open-source Web services framework that recently emerged from incubation into a full project. We are going to be discussing where CXF is, what the next steps are, how it is being used, what the market is accepting from open-source Web services and service-oriented architecture (SOA) infrastructure, and then, lastly, a road map of where CXF might be headed next.

Joining us to help us understand more about CXF, is Dan Kulp, a principal engineer who has been deeply involved with CXF for a number of years. He works at IONA Technologies. Welcome back to the show, Dan.

Gardner: Let's start with you, Benson. Tell us a little bit about Basis Technology. I want to hear more about your company, because I understand you are a CXF user.

Margulies: Basis is about a 50-person company in what we call linguistic technologies. We build software components that do things like make high-quality, full-text search possible in languages such as Arabic and Chinese -- or do things like tag names in text, which is part of information retrieval.

We have customers in the commercial and government spaces and we wound up getting interested in CXF for two different reasons. One is that some of our customers have been asking us over time to provide some of our components for integration into a SOA, rather than through a direct application programming interface (API), or some sort of chewing gum and baling wire approach. So, we were looking for a friendly framework for this purpose, and CXF proved to be such.

The other reason is that, for our own internal purposes, we had developed a code generator that could read a Web-service description (WSDL) file and produce a client for it in JavaScript that could be loaded into a browser and tied back to a Web service. Having built it, we suddenly felt that we would like some help maintaining it. We went looking for an open-source framework to which we could contribute it, and CXF proved to be a friendly place for that too.

Over a period of time, to make a long story short, I wound up as a CXF committer. So, Basis is now both a corporate user of CXF as a delivery vehicle for our product, and also I am a committer focused on this JavaScript stuff.

Gardner: Great. You used the word "friendly" a couple of times. Let's go to Raven Zachary. Raven, why do people who go to open-source code and projects view it as friendly? What's this "friendly" business?

Zachary: Well, there are different motivations for participating in an open-source community. Generally, when you look at why businesses participate, they have a common problem among a peer set. It could be an underlying technology that they don't consider strategic. There are benefits and strength in numbers here, where companies pool together resources to work together on a common problem.

I think that individual developers see it as a chance to do creative problem-solving in off hours, being involved in a team project. Maybe they want to build up their career opportunities or expertise in another area.

In the case of CXF, it certainly has been driven heavily by IONA and its acquisition of LogicBlaze, but you have had other individuals and companies involved -- Red Hat, BEA, folks from Amazon and IBM, and Benson from Basis, who is here talking about his participation. The value of this opportunity is many different commercial entities coming together to solve a common set of problems.

Gardner: Let's go to Dan Kulp. Dan, tell us a little bit about CXF and its current iteration. You emerged from incubation not that long ago. Why don't you give our listeners, for those who are not familiar with it, a little bit of the lineage, the history of how CXF came together, and a little bit about the current state of affairs in terms of its Apache condition or position?

Kulp: CXF was basically a merger of the Celtix project that we had at ObjectWeb, which was IONA-sponsored -- we had a lot of IONA engineers producing a framework there -- and the XFire project at Codehaus. Both of these projects were thinking about doing a 2.0 version, and there was a lot of overlap between the two. So, there was a decision between the two communities to pool resources and produce a better 2.0 version of both XFire and Celtix.

As part of that whole process of merging the communities, we decided to take it to Apache and work with the Apache communities as a well-respected open-source community.

So that's the long-term history of CXF. We spent about 20 months in the incubator at Apache. The incubator is where all the new projects come in, and there are a couple of main points to it. One is the legal vetting of the code. Apache has very strong requirements about making sure all of the code is properly licensed and compatible with the Apache license, and that the people contributing it have met all of the legal requirements. That's to protect the users of the Apache projects, which, from a company and user standpoint, is very important.

A lot of other projects don't do that type of legal vetting, so there are always iffy statements around them. That was one important thing. Another very important part of the Apache incubator is building the community. One of the things they like to make sure of is that any project that comes out of the incubator has a very diverse community.

There are people representing a wide range of companies with a wide range of requirements, and the idea is to make sure that the community is stable for the long term. If one company should suddenly be acquired by another, or go bankrupt and out of business, the community will still be there in a healthy state. This is so that you can know that the Apache project is a long-term thing, not a short-term one.

Gardner: Could I pause there, and could you tell us who are the major contributors involved with CXF at this point?

Kulp: IONA is still heavily involved, as is Basis Technology, a couple of IBMers, as was mentioned earlier, and a couple of Red Hat people. There is one person who is now working for eBay who is contributing things, and there are a few people who I don't even know what company they work for. And that's a good thing. I don't really need to know. They have a lot of very good ideas, they are doing a lot of great work, and that's what's important about the community. It's not really that important, as long as the people are there participating.

Gardner: Okay. Things move quickly in this business. I wonder if any of our panelists recognize any shifts in the marketplace that have changed what may have been the optimum requirement set for a fully open-source Web-services framework from, say, two or three years ago, when these projects came together. What has shifted in the market? Does anyone have some thoughts on that?

Margulies: Well, Dan and Glen, who is another one of our contributors, and I were having lunch today, and we were discussing the shift in direction from the old JAX-RPC framework to JAX-WS/JAXB, the current generation of SOA standards. That has very much become the driving factor behind the kits.

CXF gets a lot of attention because it is a full open-source framework that is completely committed to those standards and gives relatively easy-to-use support for them. As in many other areas, it focuses on what people in the outside world seem to want to use the kit for, as opposed to some theoretical idea of ours about what they ought to want to use it for.

Gardner: Thank you, Benson. Anyone else?

Kulp: Yes, one of the big things that comes to mind when this question comes up is the whole "code first" mentality. Several years ago, in order to do Web services, you had to know a lot about WSDL and extensible markup language (XML) schema. You had to know a lot of XMLisms. When you started talking about interop with other Web-services stacks, it was really a big deal, because these toolkits exposed all of this raw stuff to you.

Apache CXF takes a fairly different approach, making the code-first aspect a primary thing that you can think about. So, a lot of more junior-level developers can pick up and start working with Web services very quickly and very easily, without having to learn a lot of these more technical details.
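The code-first style Kulp describes can be sketched in a few lines of Java. This is a minimal illustration only, assuming the JAX-WS annotation APIs (javax.jws, javax.xml.ws) are on the classpath, as they are with CXF or with a Java 6-10 JDK; the service name and endpoint URL here are made up for the example, not taken from CXF:

```java
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// A plain Java class. The @WebService annotation is the only Web-service
// artifact the developer writes -- the stack derives the WSDL and schema.
@WebService
public class OrderStatusService {

    public String statusFor(String orderId) {
        // Ordinary Java logic; no XML or WSDL knowledge required here.
        return "Order " + orderId + ": shipped";
    }

    public static void main(String[] args) {
        // Publishing the POJO exposes it as a SOAP service; the generated
        // WSDL becomes available at the endpoint URL with "?wsdl" appended.
        Endpoint.publish("http://localhost:9000/orderStatus",
                         new OrderStatusService());
    }
}
```

Because the stack generates the contract from the annotated class, the WSDL, schema, and port-type details come for free -- which is Kulp's point about junior developers being able to skip most of the XML plumbing.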

Gardner: Now, SOA is a concept, a methodology, and an approach to computing, but there are a number of different infrastructure components that come together in various flexible ways, depending on the end user's concepts and direction. Tell us a little bit about how CXF fits in, Dan, among other SOA infrastructure projects, like ServiceMix, Camel, and ActiveMQ. Give us the larger SOA view and the role CXF plays in it. Then, I am going to ask you how that relates to IONA and FUSE.

Kulp: Obviously, nowadays, if you are doing any type of SOA work, you really need some sort of Web-services stack. There are applications written for ServiceMix and JBI that don't do any type of SOAP calls or anything like that, but those are becoming fewer and farther between. Part of what Web services bring is the ability to go outside of your little container and talk to other services that are available -- within your company, or maybe with a business partner, or something like that.

A lot of these projects, like Camel and ServiceMix, require some sort of Web-services stack, and they've basically come to CXF as a very easy-to-use and very embeddable service stack that they are using to meet their Web-services needs.

Gardner: Alright, so it fits into other Apache projects and code infrastructure bases, but as you say "plug-in-able," this probably makes it quite relevant and interesting for a lot of other users where Web-services stack is required. Can you name a couple of likely scenarios for that?

Kulp: It's actually kind of fascinating, and one of the neatest things about working in an open-source project is seeing where it pops up. Obviously, with open-source people, anybody can just kind of grab it and start using it without really telling you, "Hey, I'm using this," until suddenly they come to you one day saying, "Hey, isn't this neat?"

One example of that is the Groovy Web services module. Groovy is another dynamic language, built on Java. I'm not a big Groovy user, but they had a requirement to be able to use Groovy to talk to some Web services, and they immediately started working with CXF.

They liked what they saw. They hit a few bugs, which was expected, but they contributed fixes back to the CXF community. I kept getting bug reports from people and wondered what they were doing. It turns out that Groovy's Web-services stack is now based on CXF. That type of thing is very fascinating from my standpoint -- just to see that type of stuff develop.

Margulies: I should point out that there has been a tendency in some of the older Web-service platforms to make the platform a rather heavy, monolithic item. There's a presumption that what you do for a living with a Web service is stand up a service on the Web in one place. One of CXF's advantages shows when what you want to do is deliver to some third party a stack that they put up, containing your stuff, that interacts with all of their existing stuff in a nice, lightweight fashion. CXF is unintrusive in that regard.

Gardner: And, just as a level-set reality check, over to Raven. Tell me a little bit about how this mix-and-match thing is working among and between the third parties, but also among and between commercial and open source, the infrastructure components.

Zachary: The whole Apache model is mix and match. It's not only the licensing scheme -- the Apache license is a little easier for commercial vendors to digest, modify, and add to, compared to the GPL -- but also the inherent nature of the underlying infrastructure technologies.

When you deploy an application, especially using open source, it tends to involve several dozen distinct components. This is especially true in Java apps, where you have a lot of components or frameworks bundled into an application. So, you would certainly see CXF being deployed alongside other technologies to make that work. Things like ServiceMix or Camel, as you mentioned, ActiveMQ, Tomcat, and certainly the Apache Web Server are the instruments through which these services are exposed.

Gardner: Now, let's juxtapose this to the FUSE set. This is a commercially supported, certified, and tested SOA and Web-services component set. The FUSE services framework is derived from CXF. Dan, tell us a little bit about what is going on with FUSE and how it has benefited from CXF moving from incubation into a full Apache project.

Kulp: As you mentioned, the FUSE services framework is basically a re-branded version of Apache CXF. A lot of big customers, like banks, when they deploy an application, want to have some level of support agreement with somebody, so that if a bug is found or a problem crops up, they can get somebody on the phone and get it fixed relatively quickly.

That's what the FUSE product line is basically all about. It's all open source, and anybody can download and use the stuff, but you may not get the same level of support from the Apache community as you do with the FUSE product.

The Apache communities are pretty much all volunteers. Everybody is working on their own agenda, within their own expertise. So, they may not have time, or they may be out on leave or on vacation or something like that. Getting a commercial level of support from the Apache community can sometimes be a hard sell for a lot of these corporations, and that's why what FUSE really brings is a support agreement. You know that there is somebody to call when there is a problem.

It's a two-way relationship. Obviously, if any of those customers come back with bugs, the IONA people will fix them and get the fixes pushed into both Apache and FUSE. So the bugs get fixed, but the other thing IONA gets from this is that there are a lot of ideas in the Apache communities that we may not have thought of ourselves.

One good example of this is that JavaScript thing that Benson mentioned earlier. That's not something IONA really would have thought of at the beginning, but this is something that we can give back to our customers saying, "Hey, isn't this a neat idea?" So, there are a lot of benefits coming from the other people that aren't IONA in these communities actually providing new features and new ideas for the IONA customers.

Gardner: Okay, you came off incubation in April, is that correct?

Kulp: Yes.

Gardner: Tell us about what's going on now. What's the next step, now that it's out of incubation? Is this sort of a maintenance period, and when will we start to see additional requirements and functionality coming in?

Kulp: There are two parts to that question. Right as we graduated, we were ready to push out 2.1. Apache CXF 2.1 was released about a week after we graduated, and it brought forth a whole bunch of new functionality. The JavaScript support was one piece. Whole new tooling was another, along with a CORBA binding and some REST-based APIs. So, 2.1 was a major step forward compared to the 2.0 version that came out last August, I believe.

Right now, there are basically two tracks of stuff going on. There are obviously a lot of bug fixes. One of the things about graduating is that there are a lot of people who don't really understand what the incubator is about, and so they weren't looking in the incubator.

The incubator has nothing to do with the quality of the code. It has more to do with the state of the community, but people see the word "incubator" and just say, "No, I'm not going to touch that." Now that we've graduated, there are a lot more people looking at it, which is good. We're getting a lot more input from users, and a lot of people are submitting ideas. So, there is a certain track of people just getting bug fixes done and putting support in place for those new users.

Gardner: I am impressed that you say "bug fixes" and not "refinement." That's very upfront of you.

Kulp: Well, a lot of it is refinement too. And, to be honest, there is a bit of documentation refinement going on as well, because with new people using it, there are new expectations. Their old toolkits may have done things one way, and the documentation may not explain well enough, "Okay, if you did it this way in the old toolkit, this is how you do the same thing in CXF."

Margulies: If I could pipe up with a sociological issue in open source: it's a lot easier to motivate someone to run in, diagnose a defect or a missing feature in the code, and make the fix than to muster the additional motivation to go over to the "doc" side and think through, "How the heck are we going to explain this, and who needs to have it explained to them?" We're really lucky, in fact. We have at least one person in the community who focuses almost entirely on improving the doc, as opposed to the code.

Gardner: Okay. So, we're into this maturity phase. We've got a lot more people poking at it and using it, and we're going to benefit from that community involvement. A couple of things struck me a little earlier -- the Groovy experience and the JavaScript. There's a perception among many I've talked to that Web services are interesting, but there's a certain interest level, too, in moving into more dynamic languages, the use of RESTful interfaces for getting out to clients, and thinking about Web services in a broader context.

So, first, let's go to Benson. Tell us why this JavaScript element was important to you, and where you think the mindset in the field is around Web services and the traditional WS-* specifications and standards.

Margulies: We went here originally because, while we build these components to go into the middle of things, we have to show them off to people who just want to see the naked functionality. So, we built a certain amount of demo functionality as Web applications, driven from Web pages. And the whole staff was saying, "Oh gosh, first we have to write a JSP page, and then we have to write some beans, and then we have to package it all up, and then we have to deploy it."

It got really tiresome. So we went looking for a much thinner technology for taking our core functionality and making it visible. It dawned on us that perhaps you could just call a Web service from a browser.

Historically, there's been a mentality in the broad community that you "couldn't possibly do that." "Those Web-service XML messages, they are so complicated." "Oh, we could never do that." And several of the dynamic-language SOAP or Web-service kits that have shown up from time to time were really weak. They barely worked, because they targeted very old versions of the Web-service universe. As Web-service standards moved into stronger XML, those kits got left behind.

So, not knowing any better, we went ahead and built a code generator for JavaScript that could actually talk to a JAX-WS Web service, and I think that's an important direction for things to go. REST is a great thing. It allows very simple clients to get some data in and out of Web services. But people are building really big, complicated applications in dynamic languages these days -- things like Ruby. For Web services to succeed in that environment, we need more of what we just did with the JavaScript. We need first-class citizenship for dynamic languages as clients, and even servers, of Web services.

Gardner: Let's take it over to Raven. Tell us, from the analyst perspective, what you see going on mentality wise and mindshare wise with Web-services specs, and do you think that there's a sort of "match made in heaven" here between something like CXF and some of these dynamic languages?

Zachary: Well, look back on the history of CXF: the merging of two initiatives -- Celtix from IONA and XFire from Codehaus -- spending the last few years in the incubator, and now coming out of the incubator in April. Bringing those two initiatives together produced a stronger project than either of the two existing open-source efforts on its own.

I like the fact that in CXF they are looking at a variety of protocols. It's not just one implementation of Web services. There's SOAP, REST, CORBA, other technologies, and then a number of transports, not just HTTP. The fact is that when you talk to enterprises, there's not a one-size-fits-all implementation for Web services. You need to really look at services, exposing them through a variety of technologies.

I like that approach. It really matches the needs of a larger variety of enterprise organizations, rather than being just one specific technology implementation of Web services. That's the approach you're going to see from open-source projects in this space; the ones that provide the greatest diversity of protocols and transports are going to do quite well.

Gardner: Dan, you've probably heard this. Some of the folks who are doing more development with dynamic languages and who are trying to move toward light-weight webby applications have kind of an attitude going on with Web-services specs. Have you noticed that and what do you think is up with that? Has that perhaps prevented some of them from looking at CXF in evaluating it?

Kulp: Yeah, in a way, it has prevented them, but Web services are pretty much everywhere now. So, even though they may not really agree with some of the Web-service ideas, for their own user base to grow, they have to start thinking about how to solve that problem, because the fact is that Web services are there.

Now, going forward, REST is obviously a big word, so whatever toolkit you're looking at needs to be able to talk REST as well, and CXF is doing a bit there. Going back the other way, there's CORBA stuff that needs to be talked to. With CXF, you don't just get the SOAP part of SOA; you get some of these additional technologies that can help you solve a wider range of problems. That's very important to certain people, especially if you're trying to grow a user base.

Gardner: Alright, so you've obviously benefited, the community has benefited from Benson and Basis Technology offering in what they did with JavaScript. I assume you'll be interested in committers to further that across more languages and more technologies?

Kulp: Oh, definitely. One of the nicest things about working in Apache projects is that it's an ongoing effort to try to keep the community growing and getting new ideas. As you get more people in, they have different viewpoints, different experiences, and all that can contribute to producing new ideas and new technologies, and making it easier to solve a different set of problems.

I always encourage people that, if they're looking in the CXF code, and they hit a bug, it's great if we see them submit a patch for that, because that shows that they're actually digging in there. Eventually, they may say, "Okay, I kind of like how you did that, but wouldn't it be neat if you could just do this?" And then maybe they submit some ideas around that and become a committer. It's always a great thing to see that go forward.

Gardner: Let's go around the table one last time and try to predict the future when it comes to open-source Apache projects, this webby application environment, and the larger undertaking of SOA. Dan, any prophecies about what we might expect in the CXF community over, say, the next 12 months?

Kulp: Obviously, there's going to be this ongoing track of refinements and fixes. One of the nice things about the CXF community is that we're very committed to supporting our existing users and making sure that any bugs they encounter get fixed in a relatively timely manner. CXF has a very good history of doing frequent patch releases to get fixes out there. So, that's an ongoing thing that should remain in place, and it's a benefit to the community and to the users.

Beyond that, there's a whole bunch of other ideas that we're working on and fleshing out. On the code-first stuff that I mentioned earlier, we have a bunch of other ideas about how to make code-first even better.

There are certain toolkits where you have to delve down into either configuration or WSDL documents to accomplish what you want. It would be nice if you could just embed some annotations in your code, or something like that, to accomplish some of that. We're going to be moving some of those ideas forward.
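To make the code-first idea concrete: JAX-WS, which CXF implements, already lets you steer contract generation with annotations on the class rather than hand-editing WSDL or configuration files. A minimal sketch follows; the annotations are shown in a comment (they come from the standard `javax.jws` package) so the example stays dependency-free, and the additional CXF-specific annotations Dan alludes to are future work, not a committed API.

```java
// Code-first development: you write the Java class and annotate it,
// and the toolkit derives the WSDL contract from the annotations,
// rather than generating code from a hand-written WSDL (WSDL-first).
// With the standard JAX-WS annotations this would look like:
//
//   @WebService(serviceName = "OrderService")
//   public class OrderService {
//       @WebMethod(operationName = "placeOrder")
//       public String placeOrder(@WebParam(name = "sku") String sku) {
//           ...
//       }
//   }
//
// The plain class below is what those annotations would decorate.
class OrderService {
    public String placeOrder(String sku) {
        return "order accepted: " + sku;
    }
}
```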

There's also a whole bunch of Web-services standards, such as WS-I and WS-SecureConversation, that we don't support today but will be working to support. As customers or users start demanding other WS technologies, we'll consider them as well. Obviously, if new people come along, they'll have other great ideas, and we would welcome those too.

Gardner: Alright. Raven Zachary, what do you see as some of the trends that we should expect in Open Source infrastructure particularly around SOA and Web services interoperability over, say, the next 12 months?

Zachary: We've had, for the last decade or so, a number of very successful open-source infrastructure initiatives. Certainly, the Apache Web Server, Linux as an operating system, and the application middleware stack -- Tomcat, Geronimo, JBoss -- have done very well. Open source has been a great opportunity for these technologies to advance, and we're still going to see commercial innovation in the space. But I think the majority of software infrastructure will be based on open standards and open source over time, and then you'll see commercialization occur around the services side.

We're just now starting to see the emergence of open-source Web services on a large scale, and I think you're going to see projects coming out of the Apache Software Foundation leading that charge, as other areas of the software infrastructure have been filled out.

When you look at growth opportunities, back in 2001 the JBoss app server had a single-digit market share compared to the leading technologies at the time, IBM's WebSphere and BEA's WebLogic. In the course of four years, it went from single-digit market share to being the number-one deployed Java app server in the market. I think it doesn't take much time for a technology like CXF to capture the market opportunity.

So, watch this space. I think this technology and other technologies like it, have a very bright future.

Gardner: I was impressed, and I wrote a blog post recently about CXF emerging from incubation. I got some really high numbers, which indicated some significant interest.

Lastly, I'm going to go to Benson at Basis Technology, as both a user and a committer. How do you expect you'll be using something like CXF in your implementations over the next 12 months?

Margulies: Well, we're looking at a particular problem, which is coming up with a high-performance Web-service interface to some of our functions, where you put a document in and get some results out. That's quite challenging, because documents are heavyweight, large objects, and the toolkits have not been wildly helpful on this.
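One concrete reason documents are heavyweight in SOAP toolkits: a binary document embedded directly inside an XML message has to be base64-encoded, which inflates it by roughly a third (4 output bytes for every 3 input bytes) before any XML parsing cost is even paid. That overhead is what attachment mechanisms such as MTOM, which CXF supports, are designed to avoid. A small self-contained illustration of the inflation, using only the JDK:

```java
import java.util.Base64;

class PayloadOverhead {
    // Returns the base64-encoded size (in characters) of a binary
    // payload of n bytes -- i.e., roughly what it would occupy when
    // embedded inline inside an XML message.
    static int base64Size(int n) {
        byte[] doc = new byte[n];  // stand-in for a real document
        return Base64.getEncoder().encodeToString(doc).length();
    }
}
```

For example, a 3 MB document grows to about 4 MB of encoded text, which is why streaming the raw bytes alongside the message, rather than inside it, matters for document-heavy services.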

So, I've sketched out some of the necessary services on CXF, and I expect to be digging deeper. The other thing I'll add, as a committer, is that one of the most important things we're going to see is a user-support community.

Long before someone gets to the point of being a possible committer on the project, there's the fact that users help each other in using the package, and that's a critical success factor. That community of people who read the mailing list just pitch in and help the newbies find their way from one end to the other.

Gardner: Well, great. Thank you so much. I think we've caught up with CXF, and we have quite a bit to look forward to over the coming months and quarters. I want to thank our panel. We've been joined by Dan Kulp, principal engineer at IONA Technologies; Raven Zachary, open source research director for The 451 Group; and Benson Margulies, the CTO at Basis Technology. Thanks, everyone.

Kulp: You're very welcome.

Zachary: Thank you.

Margulies: Thank you.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to a sponsored BriefingsDirect Podcast on Apache CXF. Thanks and come back next time.