Opinari

Service Component Architecture (SCA) is commonly described as being a technology for SOA. The group that created it christened themselves the Open SOA Collaboration, a name that clearly indicates their stance. In fact, the first sentence of this group’s SCA web page says that it describes “a model for building applications and systems using a Service-Oriented Architecture”.

I understand the need to link new technologies to SOA today—it’s a great way to get attention. The problem is that talking about SCA and SOA in the same breath is at best confusing. At worst, it’s downright misleading.

To see why, understand that SCA defines an abstract component model, then specifies how those components can be assembled into larger groups called composites. The components that make up a single composite might all run in the same process, they might be spread across different processes on the same machine, or they might run on different machines. In all of these cases, SCA’s assembly model defines how components in a composite relate to one another, specifying things like how those components should be wired together. In effect, it allows treating a group of possibly distributed components as a single unified application.
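To make the assembly model concrete, here is a sketch of what an SCA 1.0 composite file looks like. The component names, class names, and wiring below are invented for illustration; only the general shape (an XML .composite file that declares components and wires references between them) comes from the spec.

```xml
<!-- Hypothetical composite wiring two components together; all names invented. -->
<composite xmlns="http://www.osoa.org/xmlns/sca/1.0"
           name="OrderProcessing">

  <!-- A Java component that depends on another component in the composite -->
  <component name="OrderServiceComponent">
    <implementation.java class="com.example.OrderServiceImpl"/>
    <!-- The wire: this reference is satisfied by the component below -->
    <reference name="creditCheck" target="CreditCheckComponent"/>
  </component>

  <!-- The wired-to component, which might run in another process or machine -->
  <component name="CreditCheckComponent">
    <implementation.java class="com.example.CreditCheckImpl"/>
  </component>
</composite>
```

The point is that the wiring is declarative: the same composite description applies whether the two components share a process or sit on different machines.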

The applicability to SOA is obvious, isn’t it? As the notion of “application” increasingly refers to a group of services provided by software on various machines, having some way to define the relationships among this software is appealing. And because an SCA composite also defines a deployment unit, it can help make managing this new kind of application easier.

But there’s one problem: While the SCA specs don’t strictly mandate this, in practice all of the components in a composite must run within a single vendor’s SCA environment. This limitation isn’t obvious from reading the specs, so just to make sure, I asked this question of the participants on the SCA panel I moderated at this year’s JavaOne conference. All of them publicly agreed. While things could change in the future, at least for now, an SCA composite is a single-vendor construct.

A primary reason for this is that the SCA 1.0 specs focus on portability, not interoperability, and so they don’t fully define the interactions between components necessary to create composites that cross vendor boundaries. This means that I’m free to create a distributed application that’s wrapped into an SCA composite, and even to allow the services that application provides to be accessed from other vendor platforms, as long as all of its components run on a single vendor’s SCA container.

What does this have to do with SOA? The answer is simple: not much. Multi-vendor services are the sine qua non of SOA. If an SCA composite can’t be assembled from them, it’s not an SOA technology.

So does SCA improve the SOA picture in any way? I think the answer is yes, for a couple of reasons. First, even though its internals remain a vendor-specific black box, how a composite interacts with the outside world can be specified in a service-oriented way. And along with its assembly model, SCA also defines a new programming model for creating Java components. This component model allows building business logic in a modern, service-oriented style, similar to Microsoft’s Windows Communication Foundation (WCF). Just as WCF makes it easier to create service-oriented applications on the .NET Framework, SCA’s Java component model can make it easier to create service-oriented applications in the Java world. Although it might not be fully supported by all of SCA’s creators, this approach nonetheless counts as a step forward.
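To show what that programming style looks like, here is a self-contained sketch. The @Service and @Reference annotations below are hypothetical stand-ins declared inline so the example compiles on its own; in SCA 1.0 the real ones live in the org.osoa.sca.annotations package, and an SCA container, not the main method shown here, performs the wiring described in the composite file.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class ScaStyleExample {
    // Hypothetical stand-ins for SCA's annotations, declared inline
    // so this sketch is self-contained and compiles without an SCA runtime.
    @Retention(RetentionPolicy.RUNTIME) @interface Service {}
    @Retention(RetentionPolicy.RUNTIME) @interface Reference {}

    // The business interface the component offers as a service.
    interface Greeter {
        String greet(String name);
    }

    interface Formatter {
        String format(String name);
    }

    // The component implementation: annotations declare what it provides
    // and what it depends on; a container would inject the reference.
    @Service
    static class GreeterComponent implements Greeter {
        @Reference
        Formatter formatter; // wired by the container, not by this class

        public String greet(String name) {
            return formatter.format(name);
        }
    }

    static class UpperCaseFormatter implements Formatter {
        public String format(String name) {
            return "HELLO, " + name.toUpperCase();
        }
    }

    public static void main(String[] args) {
        // An SCA container would do this wiring from the composite file;
        // here we do it by hand just to show the effect.
        GreeterComponent greeter = new GreeterComponent();
        greeter.formatter = new UpperCaseFormatter();
        System.out.println(greeter.greet("world")); // prints "HELLO, WORLD"
    }
}
```

The business logic itself stays a plain Java class; the service-oriented aspects (what it offers, what it requires) are declared rather than coded, which is the WCF-like quality described above.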

SCA has the potential to be an important technology. By defining a modern, explicitly service-oriented approach to creating business logic, its Java component model can help Java developers build better applications for an SOA world. But as long as composites can contain only components running in a single-vendor environment, SCA’s value as a technology for SOA is limited.

You are right: the primary goal up to now has been portability. If you write an application that uses only the assembly model, the standardized programming models (for Java, BPEL, and C++), and the standardized policies and bindings, then you should be able to port your composite services to other vendors’ SCA runtimes. This freedom from vendor lock-in is what many people look for in a standard.

If you want multiple vendors’ technology within one composite, are you still expecting to achieve this portability? Are you including two different vendors’ BPEL engines? Probably not. Most likely you are including another vendor’s technology because that technology is specific to that vendor – it is proprietary. You have now lost your freedom from vendor lock-in.

If what you want is interoperability, we already have WS-* standards for that, and SCA explicitly points to WS-* as the way of achieving interoperability between different vendor runtimes. However, as you note, two technologies that can’t run in the same vendor’s SCA environment can’t share the same SCA composite (or even the same SCA “domain”).

This means that you can’t deploy such a mix of technology as a unit, or secure it as a unit, or manage it as a unit. To accomplish these goals, the SCA standardization process will have to produce many additional specs, and vendors would have to implement them. However, even without these, we believe the portability value (especially “skills portability”) makes even this smaller set of specifications valuable as a standard.

I absolutely agree that SCA's current set of specifications is valuable--the technology embodies lots of good ideas. I also agree that the skills portability SCA can allow has value. I think it's quite possible--likely, even--that SCA will become a popular foundation for creating business logic.

What I'm less in agreement with is the traditional description of SCA as an SOA technology. While it does offer some support for a more service-oriented world, SCA products will also provide more single-vendor lock-in than customers expect. As always, there are benefits to living in a one-vendor world, but customers need to keep their eyes wide open.

Is the vendor tie-in of SCA inherent to the specification, or is it only a lack of "back-end" specification for the interconnect? I think it's the latter, which means the standard can mature and add it. In that respect, saying that SCA is misleading about its relation to SOA is a bit too strong--that is, if the standard continues to evolve and adds those interoperability features.

You're right: SCA could standardize more in the future. Historically, though, this has proven difficult. Once vendors have proprietary implementations in place, they're reluctant to change them to be "standard". Even if they do, there are strong incentives to make sure their proprietary solutions are better. The CORBA experience is instructive here, where after-the-fact interoperability was never very effective.

The great thing about the future is that anything is possible. Today, though, SCA and SOA (as most people think of it) really don't have much overlap. To argue that they might, someday, if only the standards get better, isn't very meaningful.

I'm a little bit confused about the comment concerning "all of the components in a composite must run within a single vendor's SCA environment".

It is true that component metadata is defined within a single SCA container, and that the container must have access to the interface of a component. However, this interface can simply be a URI to a WSDL document located anywhere on the Internet. Likewise, a local Java interface could point to a remotely hosted EJB/RMI component. So while the component definition must be hosted within a single container, the actual component implementation can be located anywhere. I agree that this means you cannot simply deploy a composite and all of its references as a single unit, but in my mind this isn't necessarily desirable or even possible in a multi-partner B2B scenario.

In my opinion, SCA is definitely an SOA-related technology. In fact, I find SCA to be a much more useful specification from a developer standpoint than many of the more infrastructure-oriented standards created to date (SOAP, WS-*, etc.). SCA offers advice on how to assemble components out of disparate technologies and how to annotate them so that they broadcast their intent to participate in some of these more infrastructure-oriented specifications. SCA defines how to take a legacy asset and "expose it" to a variety of client platforms, using WSDL and a variety of other methods. In my opinion, this is one of the primary mandates of SOA.
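The interface-as-URI point can be illustrated in composite terms: an SCA 1.0 reference can use a WSDL interface and a web-service binding that point at an endpoint hosted anywhere. The names and URLs in this sketch are invented; only the interface.wsdl and binding.ws shapes come from the spec.

```xml
<!-- Hypothetical reference to a remotely hosted service; names and URLs invented. -->
<reference name="partnerQuote" promote="QuoteComponent/partnerQuote">
  <interface.wsdl interface="http://partner.example.com/quote#wsdl.interface(QuotePortType)"/>
  <binding.ws uri="http://partner.example.com/services/quote"/>
</reference>
```

The implementation behind that WSDL can indeed run anywhere--but, as the reply below notes, it is then an external service the composite calls, not a component inside the composite.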

In one sense, you're right, Jeff: The component implementation can be running anywhere, as long as "anywhere" means on the same vendor's SCA runtime as every other component in the composite. Because the SCA specs don't define how components in a composite interact, each vendor does it differently.

This means that a composite can't contain one component running on, say, IBM's SCA runtime and another running on Oracle's SCA runtime (unless one of those products has implemented the proprietary behaviors of the other). The SCA specs don't standardize enough to make this possible. It's this lack of cross-vendor connection within a composite that, to me, keeps SCA from having much to do with SOA.

If you haven't already, you might find it useful to read my paper Introducing SCA.

I am a bit confused on the interoperability part. If SCA were realized using web services, I assume it would build upon the web services standards, so messages would be exchanged using SOAP. From reading the posts I get the impression that services that are to be integrated within an SCA environment, or invoked from within SCA composites, cannot interoperate (maybe because they use vendor-specific protocols to exchange data)? More concretely, I am looking at integrating .NET (.NET clients could simply integrate with standard web services) and C/C++ on a SOA platform. I assume SCA would not be of much help then?

"While the SCA specs don’t strictly mandate this, in practice all of the components in a composite must run within a single vendor’s SCA environment" - I don't think this statement is correct. When components are exposed via SOA service interfaces, SOA composites can have components (services) running on different vendor environments.

I haven't looked at SCA in a while, Vish--this post is four years old--but it was true when I wrote it. While SCA components can certainly interact via services, they can't be part of the same SCA composite unless they're running on the same vendor's infrastructure. I'd be interested to know what makes you think otherwise.

@David: From what little I understand of Oracle SOA Suite's SCA implementation, a composite can combine multiple external web services into a single composite. These web services can be anywhere on the web. Cf., e.g., http://www.andrew.cmu.edu/user/mm6/95-843/homework/Fall2011/Fall2011Homework4.txt Were you referring to something else?

A composite in WebSphere Integration Developer is basically visualised as an assembly diagram. We have exports and imports in it, and imports and exports can call any legacy system hosted on a different technology. Doesn't that differ from what you mentioned about the same-vendor SCA runtime?

I'm afraid that I haven't kept up on SCA, Ritushree--it doesn't appear to have become very widely used. Two things, though:
- It was always possible to connect components running on different SCA runtimes using SOAP; that's not what I was talking about in this post.
- Note the comments at the beginning of this section from some of SCA's creators--they agree that SCA was focused on portability. It's not a point that the SCA vendors made very loudly, since it didn't support the SOA story they wanted to tell, but the lack of cross-vendor interoperability was certainly an aspect of SCA when it was created.