SOA Talk

The Object Management Group (OMG) approved a final version of SoaML (short for Service Oriented Architecture Modeling Language) at a meeting in Jacksonville, Florida, last month. SoaML, a profile of UML, is designed to help users design and implement a service-oriented architecture.

A part of many discussions of massively scaled cloud computing architecture these days is the notion of “NoSQL.” That is because the trustworthy and ubiquitous SQL database seems to be playing a less-than-central role in big cloud apps built around Google’s BigTable, Facebook’s Cassandra and Amazon’s Dynamo data schemes.

Some of the NoSQL crowd can be pretty strident about SQL’s shortcomings, much as the object database crowd was when it stood ready to undo the relational database back in the 1990s. But a toning down of the rhetoric is probably due.

“The idea of ‘NoSQL’ started out in a bit of a negative way. But now people tend to mean ‘Not Only SQL’ rather than just ‘NoSQL.’ That is the discussion I am seeing,” said distributed computing veteran Nati Shalom, CTO and founder of GigaSpaces.

With long experience in the type of applications that require impressive scaling, Shalom is in a unique position to view emerging data architectures for the cloud. He sees scaling issues and RDB issues driving the push to NoSQL – or, Not Only SQL.

“The NoSQL thing came from the realization that scaling comes first. The likelihood that you are going to need to scale is much more acute than in the past,” said Shalom, who added that the demands of social networking apps, particularly, have made distributed application scaling issues more vivid. Like others, Shalom foresees distributed applications evolving that employ both SQL and non-SQL data stores.
For its part, GigaSpaces continues to add capabilities to its flagship XAP application and caching server.

This week marked the rollout of XAP 7.1, which includes Elastic Middleware Services, described in GigaSpaces documentation as a simple, high-level deployment abstraction now exposed via the GigaSpaces Administration and Monitoring API. Agents running on machines take care of provisioning and job partitioning. XAP 7.1 also employs an updated version of the lightweight Jetty Web container. Joe Ottinger has more detail.

Progress Software’s Sonic ESB 8.0 came out today, and the product has come to embrace a number of open standards. Most notably, Sonic now supports a “RESTful” approach to integration. The company also added some managed provisioning and scalability features, allowing users to model possible runtime environments in a sandbox to check for dependencies and issues before deployment.

The move to open standards means being able to use Eclipse instead of proprietary APIs, said Jonathan Daly, product marketing manager, integration infrastructure, Progress Software.

“Now you can handle the total application lifecycle in the same tooling environment,” said Daly. “So moving something from design to testing to production is much more streamlined.”

Specifically, Sonic now supports JAX-RS, JAX-WS, SOAP 1.1 and 1.2, JSON, and Spring (among others). It no doubt helps Progress’ case to let users integrate the Sonic ESB into their architectures with many of the same standards used in open source development. With something like an ESB, which many consider the “backbone of SOA,” avoiding vendor lock-in is always welcome.
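At bottom, the “RESTful” approach is to treat each integration point as a resource addressed by a URL and an HTTP verb, returning a representation such as JSON. As a rough, dependency-free sketch of that idea — using the JDK’s built-in HttpServer rather than JAX-RS itself, with an invented /orders resource and payload, not anything from Sonic’s API — it might look like:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Sketch of a RESTful endpoint: a GET on /orders/42 returns a JSON
// representation of that resource. In a JAX-RS container the same idea is
// expressed with @Path/@GET annotations; everything here is illustrative.
public class RestSketch {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/orders/42", exchange -> {
            byte[] body = "{\"id\":42,\"status\":\"shipped\"}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.start();

        // Act as the client: GET the resource and print the JSON response.
        int port = server.getAddress().getPort();
        try (InputStream in =
                 new URL("http://localhost:" + port + "/orders/42").openStream()) {
            System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
        }
        server.stop(0);
    }
}
```

The appeal for integration work is exactly what Daly describes: because the contract is just HTTP and JSON, any tool or language can consume the endpoint without proprietary client libraries.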

It seems like you can’t throw a stone in an enterprise IT shop without hitting an Apache Tomcat server these days. Jeffrey Hammond at Forrester Research recently told me that, based on findings from two surveys, around 30% of developers use Tomcat. In another survey from Replay Solutions, 50% of more than 1,000 Java EE users said they would deploy Tomcat app servers in 2010. In light of Tomcat’s popularity, it is interesting to look at where commercial open source implementations of the technology are headed.

This week, VMware made its Tomcat-based SpringSource tc Server 2.0 available for download. The release represents a continuing integration of SpringSource’s application development platform into VMware’s virtualization business, following its acquisition of SpringSource last year. As part of the release, the company introduced the tc Server Spring Edition, which is supported on VMware’s virtualization products.

Ovum analyst Tony Baer told me today that SpringSource/VMware integration is a work in progress.

One salesperson at IBM is no doubt having a very good week. The company just announced the inking of a modernization deal with Travelport, a company that processes travel transactions for more than 60,000 travel agencies worldwide.

Through this multi-million-dollar agreement, Travelport hopes to double its transaction processing capacity. The deal includes upgrading to the IBM z/Transaction Processing Facility (z/TPF), as well as WebSphere, Rational, Tivoli and a boatload of IBM hardware.

The two companies already had a working relationship before this storage and middleware expansion. Last year, Travelport finished upgrading the underlying infrastructure of its global distribution system with IBM. Through this project, the company consolidated all of its data center operations into Atlanta, Georgia. Travelport must have been pleased with the results.

This is a pretty big deal for IBM. Travelport handles travel transactions that involve any of 420 airlines, 88,000 hotel properties, 25 rental car companies and a host of other entities across the travel industry. Representatives say Travelport’s system processes up to 1.6 billion messages a day for transactions like airline and hotel bookings.

There are a variety of caching techniques to be considered in grid, cloud and other types of distributed computing architecture for analytics. Among these, the object database can show some advantages, said Carl W. Olofson, Research Vice President, IDC. In fact, some of the early object database companies are positioning their wares for cloud computing.

“It is definitely an alternative in some data caching uses, particularly those where you need to share the data across a wide variety of systems, and especially where there are long-running transactions involved,” said Olofson.

With the ODB technique, a Java developer, for example, can tie code and data together directly, using the natural constructs of the Java language to create the application, Olofson noted.
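As a rough, stdlib-only illustration of that point — not any vendor’s ODB API — Java’s built-in serialization shows the core idea: the object graph is persisted and retrieved as-is, with no mapping between rows and objects. The class, field and file names here are invented; a real ODB would add indexing, queries and transactions on top:

```java
import java.io.*;
import java.nio.file.*;

// Sketch of native object persistence: an Order is written and read back
// as the same Java type, with no relational mapping layer in between.
public class OdbSketch {
    static class Order implements Serializable {
        final String id;
        final double total;
        Order(String id, double total) { this.id = id; this.total = total; }
    }

    public static void main(String[] args) throws Exception {
        Path store = Files.createTempFile("orders", ".bin");

        // "Persist" the object: the graph is written directly, fields intact.
        Order placed = new Order("A-100", 42.5);
        try (ObjectOutputStream out =
                 new ObjectOutputStream(Files.newOutputStream(store))) {
            out.writeObject(placed);
        }

        // Read it back as the same Java type -- no row-to-object conversion.
        try (ObjectInputStream in =
                 new ObjectInputStream(Files.newInputStream(store))) {
            Order loaded = (Order) in.readObject();
            System.out.println(loaded.id + " " + loaded.total);
        }
        Files.delete(store);
    }
}
```

This is the contrast Olofson is drawing with the relational approach, where the same round trip would require mapping the object’s fields to and from table columns.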

Some observers may be skeptical of an ODB resurgence – in the 1990s, object databases were often touted as an alternative to relational databases, and they never found nearly as much use. But today’s ODB survivors are more likely to be muted in their pronouncements. ODBs for new distributed apps are likely to be called supplements to RDBs and data caches in special cases.

A lot of data architectures – both familiar and new – are vying today for consideration by development teams pursuing new apps that provide very fast analytics on very large sets of data. Object databases (ODBs) and persistent object stores represent one of these data architectures.

Noting here that Ed Roberts, inventor of the Altair personal computer, died at 68. No one person can claim the invention of the personal computer, but the Altair inspired many others. Roberts is far less known than Bill Gates and Steve Jobs – both paid tribute at the time of his passing.

When exposing Web services to entities external to your organization, it is easy enough to roll your own security and management policies – if you don’t have a lot of services. As B2B integration points increase in number, however, many find it useful to handle the various areas of policy management from a single utility. Most companies would probably prefer their developers spend more time on business logic than infrastructural concerns.

The folks over at SOA Software recently released Policy Manager 6 with revamped security federation and policy management interoperability. The company has also refactored the product to run on top of the highly-extensible OSGi framework.

Retrieving data demands both time and compute power from an application, and that demand can grow when data is housed in disparate data centers, as in a cloud or grid computing environment. Distributed data caches can make the data a little easier to find. A distributed data cache can act as an intermediary storage layer, holding frequently used data so that the application doesn’t have to constantly query multiple databases.
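The intermediary-layer idea described above is often called the cache-aside pattern: check the cache first, and only on a miss go to the backing store and populate the cache for next time. A minimal sketch, with invented names and a local map standing in for both the distributed cache and the remote databases:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Cache-aside sketch: the cache sits between the application and the
// (slower, possibly remote) database. In a real deployment the cache
// would be a distributed data grid, not a local map.
public class CacheAside {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Map<String, String> database; // stand-in for remote stores
    int databaseHits = 0;                        // counts expensive lookups

    CacheAside(Map<String, String> database) { this.database = database; }

    String get(String key) {
        // 1. Try the cache first.
        String cached = cache.get(key);
        if (cached != null) return cached;
        // 2. On a miss, query the backing store and populate the cache.
        databaseHits++;
        String value = database.get(key);
        if (value != null) cache.put(key, value);
        return value;
    }
}
```

After the first lookup of a key, repeated reads are served from the cache, which is exactly the saving that matters when the backing stores live in distant data centers.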