To top it all off, Big Blue's new mainframe-friendly DI features take advantage of several Big Iron specialty engines -- including both the zSeries Application Assist Processor (zAAP) and the zSeries Integrated Information Processor (zIIP).

This gives Big Iron shops a cost-effective way to run an array of different DI scenarios, IBM officials maintain.

InfoSphere is IBM's brand for its once-separate WebSphere and DB2 data integration products. The WebSphere data integration stack, for the record, was largely the product of IBM's acquisition of best-of-breed ETL and data-quality player Ascential Software. Since then, Big Blue has folded DataMirror's change data capture (CDC) and replication capabilities into the mix.

Big Blue's mainframe integration story predates both Ascential and DataMirror. IBM purchased CrossAccess five years ago, chiefly to acquire that company's legacy connectivity assets. The deal is still paying dividends, officials maintain.

"[The CrossAccess] stuff allows you to take any mainframe data source -- like a VSAM or an Adabas source -- and publish that as a service in an SOA. Even if it's not a SQL-capable data source [such as] a VSAM file, it represents itself looking like SQL, so you just write a standard SQL query against that [VSAM] data source," explains Michael Curry, director of product strategy and management for IBM's Information Platform and Solutions.

It's pretty much a straightforward proposition, Curry claims. "Once you've written that query, you say, 'This is something I'm going to want to reuse as a service,' and -- just like that -- you hit a button and … that gets published," he says.
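The idea Curry describes -- fronting a non-relational dataset with a relational view so it can be queried with standard SQL -- can be sketched in miniature. The snippet below is purely illustrative (the record layout, field names, and in-memory database are stand-ins, not IBM's actual tooling): it parses fixed-width, copybook-style records of the kind a VSAM file might hold, loads them into a relational table, and queries them with plain SQL.

```python
import sqlite3

# Toy stand-in for a VSAM dataset: fixed-width records laid out per a
# hypothetical COBOL copybook -- 6-char account id, 20-char name,
# 8-digit balance in cents. (Layout is invented for illustration.)
RAW_RECORDS = [
    "100001" + "Jane Doe".ljust(20) + "00012550",
    "100002" + "John Smith".ljust(20) + "00000075",
]

def parse_record(rec):
    """Split one fixed-width record into typed fields at the copybook offsets."""
    return (rec[0:6], rec[6:26].rstrip(), int(rec[26:34]))

# Load the parsed records into an in-memory relational table, then run a
# standard SQL query against them -- the same idea, in miniature, as
# presenting a non-SQL mainframe source as if it were SQL-capable.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (acct_id TEXT, name TEXT, balance_cents INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?, ?)",
                 [parse_record(r) for r in RAW_RECORDS])

rows = conn.execute(
    "SELECT acct_id, name FROM accounts WHERE balance_cents > 100 ORDER BY acct_id"
).fetchall()
print(rows)  # [('100001', 'Jane Doe')]
```

In the product Curry describes, the extra step is publishing such a query as a reusable SOA service; here the point is only the relational mapping that makes the query possible.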

Big Blue's VSAM-to-SOA feature isn't for everyone. Programmers who work with IMS or other non-relational data stores probably won't warm to it, for example, because it requires a good working knowledge of SQL. Similarly, relational database programmers -- who as a rule aren't up to speed on VSAM or VSAM data structures -- probably won't be able to meaningfully interact with it, either. "They have to understand the data enough to write a SQL query against it and know what it is," Curry says.

By "mainframe-ready" CDC, IBM really does mean mainframe-ready CDC, Curry maintains. "We have improved our changed data capture capabilities on the mainframe. We did things like add log-based capture for IMS, for example. We've improved the way we use CDC with DataStage [InfoSphere's ETL engine] from the mainframe. We've delivered VSAM-to-VSAM replication, so we've given [mainframe users] the ability to ensure high availability or improve their access to information by replication."
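The log-based capture Curry mentions rests on a simple idea: instead of rescanning the source dataset, a CDC process reads the database's change log and replays each entry against a replica. The toy sketch below illustrates only that general pattern -- the operation names and log format are invented, not IBM's:

```python
# Hypothetical change log, as a CDC reader might extract it: each entry
# is (operation, record key, new payload or None for deletes).
change_log = [
    ("INSERT", "K1", {"balance": 100}),
    ("INSERT", "K2", {"balance": 250}),
    ("UPDATE", "K1", {"balance": 175}),
    ("DELETE", "K2", None),
]

def apply_change(replica, op, key, payload):
    """Replay one logged change against the target copy."""
    if op in ("INSERT", "UPDATE"):
        replica[key] = payload
    elif op == "DELETE":
        replica.pop(key, None)

# Replaying the log in order leaves the replica consistent with the
# source -- without ever reading the source dataset itself.
replica = {}
for op, key, payload in change_log:
    apply_change(replica, op, key, payload)

print(replica)  # {'K1': {'balance': 175}}
```

Reading the log rather than the data is what keeps capture overhead low on the source system -- the property that matters on a mainframe, where source-side cycles are expensive.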

InfoSphere can run on three low-cost mainframe processor engines, Curry points out. "We run on zIIP, zAAP, and on the IFLs. Our repository sits in DB2 and takes advantage of the zIIP engine. The zAAP engine is focused on Java applications, which is great for us, because there's a large portion of our stuff that sits inside the application server and runs on the mainframe." The "core" InfoSphere platform exploits Big Blue's all-but-ubiquitous Integrated Facility for Linux, or IFL.

"The core of our engine runs on that. That also does not incur any kind of MIPS charges whatsoever," Curry says. "The combination of those [engines] means that there are no MIPS charges whatsoever running our software on the mainframe."

For processing-intensive DI scenarios, that's an important differentiator, Curry claims. "We are designed to be processor-intensive. We just crunch through data. So this is great for that [mainframe] market."