IBM buys Platform Computing, gets HPC and private cloud boost

IBM announced today that it will acquire Toronto-based Platform Computing, a company that specializes in software for managing grid computing systems. The buy is "an important part of our smarter computing strategy," IBM Systems Software General Manager Helene Armitage said in the official announcement of the acquisition.

The acquisition will give IBM a significantly larger toolbox for tackling high-performance and technical computing applications such as "big data" analytics, simulation, and product design. Platform Computing also brings along technology that will help round out IBM's cloud computing offerings.

Platform Computing's software manages power usage, message-passing between distributed systems, and compute workloads across clusters, grids, and clouds of computing resources. The software's ability to provision and manage large MapReduce jobs, Monte Carlo simulations, and other compute-intensive distributed analytics and visualization workloads has already given Platform a significant footprint in research, financial services, and computer-aided engineering.
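To make the scale of that kind of work concrete, here is a rough, hypothetical sketch of an embarrassingly parallel Monte Carlo calculation of the sort a grid scheduler farms out as many independent batch jobs. The function names are invented for the example, and a local process pool stands in for grid nodes; none of this is Platform's actual API.

```python
# Illustrative only: a toy Monte Carlo estimate of pi, split into independent
# chunks the way a grid scheduler would split work into separate batch jobs.
import random
from concurrent.futures import ProcessPoolExecutor

def monte_carlo_chunk(samples: int, seed: int) -> int:
    """Count random points that land inside the unit quarter-circle."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

if __name__ == "__main__":
    chunks, samples_per_chunk = 8, 1_000_000
    # On a real grid, each chunk would run as its own job on its own node;
    # a local process pool is just a stand-in here.
    with ProcessPoolExecutor() as pool:
        hits = sum(pool.map(monte_carlo_chunk,
                            [samples_per_chunk] * chunks,
                            range(chunks)))
    print("pi estimate:", 4 * hits / (chunks * samples_per_chunk))
```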

Platform has also started to move into more general-purpose computing. In 2009, the company introduced Platform ISF, a management platform for enterprise private clouds that works with multiple types of hypervisors and provides self-service provisioning capabilities for users and policy-based automated management of workloads. That capability will give IBM a tool that competes with other dynamic data center management platforms, such as Dell's Virtual Integrated System products (acquired by Dell with Scalent Systems last year) and Cisco's Unified Computing System.
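As a rough illustration of what "policy-based automated management of workloads" can mean in practice, here is a hypothetical sketch of quota-checked placement logic for self-service requests. The Host, Request, and place names are invented for the example and are not ISF's actual interfaces.

```python
# Hypothetical sketch of policy-based placement in a private cloud manager:
# enforce a per-owner quota, then pick the least-loaded host that fits.
from dataclasses import dataclass
from typing import Optional, List

@dataclass
class Host:
    name: str
    free_cpus: int
    free_mem_gb: int

@dataclass
class Request:
    owner: str
    cpus: int
    mem_gb: int

def place(request: Request, hosts: List[Host], max_cpus_per_owner: int = 32) -> Optional[Host]:
    """Return the chosen host, or None if policy or capacity blocks the request."""
    if request.cpus > max_cpus_per_owner:
        return None  # policy: reject requests over the self-service quota
    candidates = [h for h in hosts
                  if h.free_cpus >= request.cpus and h.free_mem_gb >= request.mem_gb]
    if not candidates:
        return None  # nothing fits; a real manager might queue or add capacity
    best = max(candidates, key=lambda h: (h.free_cpus, h.free_mem_gb))
    best.free_cpus -= request.cpus
    best.free_mem_gb -= request.mem_gb
    return best
```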

One of Platform's highest-profile customers is CERN, which uses Platform's grid and cloud software to manage computing resources for simulations of experiments before they're conducted on the Large Hadron Collider, among other tasks. CERN's IT director Dr. Helge Meinhard said at Computerworld's Honors Laureate event in June that his team was looking at further uses for Platform's software to scale up the LHC's high-performance computing capabilities. The Sanger Institute is another Platform customer; it used the company's software to manage the grid that sequenced the human genome, a project completed two years ahead of schedule.

Load sharing, batch jobs. Honestly this sounds like a reimplementation of mainframe technology. It's really weird that IBM needs to buy stuff like that; they should have all the know-how they need in their System z development teams. (Unless those have died of old age by now)

Having redundant firms and software groups can be problematic and even counterproductive, but it's a bit of a hedged bet. At worst, it's wasted money, but it's possible that Platform Computing has expertise that no part of IBM already had.

IBM's massive stature allows them to do things like this with minimal risk.

Yes, LSF is better known. But ISF is aimed at a broader audience, and I think it's worth calling out because it gives IBM another tool to throw at customers who might be drawn into the never-ending embrace of Cisco for cloud computing. While IBM is playing nice with Cisco on other fronts, it's clear that IBM sees Cisco as a threat in the data center.

I worked there for two and a half years. It was the first job I got out of university, and I learned a lot from it. I'm happy for the guys who are still there... must've been a good deal! (especially with IBM stock these days)

Yes, the rest of computing is starting to look like mainframe computing, and this probably gives IBM an edge that is underappreciated in the technology media. But that doesn't mean everyone is using a mainframe. IBM's mainframe tools have a long legacy based on proprietary hardware and operating systems, and it's probably more painful to "port" them than to build a new solution to the "same" problem on another system.

The concepts are similar, but the implementation is different. Not to mention that System z development goes on at a good clip, so those developers are probably occupied full-time with other things. Just because you have a bunch of developers with a certain skill set doesn't mean it's weird to have more developers with similar skill sets.

Although the concept is borrowed from the mainframe, scheduling jobs in a distributed computing environment built on commodity hardware is a very different ball game. (1) It is a distributed environment in which CPU and memory are not shared across thousands to tens of thousands of servers. (2) The hardware is far less reliable than a mainframe, so the software needs to handle all of the possible failures in the system. (3) The environment is heterogeneous; unifying it to present a single machine image is one of the biggest values LSF provides.
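To illustrate point (2), here is a hedged, hypothetical sketch of the re-queue-on-failure behavior a distributed scheduler has to provide on commodity hardware. The names and the simulated failure rate are invented for the example and don't reflect LSF's internals.

```python
# Hypothetical sketch: dispatch tasks to heterogeneous nodes and re-queue any
# task whose node "fails," instead of losing the job as a single machine would.
import random
from collections import deque

def run_with_retries(tasks, nodes, max_attempts=3, failure_rate=0.2):
    """Return (task, node) pairs that completed, retrying failed dispatches."""
    queue = deque((task, 1) for task in tasks)
    done, rng = [], random.Random(42)
    while queue:
        task, attempt = queue.popleft()
        node = rng.choice(nodes)            # any node in the heterogeneous pool
        if rng.random() < failure_rate:     # simulate a commodity-hardware failure
            if attempt < max_attempts:
                queue.append((task, attempt + 1))
            continue
        done.append((task, node))
    return done

print(run_with_retries(["job-%d" % i for i in range(5)], ["x86-1", "x86-2", "arm-1"]))
```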

Sean Gallagher is Ars Technica's IT Editor. A former Navy officer, systems administrator, and network systems integrator with 20 years of IT journalism experience, he lives and works in Baltimore, Maryland.