vendors – HPCwire
https://www.hpcwire.com
Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Vendor Showdown Puts HPC Clouds in Spotlight
https://www.hpcwire.com/2011/10/03/vendor_showdown_puts_hpc_clouds_in_spotlight/
Mon, 03 Oct 2011

Vendors in the high performance cloud space were put in the hot seat during last week's ISC Cloud event in Mannheim, Germany. Representatives from twelve companies, including HP, Intel, SGI, Bull and others, took part in a "gameshow" event that featured tough questions and a competitive reason to answer them thoroughly.

Most conferences provide an opportunity for event sponsors to get their messages across to attendees in one way or another, at the very least by providing a platform to talk amidst the glow of a PowerPoint presentation. Oftentimes, these overviews address audiences at large—and avoid the “big questions” about potential problems, drawbacks or other points of weakness.

At last week’s ISC Cloud event in Mannheim, Germany, twelve vendors who might otherwise have given long detailed presentations were instead corralled to present a rapid-fire two-slide overview of their relevance to the HPC cloud space, then were put on the hot seat to answer some of the most complex questions floating around about HPC clouds.

This wasn’t your standard “vendor hot seat” session; the two teams, each with six vendors, were given questions and had a short amount of time to answer. Three expert judges from different perspectives (press, academic, and industry) then evaluated the responses, awarding points to the most successful, comprehensive answer.

For those without time to watch the highlights from the vendor showdown, a few key themes emerged from nearly all the vendors present. The overwhelming emphasis was on making new strides to simplify access, create openness and portability, and refine the critical hardware and software elements that make clouds possible for a user group with complex needs. This last element covers everything from solving software licensing problems to creating hyper-efficient cloud hardware that provides the performance HPC applications need while recognizing the power consumption challenges of cloud providers.

There was definite value in hearing answers about the stickiest issues in high performance computing for clouds from the specialist vendors who cater to different regions of the cloud ecosystem. For instance, pitting Mellanox and QLogic against one another on the question of what is needed on the interconnect side for robust HPC clouds or asking system vendors like SGI how clusters need to change to meet these specific needs creates the opportunity for direct answers to questions that users are asking most often.

HP and Intel, for instance, discussed the intersection between HPC and clouds, with HP’s Frank Baetke claiming that HPC has always revolved around issues of access. He says that cloud provides this access, and in this lies its potential in the coming years. From the user perspective, Intel’s Stephan Gillich pointed to access models that are based on the needs of HPC users, with applications, interfaces and environments tailored to ease of use. He says these benefits can’t be realized without careful selection of the underlying technologies, since simplified access is the important goal.

Other compelling questions put to the panel revolved around the evolution of on-demand computing and the role of the hardware backbone for cloud providers hoping to draw in HPC customers. SGI, Penguin, Bull, and Gompute, all of whom have on-demand HPC offerings, noted that the evolution of cloud computing has been swift, but that creating interfaces that are open, easily accessible, and tailored to the needs of HPC applications and users remains a critical challenge. While that might sound like a generic answer to such a question, the back-and-forth format made vendors challenge one another on such generalities. If you have time to watch the panel, a great example of this is embodied in the exchanges between Mellanox’s Gilad Shainer and Alastair McKeeman from QLogic, as well as the ping-pong of answers between SGI’s Tanasescu and Bull’s Olivier David.

While the video provides a sense of the action by capturing highlights from the hour-and-a-half showdown, it is difficult to capture the audience’s take on the material on film. Needless to say, some of the questions definitely provoked strong conversation from the gallery between voting rounds—and definitely showed the vendors that these are questions that matter.

We have a great deal more coverage from last week’s event coming during the course of this week, but this panel elicited a strong response from the attendees and provided a rich sense of the directions a number of vendors in this small community are heading.

]]>https://www.hpcwire.com/2011/10/03/vendor_showdown_puts_hpc_clouds_in_spotlight/feed/08923Storage Stories Take Center Stage: “To the Moon, Alice!”https://www.hpcwire.com/2011/03/11/storage_stories_take_center_stage_to_the_moon_alice/?utm_source=rss&utm_medium=rss&utm_campaign=storage_stories_take_center_stage_to_the_moon_alice
https://www.hpcwire.com/2011/03/11/storage_stories_take_center_stage_to_the_moon_alice/#respondFri, 11 Mar 2011 08:00:00 +0000http://www.hpcwire.com/?p=9103This week has proven to be a bumpy ride for storage companies with a number of big acquisitions and shakeups in the industry. To make short sense of it all, we recap and analyze...

Consolidation continues at a rapid pace this week with two big announcements. A hell of a week or a week in hell–depending, of course, on which side of the consolidation you sit.

The week opened up with a big bang: Western Digital’s swift acquisition of Hitachi Global Storage Technologies in a cash and stock deal valued at about $4.3B. Although a major deal, it is fair to say that this came as no surprise to the industry or customers; the reaction was more along the lines of “finally, someone has purchased the Hitachi storage unit.” The resulting company will retain the Western Digital (WD) name and remain headquartered in Irvine, California.

WD’s HDD market share saw a big boost. The top vendors according to iSuppli, prior to this, were WD 31%, Seagate 29%, Hitachi 17%, Toshiba/Fujitsu 11% and Samsung 10%. Clearly this potentially gives WD a huge leap in market share, to almost 50%. They have become the big dog in the HDD market, leaving everyone else choking on their dust. It will be interesting to see how this impacts pricing over the next few months.
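A quick back-of-the-envelope check of that “almost 50%” figure, using only the iSuppli shares quoted above:

```python
# Pre-acquisition HDD market share (percent), per the iSuppli figures above.
share = {
    "WD": 31,
    "Seagate": 29,
    "Hitachi": 17,
    "Toshiba/Fujitsu": 11,
    "Samsung": 10,
}

combined = share["WD"] + share["Hitachi"]
print(f"Combined WD + Hitachi share: {combined}%")  # 48%, i.e. almost 50%
```

With a combined 48 percent against Seagate’s 29, the “big dog” characterization holds up on the numbers alone.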

Everyone wins here, as prices for enterprise and consumer products are likely to shift downward before any stabilization.

The other really big news is that NetApp is buying LSI’s external storage systems business for $480M in cash; the deal is expected to close in a couple of months.

This divestiture of the external storage systems business will result in LSI becoming a pure-play semiconductor company and reportedly improve their profitability – sounds like the storage unit was a real dog.

NetApp will take over all the assets of the LSI external storage systems business, which develops and delivers Engenio, a bandwidth-intensive storage play. Nothing that unique or different, necessarily, but it’s part of a product line NetApp doesn’t have, and maybe they can step up the food chain into the Isilon and BlueArc camps, increasing their addressable market and generating higher revenues.

Just in case you were wondering or questioning: storage is still a hot market. Nothing is thrown away; it just keeps doubling every 18 months or so. The market is moving toward more and more cloud offerings and the nirvana of real storage virtualization.

But that’s not all the news from this past week. Mid-week saw the announcement by Fusion-io that it has filed for an initial public offering. Again, nothing new here. Even the idea of Flash and solid-state storage is not new; it has been around since the late 1970s and early ’80s. Remember the Cray X-MP SSD, or the Texas Memory Systems units paired with TI computers for seismic processing in the ’80s? Texas Memory Systems is still chugging away today.

The SEC filing indicates that Fusion-io is losing money; surprise, surprise. The company generated a meager $58.2 million in revenue for the six months ended Dec. 31, with a net loss of $8.2 million. Apparently, “ineffective management of our inventory levels could adversely affect our operating results.”

However, Flash-based storage solutions are a hot topic, with lots of solutions on offer. The speed and bandwidth of CPU, memory, and networks, all major building blocks for datacenters, are increasing 2x or more every year or so. Disk performance is lagging behind, creating an I/O bottleneck and slowing down performance and throughput.
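To see how quickly that bottleneck compounds, here is a small illustrative projection. The 2x-per-year figure for CPU, memory, and networks comes from the paragraph above; the 15%-per-year disk improvement rate is an assumption chosen purely for illustration, not a number from this article:

```python
# Illustrative projection of the compute-vs-disk performance gap.
# Compute (CPU, memory, network) doubles yearly, per the article;
# the 15%/year disk improvement is an assumed rate for illustration only.
compute, disk = 1.0, 1.0
for year in range(1, 6):
    compute *= 2.0   # 2x per year
    disk *= 1.15     # assumed 15% per year
    print(f"Year {year}: compute {compute:5.1f}x, disk {disk:4.2f}x, "
          f"gap {compute / disk:5.1f}x")
```

Under these assumptions the gap widens to roughly 16x after just five years, which is exactly the gulf that Flash tiers between memory and disk are being sold to fill.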

Several start-ups, such as Violin Memory Inc. and Virident Systems Inc., have innovative solutions to this problem and are pounding on the Fusion-io installed base and many of the same prospects.

As a closing note: LSI also has a Flash-based product line. One wonders what will happen to that going forward.

If you live and breathe storage, then strap yourself in tight. You’re in for a fast and bumpy ride – to the moon, Alice!

Adoption of SaaS in the Life Sciences
https://www.hpcwire.com/2011/02/28/adoption_of_saas_in_the_life_sciences/
Mon, 28 Feb 2011

In this entry, Bruce Maches takes a high-level look at SaaS and how it can be leveraged by life science organizations to help them meet their research objectives in a more cost-effective and timely manner.

Pharmaceutical research has always been a time-consuming and expensive endeavor, with each new therapeutic advance requiring millions of dollars in expenditures and years of effort. As I have mentioned in prior blogs, all sectors of the life science community are looking at ways to reduce costs, cut IT complexity, and speed time to market. As the types of research being done become more complex and the amount of data generated grows, these companies simply cannot afford to keep expanding their IT infrastructure and application environments to meet all of the demands being placed upon them.

The advent of genomic sequencing alone has had a dramatic impact on the amount of data being created. Overall, it is estimated that data storage demands at life science companies are doubling about every 3 months. Companies with hard-pressed research budgets simply cannot afford to keep throwing more and more money at the problem, to say nothing of the associated issues of storage management, backup/recovery, and disaster recovery planning.
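The compounding implied by a three-month doubling period is worth spelling out. A quick sketch, using only the doubling estimate quoted above:

```python
# Growth implied by storage demand doubling every 3 months.
doubling_period_months = 3
annual_factor = 2 ** (12 / doubling_period_months)    # 16x per year
two_year_factor = 2 ** (24 / doubling_period_months)  # 256x over two years
print(f"Per year: {annual_factor:.0f}x, over two years: {two_year_factor:.0f}x")
```

At that rate, a 100 TB archive would need 1.6 PB a year later, which is why simply buying more disk is not a sustainable answer for these research budgets.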

The same holds true with the types and complexity of the applications being used. The need for systems to perform complex data modeling, gene sequencing, target identification, and computational chemistry places an ever increasing support burden on the IT department. Many of these applications require significant hardware investments and experienced support staff to maintain in house. Not to mention the expense and effort required to validate and implement these systems to meet FDA regulatory requirements.

In a prior blog I discussed how IaaS is helping many life science organizations deal with some of the infrastructure demand and support issues. In this entry we will take a high level look at SaaS and how that can be leveraged by these organizations to help them meet their research objectives in a more cost effective and timely manner.

Many life science organizations are taking a harder look at utilizing SaaS as a means of provisioning the complex applications required for their research. There are many vendors offering specialized applications for life science research via the SaaS model and more industry vendors are moving in this direction. Life science companies can take advantage of systems in functional areas such as:

– genomic sequencing

– molecular modeling

– toxicology studies

– drug stability

– clinical trials

– document management

– regulatory submissions

What advantages can these companies realize from utilizing SaaS based applications?

– There is no hardware required, as the storage and compute resources are provisioned at the vendor’s data center

– Many of these applications are either already validated or, since a portion of the application environment is validated, the validation process can be completed much more quickly

– Reducing time spent on acquiring and implementing applications can allow a life science company to reduce time to market for new products

– Professional data centers can usually provide much better security than in-house environments

– The vendor takes care of all support requirements, patching, and administrative duties

– The primary disaster recovery responsibilities are also handled by the vendor

Of course nothing is perfect, so what are some of the downsides to following a SaaS based strategy?

– Some companies are concerned with the security of their data in a shared environment

– Depending on a SaaS based application that is critical for your business requires extra care when negotiating contracts and service level agreements

– Using off premise applications may cause latency issues

– You never own the application, so you are continually paying

– There may be regulatory issues with storing patient data from clinical trials or other private information in a third-party data center

While all of these factors require consideration, with careful planning and foresight life science companies can utilize SaaS based systems to reduce costs and complexity while enhancing the R&D process and speeding time to market for new therapies.

Entering Enterprise Territory at Cloud Expo 2010: Notes from the Show Floor
https://www.hpcwire.com/2010/11/03/entering_enterprise_territory_at_cloud_expo_2010_notes_from_the_show_floor/
Wed, 03 Nov 2010

Highlights and thoughts from the floor of Cloud Expo 2010 in Silicon Valley, where we attended to look for the intersection between high-performance computing and large-scale enterprise, and what the two have in common as they look to the clouds.

Greetings from Cloud Expo in Silicon Valley, where HPC in the Cloud is spending the week finding interesting people to talk to about what’s going on in the enterprise cloud space. I probably don’t need to reiterate that it’s not as similar as one might think to what’s important for high-performance computing in the cloud; it’s apples and oranges — but thus far it’s been quite an experience.

On a side note, it was even more exciting to be in town when the Giants won the World Series. For those who weren’t here, there was a moment where it seemed like the whole valley echoed with one unanimous roar of delight; cool stuff.

But back to the clouds, since that’s why so many gathered…

According to Expo personnel, around 5,000 registrants, ranging from end users to speakers and vendors, were expected to attend. It might not seem like that many were here just from walking around, but the event has been spread over several days and the Santa Clara Convention Center is a rather large venue, so it’s difficult to get a feel for how accurate those numbers are.

Sys-Con Media, organizers of the conference, started the series back in 2007, “the day the term ‘cloud computing’ was coined,” and held an event that same year in New York with 450 delegates; the series has since grown significantly and now includes other cities and regions. It will be interesting to chart the growth of the event with each passing year against how the term “cloud” as a buzzphrase (albeit a long-lasting one thus far) fares on the hype cycle. According to some, it has already peaked.

While there were a few occasions when I actually had to explain to folks what the HPC acronym stood for — an unexpected issue, since most conferences I’ve attended have had high-performance computing either directly or indirectly in the title — there has been plenty of food for thought to be found. I expected some separation between what we cover here and what I would find in the sessions and conversations with vendors, but I didn’t realize the extent of the disparity between what’s meaningful in cloud discussions for enterprise versus technical users, especially among newer companies that are not targeting HPC in any way. There’s quite a chasm.

There are some pronounced differences in how the scientific and technical computing folks view cloud computing versus how it’s portrayed here, which in itself didn’t come as a surprise. The vendors simply take far different approaches when they’re targeting small to mid-sized businesses with the occasional mega-enterprise score thrown in. I didn’t hear much about latency this week; instead, the key words I kept hearing were “ease of use” and “simple to manage,” along with the ubiquitous, vague term “solution.”

As you can imagine given the enterprise focus of the conference, many of the sessions at this event were geared toward the CIO-level executive or others who were considering making the move to the cloud. Among the potential end users I was able to find and talk to, a common theme was that they had been sent to investigate the clouds based on directives from non-technical personnel at their respective companies, since the promise of clouds has been reaching the mainstream business media in a way that’s almost impossible to ignore. Some had already implemented private cloud solutions of one kind or another, but I was unable to find anyone who was using the public cloud for any mission-critical applications; only the occasional user discussed the benefits of using the “cloud-bursting” model to push out into Amazon’s cloud for extra capacity on an infrequent basis.

Speakers geared their sessions toward this audience: groups of scattered enterprise IT folks who wandered through the sessions clutching their notebooks and iPads, tentatively taking notes, then walking away in small clusters to talk over the broad range of other topics that were organized into interest “tracks.”

Sessions Upon Sessions

For these enterprise IT professionals, there would have been a number of valuable sessions, indeed. Some did stick to working with definitions of clouds and providing the basics, but others took a more focused approach and delivered some keen insights. For instance, Dave Malcom from Quest delivered a “Masters Class in Enterprise Cloud Automation” and Peter Nickolov, senior vice president of software engineering at 3Tera Cloud Division/CA Technologies, presented a session on advanced cloud architectures.

Of particular interest was a talk given by John Monson called “The Impact of I/O Performance on Cloud Service Level Agreements,” as well as Gunther Schmalzhaf’s presentation, “Integrating Heterogeneity: Managing Applications in Virtualization and Cloud Infrastructures.” Vineet Tyagi from Impetus presented another great session (we have a video interview with him that will be posted soon) covering the Hadoop ecosystem, entitled “Deriving Intelligence from Large Data — Using Hadoop and Applying Analytics,” which ended up being one of the few that focused closely on the kinds of issues we cover here.

Otherwise, there was quite a large collection of presentations from across the vendor community that could have all read as “How to Make the Cloud Work for You” whether that was in the cost or efficiency sense or simply for the purposes of selling the cloud idea to those who showed up only because they wanted to learn more about what this catch-phrase “cloud” had to do with all that infrastructure they’d pumped hundreds of thousands (if not more) into throughout the years.

This is a great conference in terms of serving as an “on-ramp” to the cloud for enterprise leaders who are wary or haven’t done much due diligence to find out if the clouds are a good fit for their business. However, if they were on the fence before, walking around the vendor booths would certainly leave them feeling that if they hadn’t done something cloud-related, they were somehow missing the boat.

There’s no denying that it’s exciting to be here, even though I’m trying to stay neutral and not forget my high-performance computing roots as I stroll about, investigating what the lower end cloud services are providing and to what types of customers. If nothing else, it lends quite a bit of perspective on what some of these smaller vendors are missing (and why they could never have offerings to match the needs of HPC applications) and conversely, what some in HPC might be overlooking when it comes to their consideration of clouds, particularly on the management level.

I am curious about this hype issue and again, wonder how long the term will remain valid enough to support a conference series and range of solutions that oftentimes, even with some of the bigger players in the computing market, seem a bit underdeveloped and thrown together. Only time will tell.

For now, we’re presenting all of you who couldn’t make it with some video treats to give you a feel for what’s going on in the enterprise cloud space and what folks are talking about. More updates coming today so stay tuned…

The Cloud Forecast for HPC: Preview of ISC’10
https://www.hpcwire.com/2010/05/25/the_cloud_forecast_for_hpc_preview_of_isc10/
Tue, 25 May 2010

It is becoming more difficult to question cloud’s role in HPC as an increasing number of vendors jockey for top position in research, scientific, and large-scale enterprise computing. If the ISC exhibitor list is any indication, we’re in for very cloudy conditions in HPC this year.

While cloud computing is certainly not a brand new, cutting-edge movement, this year there’s been an increasing amount of news about HPC and the cloud. Compared to this time last year, general news releases related to cloud computing from companies in all areas of HPC — from storage vendors to application developers — are becoming more frequent. This means it’s an exciting time to sit back and observe the transformation and watch as one vendor after another adds to a suite of cloud-based offerings to diversify and stretch the ability to reach scientific and large-scale enterprise clients.

If the recent uptick in the amount of news on a daily basis related directly to HPC and cloud is any indication, the next year will bring an even richer set of announcements aimed at providing infrastructure and software as a service for researchers and those involved in any number of enterprises.

There are other signs that cloud computing and HPC are meshing together at a rapid rate, one of which can be gleaned simply by scanning the list of exhibitors who are expected to attend the International Supercomputing Conference (ISC) that will be held in Hamburg, Germany from May 30th to June 3rd. While many of the exhibitors have had a presence in the past, this year it’s remarkable just how many of them actually have news to share on the cloud front. In fact, well over half of the companies on the show floor have at least one item of significant interest for HPC and cloud.

The cloud connection will be hard to ignore, even at a venue that’s been the traditional gathering place for general supercomputing in research and enterprise. This year, the floor will be bustling with vendors, end users, and organizations of all sizes as it hosts a record-breaking attendance list that includes 25 percent more advance registrants than last year.

Since this publication will have a presence and provide live, in-depth coverage of the event with newsmaker interviews, timely blog posts, and analysis of announcements and revelations, it seemed appropriate to present an overview of what organizations will be present and what they hold of value in terms of HPC and cloud.

Clouds Over Hamburg: What We’ll be Watching at ISC 2010

The following is a list of some of the exhibitors that will be in Hamburg for ISC, with a brief description of what they have available for those interested in the cloud space as it relates to HPC. The list is not comprehensive by any means; attempting that would leave no time to pack and read reviews of Hamburg restaurants — both of which are required. Suffice it to say, this is more of a set of pre-ISC highlights for those hoping to tune in to coverage of the event to see how cloud is being addressed by the international HPC community.

Adaptive Computing – Adaptive Computing is no stranger to HPC or cloud and is one of many companies that will be present at the conference with a foot in both waters. Of great interest is its Moab line of HPC products, which provides solutions for standard and cloud-based HPC, as well as its work on particularly noteworthy HPC and cloud projects, including its work with the financial services industry and its large-scale computing needs.

Altair Engineering – Founded in the late 1980s, Michigan-based Altair Engineering straddles the line between traditional HPC services and cloud in terms of its on-demand offerings that appeal to those in engineering among other specializations. The company’s PBS Grid Computing division is of particular interest as is the on-demand model for the company’s widely-used HyperWorks product.

AMD – You might not have expected to see them listed here since the focus of this list is on companies with more significant cloud computing offerings, but take one look at this AMD blog post about the nearly 2 million Opteron processors that are engaged in cloud clusters. A chat with AMD might reveal a side to HPC in the cloud that we have yet to consider in depth.

Arista Networks – A pioneer in the data center Ethernet switch industry, Arista partners with several cloud computing initiatives including Greenplum Software to help them implement 10GbE networking on the grand scale for private cloud environments.

BCC Group – A German software company with distinct focus on the financial services industry, the BCC Group might provide some interesting information about providing software as a service for their target market. Although it appears that most of their offerings are not delivered via the cloud, an analysis of some of their strategic partners looks promising. More to come on this company as discussions follow.

BlueArc – Specialists in storage and data management for enterprises. One of the reasons why BlueArc is at the top of the list is because of their reach into several areas relevant to HPC, including life sciences, oil and gas, government, and even entertainment. Of particular interest are their Titan and Mercury Series servers as well as more generally, their virtualization framework and virtualized storage pools with parallel RAID striping. Discussions with BlueArc will likely revolve around the topic of diverse uses of virtualized servers in the HPC space.

Boston Limited – Of interest will be Boston Fenway Virtualization Solutions and discussions about their carbon-cutting efforts in the context of virtualization. This may be more of an ethereal chat about topics that are a little off their radar but it will be interesting to see if Boston Labs has any news on the cloud front.

Bright Computing – Specialists in cluster management software and services for HPC (Bright Cluster Manager is the one that springs to mind), this San Jose, CA-based company provides its products and services to several universities and institutions. Of special interest is the application of their Bright Cloud Manager software in private, hybrid and public cloud environments—this has not yet been released, but hopefully we will be able to gain some insight about the process during the conference.

Chelsio Communications – I first saw a connection between cloud computing and Chelsio in the announcement about a seminar on 10GbE converged network and storage technologies and what this means for cloud computing. The company’s Director of Application Engineering has significant experience with cloud and storage architectures and has patented several products aligned with his research areas. In their company overview, Chelsio states that massive data centers, equipment density and power consumption are more critical than ever and that “at the same time, cloud computing and server virtualization are driving the need for more uniform designs than the traditional data center architectures can offer.” Agreed, Chelsio—hope to see you there.

ClusterVision – This Netherlands-based company has designed and implemented several large-scale clusters and has built several complex computational, storage, and database clusters in Europe and the Middle East. Of particular interest is the grid-enabling of clusters for HPC as well as their theory on this process. Conversations with ClusterVision will likely revolve around the grid, which as we all know is very closely associated with what we are covering here. Also of interest is the company’s subsidiary GridVision, which offers virtual laboratories, making testing and development a bit more within reach. Since so much of their news is centered outside of the United States they are not always on our radar but hopefully we will be able to pick up some news about this interesting company as it has its tentacles spread far across traditional HPC.

DataDirect Networks – DDN is not a newcomer to the HPC space; they are considered to be storage specialists for big data operations, including markets like financial services and airlines as well as genetic sequencing. This company has significant offerings in the cloud storage and HPC space. Would like to spend time discussing DDN Cloud Storage Environments for what they call “Extreme Storage” for large-scale computational tasks on a centralized cloud infrastructure.

Dell – Stopping by to meet and greet folks from Dell is definitely on the to-do list with the hopes that we can have a discussion about the PowerEdge C6100. For starters, to see why this is of extreme interest to HPC and the Cloud, view Barton’s Dell blog to see a post (aptly entitled “The HPC and Cloud Machine“) where there is a video tour and discussion.

eStella Creative Consultants – This consulting group, which is based in New Mexico, will be discussing eStella, which is a GPU accelerated cloud service that has been designed to “democratize high performance computing to all strata within the scientific and engineering research communities allowing for greater innovation, faster computations, and low costs to those previously excluded from access to high-performance computing” and so yes, amen, looking forward to a chat.

Force10 Networks – Headquartered in San Jose, California, Force10 Networks is on the priority list for the visit to ISC to look at practical HPC in the cloud environment. The self-described focus of the company is on the “high performance solutions that are designed to deliver new economics by virtualizing and automating Ethernet networks.” While this can describe several other companies in the space, one of the reasons why Force10 Networks has been on the radar for some is because of their host of well-written, insightful, and somewhat sales-overkill white papers that do an excellent job of discussing the challenges of cloud computing networks among other topics relevant to HPC and cloud more generally. The emphasis of this company is definitely cloud-driven and hopefully we will have the chance to talk to some of their representatives who are well-versed on this topic as it relates to HPC in the cloud.

Gridcore AB (Gompute) – Gompute is a high performance computing on-demand service run by its parent company, Gridcore AB. There are a host of reasons why it would be worthwhile to sit down with representatives from Gridcore AB, most of them directly related to the Gompute HPC Cloud Platform, which is used by the likes of Saab, among others. Of particular interest would be learning how other industries are making use of high performance computing on demand and what the company sees coming next in the arena.

Hewlett-Packard – HP has been making some major announcements of interest to the HPC and cloud community as of late, all of which are aligned with their view that we are shifting toward an “everything as a service” society—both in terms of HPC and mainstream computing. HP’s interests are best aligned with the large-scale enterprise, especially in terms of their enterprise cloud software platform, Cirious. It will be interesting to chat with the company about new areas of research and ways that they are seeing customers leverage their products in the HPC and large-scale enterprise space.

IBM – This one is kind of a no-brainer, isn’t it? If not, skim our “This Just In” section for about 20 seconds to see why. Lots of acquisitions, lots of news on the enterprise cloud front. IBM is poised to become one of the biggest players in cloud and HPC for industry and science and is willing to bet billions on it. Literally.

Intel – There are about a million questions one could ask Intel, but since time will be limited and they’ll be among the most popular kids at the party, I’ll have to keep it short and limit the discussion to one key item of interest—the Single-Chip Cloud Computer.

Isilon Systems – Based in Seattle, this company designs and sells systems and software for digital content and other unstructured data, including audio, digital images, computer models, and test and simulation data. It has a wide array of clients in oil and gas, manufacturing, the life sciences, and other areas. Of particular interest is the possibility that these customers might take advantage of the cloud to handle such massive chunks of data for reasons of scalability and cost. Time permitting, I would like to discuss how customers have deployed Isilon scale-out NAS to power enterprise cloud infrastructure, as some, including SoftLayer, have done in recent weeks.

Mellanox Technologies – Mellanox has been a continuous newsmaker as one of the leading suppliers of end-to-end connectivity solutions for servers and storage. The company also has made a significant amount of news on the HPC and cloud fronts and has a large selection of materials on its website dedicated to virtualization and InfiniBand in light of its line of products and services. Ideally, conversations with representatives of this company would revolve around InfiniBand and cloud computing; low-hanging fruit, sure, but there is quite a bit to discuss in this arena.

Microsoft – You saw this one coming, didn’t you? Again, as with some of the other major players with a presence at ISC, the news keeps pouring out from different angles — Azure, supercomputing for everyone, Windows HPC Server — the fun never stops.

NextIO – This Austin, Texas-based company is another in the I/O virtualization market for data centers. At issue will be the vConnect platform, which offers the ability to virtualize I/O technology on any server, OS, hypervisor, and storage architecture, eliminating the need to over-provision resources. This will be a great company to talk to about the sexy side of HPC — big cost savings without performance lags.

Numascale – Numascale just released some news that was timed nicely with this article and will make for some interesting conversation at ISC. Between demonstrations, one might hope we can find time to discuss NumaConnect, which enables scalable servers at considerably lower cost than typical enterprise SMP systems. This technology provides a single system image to a cluster of x86 commodity servers by providing support for virtualization of processing, memory, and I/O.

NVIDIA – Aside from making PC gaming even more of an immersive experience than it was to begin with (okay, I’ll leave the personal asides out for now), NVIDIA’s Tesla will be one of the stars of the show this year at ISC. Most of my discussions with this company will center on applications of NVIDIA’s GPU technology in a more generalized sense. In addition to discussions about their RealityServer and Cloud 3D offerings, there will probably be some hopping and clapping on my end, as this is just some really, really cool stuff.

Oracle – This might be another one of those “well, of course” spots on the circuit as I peruse the offerings on the HPC and cloud front at ISC, but it’s certainly for good reason. Since before grid was grid and cloud was becoming a major force in the industry, Oracle has been a key player. Conversations with Oracle would be focused on the future of cloud — the big issues.

Panasas – There has been a lot of talk lately about how the future of cloud storage is NAS. Panasas specializes in high performance scale-out NAS storage for the enterprise with its PanFS storage operating system, which is in use in several sectors, including energy, government, finance, manufacturing, bioscience, and others. One subject of particular interest is Panasas NAS storage at Los Alamos National Laboratory, at least as a case study. Since HPC in the cloud needs end users and case studies on the storage front, Panasas seems like a good starting point, even though other key players occupy the same space with many of the same solutions. The company has not released news in quite some time, so it will be interesting to see what its trajectory will be in the coming months.

Penguin Computing – A recognized player in integrated HPC clustering solutions, Penguin has extended its reach across industries beginning with its Beowulf Cluster architecture and leading up to its Penguin On-Demand cloud computing service. Just today Penguin made an announcement about its expansion into the European market with news that it has made strategic partnerships with Life Technologies and several other European research, academic, and engineering organizations, including the Fraunhofer Institute. Penguin has experienced some remarkable growth with a 35 percent uptick last year and — get this — 300 percent growth in the first quarter of this year alone. It will be worthwhile to get to the heart of their success and see what they’re doing differently in HPC and cloud.

Platform Computing – Platform Computing is another company with its roots in cluster and grid management software and an ever-expanding presence in HPC and cloud. Platform’s partners include other major industry leaders, including Cray, Dell, IBM, Intel, and Microsoft, among others. Of particular interest in discussions at ISC will be Platform ISF, which provides cloud infrastructure to help manage application workloads across several virtual and physical platforms. While Platform’s CEO, Songnian Zhou, will not present at the conference, we have an in-depth interview with the founder appearing in early June.

ScaleMP – ScaleMP delivers software for virtualizing and aggregating x86 systems, even in the cloud, into larger CPU and memory configurations to add computational boost during critical times of need. One of the best ways to see why ScaleMP’s insights will be useful is to take a look at its vSMP Foundation for Cloud materials, which fully describe how HPC and cloud have been at odds, what challenges exist, and how some of these issues can be overcome. Although certainly written to explain the concept to potential buyers, this material will likely form the basis of any discussions we have at the conference.

SGI – Another recognizable name in HPC, of course, especially in the scientific computing arena. SGI has a number of interesting offerings in terms of virtualization and cloud. The topic of discussion will most certainly center on SGI’s Cyclone, which offers cloud computing for technical applications. Their IaaS and SaaS offerings are geared toward scientific and engineering markets in particular as Cyclone supports many widely-used applications in computational fluid dynamics, finite element analysis, computational chemistry and materials, and computational biology among others.

Supermicro – Specializing in application-optimized server solutions, Supermicro presented a discussion and demonstration of its cloud-optimized Twin architecture server solutions for HPC in Beijing in April that caught some attention and is worth discussing further, in addition to their other offerings.

And the List Continues

Companies like QLogic, T-Platforms, Versant, and Voltaire didn’t get the mention they deserve here, but if you’ve skimmed through this list you’ll see that the division between the cloud and HPC is shrinking. By next year’s conference it will be interesting to gauge how many others have jumped on board and entered the cloud race.

For now, let this serve also as a guide to coverage for the event. You are now armed with an overview of what companies you’re likely to see discussed during the conference and as always, you’re welcome to send along specific questions you’d like to hear addressed or direct any comments about companies that may not have been mentioned here.

With all the SC09 news being broadcast this week, I neglected to cover our very own 2009 HPCwire Readers’ and Editors’ Choice Awards. As per our annual tradition, we recognize some of the biggest success stories in the HPC community over the past 12 months and announce them at the annual Supercomputing Conference. To be honest, given the economic turmoil over the last year, anyone who managed to survive 2009 intact deserves a medal.

In any case, there are too many HPCwire awards to cover in this short blog, so I’ll just point you to our press release here. However, it might be worthwhile to review our “Top 5 Vendors to Watch” category, especially considering we probably just got our last look at them for the year at SC09.

So who will be getting that extra attention in 2010? The readers picked SGI, Cray, IBM, Sun Microsystems, and Dell, while the HPCwire editors chose Microsoft, SGI, NVIDIA, Convey Computer, and Sun Microsystems. Now not all of these were chosen because we expect them to set the HPC world on fire; some were singled out because they have big challenges ahead and are worth keeping an eye on for that reason alone. SGI and Sun fall into the latter category.

SGI’s fundamental challenge is to start turning in profitable quarters. The company just launched its Altix Ultraviolet line in the hope of reinvigorating its shared memory offerings, which heretofore were strictly Itanium-based (and were not exactly flying off the shelves). The better price-performance of the UV line has a lot more potential for serious sales, so it may offer the company a potent new revenue stream. Here’s hoping.

Thanks to the anti-trust nervousness of the European Commission, the $7 billion Oracle-Sun deal is in limbo, adding a layer of uncertainty on top of what many consider a problematic merger. Despite general reassurances from Mr. Ellison, Sun HPC customers are wondering if the company’s high performance computing portfolio will survive unscathed.

Given all the recent consolidation in the industry, Dell may take the opportunity to make a bigger play in HPC. With a volume strategy clearly in mind, we should expect the company to continue gathering partnerships to offer the widest possible range of cluster products. A key HPC vendor acquisition would shake things up, but don’t hold your breath.

With its CPU-FPGA hybrid architecture, Convey Computer has developed one of the most compelling HPC designs in years. It still, however, has to convince a conservative HPC user community of the value proposition. With some customer wins under its belt and a couple of partners in tow, Convey may be about to reach escape velocity.

Cray and IBM always seem to show up on the Vendors to Watch list. And why not? These are the two companies usually duking it out for the top spots on the TOP500. More importantly, with Cray’s CX-1 deskside cluster line and its mid-range XT variant, the company is trying to pick some spots where IBM and other HPC OEMs have a lighter footprint.

Finally, NVIDIA. With Fermi in the pipeline, 2010 could be a breakout year for the company’s HPC aspirations. The past three years spent seeding, watering and fertilizing the CUDA ecosystem may be about to pay off.