Entering Enterprise Territory at Cloud Expo 2010: Notes from the Show Floor

By Nicole Hemsoth

November 3, 2010

Greetings from Cloud Expo in Silicon Valley, where HPC in the Cloud is spending the week finding interesting people to talk to about what’s going on in the enterprise cloud space. I probably don’t need to reiterate that this space is not as similar as one might think to what matters for high-performance computing in the cloud; it’s apples and oranges. But thus far it’s been quite an experience.

On a side note, it was even more exciting to be in town when the Giants won the World Series. For those who weren’t here, there was a moment where it seemed like the whole valley echoed with one unanimous roar of delight; cool stuff.

But back to the clouds, since that’s why so many gathered…

According to Expo personnel, around 5,000 registrants, including end users, speakers and vendors, were expected to attend. It might not seem like that many were here just from walking around, but the event is spread over several days and the Santa Clara Convention Center is a rather large venue, so it’s difficult to get a feel for how accurate those numbers are.

Sys-Con Media, organizer of the conference, started the series back in 2007, “the day the term ‘cloud computing’ was coined,” and held an event that same year in New York with 450 delegates; the series has since grown significantly and expanded to other cities and regions. It will be interesting to chart the growth of the event each passing year against how the term “cloud” as a buzzphrase (albeit a long-lasting one thus far) fares on the hype cycle. According to some, it’s already peaked.

While there were a few occasions when I actually had to explain to folks what the HPC acronym stood for — an unexpected issue since most conferences I’ve attended have had high-performance computing either directly or indirectly in the title — there has been plenty of food for thought to be found. I expected some separation between what we cover here and what I would find in the sessions and conversations with vendors, but I didn’t realize the extent of the disparity between what’s meaningful in cloud discussions for enterprise users versus technical users, especially among newer companies that aren’t targeting HPC in any way. There’s quite a chasm.

There are some pronounced differences in how the scientific and technical computing folks view cloud computing versus how it’s portrayed here, which came as no surprise in itself. The vendors simply take a far different approach when targeting small to mid-sized businesses, with the occasional mega-enterprise score thrown in. I didn’t hear much about latency this week; instead, the key words I kept hearing were “ease of use” and “simple to manage,” along with the ubiquitous, vague term “solution.”

As you can imagine given the enterprise focus of the conference, many of the sessions at this event were geared toward CIO-level executives and others considering a move to the cloud. Among the potential end users I was able to find and talk to, a common theme was that they had been sent to investigate the clouds on directives from non-technical personnel at their companies, since the promise of clouds has been reaching the mainstream business media in a way that’s almost impossible to ignore. Some had already implemented private cloud solutions of one kind or another, but I was unable to find anyone using the public cloud for mission-critical applications; only an occasional user discussed the benefits of the “cloud-bursting” model, pushing out into Amazon’s cloud for extra capacity on an infrequent basis.
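For readers unfamiliar with the pattern, cloud bursting simply means running on in-house capacity by default and renting public-cloud nodes only when demand spills over. Here is a minimal sketch of that decision logic in Python; the helper names and thresholds are hypothetical illustrations for this article, not any vendor’s actual API, and a real deployment would wrap calls to a provisioning service such as Amazon EC2.

    # Minimal cloud-bursting policy sketch. All names and thresholds here
    # are illustrative assumptions, not a description of any real product.

    BURST_THRESHOLD = 0.85    # rent public-cloud nodes above 85% utilization
    RELEASE_THRESHOLD = 0.50  # hand rented nodes back once load subsides

    def local_utilization(queued_jobs, local_capacity):
        """Fraction of in-house capacity currently demanded."""
        return queued_jobs / float(local_capacity)

    def plan_burst(queued_jobs, local_capacity, rented_nodes):
        """Decide whether to rent, release, or hold public-cloud capacity."""
        util = local_utilization(queued_jobs, local_capacity)
        if util > BURST_THRESHOLD:
            # Burst: rent enough nodes to absorb the overflow.
            overflow = queued_jobs - local_capacity
            return ("rent", max(overflow, 1))
        if util < RELEASE_THRESHOLD and rented_nodes > 0:
            # Demand has dropped; stop paying for the extra capacity.
            return ("release", rented_nodes)
        return ("hold", 0)

    if __name__ == "__main__":
        # 120 queued jobs against 100 in-house slots: burst for the overflow.
        print(plan_burst(queued_jobs=120, local_capacity=100, rented_nodes=0))
        # Load has subsided: hand the 20 rented nodes back.
        print(plan_burst(queued_jobs=40, local_capacity=100, rented_nodes=20))

The appeal the attendees described lies in that asymmetry: the in-house gear handles the steady-state load, and the public cloud is paid for only during the occasional spike.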

Speakers geared their sessions toward this audience: groups of scattered enterprise IT folks who wandered through the sessions clutching their notebooks and iPads, tentatively taking notes and walking away in small clusters, dispersing into the broad range of other sessions organized by interest “tracks.”

Sessions Upon Sessions

For these enterprise IT professionals, there were a number of valuable sessions indeed. Some stuck to defining the cloud and covering the basics, but others took a more focused approach and delivered some keen insights. For instance, Dave Malcom from Quest delivered a “Masters Class in Enterprise Cloud Automation,” and Peter Nickolov, senior vice president of software engineering at the 3Tera Cloud Division of CA Technologies, presented a session on advanced cloud architectures.

Of particular interest was a talk given by John Monson called “The Impact of I/O Performance on Cloud Service Level Agreements,” as well as Gunther Schmalzhaf’s presentation, “Integrating Heterogeneity: Managing Applications in Virtualization and Cloud Infrastructures.” Vineet Tyagi from Impetus presented another great session (we have a video interview with him that will be posted soon) covering the Hadoop ecosystem, entitled “Deriving Intelligence from Large Data — Using Hadoop and Applying Analytics,” which ended up being one of the few squarely focused on the kinds of issues we cover here.

Otherwise, there was quite a large collection of presentations from across the vendor community that could all have been titled “How to Make the Cloud Work for You,” whether in the cost or efficiency sense or simply for the purpose of selling the cloud idea to those who showed up only because they wanted to learn what this catchphrase “cloud” had to do with all the infrastructure they’d pumped hundreds of thousands of dollars (if not more) into over the years.

This is a great conference in terms of serving as an “on-ramp” to the cloud for enterprise leaders who are wary or haven’t done much due diligence to find out whether the clouds are a good fit for their business. However, if they were on the fence before, walking around the vendor booths would certainly leave them feeling that if they hadn’t done something cloud-related, they were somehow missing the boat.

There’s no denying that it’s exciting to be here, even though I’m trying to stay neutral and not forget my high-performance computing roots as I stroll about, investigating what the lower-end cloud services are providing, and to what types of customers. If nothing else, it lends quite a bit of perspective on what some of these smaller vendors are missing (and why they could never have offerings to match the needs of HPC applications) and, conversely, what some in HPC might be overlooking in their consideration of clouds, particularly at the management level.

I am curious about this hype issue and, again, wonder how long the term will remain viable enough to support a conference series and a range of solutions that oftentimes, even from some of the bigger players in the computing market, seem a bit underdeveloped and thrown together. Only time will tell.

For now, we’re presenting all of you who couldn’t make it with some video treats to give you a feel for what’s going on in the enterprise cloud space and what folks are talking about. More updates coming today so stay tuned…
