When we think of supercomputing we think of water-cooled Crays and Deep Thought, but capabilities once reserved for lab-coated researchers are today often found in the most unexpected of places.

However, such compute-intensive systems can be elusive, disguised as rendering engines or batch processing servers hidden at the back of the machine room.

These days, supercomputing tends to be referred to as high performance computing, frequently shortened to HPC. On the technology side, HPC solutions that were once purely the domain of specialist supercomputing systems can now be built on industry-standard x86 processors, coupled with suitable operating systems – including those built around the Linux platform or Microsoft Windows Compute Cluster Server, as well as those available from specialist suppliers. As a result, systems capable of supporting processing-intensive workloads can now serve a wide variety of workload demands and have become far more affordable.

While the technology to support high-demand workloads is available, many organisations have yet to take advantage of it. A potentially deciding factor in the rate of adoption is the ability to identify suitable workloads in the first place.

This is, of course, a chicken-and-egg problem: without an understanding of the workloads, it is difficult to appreciate the value of the systems, and vice versa. Understandably, then, HPC systems are more often deployed to support a relatively small number of specialist functions before being used to support more general, "mainstream" processes.

So, should HPC systems remain the domain of the specialist, or are they ready for a wider remit? Are the systems now available to support high performance workloads ready and affordable for the mainstream and, perhaps even more importantly, are there now enough "routine" or everyday business workloads that require them? With such questions in mind, we would dearly love to know your opinions, to gauge the potential use of HPC-type systems in mainstream business operations.

So please take a few minutes to fill in the short poll attached here and we will be sure to let you know the results.

Q1. Do you run any applications that are particularly demanding in terms of server "horsepower" requirements, i.e. compute-intensive applications that put heavy pressure on server CPUs?

Q2. Do you run, or expect to run in the future, any of the following types of compute-intensive applications? (for each, tick "Now" and/or "Future", as applicable)
Note: potential compute-intensive applications might include Reservoir Modelling, Geophysical Analysis, CAD/CAM/CAE, Simulation, Safety Processes, Disaster Planning, HazMat, Situational Awareness, Demand Forecasting, R&D, Complex Analytics and Visualisation.

Modelling and/or simulation
"Number crunching" applications (such as those above)
Real-time graphics generation and rendering, visualisation
Complex spreadsheet calculations
CPU-intensive batch jobs that need to run in a short time window
Other compute-intensive applications

Q3a. Do you have any business workloads that currently run in a batch environment (overnight, at weekends)?
Yes
No

Q3b. Would there be business benefit to running these jobs interactively / on demand rather than in batch?
Yes
No

Q4. From whom would you seek advice and high performance computing solutions for the compute-intensive applications and workloads you have identified?
(please specify)

Q5. What, if anything, is stopping you from using such high performance computing systems today? (Please pick all that apply)
Cost
Complexity
Lack of applications
No business requirement
Other (please specify)

Q6. Do you have any other thoughts about use of high performance computing in your business?

Q7. Which of the following best describes your organisation?
An IT vendor
A telecom service provider
Other business with fewer than 250 employees
Other business with 250 to 5,000 employees
Other business with more than 5,000 employees
Public sector
None of the above (please specify)