Tagged Questions

High Performance Computing involves using "supercomputers" with large numbers of CPUs, large storage systems, and advanced networks to perform time-consuming calculations. Parallel algorithms and parallelized storage are essential to this field, as are the challenges of complex, fast networks.

I am trying to set up bursting to Azure with a Windows HPC cluster. The cluster already works fine and I can start jobs on the machines that are on the local network.
When I try and create a node ...

I have a large analysis job on an AWS EC2 instance (c3.8xlarge) on Ubuntu 12.04.
The objective is to load the server at 100% CPU, running as many jobs as memory allows (varying amounts but generally ...

I'm trying to set up a Windows Server cluster with a head node and 2x compute nodes.
So far, I've managed to install the following (thanks to this tutorial: http://msdn.microsoft.com/en-us/library/jj884142.aspx)
...

I'm working with a SLURM-driven HPC cluster, consisting of 1 control node and 34 computation nodes, and since the current system is not exactly very stable I'm looking for guidelines or best practices ...

I was wondering whether the following stack would work, and if so then how well, and what sort of problems I might expect to encounter when setting it up?
Hardware layer - lots of cheap servers
SMP ...

I'm quite new to the HPC environment. Is there any difference between running a job on one node utilizing 8 cores and running the same job on 8 nodes utilizing 1 core each, in terms of performance or walltime used?
...

I have a working Windows HPC cluster with 32 blades, all of them are using Windows HPC.
My question is: can I install Linux on 16 blades and keep the other 16 on Windows? Is there a specific version ...

Looking for feedback: has anyone already played with running ScaleMP Linux appliances in OpenStack (KVM)?
A short description of the setup (w/ or w/o InfiniBand, total amount of RAM, etc) and its ...

I've got a number of servers used for HPC / cluster computing and I noticed that, given the fact that part of the computations they run use huge files over NFS, this causes significant bottlenecks. ...

I'm running scientific software (WRF-NMM, atmosphere simulation, weather prediction). On all computers I have used it on, job run times across cycles were consistent. All those machines were single-socket, ...

Do Xeon Phi coprocessors work with i7 CPUs?
They're advertised for use with Xeons, but for my app (WRF), the i7-3930k performs better and is three times cheaper than high-grade Xeons. So I wonder if I could ...

The instructions for deploying an HPC Cluster (e.g. step 1.5 on this page in TechNet) are very clear that HPC cluster nodes "must be members of an Active Directory domain".
Does the Active Directory ...

We have an environment of a few thousand users running applications on about 40 clusters ranging in size from 20 compute nodes to 98,000 compute nodes. Users on these systems generate massive files ...

First of all, thanks for reading, and sorry for asking something related to my job. I understand that this is something I should solve by myself, but as you will see it's something a bit difficult.
...

It's not clear to me from reading the documentation if I must update the clients, or whether I can just update the head node and compute nodes.
Does anyone have any experience of this? I don't want ...

I know this is a long shot but I'm clueless here. I'm running several computer simulations on a High Performance Computing (HPC) cluster running Oracle Grid Engine (SGE). A single job runs at a certain ...

We are currently running an interactive HPC application which presents a graphical interface to the user, attaches to an HPC cluster and allows them to run and observe some computation. The user logs ...

The cluster resource manager Torque typically allocates compute nodes on an exclusive basis. However, when you have a lot of small jobs (like we do) running against multi-core compute nodes, this can ...

I am looking to use a parallel file system with MPI on a Linux cluster.
I am wondering if parallel file systems like Lustre/Parallel Virtual File System
require special hardware support (special ...

I am setting up a supercomputing Linux cluster at work. We ran the most recent HPCC benchmarks using OpenMPI and GotoBLAS2 but got really bad results. When I ran the benchmarks using one process for ...

We have a couple of servers (part of an HPC cluster) in which we're currently seeing some NFS behavior which is not making sense to me. node1 exports its /lscratch directory via NFS to node2, mounted ...

I'm currently in charge of a rapidly-growing Hadoop cluster for my employer, currently built upon release 0.21.0 with CentOS as the OS for each worker and master node. I've worked through most of the ...

Anyone know of an alternative to ScaleMP? They let several x86 boxes boot as one large box. Theoretically, AMD's HyperTransport should allow the same thing.
Any other companies or OSS projects doing ...