
The idea behind processor sets has been around for a decade or so in the HPC arena. You’ve got certain jobs that require a certain amount of CPU resources, or a certain I/O profile, so you want to dedicate some CPUs just to them. Solaris has had processor controls since the dark days of 2.6.

*Note:* I’m going to be freely talking about CPUs as the processing unit. This is all on T2ks and so I know that they’re not *real* CPUs – call them thread processing units or something, but for simplicity this document will just call them CPUs and be done with it.

The actual management of processor sets is very straightforward, and I’ll be playing about with them on one of my favourite bits of kit – the Sun T2000.

First of all we use the psrinfo command to view the status of our processors:
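On a fully populated T2000 psrinfo should list 32 virtual processors. The output below is illustrative rather than a captured session – the on-line timestamps are invented and yours will differ:

bash-3.00# psrinfo
0       on-line   since 11/10/2009 10:15:02
1       on-line   since 11/10/2009 10:15:02
2       on-line   since 11/10/2009 10:15:02
3       on-line   since 11/10/2009 10:15:02

...and so on, up to processor 31, giving us CPU IDs 0 through 31 to work with.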

At the same time, let’s have a look with mpstat to get an idea of what the processors are dealing with while this is going on.
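Running mpstat with an interval of a few seconds prints a line per CPU on every sample. Rather than invent numbers, here is just the invocation and the header line, so the column names below make sense:

bash-3.00# mpstat 5
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl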

The important columns here are intr, showing the number of interrupts each CPU is handling. We also need to keep an eye on the number of system calls each CPU is fielding (syscl), and on the context switches and involuntary context switches (csw and icsw respectively) to make sure jobs are completing before the scheduler kicks them off the CPU.

From this we can see we’re getting fairly decent throughput over GigE, and that the interrupts are spread across all the CPUs.

Now let’s create a processor set, and stick half our CPUs in it.

The command is psrset with the -c option to create a set. As this is the first processor set it will be processor set 1 – the next would be 2, and so on.

Remember, we can get the IDs of our CPUs from the psrinfo command.

bash-3.00# psrset -c 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
created processor set 1
processor 0: was not assigned, now 1
processor 1: was not assigned, now 1
processor 2: was not assigned, now 1
processor 3: was not assigned, now 1
processor 4: was not assigned, now 1
processor 5: was not assigned, now 1
processor 6: was not assigned, now 1
processor 7: was not assigned, now 1
processor 8: was not assigned, now 1
processor 9: was not assigned, now 1
processor 10: was not assigned, now 1
processor 11: was not assigned, now 1
processor 12: was not assigned, now 1
processor 13: was not assigned, now 1
processor 14: was not assigned, now 1
processor 15: was not assigned, now 1

Now that we’ve assigned half our CPUs to processor set 1, we want to disable interrupt handling for them. We could use the psradm command to do it on a per CPU basis, but it’s much easier to just apply the setting to the entire processor set.

bash-3.00# psrset -f 1

The -f option disables interrupt handling, and the 1 is the processor set we want to apply this to.
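For comparison, the per-CPU route mentioned above uses psradm with the -i flag, which marks individual processors as not handling I/O interrupts. The CPU list here is just an illustration:

bash-3.00# psradm -i 0 1 2 3

Running psradm -n against the same processors should clear the no-intr state again, but as we have a set it’s simpler to work at the set level.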

Well, that’s broken things. How come the processors in the set are still handling interrupts?

It looks like executing the binary inside the processor set still generates interrupts – but these are unlikely to be network I/O. Check out the number of syscalls being generated! It’s likely an artefact of my poor choice of application – iperf generates a huge number of interrupts and can really cane your ethernet interfaces.

We could use DTrace to have a real poke around, but I think that should be the topic for another day.

Now that we’ve finished playing around, we need to re-enable interrupt handling on those CPUs. As the -f flag to psrset disabled interrupt handling, -n is the option we need to re-enable interrupt handling on a processor set.

bash-3.00# psrset -n 1

Now that the CPUs are handling interrupts again, we need to delete the processor set. We do this by passing the psrset command the -d option and giving it the processor set number.
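The session below is reconstructed from memory rather than captured, so treat the confirmation message as illustrative:

bash-3.00# psrset -d 1
removed processor set 1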

Solaris processor sets are the easiest to use of all the resource controls built into the OS. We can peg things like zones, individual applications, or even specific processes to their own processor sets to control and manage resource usage. This gives us some really fine-grained control over how the system is used, and with a machine like the T2000 it allows us to really scale performance.
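As a quick illustration of that last point – the set number, command name and PID here are made up – psrset’s -e option runs a command inside a set, and -b binds an already-running process to one:

bash-3.00# psrset -e 1 ./batch_job
bash-3.00# psrset -b 1 4201

Child processes should inherit the binding, so this is an easy way to fence off a whole workload without touching the application itself.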


Over on their nTersect blog NVidia have posted an interesting interview with Pat McCormick, a research computer scientist at Los Alamos National Laboratory (LANL). If you’ve ever wondered exactly how using GPUs for computation would work, or how much of a performance improvement it could bring to your workloads, you should watch this interview.

According to Pat, “Our research challenge is dealing with massive amounts of data, not only from the high performance computing aspect but how to analyze the data from simulations.”

This isn’t just an HPC problem; it’s an issue that affects every business today. As storage expands and business needs grow, faster and more efficient methods of data analysis are needed – and GPUs seem to be offering the most cost-efficient way to solve this at the moment.


Sandia’s Sun Constellation system, Red Sky, has been placed at number 10 on the latest Top 500 supercomputer list. It’s a monster of a cluster – 70TB of memory and 47,232 cores, built from Sun x6275 blade systems hooked up in a 3D torus with InfiniBand interconnects.

Marc Hamilton has posted up a timelapse video on his blog over at Sun showing the system being installed.

Red Sky has replaced Sandia’s existing Thunderbird system, and is actually built in the same place where ASCI Red used to live. The x6275 blades use Intel Nehalem EP processors running at 2.96GHz, with no local disk fitted to the blades, allowing much greater density.

Red Sky also features Sun’s new Cooling Door System, which pumps cooled water through the cabinet doors. Sandia’s calculations reckon that this will save over 5 million gallons of water a year compared to traditional air-cooled systems.


Over on his blog at Sun, Glenn Brunette has announced that he’s published a new version of the Solaris 10 Deep Dive security training. He’s updated it to cover new features and tools available in the latest 10/09 release of Solaris 10.

The updated Deep Dive includes things like nss_LDAP support for shadowAccount, ZFS quotas, and an example of using the Solaris Trusted Extensions. As usual it’s well written and aims to expose a huge amount of technology very quickly – so grab a copy and have a read through.


Sun have released a technical report on Transactional Memory, based on their experiences with the (now sadly canned) ROCK processor. “Early Experience with a Commercial Hardware Transactional Memory Implementation” is available as a free download from Sun’s research website – you can grab it at http://research.sun.com/techrep/2009/abstract-180.html

From the abstract:

We report on our experience with the hardware transactional memory (HTM) feature of two revisions of a prototype multicore processor. Our experience includes a number of promising results using HTM to improve performance in a variety of contexts, and also identifies some ways in which the feature could be improved to make it even better. We give detailed accounts of our experiences, sharing techniques we used to achieve the results we have, as well as describing challenges we faced in doing so. This technical report expands on our ASPLOS paper [9], providing more detail and reporting on additional work conducted since that paper was written.

Anyone who’s interested in High Performance Computing (HPC) or performance gains from Transactional Memory should have a read through this paper – it’s interesting stuff.