Posted
by
ScuttleMonkey
on Friday November 14, 2008 @05:46PM
from the minimum-requirements-for-crysis dept.

Protoclown writes "The National Center for Computational Sciences (NCCS), located at Oak Ridge National Labs (ORNL) in Tennessee, has upgraded the Jaguar supercomputer to 1.64-petaflops for use by scientists and engineers working in areas such as climate modeling, renewable energy, materials science, fusion and combustion. The current upgrade is the result of an addition of 200 cabinets of the Cray XT5 to the existing 84 cabinets of the XT4 Jaguar system. Jaguar is now the world's most powerful supercomputer available for open scientific research."

There already exists economic modeling. And it is no more impossible than climate modeling. Granted, human interaction becomes a factor when the general population is aware of the economic predictions, but I am talking theory, not necessarily practice.

Accurate economic modeling needs infinite resources, as the existence of the economic modeler itself has to be taken into account, and it could be argued that the entire universe would have to be modeled 100% accurately - one atom being in a different place could cause drastically different outcomes years down the line, and thus different economic conditions.

The same could be said of any simulation of a system where the system and the simulation itself aren't completely independent (and, one could say, never have been). But since that's an unrealistic expectation we approximate, and AFAIK in the case of economics we do it pretty well.
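
The "one atom in a different place" point is just sensitivity to initial conditions, and you don't need a petaflop machine to see it. Here's a toy sketch in plain Python (the logistic map is chosen purely for illustration, nothing to do with any real economic model):

    # Two chaotic trajectories that start almost identically and then diverge.
    r = 3.9                      # logistic map in its chaotic regime
    x, y = 0.5, 0.5 + 1e-12      # "one atom" of difference in the starting state

    for step in range(1, 61):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        if step % 10 == 0:
            print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  |x-y|={abs(x - y):.2e}")

Within a few dozen iterations the two runs have nothing to do with each other, which is exactly why approximation plus statistics is the best you can hope for.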

Look at the caption on your graph. Hansen's Scenario A is a high-emissions scenario which does not correspond to the emissions which actually occurred. If you want to legitimately test the skill of a climate model, you need to compare apples to apples. In this case, Hansen's Scenario B is the one that most closely corresponded to the real emissions trajectory. (Since Hansen is a climate scientist, not an economist, he gave a range of possible emissions scenarios and did not claim the world would follow any particular one.)

Ultimately human behavior is a near-continuous series of yes/no decisions. Our brains iterate pretty deeply, but at some level it's ones and zeros. Though we may need more petaflops than angels on the head of a pin before we can scratch that itch. At any rate, the application of such a model will probably always doom it to failure.

How much do we really know about climate? Probably a lot less than we think. Scientists are always so sure they are right. And then a few decades pass and they realize they weren't.

I would suggest that you can't be sure of that, especially not for firstborns. You'll have to do some math on an awful lot of people and when you do so you'll make quite a few enemies and find out that a large number of firstborns were conceived before their parents married, in fact were the reason their parents married.

There is a queueing system. If you want to run a job on a machine like this, you log into the control node (which is just a Linux box) and submit your job to the queue, including how many CPUs you need for it and how much time you need on them.

A scheduling algorithm then determines when the various jobs waiting in the queue get to run, and sends mail to their owners when they start and stop.

On many machines there is a debug queue with low limits on number of CPUs and runtime, and thus fast turnover; this is used to run little jobs to ensure everything is working right before you submit the big job to the main queue.
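
If you're curious what the scheduler is actually doing, here's a deliberately dumb sketch in Python (the real thing is PBS/Moab-class software with priorities and backfill; the job sizes and machine size below are made up):

    # Toy strict-FIFO batch scheduler: jobs ask for CPUs and walltime,
    # and start when enough CPUs are free.  No priorities, no backfill.
    TOTAL_CPUS = 150_000

    jobs = [                      # (owner, cpus, walltime_hours), in submission order
        ("alice", 100_000, 12),
        ("bob",    80_000,  6),
        ("carol",   2_000,  1),   # a little debug-queue-sized job
    ]

    running = []                  # (finish_time, owner, cpus)
    now, free = 0, TOTAL_CPUS

    for owner, cpus, hours in jobs:
        # Advance the clock until enough CPUs are free for the next job in line.
        while free < cpus:
            running.sort()
            finish, done_owner, done_cpus = running.pop(0)
            now, free = finish, free + done_cpus
            print(f"t={now:3d}h  mail {done_owner}: job finished, {done_cpus} CPUs released")
        running.append((now + hours, owner, cpus))
        free -= cpus
        print(f"t={now:3d}h  mail {owner}: job started on {cpus} CPUs for {hours}h")

Note that carol's tiny job sits behind bob's in strict FIFO even though it would fit right away; that gap is what backfill schedulers and the separate debug queue exist to fill.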

NCCS is a capability site, so no. You just need to be willing to wait for your job to bubble up to the top of the queue. In fact, as a capability site, the whole point is to develop codes that can run on the entire machine. Now, once your job runs, you will have to wait a while to get another opportunity, as the queues are set up to provide an 8-week moving average of "fairness".
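
The "fairness" part roughly means your queue priority is weighted by how much of the machine your project has used lately. A crude sketch of the idea (the 8-week window is from the comment above, but the weighting and the numbers are placeholders, not NCCS's actual policy):

    from collections import deque

    # Track each project's CPU-hours over the last 8 weeks and dispatch
    # whoever has leaned on the machine the least recently.
    WINDOW_WEEKS = 8

    usage = {                      # project -> CPU-hours per week, most recent last
        "climate":   deque([4e6, 5e6, 6e6, 4e6, 5e6, 6e6, 5e6, 5e6], maxlen=WINDOW_WEEKS),
        "fusion":    deque([1e6, 0,   2e6, 1e6, 0,   1e6, 0,   1e6], maxlen=WINDOW_WEEKS),
        "materials": deque([0,   0,   0,   0,   1e6, 0,   0,   0  ], maxlen=WINDOW_WEEKS),
    }

    def priority(project):
        return -sum(usage[project]) / WINDOW_WEEKS   # lighter recent users go first

    print("dispatch order:", sorted(usage, key=priority, reverse=True))

So after your big run, your project's moving average shoots up and everyone else's jobs jump ahead of yours for a while.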

Hell, you could *simulate* an x86_64 CPU on that thing, and it would not even hiccup on Crysis on Vista with every setting set to "Extreme". I wonder if someone could replace the engine with a full global illumination raytracer first. :D

I don't think you have a clue how such a machine works. To simulate a non-trivially-parallelizable system like an x86_64 on it, you could run your simulation on at most the same number of CPUs as you have cores in your simulated CPU, or fewer (for a much easier to program simulation).

Clusters such as these are *built* using standard hardware, in this case a (rather large) bunch of AMD Opterons.

If you can show me how to run 'Crysis on Vista' faster on a rack with 10 PCs without rewriting either (and I did not s
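
The underlying math is just Amdahl's law: if the code isn't rewritten to expose parallelism, extra nodes buy you almost nothing. A quick back-of-envelope (the 5% parallel fraction is invented purely for illustration, not a measurement of any real game engine):

    # Amdahl's law: speedup = 1 / ((1 - p) + p / n) for parallel fraction p on n processors.
    def speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    p = 0.05                          # assume only 5% of the unmodified code can run in parallel
    for n in (1, 10, 1_000, 45_000):
        print(f"{n:>6} cores -> speedup {speedup(p, n):.3f}x")

Even on 45,000 cores the hypothetical 95%-serial program tops out around a 1.05x speedup, which is why "just throw it at the cluster" doesn't work.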

So, I'm standing in front of my new Jag, with my WoW CD in my trembling hands. Where do I plug in my game keyboard and mouse? There's nothing in the owner's manual about where the plug is to connect to my cable box!? How much was this thing again?

The current upgrade is the result of an addition of 200 cabinets of the Cray XT5 to the existing 84 cabinets of the XT4 Jaguar system.

That sounds like Cray engineered this to aggregate components across product generations. For short product life cycles that seems like a great idea: rather than throwing out the old system when you get the new one, you combine the two. For long product life cycles, though, it would be a losing proposition; the space and power consumed by inefficient older components would eventually cost more than the expense of upgrading to the latest model, once you count the space and power the newer gear saves.
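
Whether combining generations pays off is basically a power-bill-versus-purchase-price calculation. Every number below is invented just to show the shape of it, not anything from Cray or ORNL:

    # Back-of-envelope: keep old cabinets running vs. replacing their capacity.
    # All figures are hypothetical placeholders.
    OLD_CABINETS       = 84
    KW_PER_OLD_CAB     = 40          # hypothetical draw of an older cabinet
    KW_PER_NEW_CAB     = 45          # hypothetical draw of a denser new cabinet
    WORK_RATIO         = 3.0         # assume one new cabinet does the work of ~3 old ones
    POWER_COST_PER_KWH = 0.06        # dollars, hypothetical
    HOURS_PER_YEAR     = 24 * 365

    old_cost = OLD_CABINETS * KW_PER_OLD_CAB * HOURS_PER_YEAR * POWER_COST_PER_KWH
    new_cost = (OLD_CABINETS / WORK_RATIO) * KW_PER_NEW_CAB * HOURS_PER_YEAR * POWER_COST_PER_KWH

    print(f"old cabinets:            ${old_cost:,.0f}/year in power")
    print(f"equivalent new cabinets: ${new_cost:,.0f}/year in power")
    print(f"annual savings to weigh against the upgrade price: ${old_cost - new_cost:,.0f}")

With short product cycles the savings never catch up to the purchase price; with long ones they eventually do, which is the break-even the parent is describing.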

These systems are not as tightly integrated as you may imagine. True, many sites size a full-speed fabric just right, since every little bit costs a ton. However, commonly at scale, you only have a full-speed fabric within large subsections anyway, and oversubscribe between the subsections. Jobs tend to be scheduled within subsections as they fit, though the inter-subsection links are no slouch.

This is particularly popular as the authoritative Top500 benchmark is not too badly impacted by such a network topology, and re
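
To put "oversubscribed between subsections" in concrete terms, here's a rough sketch (the topology and link counts are invented, not Jaguar's actual SeaStar layout):

    # Oversubscription: nodes inside a subsection get full-speed connectivity,
    # but the subsections share fewer uplinks than they have nodes.  Invented numbers.
    NODES_PER_SUBSECTION   = 512
    LINK_GBPS              = 9.6      # per-node injection bandwidth (hypothetical)
    UPLINKS_PER_SUBSECTION = 128      # links leaving each subsection (hypothetical)

    inside_bw  = NODES_PER_SUBSECTION * LINK_GBPS     # what the nodes can inject locally
    outside_bw = UPLINKS_PER_SUBSECTION * LINK_GBPS   # what the subsection can push out

    print(f"oversubscription ratio: {inside_bw / outside_bw:.0f}:1")

A job that fits inside one subsection never notices the thinner pipes; a job straddling subsections shares them, which is why the scheduler tries to pack jobs within subsections.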

There are much bigger computers at Los Alamos and Lawrence Livermore that are used to model the thermonuclear weapons sitting around in storage, to see if they'll still go pop when we push the Big Red Button.

The scientific community would like to use these machines for something useful, and in fact the scientists at Los Alamos have allowed some folks from my lattice QCD group to use a bit of spare time on it. Unfortunately the UNIX security features aren't enough; they weren't allowed to ftp our data out, so results had to be written down by hand.

LANL, LLNL, and SNL are all weapons labs. ORNL is primarily a science lab.

I myself have worked at three of these labs and held an account on an earlier iteration of Jaguar as well as some of LANL's other supercomputing clusters, so I ought to know.

ORNL's Jaguar cluster is not at all classified, although I think parts of it are "controlled" rather than open so that it can run export-controlled code. It's used for biology, astronomy, physics, CFD, etc.

He was specifically talking about LANL and LLNL rather than ORNL; that was the entire point.

Granted, yes, his description of disallowing classified/non-classified connectivity as "ludicrous" is a little off-base, although writing things down by hand really is stupid - there are plenty of procedures in place for putting data on transportable media and then arranging to declassify that media once it has been verified, so that it can be used elsewhere.

In that case, pardon my misunderstanding in thinking that his post was at all related to the posted article.

His title, "Used for open science...", was a quotation from the summary specifically about the ORNL computer. His rant about "much bigger computers around" was plausibly interpreted as the biggest one, the new ORNL cluster. I certainly must have been misled.

It would be nice on these sorts of systems to have recurring, perhaps low priority, jobs issued by worthy outside distributed computing projects. Depending upon how busy the system is with other jobs it could make regular contributions to drug research and especially to AIDS research. To have complete and accurate pre-computed models of all steps in the protein folding process for all possible mutations of the AIDS virus, for example, would be a technological triumph and of potentially great benefit to humanity in the development of new drugs and possibly even an effective vaccine.

To have complete and accurate pre-computed models of all steps in the protein folding process for all possible mutations of the AIDS virus

1. Each trajectory would be several terabytes (possibly verging on petabytes).

2. The largest simulation I know of is this one: http://www.ks.uiuc.edu/Research/STMV/ [uiuc.edu] - they simulated for 50ns and it's 10 times smaller than HIV. Protein folding takes milliseconds, not nanoseconds, so it's not really tractable right now (see the back-of-the-envelope sketch after this list). I don't know how much CPU time the simulation took, but it would have been a lot.

3. Clusters like these are rarely idle; jobs are queued up to run when the CPUs become available.
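
To put point 2 in numbers: with the standard ~2 fs molecular-dynamics timestep (the atom count and snapshot format below are rough guesses, not figures from the STMV paper), a single millisecond-scale folding trajectory looks like this:

    # Why millisecond folding is out of reach: count the timesteps and the bytes.
    TIMESTEP_FS          = 2             # typical MD timestep
    TARGET_MS            = 1             # folding happens on ~millisecond timescales
    ATOMS                = 10_000_000    # rough guess for a virion-scale system
    BYTES_PER_ATOM_FRAME = 3 * 4         # x, y, z stored as 32-bit floats

    steps = TARGET_MS * 1e-3 / (TIMESTEP_FS * 1e-15)
    print(f"timesteps needed: {steps:.1e}")            # ~5e11 steps

    # Even keeping only one frame per million steps, the trajectory is still huge:
    frames = steps / 1e6
    print(f"frames kept: {frames:.0f}, trajectory ~{frames * ATOMS * BYTES_PER_ATOM_FRAME / 1e12:.0f} TB")

And that's one mutation, one starting condition; "all possible mutations" multiplies it out of existence.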

There's no reason whatsoever to use a highly-connected, high-bandwidth HPC machine like Jaguar on distributed-computing jobs. There are other very worthy jobs that can be run on such a system that can't be run on a pile of desktops all over the internet. Use the real supercomputers for real supercomputer jobs. There are plenty of idle Xboxes in the world for distributed computing.

Climate change is gradual, but the emissions we put into the atmosphere today will last for centuries. Even if we switched over to all fusion power tomorrow, we'd still see more climate change, and the longer we wait to replace fossil fuels, the more we will see. Realistically, it takes a long time to widely deploy a new energy technology. Fusion isn't even feasible in the lab, let alone ready for deployment, let alone widely deployed.

Also, even if fusion were widely deployed, that doesn't mean we'd nece

I noticed this a short time ago, but have yet to see the 'Rmax' performance. They speak to Rpeak, which does beat out the current Rpeak by 23%, though Rpeak by itself is even less informative than Rmax, which is already quite synthetic. Assuming the current #1 hasn't managed tuning or upgrades, this will have to beat 65% efficiency to technically win. 65% is likely an achievable goal, though the larger the run, the harder it is to extract a reasonable efficiency number, so it's not certain. I w

In the June 2008 Top 500 list, the Cray XT Jaguar was number 5 with 205 teraflop/s. By comparison, the number 1 was an IBM Roadrunner BladeCenter system, with a mix of 6,562 dual-core Opterons and 12,240 PowerXCell 8i Cell processors, housed in 278 cabinets. That got up to 1.026 petaflop/s.

In June the Jaguar had 30,000 Quad Core Opterons, and now it has 45,000. The previous machine was an XT4, but the most recent update shows that 200 XT5 cabinets have been added to it. I have been unable to find how many cabinet
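
Using just the numbers in this thread (1.64 petaflops Rpeak for the upgraded Jaguar, 1.026 petaflops Rmax for Roadrunner), the required efficiency works out to the low 60s percent, in the same ballpark as the 65% figure mentioned above:

    # Efficiency (Rmax / Rpeak) the new Jaguar must reach to beat Roadrunner's measured Rmax.
    jaguar_rpeak_pf    = 1.64      # from the summary
    roadrunner_rmax_pf = 1.026     # June 2008 Top500 number quoted above

    print(f"needs Rmax/Rpeak above {roadrunner_rmax_pf / jaguar_rpeak_pf:.1%} to take the top spot")

Whether the Linpack run actually hits that at full scale is, as noted, the part we have to wait for.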

In the not too distant future, we shall see a new Top 500 list. It seems like just yesterday that Roadrunner cracked the petaflops barrier, and the whole world seems to have fallen on its ass in the interim. Banking failures, government bailouts, people losing their retirement portfolios. The irony is too much. Even as the computers get better, the answers that people need don't come fast enough.

Then the light turned on for me. People in general, the people you see on the street going on their busy way to whatever, are mostly relying on "someone else" to come up with the answers. Most people have little confidence in their own ability to answer hard questions.

Well, maybe things will turn around because of the power of supercomputers. It would be about time, wouldn't it? Here's how it may play out. Supercomputers so far, good as they are, serve up expensive results, so they are applied to difficult problems that are useful but far removed from everyday life.

As supercomputer clock cycles become more abundant, researchers can apply them to more mundane things that the unwashed can relate to. The result could be revolutionary. People who have always aspired to some inconsequential achievement that requires some expertise or training may suddenly have access to highly instructive supercomputer-generated procedures that explain both how and why. Not only will people become more expert do-it-yourselfers, but robots will become far more versatile, with amazing repertoires.

Crossing the petaflops barrier may be sufficient psychological incentive for people to request that governments begin to make supercomputing infrastructure available for public consumption, like roads and other services. Certainly, exciting times are coming.

I would say the issue is less a lack of evidence than a lack of demonstrated causation as opposed to mere correlation. Scientists appear to agree that, at least in the short term, the earth is a little warmer. What they can't say with any certainty is why. Anthropogenic warming is the desired cause, as that is the only one we can do a damn thing about.

Global warming is not based merely on correlation studies. It has a direct and well understood physical cause, which is the greenhouse effect. (What is less understood is the climate system feedbacks which modify the greenhouse effect.) And climate scientists can indeed say with a high degree of confidence that the recent warming is due mostly to human activities. This evidence comes from physical reasoning as well as observational measurements (such as the stratospheric cooling signature of the enhanced greenhouse effect).
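
For what it's worth, the well-understood part can be written down in a couple of standard textbook relations. The forcing formula below is the widely used simplified CO2 expression, and the sensitivity parameter is a commonly quoted mid-range value; the feedbacks are exactly where the real uncertainty lives:

    import math

    # Simplified CO2 radiative forcing (the standard 5.35 * ln(C/C0) expression)
    # plus an equilibrium temperature response for a mid-range climate sensitivity.
    def forcing_wm2(c_ppm, c0_ppm=280.0):
        return 5.35 * math.log(c_ppm / c0_ppm)

    SENSITIVITY_K_PER_WM2 = 0.8        # roughly 3 K per CO2 doubling; genuinely uncertain

    for c in (385, 560):               # ~present-day concentration, and doubled pre-industrial
        dF = forcing_wm2(c)
        print(f"{c} ppm: forcing ~{dF:.2f} W/m^2, equilibrium warming ~{dF * SENSITIVITY_K_PER_WM2:.1f} K")

The hard, supercomputer-sized part is pinning down that sensitivity number and the regional details, not the existence of the effect.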

The worst IPCC scenario (A1FI) gives a worst case of 6.4 C (11.5 F) global warming in less than 100 years. I don't know if it's the worst thing that is likely to happen, but that's not that tame (especially when you consider that land warms faster than the global average, and northern latitudes even faster than that). As for the economics, Nordhaus's book A Question of Balance is a good place to start. Nordhaus isn't ideological. It's economically worth mitigating some CO2 emissions to insure against the worst outcomes.

Whether it's man made or not is quite important, actually. If it's not (a scenario that's looking pretty damned unlikely) then doing something to halt or slow it becomes difficult as we have to find out what the hell IS causing the problem.

This point was made about astrological observations on older slashdot stories and it holds true for climate prediction too.

I have to admit I am sceptical about blindly believing in global warming. I used to in the past, but I've become a little smarter since then, and I cannot see any hard observations for it, especially when volcanoes pump out 26 times more CO2 per year than all of humanity on the planet (however I'm slightly sceptical of ho

I have to admit I am sceptical about blindly believing in global warming. I used to in the past, but I've become a little smarter since then, and I cannot see any hard observations for it, especially when volcanoes pump out 26 times more CO2 per year than all of humanity on the planet

That's not even remotely true. Volcanic CO2 emissions are about 1% of human CO2 emissions (see here [usgs.gov]). Where did you get the rather specific, and wrong, factor of 26?

Maybe you'd see the hard evidence if you spent a little more time reading about it, since you appear to have some peculiar misconceptions. I recommend Kerry Emanuel's essay "Phaeton's Reins", David Archer's undergrad textbook, and the IPCC AR4 report for technical details.
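
The arithmetic behind that "about 1%" is short enough to do in your head, but here it is anyway with round, order-of-magnitude numbers of the kind the USGS page cites:

    # Rough annual CO2 emissions, in gigatonnes of CO2 per year (order-of-magnitude figures).
    volcanic_gt = 0.26      # estimate for all volcanoes, land and submarine
    human_gt    = 30.0      # fossil fuel and cement emissions, circa the late 2000s

    print(f"volcanic / human = {volcanic_gt / human_gt:.1%}")   # about 1%, not 26x the other way

So the claim isn't just a little off; it has the ratio pointing in the wrong direction by a factor of a few thousand.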

Having done my graduate work in fluid dynamics, I'd say only half your statement is possibly correct. There is historical evidence for global climate change, both warming and cooling. If it is in our interest to maintain the current status quo, the climate as we know it, it is not at all clear what interventions we need to take, or what effect they might have.
Without simulations that correctly model the real world, we have no way of knowing what our interventions might do.
If anyone is interested, I can elaborate. Th

Well, purely scientific reasons for one. Climate science existed long before global warming was a concern.

Another reason is to inform adaptation. No matter what policy is realistically put in place, at least some more climate change is expected to occur. People are going to have to adapt to whatever change is not prevented. It's thus important to improve our understanding of what may happen. It also tells us how large and how fast a policy response is required, although as you note, we are not yet even

I am so confused by the mods...the post above this one (posted 1 minute before) says exactly the same thing but gets "-1 Redundant"?!! Is it just because it was an AC, or did the title of "So cool!" really change the joke?!?

I believe Cray made its partnership with AMD quite a while ago, back when AMD was still ahead of Intel in the performance/power ratio. In addition, these machines have a very fast interconnect (SeaStar) that is based on HyperTransport links.
I believe it was recently announced that Cray has formed a partnership with Intel, and I imagine they will port the technology to QuickPath for future machines, but QPI was not available at the time this machine was commissioned. One does not simply order a machine like

creating better algorithms? Or at least educating all the non-CS scientists a little bit about performance and optimization?

The guys who work on the dynamical cores for the biggest climate models (NCAR, GFDL, NASA, etc.) do world-class numerical hydrodynamics. Maybe not quite on par with the nuke guys at, say, Sandia, but pretty good. And they do hire programmers and numerical-methods people to do algorithm design, optimization, and parallelization. They're cutting edge in terms of grid solver algorithms for these sorts of problems. There are lots of complications from irregular topography, coupling between atmosphere, oce
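
For anyone wondering what the simplest possible ancestor of a dynamical core looks like, here's a toy explicit finite-difference step of the kind those groups optimize and parallelize (grid size and coefficients are arbitrary; real cores solve coupled 3-D equations on spherical grids with all the complications listed above):

    # Toy explicit solver for du/dt = K * d2u/dx2 on a periodic 1-D grid.
    N, K, DX, DT, STEPS = 64, 1.0, 1.0, 0.2, 200    # DT chosen so K*DT/DX**2 <= 0.5 (stability)

    u = [0.0] * N
    u[N // 2] = 100.0                               # a single hot spot in the middle

    for _ in range(STEPS):
        u = [u[i] + K * DT / DX**2 * (u[(i - 1) % N] - 2 * u[i] + u[(i + 1) % N])
             for i in range(N)]

    print(f"peak after diffusion: {max(u):.2f} (started at 100.00)")

Getting something like this to scale to tens of thousands of cores, with realistic physics attached, is the actual job.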