
riverat1 writes "After being embarrassed when the Europeans did a better job forecasting Sandy than the National Weather Service, Congress allocated $25 million ($23.7 million after sequestration) in the Sandy relief bill for upgrades to forecasting and supercomputer resources. The NWS announced that their main forecasting computer will be upgraded from the current 213 TeraFlops to 2,600 TFlops by fiscal year 2015, over a twelve-fold increase. The upgrade is expected to improve the horizontal grid scale by a factor of 3, allowing more precise forecasting of local weather features. Some of the allocated funds will also be used to hire contract scientists to improve the forecast model physics and enhance the collection and assimilation of data."

Europe: 70 TFLOPS, an upgrade to be finished by early 2013 (Sandy was in Oct 2012), which they say will make it about 3 times the power of the computer it replaces, i.e. about 23 TFLOPS before. They did a partial upgrade during Sandy, to about 50 TFLOPS.

USA: 213 TFLOPS, to be upgraded to 2,600 TFLOPS.

So no: the Europeans did the prediction with 10%-20% of the US's current supercomputing power, and about 2% of the proposed supercomputing power. This is just a subsidy to the supercomputer industry (and indirectly US chip makers) at a time when the PC market is tanking. It has nothing to do with the garbage the US produced; they just used a bad model.
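The percentages in the comparison above can be checked with a few lines. The TFLOPS figures are the ones quoted in this thread, not independently verified:

```python
# Sanity-checking the percentages quoted in this thread (figures as claimed
# by commenters, not independently verified).
europe_before = 23.0    # TFLOPS, ECMWF system before its upgrade
europe_sandy = 50.0     # TFLOPS, partial upgrade in place during Sandy
us_current = 213.0      # TFLOPS, current NWS machine
us_proposed = 2600.0    # TFLOPS, proposed NWS machine

print(f"{europe_before / us_current:.0%}")   # 11%
print(f"{europe_sandy / us_current:.0%}")    # 23%
print(f"{europe_sandy / us_proposed:.1%}")   # 1.9%
```

So "10%-20% of the current power, 2% of the proposed power" is roughly consistent with the numbers quoted elsewhere in the thread.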

"Replacement of the second cluster will be completed in early 2013. Each cluster has 768 POWER7-775 servers connected by the IBM Host Fabric Interface (HFI) interconnect."

"For the first time the processor clock frequency actually decreased, going from 4.7GHz to 3.83GHz; despite this, each processor core has a theoretical peak performance 60% greater than that of the POWER6. For ECMWF's applications the system is about three times as powerful as the system it replaced. The first operational forecasts using this system were produced on 24 October 2012."

This is similar to how politicians and teachers' unions insist that the way to produce better results in our public schools is to throw more money at them. Meanwhile, European schools (and even American private schools) do better with less.

The assertion that private schools are better is a dangerously misguided one for a few reasons:

1) private schools can and do choose the students they want. So they either throw out or do not enroll the students who take the most time and effort, including those with discipline problems, learning disabilities, and physical disabilities. So they avoid all of the expensive students, and those students are enormously expensive to take care of.

So much ignorance, I'm not sure where to start. First, I work in a public school. I've worked in private schools, and my mother runs a (non-religious) private school. My wife has taught in other private and public schools. My daughter has an IEP.

Teachers' unions generally do not want more money thrown at the schools, though it depends on the state; the political games in public education differ significantly by state. In most "at-will employment" states, the teachers' union is mostly there for show. The district mostly controls the teachers' union. In states like this, the district administration pushes for more money to be thrown at "schools." This is because they control how the money is actually spent, and as a result, most of the money doesn't get to the school level.

There is a huge misdirection that most of the general public has fallen for in public education. The perception among the general public is that the "schools" are at fault. In reality, the district administration controls and dictates everything. A school principal has much less authority and autonomy than most people realize. This works out great for the district administration, because the school staff regularly become the scapegoat for failed district policies. In many states, counties, districts, cities, a school can do very little other than what district administration tells them to do. In effect, a school has all the accountability with none of the authority. Meanwhile, the district administration continues to make decisions in a vacuum while collecting paychecks that would make a seasoned IT Director blush.

I've been in countless IEP meetings, both as a parent and as a school administrator. Most IEP meetings are educators and parents. 4-5 school staff, 1-2 parents. The school staff are usually the various specialists (speech, OT, learning specialist, etc) and general ed teacher. I will usually be involved if there's some behavior concerns related to the IEP. I have never had a lawyer in an IEP meeting, other than a child advocate when there's social services involvement with a student. If a district needs lawyers at every IEP meeting, they're doing something very wrong. That would suggest the district is the problem, not the system itself.

One last note, socio-economic status has a larger impact on student success than most people want to admit. Districts don't want to talk about that because then they might lose the federal programs thanks to No Child Left Behind.

tl;dr It's not the individual public schools that are a problem, it's district administrations and school boards that have created huge bureaucratic structures and keep huge portions of money at the district level. This is really why private and charter schools generally do better with less money. They don't have the huge bureaucracy sucking up the money.

You are correct. Add to this the cozy relationship between the Administrators and the Politicians, and the natural tendency to create benefit and retirement packages that the taxpayers will never, ever be able to pay for...

1. Less teacher overhead (time spent on rulebreakers and slow students), because most students actually want to be there. And those who do not want to be there will break rules, and they will either get kicked out or their parents will donate large sums of money (both help solve problems). Public schools are the providers of last resort, so they can't pick and choose; this holds back results

At http://www.ecmwf.int/services/computing/overview/ibm_cluster_phase2.html we can see that ECMWF lists 70 TFLOPS as the SUSTAINED performance of their system, whereas the US numbers (213 TFLOPS and 2.6 PFLOPS) are peak. Big difference.

You're confusing peak and sustained performance. According to the link you provided the latest European system has 70 Tflops sustained performance, but 1.5 Pflops peak. According to this article [theregister.co.uk] the ratings given for the American systems are peak, so the European system is much more powerful than the current US system.
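The apples-to-oranges problem can be made concrete in a couple of lines. The figures are the ones quoted in this thread; the US "sustained" number at the end is purely an assumption for illustration, not a measurement:

```python
# Peak vs. sustained, using the figures quoted in this thread.
ecmwf_sustained = 70.0   # TFLOPS, sustained on ECMWF's workload
ecmwf_peak = 1500.0      # TFLOPS, theoretical peak
us_peak = 213.0          # TFLOPS, current US machine (peak rating)

# Fraction of peak the ECMWF system actually sustains on its workload
efficiency = ecmwf_sustained / ecmwf_peak
print(f"ECMWF sustained/peak: {efficiency:.1%}")  # 4.7%

# If the US machine sustained the same fraction of its peak (an assumption,
# not a measured number), its sustained figure would be roughly:
print(f"US sustained estimate: {us_peak * efficiency:.0f} TFLOPS")  # 10 TFLOPS
```

The point being: comparing one machine's sustained rating against another's peak rating tells you very little either way.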

The ECMWF, for example, utilizes an IBM system capable of over 600 teraflops that ranks among the most powerful in the world, and it's used specifically for medium-range models. That, fundamentally, is the reason their model frequently outperforms the American one. The US National Weather Service’s modeling center runs a diversity of short-, medium-, and long-term models, all on a much smaller supercomputer. The National Weather Service has to do more with less.

The Europeans rightfully use Fortran for the numerical simulations, while the US hipster-doofus coderz use C and tons of flying pointers everywhere (essentially just sophisticated GOTOs). This creates code that is far, far less efficient. I wouldn't be surprised if much of their C codebase has been refactored with the use of automated tools several times.

This upgrade in computing power is to move the US from 3DVAR to 4DVAR; however, it does nothing to improve the US weather models. This is interesting, in that 4DVAR can give worse results than 3DVAR while using additional compute power. There was a nice paper written on this:

Before everyone has a European-model lovefest, I should point out that the European model has been *spectacularly* wrong about a few hurricanes over the past 2 years with respect to South Florida landfall (or non-landfall). I remember at least one (possibly Sandy or Isaac) where it was totally out in wacky-land ~5 days out, and didn't converge back into agreement with reality until 2-3 days later.

I don't know if it was the NWS or the EU agency that did it, but the predictions for Sandy were pretty much dead on accurate. North up the coast to Long Island, sharp left turn into Jersey and then back North to dissipate. Storm surge corresponding with high tide at Battery Park (whatever time it was), and they got that right too.

The NWS, or whoever, got the storm right. Same with Katrina. How it will affect the infrastructure it hits is a different story and doesn't seem to be the NWS's problem.

Why not just pay attention to the European forecasts, which would cost nothing?

Actually, the NWS pays a great deal of money to see the ECMWF (the European model of choice) and are required to encrypt it before it is sent out to the various forecast offices over their NOAAPort system.

Because Europe was better? Why not: because we want to increase our quality regardless of what others are doing. Think about it: if the Europeans hadn't been better, what you had would have been good enough. Or: "We could predict the storms better and potentially save lives, but who really cares? We are already the best at doing it. USA! USA! USA!"

The NWS doesn't need a faster supercomputer. The current one can pump out bad results based on a flawed set of algorithms at a perfectly useable rate. What the current computer can't do is act as an East Coast regional processing point for THIS [slashdot.org].

This all costs money: the further ahead and the more precisely you can forecast the storm track, the less it costs. And yes, the NWS will have had to provide good evidence they can save that money in order to justify the upgrade.

But they do get tired. Not sure about a hurricane, but I know that at least some types of seals (whatever's in the San Diego area anyway) head out to sea when there is a storm, presumably so they don't get bashed into the beaches or the rocks. Afterwards they're very tired (ever try swimming 24 hours through a storm?) and go up on the beach to rest. At times like that you'll see loads of them, much more than usual.

I swear, the way things are going, I expect people to start living on platforms suspended over active volcanoes and demanding taxpayer dollars for their air conditioning costs.

If you live somewhere that nature has decided is no longer going to be habitable by humans, get out or go down with the damn ship but either way do not expect anyone to help you rebuild in the same place. The most the taxpayer should be on the hook for is helping you relocate, which is generous enough. Evacuate permanently, or not a

If you live somewhere that nature has decided is no longer going to be habitable by humans

Where in the country isn't there such a potential? I think it's ridiculous to put buildings in very low lying areas, often a few feet above sea level, but entire parts of the country? I went through Sandy without a scratch to my house on Long Island. I wasn't dumb enough to buy in a low lying area, but otherwise it was mostly luck. Even aside from that the damage to the infrastructure wasn't fun. There are limits to what you can do to live in areas that aren't subject to any natural disasters.

I live on the banks of a small stream, in a 170+ year old former water-powered factory, so I am forced to pay Federal Flood Insurance. It'll never pay out - my money will go to rebuild the homes of people far wealthier than me, who can afford to live on hurricane-swept beaches and barrier islands.

If I had the money for beach property, or chose to live on a barrier island, I would not evacuate. If you can't face real life in the place you live, you should move. Note, though, I personally am all in favor o

If your house is in an area blackened by recurring fires, and you think it's reasonable for you to evacuate when one's expected and then receive government assistance to rebuild in the same damn place when it burns down, then you and I are talking about the same things and we're having a conversation.

Otherwise, not; we're just talking past each other.

That being said, the best thing to do if your house catches fire is put it out.

If you live somewhere that nature has decided is no longer going to be habitable by humans, get out or go down with the damn ship but either way do not expect anyone to help you rebuild in the same place.

Name a part of the United States that is not at risk from a natural disaster of any kind. Now name a part large enough to support 300 million people.

If your house gets wiped out once, that's very sad. Let's all chip in to put you some other place. If you insist on staying, you should be on your own - it's disgusting when society encourages people to be craven parasites.

Except that after a relatively short period of time, there will have been enough hurricanes, tornadoes, earthquakes, floods, droughts, wildfires, volcanic eruptions, meteor strikes, and alien invasions that there won't be "some other place" left.

Relative to what, the heat-death of the universe? Here on Earth, there are remarkably many buildings that are far older than any living human. My own house is over 170 years old, and my sister lives in a 600 year old house in England that isn't going anywhere in our lifetimes. If you are trying to claim that there is no better place to put the people who lose their homes in hurricanes, and that we have no other option but to rebuild their homes where we know they will be again destroyed, I'll remain unco

What? I thought you'd made it clear you don't care about costs - rebuilding the rich man's beach house at taxpayer expense over and over again is what I'm arguing against.

I'm not the one forcing cost and risk on others here - I already volunteered to go down with the ship. Please don't "rescue" me and don't try to claim I'll change my tune when the water rises - I've been through several hurricanes already, and I don't evacuate or call for taxpayer help, I get in there and do stuff.

Of course. My reply was a half-assed attempt at humor. I could just as easily point out, like so many others in this thread, that "the Europeans" used much, much less processing power (than the proposed 2,600 TFLOPS) to come to a much better prediction of the outcome.

The hardware is already in place, putting more in place is just adding precision to an inferior result. 1.023 vs 1.0234762 is still shit when the correct answer is 2.

And yes, the NWS will have had to provide good evidence they can save that money in order to justify the upgrade.

Well, what is a human life worth then? Or a life's work? How about spending some

... the Europeans did a better job forecasting Hurricane Sandy. Oh. Didn't know that. But hey when they make a movie of it, I'm sure they will present as fact that the American system was the most awesome thing and NWS was right on the money with typical awesome American ingenuity.... sorry, 'Argo' flashback.

Though most of the world uses the Celsius scale, the Fahrenheit scale may be better suited to meteorology. For one thing, it is more precise and less coarse simply because each degree represents a smaller interval.

Bullshit. There is no precision to be had from choosing a unit, the precision comes from not being an idiot and doing all your calculations in straight integers.

More importantly, the range in temperature from 0 to 100 degrees Fahrenheit almost perfectly demarcates the extremes found in the climates of the United States and Europe; it seldom gets any hotter or colder. The convenience of a perfect 100 degree interval encompassing the temperatures in which most of us live seems a pity to lose. (The same range on the Celsius scale is a clumsier -18 to +38 degrees.)

More bullshit. The argument is based on how you feel towards a given range, but nobody is going to do those calculations by hand. You could just as easily have a range of 0-1 and have the exact same precision as before, just more numbers after the decimal point.
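The point that precision lives in the representation rather than the unit can be sketched with integer (fixed-point) temperatures; the tenths-of-a-degree encoding here is just an illustrative choice:

```python
# Precision comes from the representation, not the unit: storing tenths of a
# degree as integers gives the same kind of exactness in either scale.
def c_to_f(c_tenths):
    """Tenths of a degree Celsius to tenths of a degree Fahrenheit,
    using only integer arithmetic (floor division)."""
    return c_tenths * 9 // 5 + 320

print(c_to_f(0))     # 320  (0.0 C -> 32.0 F)
print(c_to_f(1000))  # 2120 (100.0 C -> 212.0 F)
print(c_to_f(-180))  # -4   (-18.0 C -> -0.4 F)
```

Either scale can carry arbitrary precision by scaling the integers further (hundredths, thousandths, ...); the choice of unit adds nothing.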

And predicting the weather is not about predicting the normal as much as predicting the extremes, which would lie outside your "perfect range".

The everyday public benefits from not having to think about fractional degrees, which can't even be felt on the Fahrenheit scale.

Fractional degrees can't really be felt on the Celsius scale, either.

Really, its only benefits are that the numbers are convenient for us to think about in everyday use while not doing a lot of mathematics with them. But those are benefits.

What, exactly, is more convenient about F than C? You don't really do arithmetic on either in daily use. So it's all down to what you're conditioned to accept. It's 17 C / 62 F here right now, but the second number would not tell me anything if I didn't know that it translates to the first. For you, it's the other way around. Neither is inherently more convenient to think about.

On the other hand, zero Celsius is the boundary line between dealing with frozen water (as ice or snow) and dealing with liquid water (as flooding or rain). That's incredibly convenient when travelling. I don't think that the nuanced subtlety implied by indicating that it's going to be 95F instead of 94F tomorrow is really worth the tradeoff, or for that matter reflected in the precision of the model itself.

On the other hand, zero Celsius is the boundary line between dealing with frozen water (as ice or snow) and dealing with liquid water (as flooding or rain). That's incredibly convenient when travelling.

Not that I'm in favor of fahrenheit (I'm not) but water freezing at 32F isn't any more difficult to deal with than 0C. Both are arbitrarily chosen scales. Celsius has the nice round numbers but from a practical day to day usage standpoint that matters not at all. I know that water freezes below 32F and that doesn't take up any more room in my brain than 0C. The only real problem is that I have to remember two scales instead of one. Since Celsius is the more widely used scale, I wish we would swi

It appears that the computers that Europe was using for the "better forecast" were not as powerful [ecmwf.int] as the [metoffice.gov.uk] old system being replaced. Upgrading because Europe's forecast better would be like taking a slow route to a holiday destination then buying a Porsche because your neighbours got there sooner when all you need is a new roadmap.

Also, though I would like to believe that Europeans have superior algorithms, realistically the hurricane prediction could be a "one-off". We know that modeling weather can give widely different results based on small variations in starting conditions, assumptions, etc. Unless there is evidence that European forecasts are consistently better, it could just be luck. With the known chaotic nature of storm systems it wouldn't surprise me if the "butterfly effect" of the rounding errors when converting from C to F would be enough to displace a storm by hundreds of miles!

To detect butterfly effects they bracket the scenarios with a small delta and see if it swings off chaotically. If that happens, then they know they can't make a realistic prediction, because the sensors they have don't permit it. Adding more computing power doesn't fix anything then.
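The bracketing technique described above can be illustrated with the classic Lorenz-63 system, a textbook chaotic toy model (not an actual weather code): run twin simulations whose initial conditions differ by a tiny delta and watch whether they swing apart.

```python
# Toy "bracketing" demo on the Lorenz-63 system: two runs differing only by
# a tiny initial perturbation, integrated with simple forward-Euler steps.
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def max_divergence(steps, delta=1e-8):
    """Largest coordinate-wise gap seen between two runs whose starting
    points differ by `delta` in one coordinate."""
    a = (1.0, 1.0, 1.0)
    b = (1.0 + delta, 1.0, 1.0)
    worst = 0.0
    for _ in range(steps):
        a, b = lorenz_step(a), lorenz_step(b)
        worst = max(worst, max(abs(p - q) for p, q in zip(a, b)))
    return worst

# A 1e-8 nudge grows to order-10 well within 50 model time units: past that
# horizon no amount of extra compute makes the forecast realistic, because
# the uncertainty in the starting data dominates.
print(max_divergence(5000))
```

If the bracketed runs stay together, the forecast is credible; if they fly apart like this, the sensors (not the supercomputer) are the limiting factor, which is exactly the parent's point.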

No, it was simply a slightly better algorithm run on a computer with a tiny fraction of the processing power of the *current* US supercomputer. I notice there's a lot of government money going into supercomputers as the PC market drie

It's not just once. Several hurricanes and other severe weather systems have been most accurately predicted by the European model. In fact, if you read some of the links in the article, you'll see references to that.

With the known chaotic nature of storm systems it wouldn't surprise me if the "butterfly effect" of the rounding errors when converting from C to F would be enough to displace a storm by hundreds of miles!

Absolutely not the case. First, all non-trivial computational fluid dynamics codes (e.g. those used for weather prediction) use non-dimensionalized values in the governing equations. You're not solving with C vs F (you'd never use either anyway, but absolute kelvin vs rankine), or meters vs feet, but non-dimensional numbers which are only converted back to dimensional ones after all the heavy computation is done.
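A minimal sketch of that non-dimensionalization step; the reference values here are invented for illustration and not taken from any real model:

```python
# Sketch of working in non-dimensional temperature, as described above.
# T_REF and DELTA_T are invented reference values, not from a real model.
T_REF = 288.15   # K, reference temperature (assumed)
DELTA_T = 50.0   # K, characteristic temperature scale (assumed)

def to_nondim(t_kelvin):
    """Map an absolute temperature to a dimensionless value."""
    return (t_kelvin - T_REF) / DELTA_T

def from_nondim(theta):
    """Convert back to Kelvin once the heavy computation is done."""
    return theta * DELTA_T + T_REF

# The solver only ever sees theta; whether the input was reported in C or F
# is irrelevant once it has been converted to an absolute scale.
theta = to_nondim(300.0)
print(round(from_nondim(theta), 6))  # 300.0
```

Since C-to-F conversion happens (at most) once at the input boundary, its rounding error never accumulates inside the time-stepping loop.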

Secondly, even if one were to use dimensional values in solving the equations, the round off err

No, that article is drumming up controversy that doesn't really exist. From TFA:

Only 19 percent of U.S. meteorologists saw human influences as the sole driver of climate change in a 2011 survey.

I'm surprised it isn't 0%. The vast majority of climate scientists don't believe human influence is the sole cause either. Considering how much the climate has changed w/o human intervention, it's ridiculous. One of the difficulties of convincing people of AGW is that it's superimposed on a natural warming trend (emergence from the little ice age and beyond).

It's kind of astonishing how little we (by which I mean the U.S.A.) spend on weather forecasting relative to the economic effects. The economic costs of weather are in the hundreds of billions of dollars annually. You can't change the weather, but more accurate predictions will save more lives and property.

I try not to plan my life around the weather, but a few million to possibly offset billions in damage from an incorrect hurricane path prediction is a no-brainer.

Where did you find information on the USA's spending on weather forecasting? Is it really that much lower than that of the European countries?

People seem to see embarrassment in the fact that the European weather forecasting system is so much better, but Europe consists of 50 countries with a total population of 750 million. I don't know how many of those countries pay into that weather system funding pot, but I'll betcha it's most of them.

I'm a bit confused... why is so much money being spent if the technology already exists elsewhere? What about remote computing? Why can't we share resources? A 2.6 PFlop supercomputer had better last us a long time. I can't imagine what the "1.21 Gigawatt" power bill will look like.

Here's a great chance to jump in on another multi-billion dollar government tech boondoggle. Why let SAIC and the other Beltway Bandits scarf up all the big bucks? A bunch of us ought to slap a shell company together and bid like there's no tomorrow. Get on board that gravy train while we can!

If this goes anything like recent FAA, USPS, and VA projects to name but a few, a successful contractor can bill for years while never delivering a finished, operational product.

Surely we can spec a 2.6K TFlop monster, with ancillary systems, and market it to the GSA purse-holders. Easy math. Calculate the probable actual cost (fair bid price), triple it (IBM, Cray, or SAIC's price), and multiply by 0.9 = winning bid (never bid too low on a government contract; they automatically chuck out the highest and lowest).

This is how Government funding works. I was at a workshop on the then-new field of space weather forecasting in the mid 1990s where the keynote address was given by Dr. Joe Friday, at the time the head of the NWS. He pointed out that we would see no serious funding from Congress until there was the space-weather equivalent of a train wreck that kills many voters, or costs the monied interests lots of dinero. (Joe later lost his job when a non-forecastable flood in the mid-west that exceeded the 100-year

Posting anon to avoid burning bridges. NCAR has tried to develop better forecast models, but they've laid off experienced US staff to hire foreign H1B grad students to write their software. I lost my 18+ yr position as a software engineer at NCAR while helping to replace the 1980's crap they use to verify the accuracy of their models with modern software, using modern techniques. They have great hardware but very amateur software. I got a "we've lost funding for you" while they were hiring H1B's. I was of

Supercomputing improvements are nice, but I personally want to see them get the cash to profoundly increase their NEXRAD backhaul (the data lines connecting their radar sites to the outside world).

Right now, they're HORRIBLY backhaul-constrained. I believe most/all NEXRAD sites only have 256kbps frame relay to upload raw data to NOAA's datacenter for further processing & distribution to end users. As a result, they're forced to throw away data at the radar site to trim it down to size, and send it via UDP with little/no modern forward error correction. That's a major reason why glitches are common. In theory, the full-resolution data is archived to tape on site and CAN be mailed in if some major weather event happens that might merit future study, but the majority of collected data gets archived to tape, then unceremoniously overwritten a few days later. And most of the tapes that DO get sent in sit in storage for weeks or months before finally getting added to their near-line data archive.
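A back-of-envelope check on how binding that 256 kbps link is. The per-scan data volume below is an assumed, illustrative figure, not an official one:

```python
# How long does one volume scan take to ship over the 256 kbps frame relay
# described above? scan_bytes is an assumed, illustrative figure.
link_bps = 256_000           # frame relay line rate, bits per second
scan_bytes = 10 * 1024**2    # assume ~10 MB of full-resolution level 2 data

seconds = scan_bytes * 8 / link_bps
print(f"{seconds / 60:.1f} minutes per scan")  # 5.5 minutes per scan
```

With a full volume scan completing roughly every 6 minutes, even this assumed 10 MB payload would nearly saturate the link; anything larger has to be discarded at the radar site before transmission, which matches the trimming described above.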

The low backhaul bandwidth is made worse by the fact that the secondary radar products (level 3 radar, plus the derived products like TVS) get derived on site, and wedged into the SAME bandwidth-constrained data stream. That's part of the reason why level 3 data lags by 6-15 minutes... they send the raw level 2 data, and interleave the previous scan's level 3 data into the bandwidth that's left over. I believe the situation with TDWR sites is even worse... I think THEY actually have a single ISDN line, which is why level 2 data from them isn't available to the public at all.

As I understand it, they can't use lossless compression for two reasons: since they have no error correction for the UDP stream, a glitch would take out a MUCH bigger chunk of data (possibly ruining the remainder of the tilt's data), and adding error correction would defeat the size savings from the compression. Apparently, the processors at the site are pretty slow (by modern computer standards), so it would also add significant delay to getting the data out. When you're tracking a tornado running across the countryside at 50-60mph, 30 seconds matters.

If NWS had funding to increase their backhaul to at least T-1 speeds, they could also tweak their scan strategies a bit to make them more useful to others. For example, they could do more frequent tilt-1 scans (the lowest level, which is the one that usually affects people the most directly), and almost immediately upgrade all current NEXRAD sites to have 1-minute updates for tilt 1 (adding about a minute to the time it takes to do a full volume scan, but putting data more immediately useful to end users out much more frequently).

Going a step further, more bandwidth would open the door to a fairly cheap upgrade to the radar arrays themselves... they could mount a second antenna back-to-back with the current one at a fixed tilt (ideally at 10cm, like the main one, but possibly 5cm like TDWR if 10cm spectrum isn't available, or if a second dish of the proper size for 10cm wouldn't fit), and do some moderate hardware and software tweaks that would effectively increase their tilt-1 scan rate to one every 6-10 seconds (because every full rotation of the main antenna would give them a full tilt-1 rotation off the back). This means they could send out raw tilt-1 data with 6-10 second frequency. It's not quite realtime, but it would be a HUGE improvement over what we have now.

Unfortunately, NWS has lots of bureaucracy, and a slow funding pipeline. I think it's safe to say that the explosion in popularity of personal radar apps, combined with mobile broadband, almost totally caught them by surprise. Ten years ago, very few people outside NWS were calling for large-scale NEXRAD upgrades. Now, with abundant Android and IOS apps & 5mbps+ mobile data the norm, demand is surging.

That said, I hope they DON'T squander a chunk of cash on public datafeed bandwidth instead of upgrading their backhaul. I'd rather see them do the back-end upgrades that only THEY can do, and tell people who want reliable & frequent updates to get their data feed through a private mirror service (like allisonhouse or caprockweather) that can upgrade its own backhaul as needed, instead of having to put in funding requests years in advance.

One key thing you missed. The NWS 88-D radar system *can* take a scan every minute at the expense of resolution and distance. A "full" scan across the commonly used tilts takes *six* minutes. You can have an OC-3 to every radar site, but you're only going to get data every six minutes most of the time.

You're mostly right, but you're overlooking the software limits that exist mainly due to the limited bandwidth. If they upgraded the sites to a full T1 and tweaked the software a bit, they could give us new tilt-1 updates every minute, with about 15-60 seconds of radar-to-end-user latency, without major hardware upgrades besides the T1 interface itself.

Compare that to now, where we get only a single tilt-1 scan every 6 minutes, and that scan might itself be delayed by another 6-10 minutes on top of that. There are ALREADY several VCP programs that sample tilt 1 every minute... they just can't send out that data, and only use it locally for calculating their derived products, because they don't currently have the dedicated bandwidth to send it out.

Remember, WSR88D is kind of like an Atari 2600... it has very few limits that are truly "hard" and insurmountable. Rather, they're software-imposed in recognition of other limiting factors like backhaul bandwidth, or are precautionary limits imposed to guarantee that some specific product can always be fully-derived and delivered within some specific amount of time, or in a way that won't be destroyed by random errors. Many of them could be substantially improved with even minor hardware upgrades in other areas.

There are real limits to resolution imposed by scattering, wavelength, and particle size, but from what I've read, the current level 2 scan data is still throwing away about 30-50% of the nominal max resolution, and enormous amounts of theoretical resolution that could be recovered through oversampling. At this point, NWS doesn't even *know* what they could derive offsite from oversampled level 2 data, because they've never had the backhaul resources to even *fantasize* about streaming it in its full oversampled glory, or even archiving it all on site. 20 years ago, the idea of having 64 terabytes of on-site raid storage for Amazon/Google-like raw indiscriminate archiving would have been unthinkable, and never even entered into the equation.

The current scan rates are a compromise that tries to balance their backhaul against the need to track fast-moving storms like tornadoes. If they mounted a second, fixed-tilt dish back to back with the current dish so that every rotation produced a tilt-1 sample, they could alternate the back-facing samples between slow and fast pulse rates (so every other scan would be alternately optimized for range or resolution), and dedicate the front-facing dish currently in place to sampling the higher tilts (interleaving them to sample lower tilts twice at both PRF rates). Freed of the need to dedicate at least two full sweeps out of each volume scan to tilt 1 (because the back-facing antenna would sample tilt one every time the dish rotated), they could possibly slow down the rotation rate and use it to increase the resolution.

The closest thing I've seen to my idea was a paper someone at NOAA wrote a year or two ago, proposing a compromise between fixed-tilt back-to-back conventional radar and a full-blown (and likely cost-prohibitive) fixed 360-degree phased-array radar. Basically, their idea was to build a limited wedge of PAR modules capable of sampling 4 tilts over ~1 degree horizontal and mount it to the back side of the existing dish assembly, so that it could sample 4 tilts per revolution and give us the equivalent resolution of 4-tilt level 3 TDWR every 12-15 seconds. The idea is that NOAA would then have a TDWR-resolution, rapidly-updating radar source for tracking fast-moving/rapidly-developing storms off the back, and could slow down the overall rotation to get more detailed ultra-hires samples than we have now off the front dish.

The catch, from what I recall, was that they'd HAVE to decrease the RPM and use 5.6GHz (like TDWR) for the rear array, because there just isn't enough S-band 10cm spectrum available to simultaneously broadcast 5 pulse beams without creating an interference scenario that would make their current range-folding issues look downright tame. They'd

Consumer-grade broadband is NOTORIOUSLY vulnerable to regional power outages... something that tends to happen simultaneously with bad storms. Imagine the outrage if South Florida lost its radar every time the outer rain bands of a hurricane started to knock out the local power grid, or if Oklahoma or Kansas lost theirs when an advancing squall line knocked out Comcast's power a half hour before the parade of tornadoes behind it arrived.

AFAIK, it's 256kbps frame relay at WSR-88D sites and 128kbps ISDN at TDWR sites. I believe they're now in the process of upgrading the TDWR sites to 256kbps frame relay, and enabling 1-minute updates for tilt-1 data as the backhaul upgrades are completed.
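
That number explains a lot. Here's a quick sketch of what 256kbps can actually move; the product size is an assumption for illustration, not an actual NEXRAD figure:

```python
# Rough transfer-time arithmetic for the radar backhaul described above.
LINK_BPS = 256_000            # frame-relay line rate (from the post above)
TILT1_PRODUCT_BYTES = 1.5e6   # one compressed tilt-1 sweep (assumed size)

seconds_per_sweep = TILT1_PRODUCT_BYTES * 8 / LINK_BPS
print(f"~{seconds_per_sweep:.0f} s to ship one tilt-1 sweep")  # ~47 s
```

Under those assumptions a single sweep occupies the link for most of a minute, which is why 1-minute tilt-1 updates are about the ceiling of what the current backhaul permits, and why streaming full oversampled level 2 data is out of the question.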

The big, huge, immediate improvement from backhaul upgrades is basically 1-minute updates for the lower tilt. I believe they're doing TDWR now, and hoping to use it as a demonstration of value to gain support for doing the same for the WSR88D sites "really

Cliff Mass, University of Washington Atmospheric Sciences Professor, has been arguing for an upgrade [blogspot.com] for a long time. He sees great potential [blogspot.com] for this new system if used right. The reasons [blogspot.com] for the upgrade boil down to having "huge economic and safety benefits" with better forecasting, and he says these benefits are within our reach.

Which big government contractor needs work now?
That seems to drive these projects more than actual need. I'm guessing the NWS/NOAA has plenty of computing resources; they just need to fine-tune the models a little and collaborate on techniques with the Europeans...

The accuracy measurements in the article are meaningless by themselves. Does anybody know how those slight differences in accuracy translate into dollars saved? Furthermore, why can't we piggyback on the European system? They run global simulations after all, and share the data.

The Euro model by itself was more accurate than the multi-model forecast run by the NWS, which in turn was more accurate than raw GFS. IIRC, the Euro model predicted the Sandy landfall 320km off, while the NWS multi-model analysis was 1500km off, and raw GFS said it wouldn't hit land at all, going WAAAAY east.

The NWS multi-model forecast predicting landfall only came out a few days before it hit, while the Euro model predicted it more than a week in advance. The US Navy multi-model forecast was also ahead of the NWS's.

Thanks, but that really doesn't answer my question about cost/benefit. Even if the European model weren't available and even if an improved model would show such improvements every time, the economic benefit could still be negligible.

They're talking about $25M. While that would more than max out my credit cards, compare it to the $65B that Sandy cost. That's just one storm. So they're proposing to spend 0.04% of the cleanup cost of Sandy on a shinier new computer that hopefully will give them better forecasts. I say it's worth the gamble.
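
The ratio is easy to check (both dollar figures are the ones quoted in this thread):

```python
# Sanity-check the "0.04% of the cleanup cost" claim.
upgrade_cost = 25e6   # proposed supercomputer upgrade
sandy_cost = 65e9     # estimated cost of Sandy
ratio = upgrade_cost / sandy_cost
print(f"{ratio:.2%}")  # 0.04%
```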

OK, here's a hard benefit: imagine how much money it costs a company like Citibank to close offices for a day or more in anticipation of an upcoming storm. It's staggering. If better forecasting allows a company like that to make better decisions about which offices and branches unquestionably have to close, and which ones might safely remain open, the hard dollar value would be measured in millions. Ditto for concert venues, sporting events, and tourist destinations (hello, Disney? Myrtle Beach?).

You're stating the obvious. But we're not talking about an all-or-nothing thing here, we're talking about a small improvement to a generally unreliable prediction.

By analogy, we could build a personal scale that measures my weight precisely down to the milligram, provably beating all the other personal scales on the market. But that would be a waste of money, because such precise measurements just aren't useful for most uses of personal scales.

Your analogy doesn't hold up. You're talking about taking a system that works quite well (the typical bathroom scale) and needlessly refining it. Storm forecasting is, as you point out, nowhere near as accurate as we would like or could use, so it's worth trying to improve. I don't think anyone can say exactly how much a new computer will help; that's still an open research question. However, since Sandy alone cost 2600x as much as this new computer, it seems like it's worth trying.

It would be nice if they'd also do something about the remote sensing infrastructure to get more data to these nice new supercomputers. My current understanding is that the Feds are getting increasingly weak in that department.