The Experimental Forecast Program

Tag: models

The HWT is examining some fairly sophisticated model simulations over a large domain. One question that frequently arises is: Can we trust the model over here if it is wrong over there?

What does “wrong” mean in these somewhat new models? Wrong in the sense that convection is absent, or wrong in the sense that convection is too widespread? Perhaps a particular feature is moving too slowly or too fast. Can you really throw out the whole simulation if a part of it is “wrong”? Or do you just need time to figure out what is good and bad and extract what you can? After all, the model is capable of detail that is not available anywhere else, including observations.

So on Thursday and Friday we discussed how wrong the models have been: the features missed, the features misrepresented, the features absent. Yet each day we were able to extract important information, being careful about what we should believe. On Friday, though, it was a different story. The NSSL WRF simulated satellite imagery was spot on. That is, 14 hours into the simulation, the upper low and its attendant surface cold front were almost identical to what was observed.

Our domain was northern AR, southern MO, and western TN and MS. The models were not in agreement, mind you. The different boundary layer schemes clustered into two groups: all of the schemes were going for initiation in northern AR, while a second group, the TKE-based schemes, also went for the southern part of the cold front. Another signal I was paying attention to was the post-frontal convergence that was showing up. I made note of it and, though I never went back to check all the simulations, I wanted to keep that threat in the forecast. It turns out the TKE schemes hit on all of these features: the northern storms initiated much as the model consensus suggested, the southern storms initiated as well, and so did the secondary episode behind the front (at least from the radar perspective).

The second domain of the day was Savannah, GA, in the afternoon. This was an event involving convection possibly moving in from the west, the sea breeze front penetrating far inland along the east, the sea breeze from the west FL and Gulf Coast penetrating even farther inland, and a highly organized boundary layer sandwiched in between. The models had little in the way of 30 dBZ 1-km reflectivity at hourly intervals. The new CI algorithms, however, showed that CI was occurring along all of the aforementioned features:

1. along the sea breezes,
2. in the boundary layer along horizontal convective rolls,
3. along the intersections of 1 and 2,
4. and finally along the outflow entering our domain.

We went for it, and there was much rejoicing. We watched all afternoon as those storms developed along radar fine lines and along the sea breeze. This was a victory for the models. These storms ended up reaching severe levels, as a few reports came in.

As far as adding value on days like this, I am less certain. Our value was in extracting information, and there is much to add value to. At this stage, we are still learning. It is impossible to draw what the radar will look like in 3 hours (unless there is nothing there). But I think as we assemble the capabilities of these models, we will be able to visualize what the radar might look like. As our group discussed, convection in the atmosphere appears random, but only because we have never seen the underlying organization.

It is elusive because our observing systems do not see uniformly. We see vertical profiles, time series at a location, and snapshots of clouds. We see wind velocity toward or away from radars. We see bugs caught in convergence lines (radar fine lines). So these models provide a new means to see. Maybe we see things we know are there. Maybe we are seeing new things that we don’t even know to look for; since we cannot explain them, we are not looking for them. We expect to see more cool stuff this week.

Thanks to all the forecasters this week who both endured us trying to figure out our practical, time-limited forecast product, and who taught us how to interrogate their unique tools and visualizations. We begin anew tomorrow with a whole new crop of people, a little better organized, with more new stuff on display, and more complex forecasts to issue.

It looks like we have full 18-06Z loops from all available data from the 00Z run of May 6, 2009!

This is a first for the 2009 SE.

With the center point change to CLT today, we are outside the domain of the 00Z CAPS CNA and C0A models. GEMPAK just plots a blank image with a title for those frames. Every other model appears to have data for the 18Z-06Z period. This includes the following 00Z runs:

WRF-AFWA
WRF-NCAR
WRF-NMM
WRF-NSSL
WRF-CAPS1
CAPS-SSEF-ALL (18 members)
CAPS-CNA – no grids in the selected domain; not shown in the loop links below.
CAPS-C0A – no grids in the selected domain; not shown in the loop links below.
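The blank frames come down to simple geometry: the new center point falls outside those two models' grids, so there is nothing to plot. A minimal sketch of that kind of in-domain check, using entirely hypothetical lat/lon bounds (the real CAPS grids are on a projected grid, not a simple lat/lon box, and their actual extents are not listed here):

```python
# Sketch of a point-in-domain check: a center point outside a model's
# grid yields no data, hence GEMPAK's blank frames with only a title.
# The bounds and coordinates below are illustrative, not the real grids.

def point_in_domain(lat, lon, bounds):
    """Return True if (lat, lon) falls inside a rectangular lat/lon box.

    bounds is (south, north, west, east) in degrees, west/east negative
    for the western hemisphere.
    """
    south, north, west, east = bounds
    return south <= lat <= north and west <= lon <= east

# Hypothetical bounds for a CAPS-style domain (south, north, west, east)
CAPS_BOUNDS = (30.0, 45.0, -105.0, -85.0)

# Approximate coordinates for Charlotte, NC (CLT)
clt_lat, clt_lon = 35.2, -80.9

if not point_in_domain(clt_lat, clt_lon, CAPS_BOUNDS):
    print("CAPS-CNA/C0A: no grids in selected domain")
```

With these made-up bounds, CLT's longitude lies east of the domain's eastern edge, so the check fails and the model is flagged as having no grids for the loop.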

Here are the links to the loops for this 00Z run of the models (the verifying base/composite reflectivity will fill in tonight):