
The Indian Ocean has just had its first Super Cyclone since 2007. Its name is “Phailin”, and I bet you just pronounced it incorrectly (unless you speak Thai). It’s closer to “PIE-leen” than it is to “FAY-lin”. The name was derived from the Thai word for sapphire. (If you go to Google Translate and translate “sapphire” into Thai, you can click on the “audio” icon {that looks like a speaker} in the lower right corner of the text box to hear a robotic voice pronounce it. You can also click on the fourth suggested translation below the text box and try to pronounce that as well.)

If you’re tired of reading about flooding in this blog, you’re probably going to want to avoid reading about Phailin. It already dumped up to 735 mm (28.9 inches) of rain on the Andaman Islands in a 72-hour period. Aside from the heavy rains, Phailin is a textbook example of “rapid intensification”: official estimates of the storm’s intensity grew from 35 kt (65 km h-1 or 40 mph) when the storm was first named to 135 kt (250 km h-1 or 155 mph!) just 48 hours later. Here’s a loop of what that rapid intensification looks like from the geostationary satellite, Meteosat-7. (Those are the Andaman Islands where the cyclone first forms.)

Because VIIRS is on a polar-orbiting satellite, it can’t image the cyclone every 30 minutes the way Meteosat-7 can. VIIRS only views a cyclone like Phailin twice per day. But VIIRS can do things that Meteosat-7 can’t. The first is producing infrared (IR) imagery at 375 m resolution. (Meteosat-7 has 5 km resolution.) The image below is from the high resolution IR band, taken at 20:04 UTC 10 October 2013:

Look at the structure of the clouds surrounding the eye. (You’re definitely going to want to see it at full resolution by clicking on the image, then on the “3875×3019” link below the banner.) VIIRS is detecting wave features in the eyewall that other current IR sensors aren’t able to detect because they don’t have the resolution. The coldest cloud tops are found in the rainband to the west of the eyewall (look for that purple color) and are 179 K (-94 °C). That’s pretty cold!

Also notice the brightness temperature gradient on the west side of the eye is a lot sharper than on the east side of the eye. This is because the satellite is west of the eye (the nadir line is along the left edge of the plotted data), looking down on the storm at an angle, revealing details about the eastern side of the eyewall. Look down on the inside of a cardboard tube or a piece of pipe at an angle to replicate the effect. (Actually, the eye wall of a tropical cyclone slopes away from the center, so it’s more like a funnel than a tube. If you go looking for a cardboard tube or a piece of pipe to look at, the results will be inaccurate. Grab a funnel instead.)

Another advantage of VIIRS is the Day/Night Band, a broadband visible channel that is sensitive to the low levels of light that occur at night. There is no geostationary satellite in space with this capability. The image below was taken from the Day/Night Band at the same time as the IR image above:

Is it the scattering of city lights off the clouds that allows you to see them at night, like in this photo? No, because this cyclone is way out over the ocean, in the middle of the Bay of Bengal. Due to the curvature of the Earth, city lights won’t illuminate any clouds more than a few tens of kilometers away. The center of this storm is about 600 km from any city lights and is still visible. If city lights were the cause, at most only the very edges of the storm near cities would be illuminated.

I can see at least two lightning strikes in the image, so is it lightning illuminating the cloud from the inside? No, it’s not that either. See how streaky the lightning appears? The whole storm would look like a series of streaks, some brighter than others, depending on how close they were to the tops of the clouds (and how close the lightning was to the position of the VIIRS sensor’s field of view during each scan). The top of the storm is much too uniform in brightness for it to be caused by lightning.

So, if you’re so smart, what is the explanation, Mr. Smartypants? I’m glad you asked. It is a phenomenon called “airglow” (or sometimes “nightglow” when it occurs at night). You can read more about it here and here. The basic idea is that gas molecules in the upper atmosphere interact with ultraviolet (UV) radiation and emit light. Some of these light emissions head down toward the earth’s surface, are reflected back to space by the clouds, and detected by the satellite.

Really? Some tiny amount of gas molecules way up in the atmosphere emit a very faint light due to excitation by UV radiation, and you’re telling me VIIRS can see it? But, it’s nighttime! There’s no UV radiation at night! How do you explain that? The UV radiation breaks up the molecules into individual atoms during the day. At night, the atoms recombine back into molecules. That’s when they emit the light. Look, it’s in a peer-reviewed scientific journal if you don’t believe me. (A shortened press release about it is here.) Thanks to airglow (and the sensitivity of the Day/Night Band), VIIRS can see visible-wavelength images of storms at night even when there is no moon!

Getting back to the Super Cyclone, here’s what Phailin looked like in the high-resolution IR channel the next night (19:45 UTC 11 October 2013), right around the time it reached its maximum intensity:

Here, the cyclone is much closer to nadir (the nadir line passes through the center of the image), so you’re more-or-less looking straight down into the eye on this orbit. The corresponding Day/Night Band image is below:

Once again, the cyclone is illuminated by airglow. (Some of the outer rainbands are also being lit up by city lights, which are visible through the clouds.) The only question is, what is that bright thing off the coast of Burma (Myanmar) that shows up in both Day/Night Band images? It looks like a huge, floating city. According to Google Maps, there’s nothing there. That is one question I don’t have the answer to (*see Update #2*).

UPDATE #1 (15 October 2013): The Day/Night Band also captured the power outages caused by Phailin. Here is a side-by-side comparison of Day/Night Band images along the coast of the state of Odisha (also called Orissa), which took a direct hit from the cyclone – a zoomed in and labelled version of the 10 October image above (two days before landfall) against a similar image from 14 October 2013 (two days after landfall):

VIIRS Day/Night Band images from before and after Super Cyclone Phailin made landfall along the east coast of India.

Notice the lack of lights in and around the small city of Berhampur. That’s roughly where Phailin made landfall. Also, notice the difference in appearance of the metropolitan area of Calcutta. It almost appears as if the city was cut in two as a result of electricity being out in large parts of the city.

UPDATE #2 (15 October 2013): Thanks to Renate B., we’ve figured out the bright lights over the Bay of Bengal near the coast of Myanmar (Burma) are due to offshore oil and gas operations. Take a look at the map on this website. See the yellow box marked “A1 & A3”? That is a hotly contested area for gas and oil drilling, right where the bright lights are. It is claimed by Burma (Myanmar), and India, China and South Korea are all invested in it. China has built a pipeline out to the site that cuts right through Myanmar (Burma), which some of the locals are not happy about.

UPDATE #3 (16 October 2013): It was pointed out to me that the maximum IR brightness temperature in the eye of the cyclone in the 20:04 UTC 10 October 2013 image was 297.5 K (24.4 °C), which is pretty warm for a hurricane/cyclone/typhoon eye. It is rare for the observed IR brightness temperature inside the eye to exceed 25-26 °C. Of course, the upper limit is the sea surface temperature, which is rarely above 31-33 °C. And the satellite’s spatial resolution affects the observed brightness temperature, along with a number of other factors.

A warm eye is related to a lack of clouds in (or covering up) the eye, the eye being large enough to see all the way to the surface at the viewing angle of the satellite, the satellite having high enough spatial resolution to identify pixels that don’t contain cloud, and the underlying sea surface temperature. Powerful, slow-moving storms may churn the waters enough to mix cooler water from the thermocline up into the surface layer, reducing the sea surface temperature. Heavy rains and cloud cover from the storm may also lower the sea surface temperature. Phailin was generally over 28-29 °C water, and was apparently moving fast enough (or the warm water was deep enough) to not mix too much cool water from below (a process called upwelling).

It may or may not have any practical implications, but the high-resolution IR imagery VIIRS is able to produce may break some records for the warmest brightness temperature ever observed in a tropical cyclone eye.

On the border between Chile and Argentina sits the volcano Copahue. (If you say it out loud, it is pronounced “CO-pa-hway”.) In the local Mapuche language, copahue means “sulfur water”. This name was given to the volcano as the most active crater contains a highly acidic lake full of sulfur. An eruption in 1992 filled the area with “a strong sulfur smell.” Later eruptions have involved “pyroclastic sulfur” (molten hot sulfur ash) and highly acidic mudflows. That doesn’t sound very pleasant.

This is a “true color” image just like the MODIS one in the link. Make sure you click on the image, then on the “3200×2304” link below the banner to see it in full resolution. Then see if you can spot the volcanic ash cloud from Copahue. I’ll give you a hint: it’s the only cloud that appears brownish-gray.

If you still can’t see it, here’s a zoomed-in image with a yellow arrow to help you out:

Now compare the ash cloud in the VIIRS image with the ash cloud in the MODIS image from 4 hours earlier. (This is easier to do if you can locate in the VIIRS image the lakes marked as “Embalse los Barreales” in the MODIS image.) There’s a lot less ash in the VIIRS image, right?

Not so fast. As the ash dispersed, the plume thinned out, making it harder to see against the brown background surface. But, that doesn’t mean that it’s not there. Here’s the “split window difference” image from VIIRS at the same time:

That whole black plume is volcanic ash detected by the split window difference. The yellow arrow points to Copahue and the ash plume that is visible in the true color image. The red arrow points to the ash plume that is not visible in the true color image, yet is detected by this simple channel difference (M-15 minus M-16). A victory for the split window technique!
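The split window difference itself is simple arithmetic on the two brightness temperature channels. Here’s a minimal sketch in Python; the function name, the toy pixel values and the ash threshold are my own illustration (not an operational algorithm), but the channel difference is the M-15 minus M-16 described above:

```python
import numpy as np

def split_window_difference(bt_m15, bt_m16, ash_threshold=-0.5):
    """Compute the split window difference (M-15 minus M-16).

    bt_m15 : brightness temperatures (K) from VIIRS M-15 (~10.76 um)
    bt_m16 : brightness temperatures (K) from VIIRS M-16 (~12.01 um)

    Ash particles absorb more strongly near 12 um than near 11 um,
    so ash-contaminated pixels tend to have a negative difference,
    while water/ice clouds tend to be positive.
    """
    diff = bt_m15 - bt_m16
    # Hypothetical threshold; in practice it must be tuned per scene.
    ash_mask = diff < ash_threshold
    return diff, ash_mask

# Toy example: one ash-like pixel (negative difference) among cloud pixels
bt_m15 = np.array([265.0, 230.0, 255.0])
bt_m16 = np.array([267.5, 228.0, 255.2])
diff, mask = split_window_difference(bt_m15, bt_m16)
```

In a real product the result would be plotted as an image, as in the figure above, where strongly negative values show up as the black plume.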

It is interesting that the ash plume right over Copahue is tough to detect in this RGB composite because it is red, just like a lot of the other clouds. As the plume thins out away from the volcano, its color changes to a variety of pastel pinks and blues, and it even appears to extend out over the Atlantic Ocean. Where clouds and ash coexist near the coast of Argentina, pixels show up orange, yellow and green (click through to the high-resolution image to see that).

Why does the plume appear to extend into the Atlantic Ocean in the EUMETSAT Dust RGB, and not in the split window difference? It is due to the fact that the Dust RGB uses channel M-14 (8.55 µm), which is sensitive to absorption by sulfur dioxide (SO2) gas. The split window difference is better at detecting sulfuric ash particles, which may have mostly settled out of the atmosphere before reaching the Atlantic coast. There are likely still some ash particles in the plume, though – just not enough to show up easily in the split window difference. Detection of SO2 gas plumes has been used to infer the presence of volcanic ash.

Being able to see the location of the volcanic ash is very important to pilots. Aircraft engines don’t work that well when they are sucking in particles of liquefied sulfur and other abrasive and corrosive materials spit out by stinky volcanoes like Copahue.

Now is it easy to differentiate clouds from snow? Just changing the resolution doesn’t help that much.

This has long been a problem for satellites operating in visible to infrared wavelengths. Visible-wavelength channels detect clouds based on the fact that they are highly reflective (just like snow). Infrared (IR) channels are sensitive to the temperature of the objects they’re looking at, and detect clouds because they are usually cold (just like snow). So, it can be difficult to distinguish between the two. If you had a time lapse loop of images, you’d most likely see the clouds move, while the snow stays put (or disappears because it is melting). But, what if you only had one image? What if the clouds were anchored to the terrain and didn’t move? How would you detect snow in these cases?

Snow is hot pink (magenta), which shows up pretty well. Clouds are a multitude of colors based on type, particle size, optical thickness, and phase. That whole PowerPoint file linked above is designed to help you understand all the different colors.

The Daytime Microphysics RGB uses a reflectivity calculation for the 3.9 µm channel (the green channel of the RGB). Without bothering to do that calculation, I’ve replaced the reflectivity at 3.9 µm with the reflectivity at 2.25 µm (M-11) when applying this RGB product to VIIRS, and produced a similar result:

Except for the wavelength difference of the green channel (and minor differences between the VIIRS channels and Meteosat channels), everything else is kept the same as the official product definition. Once again, the snow is pink, in sharp contrast to the clouds and the snow-free surfaces. We won’t bother to show the Nighttime Microphysics/Fog RGB (link goes to PowerPoint file) since this is a daytime scene.

This also uses the reflectivity calculated for the 3.9 µm channel. Plus, it uses a gamma correction for the blue and green channels. Is it just me, or does snow show up better in the Daytime Microphysics RGB?
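For readers wondering what a “gamma correction” actually does in these RGB recipes, here’s a minimal sketch of how a single channel is scaled and gamma-stretched into one RGB component. The function name and the example numbers are hypothetical; consult the official EUMETSAT recipes for the actual channel ranges and gamma values:

```python
import numpy as np

def scale_channel(data, lo, hi, gamma=1.0):
    """Scale channel data from [lo, hi] into [0, 1] for one RGB component.

    Values outside the range are clipped. A gamma > 1 brightens the
    darker values (out = scaled ** (1/gamma)), which is the convention
    used in the EUMETSAT RGB recipes.
    """
    scaled = np.clip((data - lo) / (hi - lo), 0.0, 1.0)
    return scaled ** (1.0 / gamma)

# Hypothetical reflectances (%) stretched into the green component
green = scale_channel(np.array([0.0, 30.0, 60.0]), lo=0.0, hi=60.0, gamma=2.5)
```

Skipping the gamma correction is then just a matter of leaving `gamma=1.0`, which keeps the scaling linear.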

If you switch out the 3.9 µm for the 2.25 µm channel again and skip the gamma correction when creating this RGB composite for VIIRS, the snow stands out a lot more:

VIIRS "Snow" RGB (with modifications as explained in the text), taken 12:03 UTC 12 December 2012

Now you have snow ranging from pink to red with gray land areas, black water and pale blue to light pink clouds. This combination of channels makes snow identification easier than the official “Snow RGB”, I think.

All of this is well and good but, for my money, nothing beats what EUMETSAT calls the “natural color” RGB. I have referred to it as the “pseudo-true color”. Here’s the low-resolution EUMETSAT image:

The VIIRS image above uses the moderate resolution channels M-5, M-7 and M-10, although this RGB composite can be made with the high-resolution imagery channels I-01, I-02 and I-03, which basically have the same wavelengths and twice the horizontal resolution. Below is the highest resolution offered by VIIRS (cropped down slightly to reduce memory usage when plotting the data):

Make sure to click on the image and then on the “2594×1955” link below the banner to see the image in full resolution.

This RGB composite is easier on the eyes and easier to understand. Snow has high reflectivity in M-5 (I-01) and M-7 (I-02) but low reflectivity in M-10 (I-03) so, when combined in the RGB image, it shows up as cyan. Liquid clouds have high reflectivity in all three channels, so they show up as white (or a dirty, off-white). The only source of contention is that ice clouds, if they’re thick enough, will also show up as cyan.
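Since this recipe is essentially a direct channel-to-color assignment, it’s easy to sketch. The channel wavelengths in the comments are approximate, and the toy reflectance values are my own illustration:

```python
import numpy as np

def natural_color_rgb(refl_m10, refl_m7, refl_m5):
    """Build a "natural color" composite from VIIRS reflectances (0-1).

    Red   = M-10 (~1.61 um)  -- snow/ice absorb here, so red stays low
    Green = M-7  (~0.865 um) -- snow reflects strongly
    Blue  = M-5  (~0.672 um) -- snow reflects strongly

    Low red + high green + high blue makes snow appear cyan, while
    liquid water clouds (bright in all three channels) appear white.
    """
    rgb = np.stack([refl_m10, refl_m7, refl_m5], axis=-1)
    return np.clip(rgb, 0.0, 1.0)

# Toy pixels: a snow pixel (cyan) and a liquid cloud pixel (near-white)
snow = natural_color_rgb(np.array([0.1]), np.array([0.8]), np.array([0.8]))
cloud = natural_color_rgb(np.array([0.7]), np.array([0.8]), np.array([0.8]))
```

The same assignment works with the I-01/I-02/I-03 imagery channels for the higher-resolution version.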

Except for the cyan snow and ice, the “natural color” RGB is otherwise similar to a “true color” image. Vegetation shows up green, unlike the other RGB composites where it has been gray or purple or a very yellowish green. That makes it more intuitive for the average viewer. You don’t need to read an entire guide book to understand all the colors that you’re seeing.

Compare all of these RGB composites against the single channel images at the top of the page. They all make it easier to distinguish clouds from snow, although some work better than others. Now compare the VIIRS images with the Meteosat images. Which ones look better?

(To be fair, it’s not all Meteosat’s fault. The images provided by EUMETSAT are low-resolution JPG files [which is a lossy-compression format]. The VIIRS images shown here are loss-less PNG files, which are much larger files to have to store and they require more bandwidth to display.)

As a bonus (consider it your Christmas bonus), here are a few more high-resolution “natural color” images of snow and low clouds over the Alps. These are kept at a 4:3 width-to-height ratio and a 16:9 ratio, so they make ideal desktop wallpapers.

VIIRS "natural color" composite of channels I-01, I-02 and I-03, taken 12:29 UTC 14 November 2012. This is an ideal desktop wallpaper for 4:3 ratio monitors.

That was the 4:3 ratio image. Here’s the 16:9 ratio image:

VIIRS "natural color" composite of channels I-01, I-02 and I-03, taken 12:29 UTC 14 November 2012. This is an ideal desktop wallpaper for 16:9 ratio monitors.

“At 10 o’clock the Captain was walking on deck and saw what he supposed to be an immense iceberg. … the atmosphere was hazy, and then a heavy snow squall came up which shut it out entirely from our view. Not long after the sun shone again, and I went up again and with the glass, tried to get an outline of it to sketch its form. The sun seemed so dazzling on the water, and the tops of the apparent icebergs covered with snow; the outline was very indistinct. We were all the time nearing the object and on looking again the Captain pronounced it to be land. The Island is not laid down on the chart, neither is it in the Epitome, so we are perhaps the discoverers, … I think it must be a twin to Desolation Island, it is certainly a frigid looking place.”

The text above was the journal entry of Isabel Heard, wife of the American Captain John Heard, on 25 November 1853. The couple was en route from Boston, Massachusetts to Melbourne, Australia (a long time to spend in a boat) and the land they spotted became known as Heard Island. It should be noted that “Desolation Island” refers to Îles Kerguelen, which has its own unique story of discovery.

Kerguelen Island was discovered in 1772 by Yves-Joseph de Kerguelen de Trémarec, a French navigator commissioned by King Louis XV to discover the unknown continent in the Southern Hemisphere that he believed to be necessary to balance the globe. (Look at a globe or map of the world and notice that most of the land area is in the Northern Hemisphere.) Kerguelen himself never set foot on the island, but he told his king the island was inhabited and full of forests, fruits and untold riches. He called it “La France Australe” (Southern France). Captain Cook actually did land on the island a few years later and named it Desolation Island because it had none of that stuff, and King Louis XV imprisoned Kerguelen after his lie was discovered. Oops.

Îles Kerguelen, made up of the main island (Kerguelen to us, La Grande Terre to the French) and the many small surrounding islands, are part of the French Southern and Antarctic Lands (Terres Australes et Antarctiques Françaises or TAAF). Heard Island is part of the Australian territory of Heard Island and McDonald Islands (HIMI).

These islands are in the “Roaring Forties” and “Furious Fifties”, the region of the Southern Ocean (southern Indian Ocean in this case) between 40 °S and 60 °S latitude. Get out your globe or world map once again and notice that there is very little land in this latitude range. This region is where strong, persistent westerly winds circle the globe. With no land in the way, there isn’t much to disturb this flow. The high winds, blowing almost always from the same direction, create huge waves of 10 m (33 ft) or more. (Now imagine being John or Isabel Heard. Well, actually, if you suffer from sea-sickness you probably shouldn’t imagine it.) The cold winds flow over the relatively warmer waters of the ocean, forming persistent cloudiness. If you zoom in on the image above (click on the image, then on the “1893×1452” link below the banner for full resolution) you can see quite a bit of structure in the resulting “cloud streets“.

The persistent cloudiness makes Kerguelen and Heard Island a rare sight from any satellite. We can see them here because the flow is stable and the islands are producing the equivalent of a “rain shadow” on the clouds. (It’s tempting to call it a “cloud shadow” but, since clouds actually do cast shadows, it would just confuse people.) If we zoom in on Kerguelen, this shows up more clearly:

Notice how all the clouds are piling up on the west (windward) side of Kerguelen, where the highest mountains are located. (These mountains are covered with snow and glaciers, as the cyan color indicates.) Could that be the equivalent of a bow shock near 68 °E longitude, where there is an apparent crack in the clouds? On the leeward side of the island, downwind of the mountains, the air is descending, which prevents clouds from forming. Kerguelen created a hole in the clouds by disrupting the flow.

In addition to creating a hole in the clouds, Heard Island is creating all sorts of waves in the atmosphere. The ones you probably noticed first look like the wake created by a boat (and have the same basic cause). But, why do they start well out ahead of the island where the yellow arrow is pointing? Because those first waves are actually caused by the McDonald Islands (discovered by Capt. William McDonald in 1854). Even though the highest point on McDonald Island is only 186 m above mean sea level (610 ft), it’s enough to disrupt the flow.

The highest point on Heard Island is Mawson Peak at 2745 m (9006 ft), which is actually the highest elevation in Australia. It is part of Big Ben, an active volcano that last erupted in 2008. This peak is creating a series of lenticular clouds in the above image. A patch of cirrus clouds also exists downwind of Heard Island (the more cyan colored clouds), although it is not clear if these clouds were formed by the waves caused by Heard Island.

If you’re interested in visiting either of these islands, here are some other interesting facts: Kerguelen has a year-round population of ~100, almost all scientists. It has a permanent weather station and office maintained by Météo-France (France’s version of the National Weather Service), and the French version of NASA (CNES) has a station for launching rockets and monitoring satellites. Heard Island has no permanent residents. Every few years a scientific expedition sets out for the island to study the geology, biology, weather and climate of the island. The next one is planned for 2014 and is being called an “open source expedition”. There may still be time to join in if you’re looking for an adventure!

How fast does an aurora move? I “googled” it, and got answers ranging from “fast” to “very fast”. Not very scientific. It also doesn’t help that the majority of aurora videos on the Internet are time-lapse footage, and there’s no way to know how fast the footage has been sped up. Although, I did find this video that claims to be real-time footage:

When the camera is still, you could try to calculate the speed of some of the aurora elements if you knew where the cameraman was, what stars were in the view (and how far apart they are), and how high up (or how far away) the aurora was at that time. All information that I don’t have.

What if I said we could estimate the speed of the aurora by examining VIIRS Day/Night Band (DNB) images?

Here’s a DNB image of the aurora australis (a.k.a. Southern Lights) over Antarctica, taken on 1 October 2012:

VIIRS DNB image of the aurora australis, taken 00:22 UTC 1 October 2012

Compare this image with the images of the aurora borealis shown back in March 2012. Something doesn’t look right. Far from looking like smooth curtains of light, the aurora (particularly the brightest one) has a jagged appearance, like a set of steps. (This is easier to notice if you click on the image to see it in higher resolution.) This is because the aurora wouldn’t stay still, and we can use this information to estimate the speed it was moving.

The stripes that you see in the image are caused by the 16 detectors that comprise the DNB which, for various reasons, don’t have exactly the same sensitivity to light. (This condition is given a super-scientific name: “striping”.) The DNB senses light from the Earth by having a constantly rotating mirror reflect light onto these detectors. One rotation of the mirror (particularly the part that occurs within the field of view of the sensor) comprises one scan. Each detector produces one row of pixels in each scan, each with 742 m x 742 m resolution at nadir. There are 48 scans in one “granule” (the amount of data transmitted in one data file), and it takes ~84 seconds to collect the data that make up one granule. That means it takes ~1.75 seconds per scan.
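The scan timing works out like this, using only the numbers quoted above:

```python
# Granule/scan timing for the VIIRS Day/Night Band, from the figures
# quoted in the text: 48 scans per granule, ~84 seconds per granule.
SCANS_PER_GRANULE = 48
SECONDS_PER_GRANULE = 84.0

seconds_per_scan = SECONDS_PER_GRANULE / SCANS_PER_GRANULE  # 1.75 s per scan
```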

If you watch that video again, you’ll notice that the aurora can move quite a bit in 2 seconds. Now, let’s zoom in much more closely on one of the aurora elements:

Zoomed-in VIIRS DNB image of an aurora, taken 00:22 UTC 1 October 2012

This image has been rotated relative to the original image, in case you were wondering why it doesn’t seem to match up with the first image. The brightest pixels are where the brightest aurora elements were located. The “steps” (or “shifts” as they are typically called) occur every 16 pixels, which mark the end of one scan and the beginning of the next. If you count the number of pixels that the brightest aurora elements shifted from one scan to the next, it varies from about 6 to 10 pixels. Assuming a constant resolution of 742 m per pixel along the scan (which isn’t exactly true; the resolution degrades a little bit as you get closer to the edge of the scan, but not by much), that means this particular aurora element moved somewhere between ~4.5 and ~7.5 km in ~1.75 seconds from one scan to the next. Doing the math (don’t forget to carry the 1), that comes out to somewhere between 9000 and 15,000 km h-1 (rounded to account for possible sources of error), which I guess counts as “very fast”. But, it’s not as fast as the coronal mass ejections that create auroras. They have an average speed of 489 km s-1 (1,760,000 km h-1)!
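The arithmetic above can be written out explicitly (the pixel size and scan period are taken straight from the text; the function name is my own):

```python
# Estimate aurora speed from scan-to-scan pixel shifts in the DNB,
# using 742 m pixels at nadir and ~1.75 seconds per scan.
PIXEL_SIZE_KM = 0.742
SECONDS_PER_SCAN = 1.75

def aurora_speed_kmh(pixels_shifted):
    """Speed (km/h) implied by a feature moving `pixels_shifted` pixels
    between one scan and the next."""
    return pixels_shifted * PIXEL_SIZE_KM / SECONDS_PER_SCAN * 3600.0

low = aurora_speed_kmh(6)    # ~9200 km/h
high = aurora_speed_kmh(10)  # ~15,300 km/h
```

The 3 to 5 pixel shifts in the second, less active aurora plug into the same formula to give the 4000 to 8000 km h-1 range quoted below.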

So, what looks like an oddity in the VIIRS image, actually contains some interesting scientific information about the speed of an “active aurora“.

But, we’re not done yet. Let’s get back to the striping. Along with “stray light”, it’s one of the few remaining issues in VIIRS imagery. Stray light, which you can see evidence of in the lower right corner of the first aurora image, is a particular problem in the DNB. It occurs when sunlight is reflected onto the detectors while the satellite is on the nighttime side of the Earth but close to the edge of the day/night “terminator“. Our colleagues at Northrop Grumman have been working on a correction for stray light that also reduces the striping. This correction allows for much better viewing of auroras, which have a tendency to occur right where stray light is an issue.

Here is an image of another aurora over Antarctica, taken on 15 September 2012, corrected for stray light and striping:

VIIRS DNB image of the aurora australis over Antarctica, taken 18:56 UTC 15 September 2012. The data used in this image was corrected for stray light and striping by Stephanie Weiss (Northrop Grumman).

This aurora was a lot less “active” so it looks more like smooth curtains of light. Although, when you zoom in on the brightest swirl in the upper right corner, you can see it did move 3-5 pixels between scans:

VIIRS DNB image of the aurora australis over Antarctica, taken 18:56 UTC 15 September 2012. This image has been zoomed in and rotated relative to the previous image of the same aurora. The data used in this image was corrected for stray light and striping by Stephanie Weiss (Northrop Grumman).

This translates to 4000 to 8000 km h-1, which still counts as “fast” even if it doesn’t count as “very fast”. See, Google was right! Auroras do move anywhere from “fast” to “very fast”. But, now we at least have an estimate to quantify that speed.

And, in case you were wondering, these estimates of the speed of auroras are consistent with earlier observations. According to the book Aurora and Airglow by B. McCormac (1967), the typical speed of auroras is between 0 and 3 km s-1 (up to 10,800 km h-1). So, it appears that VIIRS does give a reasonable estimate about the speed of an aurora. We just happened to catch one “typical” aurora and one “faster than typical” aurora.