May 19, 2015
Lunar Cartography

Sen—When I was a kid, my favorite jigsaw puzzle was a map of the Moon. I would sit in the basement and put it together, then take it apart and re-assemble it over and over again. In between construction and destruction I would study it, poring over the weird and huge impact features with exotic names like Oceanus Procellarum and Mare Orientale, the mountain ranges with more familiar names like Apennines and Pyrenees, and craters like Tycho and Aristarchus.

Some of these features I knew from seeing them through my own small (and, in reality, terrible) department store telescope. It was thrilling to identify on a map the things I had seen with my own eyes, destinations nearly 400,000 kilometers distant.

Both maps are based on observations taken by the Lunar Reconnaissance Orbiter (LRO), which has been circling our Moon since June 2009. One map was made from imaging observations, and the other from altimeter data.

The imaging map was made from 15,000 separate observations taken by LRO’s Wide Angle Camera. From its mapping orbit 50 km above the lunar surface, the WAC looks straight down, seeing a swath of lunar surface 57 km wide, with each pixel covering about 75 meters of the Moon. Over the course of a month it maps out nearly the entire surface.
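As a quick sanity check on those numbers, here’s some back-of-envelope arithmetic. The swath width and pixel scale come from the text above; the lunar radius (about 1,737 km) is a standard value I’m adding for the coverage estimate:

```python
# Rough check of the Wide Angle Camera numbers: a 57 km swath
# imaged at ~75 meters per pixel, from a spacecraft mapping the
# whole Moon. Swath and pixel scale are from the text; the lunar
# radius is a standard value added here for the estimate.
import math

swath_m = 57_000          # width of one WAC image strip, in meters
pixel_m = 75              # ground covered by a single pixel, in meters
pixels_across = swath_m / pixel_m
print(pixels_across)      # 760.0 pixels per image row

# How many side-by-side swaths does it take to girdle the Moon?
moon_circumference_m = 2 * math.pi * 1_737_400
swaths_needed = moon_circumference_m / swath_m
print(round(swaths_needed))  # ~192 adjacent strips
```

Roughly 190-odd adjacent strips cover the equator, which is why it takes on the order of a month of orbits to build up the full map.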

During that month, though, things change. For one, as the Moon orbits the Earth the angle of sunlight illuminating the surface changes. A mountain will cast a shadow to the west at sunrise, and to the east at sunset two weeks later. To correct for that, the observations used to create the map were picked from several different orbits, to provide a smooth change in illumination across the surface.

At the poles, though, the floors of deep craters never see sunlight! You can see that in the map; the regions right at latitudes of ±90° are completely black.

The second map is different. That one was made using observations from the Lunar Orbiter Laser Altimeter (or LOLA), which sends pulses of laser light down to the lunar surface. These reflect off the ground and bounce back up to the detector, which times the round trip very precisely. Since we know the speed of light very accurately, we can convert that time to a distance, which in turn yields the relative topography of the surface. For example, a pulse that reflects off a mountaintop makes a shorter trip than one that plunges into a deep crater.
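The arithmetic behind that conversion is simple enough to sketch. Here’s a minimal version; the 50 km orbit altitude comes from the text above, but the 13-nanosecond timing offset is a made-up illustrative number, not an actual LOLA measurement, just to show the scale involved:

```python
# A minimal sketch of a laser altimeter's time-to-distance conversion.
# The round-trip times below are made-up illustrative values.

C = 299_792_458.0  # speed of light, in m/s

def one_way_range(round_trip_s):
    """The pulse travels down AND back, so halve the timed round
    trip before converting to a distance."""
    return C * round_trip_s / 2.0

# From a 50 km orbit, the round trip takes about a third of a millisecond.
t_flat = 2 * 50_000 / C    # time for a surface exactly 50 km below
t_peak = t_flat - 13e-9    # a pulse returning ~13 nanoseconds early

height = one_way_range(t_flat) - one_way_range(t_peak)
print(height)  # the early return came from terrain about 2 m higher
```

A nanosecond of timing precision corresponds to a height difference of only about 15 centimeters, which is why timing the pulses "very accurately" is the whole game.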

For that map, darker blue represents negative topography (deeper features like crater floors), white is the average height (think of it as “sea level”, an obvious if somewhat misplaced analogy on the airless Moon), and black marks the highest features, like crater rims and mountains. One thing I noticed right away is that since LOLA uses its own laser pulses, it can still “see” the craters at the poles, so this map actually tells us more about those areas than the other map, which relies on the Sun to illuminate them!

Incidentally, the LOLA map was built from a staggering 6.4 billion separate laser pulses. Incredible.

If you’re a map nerd like I am, you might be wondering how we have a latitude and longitude defined on the Moon. On Earth, the equator is halfway between the spin poles, and we define the prime meridian, the point of 0° longitude, arbitrarily (it goes through Greenwich, England).

On the Moon we can do a little better. It does spin, so the equator is easy enough to define. However, it spins once for every time it orbits the Earth, always keeping the same face to us. That means there’s a spot on the Moon where the Earth is always directly overhead, called the “sub-Earth point”. That’s a pretty good spot to use to define the lunar prime meridian.

In reality, the Moon’s elliptical orbit and tilt mean the sub-Earth spot moves around the lunar surface, so planetary astronomers use the average of all those locations, called the “mean sub-Earth point”.

Map-making on Earth is hard enough. On the Moon, there are obviously lots of other issues to deal with.