Bayesian inference of subglacial topography

Perhaps the most important component of a glacier model is the geometry of the bed. Unfortunately, it’s also extraordinarily difficult to measure, especially over large areas like Greenland or the Bagley Icefield. Measurements are usually made with low-frequency radar, either from an airplane or by walking around stringing out long pieces of wire like this (there’s a radio transmitter and a big battery in my backpack):

Even after all that radar data has been collected and analyzed, the question remains of how to determine what the bed looks like in between the data points. This procedure is known as interpolation. One way is to draw straight lines between the points, but that doesn’t usually work very well. Another method is kriging, which fits the best Gaussian random field to a set of data points.
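To make the kriging idea concrete, here is a minimal sketch of simple kriging in Python. Everything in it is illustrative: the covariance model (squared-exponential), the length scale, and the "radar picks" are all invented for the example, not taken from any real survey.

```python
import numpy as np

def kriging_interpolate(x_obs, z_obs, x_query, length_scale=2.0, variance=1.0):
    """Simple kriging: the best linear unbiased predictor under a
    squared-exponential (Gaussian) covariance model."""
    def cov(a, b):
        d = a[:, None] - b[None, :]
        return variance * np.exp(-0.5 * (d / length_scale) ** 2)

    K = cov(x_obs, x_obs) + 1e-9 * np.eye(len(x_obs))  # jitter for stability
    k_star = cov(x_obs, x_query)
    weights = np.linalg.solve(K, k_star)  # kriging weights for each query point
    return weights.T @ z_obs              # predicted bed elevation

# Hypothetical radar picks along a flight line: distance (km), bed elevation (m)
x_obs = np.array([0.0, 3.0, 5.0, 9.0])
z_obs = np.array([-120.0, -80.0, -150.0, -60.0])
x_query = np.linspace(0.0, 9.0, 10)
z_pred = kriging_interpolate(x_obs, z_obs, x_query)
```

At the data points themselves the prediction reproduces the observations; in between, it blends them according to the assumed covariance, which is exactly what makes kriging smoother and more honest than drawing straight lines.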

Another idea is to use physics to constrain what should be going on, given our best understanding of how the world works. In this case, the principle of mass conservation works very well. The idea is as follows: say we know how much water is put into a pipe, and we measure how fast the water in the pipe is flowing. Then it’s easy to determine how wide the pipe is through the relationship discharge = area × velocity. Similarly, if we know how much snow falls, how much ice melts, the velocity of the ice, and its surface elevation, we can determine how thick the ice is, and hence what the bed elevation is. The trouble is that problems like this are highly sensitive to errors in the data, which show up all over the place: in the specification of the ice speed and direction, in snowfall and melt rates, and in the observations of thickness themselves. Further, normal statistical methods aren’t well suited to quantifying these kinds of errors.
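The mass-conservation calculation can be sketched in a few lines for an idealized one-dimensional flowline. All the numbers here (accumulation rate, speed profile, flowline length) are invented purely to show the arithmetic: integrate the accumulation to get the flux, then divide by the speed to get the thickness.

```python
import numpy as np

# One-dimensional steady-state mass conservation along a flowline:
#   d(H * u)/dx = mdot   =>   flux q(x) = integral of mdot,  thickness H = q / u.
# All numbers below are invented for illustration.

x = np.linspace(0.0, 50e3, 501)   # distance along the flowline (m)
mdot = np.full_like(x, 0.3)       # net accumulation rate (m of ice / yr)
u = 100.0 * (1.0 + x / 50e3)      # observed ice speed, 100 -> 200 m/yr

# Integrate the accumulation to get the flux, then divide by the speed.
dq = 0.5 * (mdot[1:] + mdot[:-1]) * np.diff(x)  # trapezoidal rule
q = np.concatenate([[0.0], np.cumsum(dq)])      # ice flux (m^2 / yr)
H = q / u                                       # ice thickness (m)

# Given a measured surface elevation s(x), the bed elevation would be s - H.
```

Notice how the division by `u` is exactly where the sensitivity to data errors comes from: wherever the ice is slow, a small error in the measured speed produces a large error in the inferred thickness.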

To deal with this problem, I use Bayesian statistics to do a better job of estimating what the errors associated with a mass-conservation bed actually are. While the details are complicated, the basic idea is to randomly generate a whole bunch of possible beds and see which ones are likely to have produced the observations: likely ones get kept, unlikely ones get discarded (in a careful way), and by analyzing the statistical properties of the resulting quiver of beds, I can say something about the uncertainty. The paper is here, but a sample of the results (this time for Jakobshavn Isbrae, the world’s fastest glacier) looks like this:
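The keep-or-discard loop described above can be sketched as rejection sampling against a toy forward model. This is not the method from the paper, just the bare idea: the "bed" is a single unknown thickness, the forward model is a slab carrying a known flux, and the flux, observed speed, and error bar are all made-up numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: a slab of ice carrying a known flux Q per unit width,
# so the depth-averaged speed is u = Q / H.  Numbers are made up.
Q = 1.0e4      # ice flux per unit width (m^2 / yr)
u_obs = 80.0   # observed surface speed (m / yr)
sigma = 5.0    # assumed observation error (m / yr)

# 1. Draw many candidate thicknesses from a broad prior.
H_candidates = rng.uniform(50.0, 300.0, size=100_000)

# 2. Score each candidate: how likely is it to have produced the observation?
u_pred = Q / H_candidates
log_like = -0.5 * ((u_pred - u_obs) / sigma) ** 2

# 3. Keep likely candidates, discard unlikely ones (rejection sampling).
keep = rng.random(H_candidates.size) < np.exp(log_like - log_like.max())
posterior = H_candidates[keep]

mean_H = posterior.mean()   # best estimate of the thickness...
std_H = posterior.std()     # ...and its uncertainty
```

The surviving candidates are samples from the posterior distribution, so their spread is a direct, honest statement of the uncertainty, which is exactly what the quiver of beds provides in the real problem.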