Antarctic IRC: how NASA’s flying lab stays connected

Dialing up NASA Ames over satellite connections from 500 meters over the Antarctic.

The view from the cockpit of NASA's DC-8 flying lab during an Operation IceBridge survey flight 500 meters above the Antarctic ice fields.

NASA

Mars and Antarctica have a lot in common—they're both cold, inhospitable places with terrible broadband service. The crew of Operation IceBridge, NASA's airborne survey of glaciers and ice shelves in the Arctic and Antarctic, works under networking constraints similar to those of the Curiosity rover, keeping in contact with its ground crew at worse-than-dialup speeds using the lowest-bandwidth method possible: Internet Relay Chat.

In February of 2010, after seven years of operation, the final laser sensor on NASA's Ice, Cloud, and Land Elevation Satellite (ICESat) failed. With its replacement not slated for launch until 2015, NASA launched Operation IceBridge to conduct aerial surveys to fill the gap. Flying 500 meters above the surface in precisely planned patterns over the Antarctic ice sheets, the OIB aircraft—operated by the National Suborbital Education and Research Center (NSERC) at the University of North Dakota—carries ice-penetrating radar, a gravimeter for measuring variations in the density of the ice below, and an Airborne Topographic Mapper, a laser altimeter that combines GPS data with laser measurements to build a precise record of the elevation of the ice sheets.

But because of the poor satellite coverage in the Antarctic, the refitted vintage Douglas DC-8 airliner can't use the Inmarsat BGAN service it normally relies on for voice and data channels. "Like most high bandwidth satellite systems, the constellation is in geosynchronous orbit," David Van Gilst, NSERC's network engineer, told Ars in an e-mail interview. "So once you get past 72-73 degrees latitude the satellites are so low in the sky as to be problematic. Past 80 degrees latitude, they're below the horizon."
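Those numbers check out with simple geometry. As a back-of-the-envelope sketch (my own arithmetic, not NSERC's), here's the elevation angle of a geostationary satellite seen from high latitude, assuming a spherical Earth and a satellite sitting at the observer's own longitude (the best possible case):

```
import math

R_EARTH_KM = 6378.0    # Earth's equatorial radius
R_GEO_KM = 42164.0     # geostationary orbital radius

def geo_elevation_deg(lat_deg):
    """Elevation of a geostationary satellite at the observer's longitude,
    seen from latitude lat_deg (spherical-Earth approximation)."""
    lat = math.radians(lat_deg)
    ratio = R_EARTH_KM / R_GEO_KM
    return math.degrees(math.atan2(math.cos(lat) - ratio, math.sin(lat)))

for lat in (72, 75, 80, 82):
    print(f"{lat} degrees latitude: satellite {geo_elevation_deg(lat):+.1f} degrees above horizon")
```

The satellite sits around nine degrees up at 72 degrees latitude, barely one degree up at 80, and drops below the geometric horizon just past 81, which matches Van Gilst's cutoffs once you account for terrain and the few degrees of elevation a real antenna needs.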

Bandwidth is a big enough problem for the land-based scientists in the Antarctic, who get about 10 hours a day of broadband (with some aggressive management of ground station dishes) from NASA's Tracking and Data Relay Satellite-F (TDRS-F) and the re-purposed GOES-3 weather satellite. But it's not practical for OIB's flying laboratory and other science on the move to try to lock onto TDRS or GOES-3; the only real option is the Iridium satellite network—the satcom equivalent of dial-up.

Iridium's 66 low-orbit satellites zip around Earth in near-polar orbits (inclined at 86.4 degrees relative to the equator). Since they orbit at just 470 miles or so above the Earth—much closer than even the GPS satellite network, let alone geosynchronous communications satellites—they don't require directional antennas or the kind of broadcast power needed for most satellite communications. But the tradeoff is that the satellites quickly pass in and out of range, so connections have to be handed off from one to the next. And while Iridium advertises data rates of up to 10 kilobits per second, the best a single connection can usually manage is a quarter of that.
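Some rough arithmetic shows why those handoffs are so frequent. This sketch is mine, not NSERC's, with the constellation's roughly 780 km altitude plugged in and a usable-elevation cutoff that is purely an assumption:

```
import math

MU = 398600.4418       # Earth's gravitational parameter, km^3/s^2
R_EARTH_KM = 6378.0
ALT_KM = 780.0         # roughly Iridium's altitude ("470 miles or so")

def max_pass_minutes(min_elev_deg):
    """Longest possible single-satellite pass (satellite going straight
    overhead), ignoring Earth rotation. min_elev_deg is the elevation
    mask below which the link is assumed unusable."""
    a = R_EARTH_KM + ALT_KM
    period_min = 2 * math.pi * math.sqrt(a**3 / MU) / 60
    el = math.radians(min_elev_deg)
    half_arc = math.acos((R_EARTH_KM / a) * math.cos(el)) - el
    return period_min * half_arc / math.pi

print(f"horizon to horizon: {max_pass_minutes(0):.0f} minutes")
print(f"with an 8-degree mask: {max_pass_minutes(8):.0f} minutes")
```

Even a perfect overhead pass gives one satellite only about a quarter of an hour of visibility, and most passes are shorter, so a long survey flight rides through dozens of handoffs.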

So Operation IceBridge squeezes all it can out of Iridium the old-fashioned way—by bonding channels together with Multilink PPP. The DC-8 is equipped with an array of Iridium-based modems, each of which dials into a modem on a land line at NASA's Ames Research Center at Moffett Field, California. By aggregating the connections, the OIB flying lab gets about 9,600 bits per second of bandwidth. "We've experimented with as many as 8 channels," said Van Gilst, "but with Iridium's lower level of reliability (modems will tend to hang up and have to redial periodically, particularly on a moving, banking aircraft) and the relatively high latency, PPP starts to see diminishing returns beyond 4 or 6 channels."
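The article doesn't describe the dialing setup in any detail, so the following is strictly a hypothetical sketch of the general approach: a supervisor that spawns one pppd per modem with the multilink option and redials any channel that hangs up. The device names and the "iridium-ames" peers file are invented for illustration.

```
import subprocess
import time

# Hypothetical serial devices for the Iridium modem bank.
MODEMS = ["/dev/ttyUSB0", "/dev/ttyUSB1", "/dev/ttyUSB2", "/dev/ttyUSB3"]

def dial(device):
    # Each pppd instance contributes one ~2,400 bps Iridium channel;
    # 'multilink' has pppd negotiate Multilink PPP so the channels are
    # aggregated into a single bundle. 'nodetach' keeps pppd in the
    # foreground so we can tell when a modem hangs up.
    return subprocess.Popen(
        ["pppd", device, "19200", "nodetach", "multilink",
         "call", "iridium-ames"])  # /etc/ppp/peers/iridium-ames is invented

procs = {dev: dial(dev) for dev in MODEMS}
while True:
    for dev, proc in procs.items():
        if proc.poll() is not None:   # channel dropped; redial it
            procs[dev] = dial(dev)
    time.sleep(5)
```

Four channels at roughly 2,400 bps each is where the 9,600 bps figure comes from; per Van Gilst, pushing past four to six channels mostly buys redialing overhead.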

Over that BBS-worthy bandwidth, Operation IceBridge pushes and pulls three main types of data. One of them is IRC chat, which allows the crew of the DC-8 to coordinate with the ground crew in Punta Arenas, Chile, and has also been used during the summer to coordinate with weather forecasters for thunderstorm-chasing over Kansas. During the latest Antarctic mission, IRC was also used to communicate with students in 49 school classrooms in the US and Chile. One IRC server sits aboard the aircraft and another at the ground station, reducing the number of connections that have to be carried over the narrow IP pipe.
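Part of IRC's appeal is that the protocol is plain text and nearly free in bandwidth terms. For a sense of how little is involved, here is a minimal, entirely hypothetical bot in the spirit of the mission's chat channels; the server name, channel, and message format are all invented:

```
import socket

# Invented names; the article doesn't give the real server or channel.
HOST, PORT, NICK, CHAN = "irc.dc8.local", 6667, "telembot", "#icebridge"

sock = socket.create_connection((HOST, PORT))

def send(line):
    sock.sendall((line + "\r\n").encode())

send(f"NICK {NICK}")
send(f"USER {NICK} 0 * :OIB telemetry relay")
send(f"JOIN {CHAN}")
send(f"PRIVMSG {CHAN} :position -77.5421,-44.9817 alt 500 m")

# Answer the server's keepalives so the connection stays up.
while True:
    for line in sock.recv(4096).decode(errors="replace").splitlines():
        if line.startswith("PING"):
            send("PONG" + line[4:])
```

A message like that costs a few dozen bytes on the wire, which is why chat keeps working over a link where a single web page would take minutes to load.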

The DC-8's onboard network rack and "Housekeeping" systems—the sensors and computer hardware that are permanently installed in the flying laboratory.

NSERC

The DC-8 also sends back an ASCII-based telemetry stream over the Iridium connection, providing the aircraft's location as well as some meteorological sensor data. "This data is of interest to weather modeling groups," Van Gilst said, "as there is often not a lot of in-situ measurement data available. And simply knowing where the aircraft is via a Google Earth KML is extremely helpful, allowing ground crew to return to the airport to receive the aircraft when we're about to return."
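The article doesn't spell out the telemetry format, but the KML end of that pipeline is simple to sketch. Assuming (purely for illustration) a comma-separated line of time, latitude, longitude, and altitude, a converter could look like this:

```
def telemetry_to_kml(line):
    """Turn one hypothetical ASCII telemetry line into a KML placemark
    that Google Earth can display. The field layout is an assumption."""
    time_utc, lat, lon, alt_m = line.strip().split(",")
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>NASA DC-8 at {time_utc} UTC</name>
    <Point>
      <altitudeMode>absolute</altitudeMode>
      <coordinates>{lon},{lat},{alt_m}</coordinates>
    </Point>
  </Placemark>
</kml>"""

print(telemetry_to_kml("23:41:05,-77.5421,-44.9817,500"))
```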

A computer running Google Earth shows the DC-8's location as it starts its first survey of Antarctic ice in October 2012.

NASA

Another stream of data that comes over the multiplexed Iridium channels is a feed of weather and satellite data from the ground station. "The OIB missions generally do not need much in the way of real-time situational awareness data, as the flight plans are fixed at takeoff," said Van Gilst, "but the scientists have found it helpful to have access to model run data while we're returning to base in order to get a jump on planning the next day's flight." Other missions flown by NASA's Earth Science Division—such as the ARCTAS atmospheric monitoring mission in the Arctic in 2008—have changed their flight plans in mid-air based on updated satellite data; in the case of ARCTAS, the data was used "to chase forest fires in Northern Canada, locating flare ups from GOES and MODIS satellite data," Van Gilst said.

The Operation IceBridge team at the mission's base at the airport in Punta Arenas, Chile.

NASA

Operation IceBridge finished up its work in Antarctica for the season in November, as summer began. The data collected is now being processed at the National Snow and Ice Data Center at the University of Colorado.

Wow.... I started out on a 2400 baud modem, moved to a 14.4 and then on to a 56k. I was giddy when I first connected with the 56k. No more sitting watching screens full of text being drawn one line at a time. I also spent way too many hours on IRC servers chatting with people from around the world. I still have some of the old .wav files we used to DCC to each other.

Once Iridium starts launching its Iridium NEXT sats in 2015, it should greatly improve satellite data rates for science coming out of Antarctica. It'll be interesting to see how that changes the scope of these projects that have had to manage with such tiny data resources. Of course, it also depends on what Iridium decides to charge for that bandwidth.

I wonder if it has to put up with any annoying chat bots.

<OperationIceBridge> location: -77.542096, -44.981690. calving observed covering 30 square kilometers
<fact-droid> Did You Know That: no vertebrate animals live year-round on the Antarctic continent?
<fact-droid> Did You Know That: water actually expands as it freezes?

What's not clear is why real-time communication is needed. I thought all planes had their own weather radar, and the article mentions that the flight plan is premade. I'm not sure why they couldn't just download all the data from the ice sheet analysis upon landing.

That article talks about how data in and out of the whole continent is constrained. Coming up with their own solution for data transfer would allow 1) savings over buying bandwidth for data dumps and 2) quicker (better) access to data than having someone fly hard disks or CDs back and forth. Plus the novelty of real-time comms with schools over the link is interesting; perhaps it even helps with some minor grant monies from Education, etc. I have no links to suggest these are true, but they make sense to me.

Looking at the back of that rack has me wondering how secure those cables are during transit. I can't imagine that they'll stay secure if there is any turbulence in flight. I've seen such cables come loose over time from the vibrations of a hard drive array in the same rack - one would think traveling through a storm would be far worse.

You know, I didn't catch that in the read-through. As an electronics guy who used to work with military aircraft, I can't imagine anyone signing off on that. What a train wreck, with none of the wiring anywhere near properly secured. Most every rule in aircraft maintenance is written in blood, and I'm not seeing how anyone can get away with that getup either.

They are using 8P8C connectors on Cat cable. How would you secure them differently? Cable ties?

Cable ties would be a good start. Ideally one would sheath a wiring harness but in a pinch cable ties could be used if properly dressed and secured. A rule of thumb for aircraft is one every three inches. The harness should be secured to the airframe where possible.

I have to agree. The cables in that picture are a complete mess. While the various connectors have positive locks to prevent accidental removal, it's still very messy and makes for difficult troubleshooting. In addition, the cables are not cut to exact lengths, nor are the cable types chosen to minimize overall weight. Cabling on aircraft is a critical part of the physical plant, and enormous steps are taken to make sure that cables are both extremely ductile, to reduce the impact of constant vibration, and light weight. The cables in that photo are none of that. If they are going to use off-the-shelf cables with fixed lengths, they should absolutely secure them so that they don't vibrate and either rub against a metal edge, slowly cutting through the cable, or fail from bend fracturing after many hours of high vibration.

This is NASA. Cable ties were not in the 2012 budget. If we were in Russia, they'd have long since located a ball of twine or fallen back on the last resort of trained penguins with frickin' lasers on their heads.

As a tech who has worked on these Iridium Multi-Channel Systems (IMCS), I have a bit of input you might find amusing.

* The United States Antarctic Program uses 4 to 5 of these every year in addition to the fixed version at the South Pole. They range from 4 modems to 12 modems (at the South Pole: http://www.usap.gov/technology/contentH ... fm?id=1972 ).
* Per channel you have a 2400 baud max, so 9600 baud is under optimal conditions with all 4 modems connected and passing traffic. It might happen in an aircraft, but the ones South Pole traverses and field camps use rarely keep all of the modems connected consistently, as the article alluded to (but it's far worse than guessed).
* Wiring in the rack is only one of the many problems we see regularly; that's par for the course. The field server (think of it as the regulator valve that feeds the router) is not a Toughbook. SSDs are often used but have seen a high failure rate in the field.
* Ping times range upwards of 2,000ms (I've seen 20,000ms), so timeouts are a large issue, and TCP traffic is a pain due to the SYN/ACK nature of the protocol.
* New satellites are on the horizon, as one of the previous comments mentioned, but Iridium has a replacement that's already out for most commercial users called the Iridium Pilot. Using a single antenna, it's supposed to do 134 Kbps bi-directionally.

I have to take issue with the comments in the article about the number of Iridium channels that can be successfully multiplexed. The National Science Foundation has for years been using a 12-channel Iridium inverse multiplex system, designed and implemented by the prior US Antarctic Program prime contractor, Raytheon Polar Services. This system works quite well and is very robust. It has been documented that the Antarctic experiences a higher degree of call drops due to Iridium's constellation management strategy over the polar regions: beams on satellites (or possibly whole satellite comm links) are deactivated in the high latitudes due to satellite footprint overlap. The Raytheon Polar implementation allows for graceful degradation and rapid call restoration on each individual channel. This was all done via a conventional, commercial off-the-shelf version of Cisco IOS (once Raytheon found some bugs in Cisco's IOS PPP implementation and got Cisco to patch the code). NSF has deployed multiple 4-channel variants of this same technology for its deep field camps for the Antarctic summers. The motivation for the use of custom-made systems in lieu of Iridium's Pilot multichannel system is that NSF (and likely NASA, for that matter) can take advantage of a highly discounted Iridium airtime contract that DoD has with Iridium. DoD does not support the Pilot (previously known as OpenPort) or the aviation version, OpenPort-aero.

Sean Gallagher / Sean is Ars Technica's IT Editor. A former Navy officer, systems administrator, and network systems integrator with 20 years of IT journalism experience, he lives and works in Baltimore, Maryland.