Petabyte-chomping big sky telescope sucks down baby code

Robert Heinlein was right to be worried. What if there really is a planet of giant, psychic, human-hating bugs out there, getting ready to hurl planet-busting rocks in our general direction? Surely we would want to know?

Luckily, big science projects such as the Large Synoptic Survey Telescope (LSST), which (when it's fully operational in 2016) will photograph the entire night sky repeatedly for 10 years, will be able to spot such genocidal asteroids - although asteroid-spotting is just one small part of the LSST's overall mission.

I caught up with Jeff again a couple of weeks ago, and asked him how this highly ambitious project is progressing. "Very nicely" seems to be the crux of his answer.

It might not make for the most dramatic of headlines, but given the scale and complexity of what's being developed, this in itself is a laudable achievement. In Jeff's words: "First, we have to process a 6.4GB image every 15 seconds. As context, it would take 1,500 1080p HD monitors to display one image at full resolution.

"The images must go through a many-step pipeline in under a minute to detect transient phenomena, and then we have to notify the scientific community across the entire world about those phenomena. That will take a near real-time 3,000-core processing cluster, advanced parallel processing software, very sophisticated image processing and astronomical applications software, and gigabit/second networks.
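Those figures hang together on the back of an envelope. Assuming (my assumption, not the article's) 16-bit pixels and decimal gigabytes, a 6.4GB image is about 3.2 gigapixels, which does indeed need roughly 1,500 1080p screens - and one such image every 15 seconds really does demand gigabit-class networks:

```python
# Back-of-the-envelope check of the figures quoted above.
# Assumptions (not from the article): decimal gigabytes, 2-byte (16-bit) pixels.
image_bytes = 6.4e9                      # one raw image
pixels = image_bytes / 2                 # ~3.2 gigapixels at 2 bytes/pixel
monitors = pixels / (1920 * 1080)        # 1080p monitors to show it all at once
print(round(monitors))                   # ~1543, close to the quoted 1,500

gbit_per_s = image_bytes * 8 / 15 / 1e9  # one image every 15 seconds
print(round(gbit_per_s, 1))              # ~3.4 Gbit/s sustained
```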

"Next, we have to re-process all the images taken since the start of the survey every year for 10 years to generate astronomical catalogs, and before releasing them we need to quality assure the results."

That's about 5PB of image data/year, over 10 years, resulting in 50PB of image data and over 10PB of catalogs. The automated QA alone will require a 15,000-core cluster (for starters), parallel processing and database software, data mining and statistical analysis, and advanced astronomical software.
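The 5PB/year figure is consistent with that cadence if you assume the camera shoots for around 10 hours a night on roughly 330 usable nights a year - both numbers are my guesses for illustration, not LSST's published observing plan:

```python
# Rough reconstruction of the annual data volume quoted above.
# Assumed (not from the article): ~10 observing hours/night, ~330 nights/year.
image_bytes = 6.4e9
per_second = image_bytes / 15          # one image every 15 seconds
per_night = per_second * 10 * 3600     # a night's worth of exposures
per_year = per_night * 330
print(round(per_year / 1e15, 1))       # ~5.1 PB/year
print(round(10 * per_year / 1e15))     # ~51 PB over the 10-year survey
```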

They now have a prototype system of about 200,000 lines of C++ and Python representing most of the capability needed to run an astronomical survey of the magnitude typically done today. Next, they have to scale this up to support LSST volumes. According to Jeff: "We hope to have all of that functioning at about 20 per cent of LSST scale by the end of our R&D phase. We then have six years of construction and commissioning to 'bullet-proof' and improve it, and to test it out with the real telescope and camera."

The incremental development and R&D mode the team is following could be called agile, although this is agile on a grand scale. Every six to 12 months, they do a new design and a new software release, called a Data Challenge. Each DC is a complete project with a plan, requirements, design, code, integration and test, and production runs.

Lessons learned

The fifth release just went out the door, and they've completely re-done their UML-based design three times with the lessons learned from each DC. They're using Enterprise Architect to develop each model, following a version of the agile ICONIX object modeling process tailored for algorithmic (rather than use case driven) development. I've co-authored a book on the ICONIX process, Use Case Driven Object Modeling with UML: Theory and Practice.

ICONIX uses a core subset of the UML rather than every diagram under the sun, and this leanness has allowed them to roll the content into a new model as a starting point for the next DC.

Jeff explains: "After each DC, we also extract the design/lessons learned from the DC model to the LSST Reference Design Model which is the design for the actual operational system. That last model is also used to trace up to a SysML-based model containing the LSST system-level requirements."

Boffins share their data

Perhaps the most exciting aspect of LSST is the sharing of data and discoveries, allowing schools, universities and pretty much anyone to run collaborative projects, or simply to explore the latest data. I could see open-source visualization projects springing up, Typepad widgets layering sky maps over time, and so on.

But this kind of openness, for such huge amounts of data, presents its own technical challenge.

Jeff told us: "We have to allow scientists to quickly access any or all of that data, and support their science by enabling them to run their own scientific software along with our software. Some of the science hasn't even been conceived yet and, for what does exist, the algorithms will improve over the decade-long survey. So, we have to allow for both algorithmic and infrastructure evolution over 10 years."

Broadly speaking, the data will be published to the outside world at two levels: "For scientists, the data will be served from dedicated Data Access Centers in the US and Chile. There, they will have supercomputing resources available, advanced tools and user interfaces, access to grid computing resources and so on.

"At the same time, we will feed the data to our Education and Public Outreach Center, so that citizen-science programs, educational institutions, and museums can work with the data to create courses, exhibits, portals, and web applications to engage those communities."

SciDB in the frame

When it comes to storage and query in the science community, there's been some talk of boffins going "post-relational" with mega-database SciDB from relational legend Michael Stonebraker. Surely that would be suited to this kind of environment? The LSST team is evaluating SciDB - and it certainly seems like a good fit, designed as it is for large volumes of information crunched on thousands of nodes in distributed data centers. So does this mean that the LSST will be abandoning its existing MySQL server - throwing out relational data structures in favor of a less restrictive, multi-dimensional mathematical array model?

Jeff explains: "The baseline is still a custom parallelization layer (called qserv), on top of MySQL and xrootd. We have implemented a prototype of this layer and have tested it successfully on a 40-node cluster. We are currently testing it on a 100-node cluster to make sure it scales seamlessly with additional servers.
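To give a flavour of the qserv idea - and this is my own toy sketch, not LSST code - the trick is to partition the sky into chunks, send the same query fragment to every worker node that owns a relevant chunk, and merge the results. The chunking-by-RA scheme, the fake catalog, and the in-memory "workers" below are all invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy sketch of a qserv-style shared-nothing query layer (illustrative only).
CHUNKS = 8  # number of sky partitions, chunked here by right ascension band

def chunk_of(ra_deg):
    """Assign an object to a chunk by its RA band."""
    return int(ra_deg // (360 / CHUNKS))

# A tiny fake catalog: (object id, RA in degrees, magnitude)
catalog = [(i, (i * 37) % 360, 15 + (i % 10) * 0.5) for i in range(100)]

# Each "worker" holds one chunk, as each MySQL node would in qserv
workers = {c: [] for c in range(CHUNKS)}
for row in catalog:
    workers[chunk_of(row[1])].append(row)

def subquery(rows, ra_min, ra_max, mag_max):
    """The per-node query fragment: a simple range scan."""
    return [r for r in rows if ra_min <= r[1] < ra_max and r[2] < mag_max]

def query(ra_min, ra_max, mag_max):
    """Dispatch only to chunks overlapping the RA range, then merge."""
    width = 360 / CHUNKS
    hit = [c for c in range(CHUNKS)
           if ra_min < (c + 1) * width and ra_max > c * width]
    with ThreadPoolExecutor() as ex:
        parts = ex.map(lambda c: subquery(workers[c], ra_min, ra_max, mag_max),
                       hit)
    return sorted(r for part in parts for r in part)

bright = query(0, 90, 17.0)  # bright objects in one quadrant of the sky
```

The real qserv does this on top of MySQL instances coordinated via xrootd, with spherical-geometry-aware chunking, but the principle - shared-nothing workers plus a merge step - is the same.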

"We are evaluating SciDB, MonetDB, Hadoop, and other technologies as alternatives or complements to the baseline. We developed a DBMS-agnostic schema, test data, and a set of 65 standard queries that exercise and stress the database. We hope to implement these in all those technologies and run tests for side-by-side comparisons."

There have been some interesting astronomical discoveries and advances made recently - for example, the discovery of a new spiral arm in our Galaxy. I asked Jeff if there is anything out there - or any aspect of the Universe - that he secretly hopes the LSST will discover, or prove to be true (and naturally I hoped he would say something about detecting cosmic missiles being tossed at us by misanthropic alien bugs).

"It would be wonderful to know more about how the universe evolved and where it is headed (Cosmology). That is a primary mission for LSST and I have every reason to believe it will uncover astounding new science." ®