The pyCBIR deep-learning tool can help researchers match their images to similar ones already in its database, such as these X-ray scattering images from the Advanced Light Source.

Physicists on the MINERvA neutrino experiment at the Department of Energy’s Fermilab faced a conundrum. Their particle detector was swamping them with images. The detector lights up every time a neutrino, a tiny elementary particle, collides with matter and produces other particles. The machine then takes a digital photo of all of the new particles’ movements. As the relevant interactions occur very rarely, having a huge amount of data should have been a good thing. But there were simply too many pictures for the scientists to analyze them as thoroughly as they would have liked.

Enter a new student eager to help. In some ways, it was an ideal student: always attentive, perfect recall, curious to learn. But unlike the graduate students who usually end up analyzing physics photos, this one was a bit more – electronic. In fact, it wasn’t a person at all. It was a computer program using machine learning. Computer scientists at DOE’s Oak Ridge National Laboratory (ORNL) brought this new student to the table as part of a cross-laboratory collaboration. Now, ORNL researchers and Fermilab physicists are using machine learning together to better identify how neutrinos interact with normal matter.

“Most of the scientific work that’s being done today produces a tremendous amount of data where basically, you can’t get human eyes on all of it,” said Catherine Schuman, an ORNL computer scientist. “Machine learning will help us discover things in the data that we’re collecting that we would not otherwise be able to discover.”

Fermilab scientists aren’t the only ones using this technique to power scientific research. A number of scientists in a variety of fields supported by DOE’s Office of Science are applying machine learning techniques to improve their analysis of images and other types of scientific data.

Teaching a Computer to Think

In traditional software, a computer only does what it’s told. But in machine learning, tools built into the software enable it to learn through practice. Like a student reading books in a library, the more studying it does, the better it gets at finding patterns that can help it solve a big-picture problem.

“Machine learning gives us the ability to solve complex problems that humans can’t solve ourselves, or complex problems that humans solve well but don’t really know why,” said Drew Levin, a researcher who works with DOE’s Sandia National Laboratories.

Recognizing images, like those from experiments like MINERvA, is one such major problem. While humans are great at identifying and grouping photos, it’s difficult to translate that knowledge into equations for computer programs.

Speeding up Analysis

In the past, creating image-recognition programs was incredibly complex. First, programmers identified every single type of feature in the image they wanted to analyze. Using this list of features, they then made rules for the program to follow. For the neutrino experiments, those rules included describing all of the possible angles at which a proton could travel. Because scientific images can involve thousands of variables, the process was so slow that many astrophysics experiments had scientists analyze the images by hand instead. Unfortunately, that too was a slow and laborious process.

But machine learning eliminates the vast majority of that work. Programmers create a set of examples that tell the program how to broadly do the analysis, such as processing an image. The program then works to “understand” the data and come up with the rules. It’s the difference between telling a student how to add together objects one by one each time and teaching the principles behind arithmetic.

After the programmer finishes creating the program, he or she then supplies it with large amounts of sample data. The program creates its rules, processes the data, and spits out an answer. In the beginning, these predictions may seem random. As the program takes more data into account, it revises its equations. Those equations then come up with more accurate answers.
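The revise-as-you-go loop described above can be sketched in a few lines of code. This toy example, a minimal sketch with made-up data, fits a one-variable linear model by gradient descent: each pass over the sample data nudges the model's equation a little closer to the right answer.

```python
# A minimal sketch of a supervised training loop: fit y = w*x + b to data
# by gradient descent. The data and learning rate are illustrative
# assumptions, not from any actual experiment.

def train(xs, ys, steps=2000, lr=0.01):
    """Fit y = w*x + b by repeatedly nudging w and b toward the data."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Predictions under the current "rules" (the equation y = w*x + b),
        # compared against the known answers.
        errors = [(w * x + b) - y for x, y in zip(xs, ys)]
        # Nudge each parameter in the direction that shrinks the error.
        w -= lr * 2 * sum(e * x for e, x in zip(errors, xs)) / n
        b -= lr * 2 * sum(errors) / n
    return w, b

# Sample data generated from y = 3x + 1; the loop should recover
# those numbers on its own.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 4.0, 7.0, 10.0, 13.0]
w, b = train(xs, ys)
```

The early predictions (with w and b at zero) are badly wrong, just as the article describes; it is only the accumulation of small corrections that makes the final equation accurate.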

To Supervise or Not to Supervise?

Training a machine-learning program can be either “supervised” or “unsupervised.”

In supervised learning, the program receives input data as well as output data that gives the “right” answer. Like a student self-scoring a test with an answer key, the program checks to see how its result differs from the correct one. It then tweaks its calculations to get a little closer the next time. Programs that classify images, like identifying whether a photograph is of a star or a galaxy, need to use supervised learning. Scientists can also use supervised learning for creating programs that analyze relationships between variables, such as how the position of a star affects its brightness.

But supervised learning requires data with the answer clearly labeled. For many experiments, labeling the data the program would need for training purposes could take so long that the scientists might as well just analyze it themselves.

“Beginning with unlabeled data is a challenge,” said Daniela Ushizima, a researcher at DOE’s Lawrence Berkeley National Laboratory (Berkeley Lab), who develops machine learning tools. Unlabeled data particularly pose an issue when researchers are interested in a rare event.

That’s where unsupervised learning comes into play. Unsupervised learning requires the program to find patterns itself without the “correct” answers. It’s the computer version of independent study. Fortunately, these programs can still group types of data, such as similar images from particle detectors.
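One classic way a program can group unlabeled data is k-means clustering: guess some group centers, assign each point to its nearest center, then move each center to the middle of its group, and repeat. The sketch below uses toy 2-D points and a simplified, deterministic choice of starting centers (real k-means implementations usually pick random starts).

```python
# A sketch of unsupervised grouping via k-means: no "correct" labels are
# ever supplied, yet the algorithm separates the two blobs of points.

def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean_point(cluster):
    """The average position of a cluster of points."""
    n = len(cluster)
    return tuple(sum(coords) / n for coords in zip(*cluster))

def kmeans(points, k, iterations=10):
    # Simplification: start from evenly spaced points in the input list
    # instead of random guesses, so the result is deterministic.
    centers = [points[i * (len(points) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iterations):
        # Assign every point to its nearest current center ...
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: squared_distance(p, centers[c]))
            clusters[nearest].append(p)
        # ... then move each center to the average of its assigned points.
        centers = [mean_point(cl) if cl else centers[c]
                   for c, cl in enumerate(clusters)]
    return centers, clusters

# Two obvious blobs: one near (0, 0), one near (10, 10).
points = [(0, 0), (0, 1), (1, 0), (1, 1),
          (10, 10), (10, 11), (11, 10), (11, 11)]
centers, clusters = kmeans(points, k=2)
```

With no labels at all, the loop settles on one center per blob, which is the computer-science version of the "independent study" the article describes.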

Creating a Brain: Deep Learning

While machine learning itself is useful, deep learning takes the concept to the next level. Deep learning is a form of machine learning that uses a neural network – software inspired by human brains.

Each deep learning program is made of a series of very simple units networked together. By grouping the units into hierarchical layers and stacking those layers, programmers create powerful programs. Each layer of units is like a separate team in a factory assembling an intricate puzzle. The earliest teams process basic features. In images, these would be edges and lines or even points. They then pass that analysis along to later teams or deeper layers. The deeper layers put the simple features together to create more complex ones. For an image, this could be a texture or a shape. The final layer spits out an answer. For MINERvA, this final answer may include a variety of information, including where the neutrino collided and what particles resulted from the collision.
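The layered structure above can be made concrete with a tiny hand-built network. In this sketch the weights are hand-picked rather than learned, purely to show the architecture: a hidden layer of simple units detects basic features of the inputs, and an output layer combines those features into an answer.

```python
# A toy two-layer network built from very simple units. Each unit weighs
# its inputs, adds a bias, and squashes the total into the range (0, 1).
# The weights are hand-chosen illustrations, not trained values.

import math

def unit(inputs, weights, bias):
    """One simple unit: weigh inputs, add a bias, squash the result."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # sigmoid squashing

def layer(inputs, units):
    """A layer applies every one of its units to the same inputs."""
    return [unit(inputs, w, b) for w, b in units]

def forward(x, layers):
    """Pass data through the stack of layers; the last output is the answer."""
    for units in layers:
        x = layer(x, units)
    return x

# The hidden layer detects two simple features; the output layer combines
# them into "either input is on, but not both" (the XOR pattern).
hidden = [([10.0, 10.0], -5.0),     # fires when either input is on
          ([10.0, 10.0], -15.0)]    # fires only when both are on
output = [([10.0, -10.0], -5.0)]    # "either, but not both"
net = [hidden, output]
result = forward([1.0, 0.0], net)
```

No single unit can compute this either-but-not-both answer on its own; it only emerges from stacking the layers, which is the point of the assembly-line analogy above.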

As the program learns, it doesn’t necessarily change the equations as it would in a simpler machine-learning program. Instead, it subtly changes the relationships between the units and layers, shifting connections from one to another.

What Machine Learning Can Do For You

Grouping and identifying images is one of the most promising uses for machine learning. Back in 2012, a deep-learning program could identify photos in a specific database of images with a 20 percent error rate. Over the course of only three years, scientists improved deep-learning programs so much that a similar program in 2015 beat the average human error rate of 5 percent.

“There’s a lot of image-based science that can benefit from deep learning,” said Tom Potok, leader of ORNL’s Computational Data Analytics group.

For image recognition that requires special expertise, machine learning can provide even bigger benefits. “These techniques are extremely efficient at finding subtle signals” like small shifts in particle tracks, said Gabe Perdue, a Fermilab physicist on the MINERvA experiment.

While Fermilab physicists are using deep learning to understand neutrinos, other scientists are using it to understand images from sources as diverse as telescopes and light sources.

Spotting when a very large object is warping our view of a distant galaxy, an effect known as gravitational lensing, can help astronomers understand unknown phenomena like dark matter and dark energy. But it can take expert astronomers weeks to analyze a single image. This rate is fine for current equipment, which has captured only a few hundred images of this happening. But when the Large Synoptic Survey Telescope goes online in 2022, astronomers predict it will photograph tens of thousands of these galaxies.

To get ready, scientists at SLAC National Accelerator Laboratory have already developed a deep-learning program to tackle the coming flood of images. First, they spent a day feeding about half a million real and simulated images of galaxies into the eager student’s electronic brain. The program then analyzed a combination of real images from the Hubble Space Telescope and simulated images in a few seconds. This analysis was 10 million times faster than previous methods and just as accurate. It even provided data that the previous methods didn’t, like measurements of how much mass was warping the images.

Other scientists are using machine learning to sort and organize images. Most databases of scientific images are difficult to search or limited to images with in-depth descriptions. But Ushizima thought she had a better way. She imagined something like Google’s Image Search, where you can upload an image and have Google find others like it.

“Instead of looking for experiments in terms of keywords or a mathematical model, we would have a more concrete way to retrieve results: We input an image,” she said.

Using a DOE Early Career Research Program award, she and graduate students Flavio Araujo and Romuere Silva developed a deep-learning tool called pyCBIR. The program can tell researchers how similar their images are to ones already in its database. Currently, Ushizima and her fellow researchers are working with the Advanced Light Source, an Office of Science user facility, to analyze many of its images. With the program analyzing the content from millions of images, scientists can now rank and organize experimental data without needing to rely on filename or other textual information. While sifting through massive amounts of unlabeled data could take days or even longer, the pyCBIR software allows scientists to find relevant images in seconds.
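The retrieval idea behind a tool like pyCBIR can be sketched generically: summarize each image as a feature vector (in a real system, one produced by a deep network) and rank the database by how similar each stored vector is to the query's vector. The filenames and hand-made vectors below are hypothetical illustrations, not pyCBIR's actual interface.

```python
# A sketch of content-based image retrieval: rank a database of image
# "feature vectors" by cosine similarity to a query vector. In practice
# the vectors would come from a deep network; these are toy values.

import math

def cosine_similarity(a, b):
    """How alike two feature vectors are, ignoring overall magnitude."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def rank_by_similarity(query, database):
    """Return database entries from most to least similar to the query."""
    return sorted(database,
                  key=lambda item: cosine_similarity(query, item[1]),
                  reverse=True)

# Hypothetical database: (filename, feature vector) pairs.
database = [("rings_pattern.png",  [0.9, 0.1, 0.0]),
            ("speckle_noise.png",  [0.1, 0.8, 0.2]),
            ("streak_pattern.png", [0.2, 0.1, 0.9])]
query = [0.85, 0.15, 0.05]   # most like the rings pattern
ranked = rank_by_similarity(query, database)
```

Because the query is itself an image's feature vector rather than a keyword, the search works even when the images have no filenames or labels worth searching, which is exactly the gap Ushizima describes.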

But machine and deep learning aren’t panaceas for scientific research. One of the biggest challenges is ensuring that the programs are providing the correct answers. In deep learning, programmers can only see calculations that are happening in the first and last layer. In the classroom of deep learning, there’s no way to ask the student about its thought process. In addition, if there are inaccuracies in the training data, the deep-learning program will amplify them.

“You might worry if there’s some bias or some mistake coming from these simulations used to train these machines,” said Perdue.

Usually, scientists can recognize when results they receive are wrong or at least different from what they expected. In the case of MINERvA, neutrinos only move and interact in certain ways. If the images show something different, they need to double-check the machine. Scientists also understand what kinds of problems can arise from the programs they use to conduct the analysis and how to fix them. But because programs used for deep learning are so different from traditional ones, they throw a wrench in that institutional knowledge.

The ORNL team helping Fermilab analyze the MINERvA data is hoping to solve some of those challenges. They’re using three different technologies to design a powerful, accurate deep-learning program. Using a specialized computer that processes quantum information, they hope to design the best structure for the program. They’ll then use ORNL’s fastest supercomputer, Titan, to create the best arrangement of units and connections to maximize accuracy and speed. Lastly, they’ll run the program on a brain-like piece of hardware. In addition to MINERvA, they plan to use this program to analyze data from ORNL’s Spallation Neutron Source, an Office of Science user facility.

Whether in neutrino experiments or cancer research, machine learning offers a new way for both researchers and their electronic students to better understand our world and beyond.

As Prasanna Balaprakash, a computer scientist at DOE’s Argonne National Laboratory, said, “Machine learning has applications all the way from subatomic levels up to the universe. Wherever we have data, machine learning is going to play a big role.”

The Office of Science is the single largest supporter of basic energy research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information please visit https://science.energy.gov.
