HPCwire » MIT
http://www.hpcwire.com
Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

Scalable Priority Queue Minimizes Contention
(February 2, 2015) http://www.hpcwire.com/2015/02/02/scalable-priority-queue-minimizes-contention/

The multicore era has been in full swing for a decade now, yet exploiting all that parallel goodness remains a prominent challenge. Ideally, compute efficiency would scale linearly with increased cores, but that’s not always the case. As core counts are only set to proliferate across the computing spectrum, it’s an issue that merits serious attention.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory are exploring ways to improve the scalability of priority queues, which are essential for applications such as task scheduling and discrete event simulation. Concurrent priority queues scale well up to about eight cores, but beyond single digits, efficiency drops off as multiple threads contend for elements at the head of the queue. By modifying priority queues so that threads are directed to elements a certain distance away from the head, the MIT team has demonstrated improved efficiency on up to 80 cores.

With traditional versions of a priority queue, bottlenecks arise as multiple threads attempt to access the front of the queue at the same time. The SprayList algorithm created by the MIT team resolves this issue by spreading threads’ accesses randomly among elements near the front. The algorithm, based on the skip list, enables processors with many cores to work in unison.

“Roughly, at each SkipList level, a thread flips a random coin to decide how many nodes to skip ahead at that level,” they write in their paper. “In essence, we use local randomness and the random structure of the SkipList to balance accesses to the head of the list. The lengths of jumps at each level are chosen such that the probabilities of hitting nodes among the first O(p log³ p) are close to uniform.”

Figure 1 of the paper illustrates the intuition behind these sprays.
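
To make the mechanism concrete, here is a minimal Python sketch of a spray walk over a skip list. This is not the authors’ implementation: the starting height and jump bound below are stand-in parameters, and a real SprayList must also handle concurrent removal with atomic operations.

```python
import math
import random

class Node:
    def __init__(self, key, height):
        self.key = key
        self.next = [None] * height           # next[l] = successor at level l

def build_skiplist(keys, height):
    """Build a simple sequential skip list for demonstration purposes."""
    head = Node(float("-inf"), height)
    nodes = [Node(k, random.randint(1, height)) for k in sorted(keys)]
    for lvl in range(height):
        prev = head
        for n in nodes:
            if len(n.next) > lvl:
                prev.next[lvl] = n
                prev = n
    return head

def spray(head, p):
    """Descend the skip list, jumping a random number of nodes at each
    level instead of walking to the minimum; p is the thread count.
    The height and jump bound are illustrative stand-ins, not the
    paper's exact tuning."""
    levels = min(int(math.log2(max(p, 2))) + 1, len(head.next))
    max_jump = int(math.log2(max(p, 2))) + 1
    node = head
    for lvl in range(levels - 1, -1, -1):
        for _ in range(random.randint(0, max_jump)):  # the "random coin"
            if node.next[lvl] is None:
                break
            node = node.next[lvl]
    return node                               # lands near the head, but
                                              # rarely on the exact minimum

head = build_skiplist(range(1000), height=10)
print(spray(head, p=64).key)                  # a small-but-not-minimal key
```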

Justin Kopinsky, an MIT graduate student in electrical engineering and computer science and one of the paper’s co-authors, explains that the approach addresses what can be an order-of-magnitude slowdown or more arising from collisions and logjams associated with the standard priority queue.

The researchers tested the SprayList algorithm using a Fujitsu RX600 S6 server with four 10-core Intel Xeon E7-4870 (Westmere EX) processors, supporting a total of 80 hardware threads (approximating an 80-core setup). Running several benchmarks, which are detailed in their paper, the algorithm was said to achieve a drastic reduction in contention. The researchers explain that while a random approach takes longer to make its way around the queue compared to conventional priority queues, the gain in scalability outweighed the additional work for reasonably parallel workloads. Furthermore, they state that the relaxation parameters of their algorithm can be tuned to further increase performance depending on workload and specific core counts.

The researchers will present their findings this month at the Association for Computing Machinery’s Symposium on Principles and Practice of Parallel Programming.

NVIDIA GPUs Unfold Secrets of the Human Genome
(January 16, 2015) http://www.hpcwire.com/2015/01/16/nvidia-gpus-unfold-secrets-human-genome/

With 3 billion base pairs of DNA on hand, it’s no wonder that genes are able to program nearly every detail of our physical makeup, from constructing organs to fighting off disease. But how can a system so vast find the right operating manual for one body part, and ignore all the data meant for another?

It turns out that the secret is folding. While we’re used to viewing DNA in a long, untangled double helix, a genome looks more like an impossibly complex knot when it’s inside of a cell’s nucleus. But this genome jumble has much more rhyme and reason than first meets the eye.

By gathering a portion of the genome into a specific shape, scientists have found that these specific segments can be turned on or off to best serve the task at hand. And now, by using NVIDIA GPUs, researchers from Baylor College of Medicine, Rice University, MIT and Harvard University have mapped this sophisticated system of genome folds in unprecedented detail.

Among the folded shapes that the team was able to identify was the “3D loop,” where two sections of DNA that are usually far apart snap together.

Under the leadership of Erez Aiden, assistant professor of genetics at Baylor and assistant professor of computer science and computational and applied mathematics at Rice, the team has unraveled roughly 10,000 loops that the human genome folds into.

“Our maps of looping have revealed thousands of hidden switches that scientists didn’t know about before,” said co-first author Miriam Huntley, a doctoral student at the Harvard School of Engineering and Applied Sciences (SEAS), in a Harvard press release. “In the case of genes that can cause cancer or other diseases, knowing where these switches are is vital.”

With this map in hand, scientists hope to uncover clues to cell function that could help combat complex diseases such as cancer.

Of course, obtaining such high resolution across 3 billion base pairs didn’t come without computational challenges. Using HPC clusters and custom algorithms, the team set off to work, but soon realized that CPUs alone wouldn’t get them to their goal.

“Ordinary computer CPUs are not well-adapted for the task of loop detection,” said Suhas Rao, a researcher at Baylor’s Center for Genome Architecture. To identify the special places in the genome where loops can form, Rao said the team had to turn to NVIDIA GPUs to get the job done.

“We faced a real challenge because we were asking, ‘How do each of the millions of pieces of DNA in the database interact with each of the other millions of pieces?’” said Huntley. “Most of the tools that we used for this paper we had to create from scratch because the scale at which these experiments are performed is so unusual.”

Among these custom tools, which included new algorithms and data structures, Rao noted that the data-visualization tools created by co-authors Neva Durand and James Robinson played a vital role in the research.

The results of the team’s study were published in the December 2014 issue of the journal Cell.

MIT Spinout Speeds 3D Engineering Simulations
(August 28, 2014) http://www.hpcwire.com/2014/08/28/mit-spinout-speeds-3d-engineering-simulations/

Even the most powerful supercomputers cannot be productive without suitable operating software and applications. In engineering, finite element analysis (FEA) is used to create 3D digital models of large structures to simulate how they perform under different real-world conditions (stress, vibration, heat, etc.). The challenge of modeling large-scale structures, such as mining equipment, buildings, and oil rigs, is the sheer amount of computation involved. Running these simulations takes many hours on expensive systems, which for engineering firms translates into a lot of time and money.

MIT spinoff Akselos has been working to make the process more efficient, so that it can be used on a wider basis. The Akselos team, which includes chief technology officer David Knezevic, cofounder and former MIT postdoc Phuong Huynh, and MIT alumnus Thomas Leurent, developed the innovative software based on years of research at MIT.

The software relies on precalculated supercomputer data for structural components — like simulated Legos — to significantly reduce simulation times. According to an article on the MIT news site, a simulation that would take hours with traditional FEA software can be carried out in seconds with the Akselos method.

The startup has attracted hundreds of users from the mining, power-generation, and oil and gas industries. An MITx course on structural engineering is introducing the software to new users as well.

The Akselos team is hoping that its technology will make 3D simulations more accessible to researchers around the world. “We’re trying to unlock the value of simulation software, since for many engineers current simulation software is far too slow and labor-intensive, especially for large models,” Knezevic says. “High-fidelity simulation enables more cost-effective designs, better use of energy and materials, and generally an increase in overall efficiency.”

The software runs in tandem with a cloud-based service. A supercomputer precalculates individual components of the model, and this data is pushed to the cloud. The components have adjustable parameters, so engineers can fine-tune variables such as geometry, density, and stiffness. After creating a library of precalculated components, the engineers drag and drop them into an “assembler” platform that links the components. The software then references the precomputed data to create a highly detailed 3D simulation in seconds.
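
The pattern can be illustrated with a toy reduced-order model in Python. This is only a sketch of the general precompute-then-assemble idea, not Akselos’s actual software or API: the matrices, sizes, and the affine parameter form below are all invented for illustration.

```python
import numpy as np

# --- Offline stage (expensive, done once): reduce the full-order pieces.
rng = np.random.default_rng(0)
n, r = 2000, 12                        # full model size vs. reduced size
V = np.linalg.qr(rng.normal(size=(n, r)))[0]    # stand-in reduced basis

A = rng.normal(size=(n, n))
K1 = A @ A.T + n * np.eye(n)           # SPD stand-ins for component
B = rng.normal(size=(n, n))            # stiffness blocks in an affine
K2 = B @ B.T + n * np.eye(n)           # form K(mu) = mu1*K1 + mu2*K2
f = rng.normal(size=n)

K1r, K2r, fr = V.T @ K1 @ V, V.T @ K2 @ V, V.T @ f   # tiny r-by-r pieces

# --- Online stage (interactive): assemble and solve in the reduced space.
def solve_reduced(mu1, mu2):
    """Fast solve: only r-by-r work, reusing the precomputed pieces."""
    ur = np.linalg.solve(mu1 * K1r + mu2 * K2r, fr)
    return V @ ur                      # lift back to the full mesh

u = solve_reduced(mu1=1.0, mu2=0.5)    # tweak parameters, re-solve instantly
```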

By using the cloud to store and reuse data, algorithms can finish more quickly. Another benefit is that once the data is in place, modifications can be carried out in minutes.

The roots of the project extend back to a novel technique called the reduced basis (RB) component method, co-invented by Anthony Patera, the Ford Professor of Engineering at MIT, together with Knezevic and Huynh. This work became the basis for a 2010-era “supercomputing-on-a-smartphone” demonstration before morphing into its current incarnation under the Akselos banner.

An Easier, Faster Programming Language?
(June 18, 2014) http://www.hpcwire.com/2014/06/18/easier-faster-programming-language/

The HPC community has turned out supercomputers surpassing tens of petaflops of computing power by stringing together thousands of multicore processors, often in tandem with accelerators like NVIDIA GPUs and Intel Phi coprocessors. Of course, these multi-million dollar systems are only as useful as the programs that run on them, and developing applications that can take advantage of all those cores requires the concerted efforts of highly-skilled programmers.

Current HPC programming tools are failing to meet the challenges presented by large-scale, heterogeneous architectures and the demands of big data. Frameworks like MPI can be difficult to learn and use, and are time-consuming even for established experts. A new open source collaboration called “Julia” aims to simplify the coding process by providing “a powerful but flexible programming language for high performance computing.”

“In recent years, people have started to do many more sophisticated things with big data, like large-scale data analysis and large-scale optimization of portfolios,” says Alan Edelman, a professor of applied mathematics who is leading the Julia project. “There’s demand for everything from recognizing handwriting to automatically grading exams.”

Edelman, who is affiliated with MIT’s Computer Science and Artificial Intelligence Laboratory, points to a lack of professionals capable of coding at this level, noting that it’s not just difficult, it’s time-intensive.

“At HPC conferences, people tend to stand up and boast that they’ve written a program so it runs 10 or 20 times faster,” Edelman says. “But it’s the human time that in the end matters the most.”

The origins of Julia can be traced back to an HPC startup that Edelman was involved in, called Interactive Supercomputing. After the business was acquired by Microsoft in 2009, Edelman launched a new project with the goal of developing a novel, high-level programming environment that was both fast and efficient and suitable for domain experts as well as expert coders.

The development group includes Jeff Bezanson, a PhD student at MIT, and Stefan Karpinski and Viral Shah, both formerly at the University of California at Santa Barbara. They had all tried MPI (message-passing interface), the popular parallel processing tool, but found it was not the easiest interface to work with.

“When you program in MPI, you’re so happy to have finished the job and gotten any kind of performance at all, you’ll never tweak it or change it,” Edelman says.

The group made it their mission to develop a new language with the parallel-processing support of MPI that could generate code running as fast as C. It also had to be as easy to learn and use as Matlab, Mathematica, Maple, Python, and R, and it had to be open source, like Python and R.

The effort led to the launch of Julia in 2012, released under an MIT open-source license.

Edelman reports that Julia, while still a work in progress, has surpassed the group’s expectations.

“Julia allows you to get in there and quickly develop something usable, and then modify the code in a very flexible way,” he says. “With Julia, we can play around with the code and improve it, and become very sophisticated very quickly. We’re all superheroes now — we can do things we didn’t even know we could do before.”

The language uses a “multiple dispatch” approach, which enables users to define a function’s behavior across combinations of argument types. Its dynamic type system still gives the compiler enough information to generate fast, specialized code, which bolsters performance and supports work on large data. Programs can be created quickly; when equally good programmers compete, the Julia programmer always wins, according to Edelman.
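
For readers unfamiliar with the term, multiple dispatch selects an implementation based on the runtime types of all arguments, not just the first. Julia provides this natively and compiles a specialized method for each type combination; the Python registry below is only a rough emulation of the concept, with invented names throughout.

```python
# Toy multiple dispatch: look up the implementation by the types of
# *all* arguments. Everything here is illustrative.
_methods = {}

def defmethod(*types):
    def register(fn):
        _methods[types] = fn
        return fn
    return register

def collide(a, b):
    fn = _methods.get((type(a), type(b)))
    if fn is None:
        raise TypeError(f"no method for ({type(a).__name__}, {type(b).__name__})")
    return fn(a, b)

class Asteroid: pass
class Ship: pass

@defmethod(Asteroid, Asteroid)
def _(a, b): return "asteroids shatter"

@defmethod(Asteroid, Ship)
def _(a, b): return "the ship takes damage"

@defmethod(Ship, Ship)
def _(a, b): return "the ships bounce apart"

print(collide(Asteroid(), Ship()))    # -> "the ship takes damage"
```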

Edelman is not only a Julia creator and developer, he uses the language for Monte Carlo simulations for his “other” job as a theoretical mathematician.

“I love using Julia for Monte Carlo because it lends itself to lots of parallelism,” he explains. “I can grab as many processors as I need. I can grab shared or distributed memory from different computers and put them altogether. When you use one processor, it’s like having a magnifying glass, but with Julia I feel like I’ve got an electron microscope. For a little while nobody else had that and it was all mine. I loved that.”

Perhaps the coolest thing about Julia is the spirit of collaboration and extended community enabled by the combination of ease of use and open-source licensing. Edelman says that people from all over the world are working on the project. Geographically separate parties can even work on the same piece of software in real time.

Simulation Details 13.8 Billion Years of Cosmic Evolution
(May 7, 2014) http://www.hpcwire.com/2014/05/07/simulation-details-13-8-billion-years-cosmic-evolution/

Astrophysicists and cosmologists have come up with a new time-lapse simulation of the universe’s evolution that is the most comprehensive and detailed yet. The Illustris simulation, as it’s called, spans 13.8 billion years of cosmic evolution and follows thousands of galaxies, taking into account gravity, hydrodynamics, cooling, the course of stellar populations, and other complex processes.

Developed by a team of scientists from the Massachusetts Institute of Technology and several other institutions and executed on powerful supercomputers, the model traces the history of the universe, starting soon after the Big Bang and continuing through to the present day, capturing 13.8 billion years of change with unprecedented fidelity.

The research team reports that the massive simulation once again confirms the standard theory of the universe and matches key astronomical observations, including the distribution of galaxies and the richness of neutral hydrogen gas in galaxies of all sizes.

A paper describing the research appears in the May 7 issue of the journal Nature. Besides MIT, the 10-author team includes researchers from Harvard-Smithsonian Center for Astrophysics (CfA); the Heidelberg Institute for Theoretical Studies in Germany; the University of Heidelberg, the Kavli Institute for Cosmology and the Institute of Astronomy, both in Cambridge, England; the Space Telescope Science Institute in Baltimore; and the Institute for Advanced Study in Princeton, N.J.

Aside from being a stunning achievement in its own right, Illustris offers important insight into the rate at which certain types of galaxies develop.

“Some galaxies are more elliptical and some are more like the Milky Way, [spiral] disc-type galaxies,” explains Mark Vogelsberger, an assistant professor of physics at MIT and lead author of the Nature paper. “There is a certain ratio in the universe. We get the ratio right. That was not achieved before.”

The model also provides clues on the tendency of matter to redistribute in the universe, prodded by supernovas and other phenomena. This finding could be used to fine-tune experiments performed with space-based telescopes, such as NASA’s WFIRST survey and the European Space Agency’s Euclid program.

Illustris showcases a cubic chunk of the universe measuring 350 million light-years on each side, which is found to contain 41,416 galaxies. The amount of data is such that the complete simulation required several months of computing time at multiple computing centers, including the Harvard Odyssey and CfA/ITC cluster; the Ranger and Stampede supercomputers at the Texas Advanced Computing Center; the CURIE supercomputer at CEA/France; and the SuperMUC computer at the Leibniz Computing Centre in Germany. The largest run incorporated 8,192 compute cores, and spanned 19 million CPU hours. For comparison’s sake, it would take the best desktop computers of the day 2,000 years to execute the entire simulation.
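
Those figures are roughly self-consistent, as a quick back-of-the-envelope check shows (assuming a desktop delivers about one core’s worth of throughput and roughly 8,766 hours in a year):

```python
cpu_hours = 19e6            # total compute reported for the largest run
cores = 8192
hours_per_year = 8766       # 365.25 days * 24 hours

months_on_cluster = cpu_hours / cores / (hours_per_year / 12)
years_on_desktop = cpu_hours / hours_per_year    # ~one-core desktop

print(f"{months_on_cluster:.1f} months on 8,192 cores")   # ~3.2 months
print(f"{years_on_desktop:.0f} years on a desktop")       # ~2,200 years
```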

Adding to the simulation’s complexity are 12 billion visual “resolution elements,” which enabled the researchers to compare “snapshots” from the simulation with images of the known universe. “[There was] agreement with observational data on small scales and large scales,” affirms Vogelsberger. A close match like this serves as validation of the study’s correctness.

Illustris diverges from earlier efforts in both scope and fidelity. Its predecessor, Millennium, tracked only the evolution of the dark matter web; ordinary matter and galaxies were tacked on in ad hoc fashion. The Illustris simulation, by contrast, incorporates ordinary matter from the start. Where the former visualization was relatively calm-looking, Illustris is packed with explosions, including hot blasts of gas that emanate from supermassive black holes at the centers of galaxies. These ejections are an essential part of galaxy formation, acting as a brake on star formation.

As Simon White, a cosmologist at the Max Planck Institute for Astrophysics in Garching, Germany, who worked on the Millennium Simulation, explains: Illustris is the first simulation that is large enough to capture a representative segment of the universe and also fine-grained enough to incorporate individual galaxies. “It’s the combination of those two things that is new,” he tells Science.

Advances in supercomputing power are what enabled the simulation to handle the 350 million light-year span and all the additional features. “Previous simulations of the growth of cosmic structures have broadly reproduced the ‘cosmic web’ of galaxies that we see in the Universe, but failed to create a mixed population of elliptical and spiral galaxies, because of numerical inaccuracies and incomplete physical models,” the research team explains in the Nature article.

Research Advances on Key Quantum Computing Elements
(April 11, 2014) http://www.hpcwire.com/2014/04/11/research-advances-key-quantum-computing-elements/

Equal parts fascinating and confounding, the field of quantum computing keeps making headway. Two exciting developments are described in the current issue of Nature, one from a collaboration between Harvard University and MIT researchers and the other from the Max Planck Institute of Quantum Optics in Germany. Their work concerns the fundamental building blocks that make quantum computing possible.

As summarized in Popular Mechanics, the scientists figured out a way to combine atoms and particles of light (photons) to create quantum versions of the switch and the logic gate, two essential elements of classical computing systems.

Quantum computing has long been considered the holy grail of computing. This bizarre world of particle superposition and spooky action at a distance promises to unlock the door to unprecedented kinds of computing tasks. Beyond the killer app of encryption, all sorts of seemingly uncanny things become possible, such as simulations of the universe itself.

At their core, all modern computers involve data and rules. In classical computing, the smallest unit of data is a bit, represented as a 0 or a 1. In quantum computing, the bit becomes a qubit, and instead of being limited to one of two states, it can exist in multiple states at once. “Superposition,” as this phenomenon is called, allows a lot of information to be acted on in a very small space, setting the stage for incredibly fast supercomputers.
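
In standard textbook notation (not specific to these papers), a qubit is a normalized superposition of the two basis states, and a register of n qubits carries exponentially many amplitudes:

```latex
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1;
\qquad
|\Psi_n\rangle = \sum_{x \in \{0,1\}^n} c_x\,|x\rangle
\quad \text{($2^n$ complex amplitudes $c_x$)}
```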

Superposition states are fragile, though, and must be coaxed into being. “At this point, very small-scale quantum computers already exist,” says Mikhail Lukin, the head of the Harvard research team. “We’re able to link, roughly, up to a dozen qubits together. But a major challenge facing this community is scaling these systems up to include more and more qubits.”

The new quantum logic gate and switch introduce a new method of connecting particles, using trapped rubidium atoms and photons. The Harvard and MIT scientists created the switch by coupling one rubidium atom with a single photon, enabling the atom and the photon each to switch the quantum state of the other particle. Able to move between a ground state and an excited state, the atom-photon coupling can transmit information like a transistor in a classical computing system.

The German research group used mirror-like sheets and lasers to trap the atom, forming quantum gates, which change the direction of motion or polarization of photons. When the rubidium atom is in superposition, the photon both does and does not enter the mirror, and both does and does not get a polarization change. Via an attribute of quantum physics called entanglement swapping, multiple photons can share superposition information. These entangled photons are made to bounce repeatedly off the mirror-trapped rubidium atom, acting as the input for the logic gate.

“The Harvard/MIT experiment is a masterpiece of quantum nonlinear optics, demonstrating impressively the preponderance of single atoms over many atoms for the control of quantum light fields,” said Gerhard Rempe, a professor at the Max Planck Institute of Quantum Optics and a member of the German research team, upon reading the paper from his US counterparts. “The coherent manipulation of an atom coupled to a photonic crystal resonator constitutes a breakthrough and complements our own work … with an atom in a dielectric mirror resonator.”

Managing Memory at Multicore
(September 18, 2013) http://www.hpcwire.com/2013/09/18/managing_memory_at_multicore/

Moving data in and out of computer memory is a time- and energy-consuming process, so caches evolved as a form of local memory to store frequently used data. With the advance of multicore and manycore processors, managing caches becomes more difficult. Researchers at MIT suggest that it might make sense to let software, rather than hardware, manage these high-speed on-chip memory banks, as this article at MIT News explains.

Daniel Sanchez, an assistant professor in MIT’s Department of Electrical Engineering and Computer Science, is one of the main proponents of this new software-based approach. During the International Conference on Parallel Architectures and Compilation Techniques that took place last week, Sanchez and his student Nathan Beckmann gave a presentation on Jigsaw, a cache organization scheme, based on a paper they had co-authored. The tool provides isolation and reduces access latency in shared caches.

Jigsaw operates on the last-level cache. In multicore chips, each core has its own small cache, but the last-level cache is shared by all the cores. Shared caches face two fundamental limitations, latency and interference from other cores’ accesses, and prior research has demonstrated that improving one typically degrades the other. “NUCA techniques reduce access latency but are prone to hotspots and interference, and cache partitioning techniques only provide isolation but do not reduce access latency,” the authors write.

Physically, this cache is composed of separate memory banks distributed across the chip, allowing each core to utilize the bank closest to it. Most chips assign data to these banks randomly, but Jigsaw optimizes this process by calculating the most efficient assignment of data to cache banks. For example, data needed by only a single core is located near that core, while data used by all the cores is put near the center of the chip. Minimizing data travel is the main role of Jigsaw, but it also optimizes for space, with more frequently used data receiving a larger allocation.
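
The placement idea can be illustrated with a toy example: put data used by one core in that core’s nearest bank, and widely shared data in a central bank. The geometry, names, and distance metric below are invented for illustration; this is not Jigsaw’s actual optimizer, which also sizes each allocation.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def place(data_users, core_pos, bank_pos):
    """data_users maps a data id to the list of cores accessing it.
    Each item goes to the bank minimizing total distance to its users."""
    return {
        d: min(bank_pos,
               key=lambda b: sum(dist(bank_pos[b], core_pos[c]) for c in users))
        for d, users in data_users.items()
    }

core_pos = {0: (0, 0), 1: (3, 0), 2: (0, 3), 3: (3, 3)}
bank_pos = {"b0": (0, 0), "b1": (3, 0), "b2": (0, 3), "b3": (3, 3),
            "center": (1.5, 1.5)}
print(place({"private0": [0], "shared": [0, 1, 2, 3]}, core_pos, bank_pos))
# private0 lands in b0 (next to core 0); shared lands in the central bank
```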

In a series of experiments, the duo simulated the execution of hundreds of applications on 16- and 64-core chips. They found that Jigsaw improved performance by up to 2.2x (18 percent on average) over a conventional shared cache, while reducing energy use by as much as 72 percent. Jigsaw even outperformed more sophisticated NUCA and cache-partitioning schemes.

Optimizing cache space allocations can itself be a very time-consuming process, but the MIT researchers developed an approximate optimization algorithm that runs efficiently even as the number of cores scales and different data types are used.
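
Their published algorithm is more elaborate, but the flavor of the underlying problem can be shown with a simple greedy allocator that hands out cache capacity, unit by unit, to whichever application would save the most misses. The miss curves here are made up, and the greedy rule is only exact when those curves are concave:

```python
import heapq

def allocate(miss_curves, total_units):
    """miss_curves[i][k] = misses of app i when given k cache units.
    Greedily grant each unit to the app with the largest miss reduction."""
    alloc = [0] * len(miss_curves)
    heap = [(-(c[0] - c[1]), i) for i, c in enumerate(miss_curves)]
    heapq.heapify(heap)
    for _ in range(total_units):
        if not heap:
            break                      # every app is at its curve's end
        _, i = heapq.heappop(heap)
        alloc[i] += 1
        k = alloc[i]
        if k + 1 < len(miss_curves[i]):
            heapq.heappush(heap, (-(miss_curves[i][k] - miss_curves[i][k + 1]), i))
    return alloc

# App 0 benefits sharply from its first units; app 1 gains steadily.
print(allocate([[100, 60, 40, 35, 34], [100, 90, 80, 70, 60]], 4))  # [2, 2]
```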

MIT Works toward Cloud Data Protection
(July 7, 2013) http://www.hpcwire.com/2013/07/07/mit_works_toward_cloud_data_protection/

Ensuring the security of one’s information in the cloud has proven to be problematic, especially considering the recent revelations regarding the National Security Agency and its accessing of data.

Researchers at MIT sought to combat that security risk by proposing Ascend, a hardware component that can be coupled with cloud servers and blocks two types of attacks on information held on public servers.

“This is the first time that any hardware design has been proposed — it hasn’t been built yet — that would give you this level of security while only having about a factor of three or four overhead in performance,” said Srini Devadas, MIT’s Edwin Sibley Webster Professor of Electrical Engineering and Computer Science, whose group developed the new system. “People would have thought it would be a factor of 100.”

While a performance penalty of a factor of three or four is certainly preferable to one of a hundred, those keen on running experiments in the cloud on nonsensitive data may not be too willing to accept such a slowdown. However, there are applications whose data is sensitive, particularly in genomics and other healthcare-related fields, where Ascend would be advantageous.

According to MIT, Ascend works by randomly reassigning memory locations. Every time Ascend traverses a path down its access tree to retrieve information from a node, it swaps that information with the contents of another randomly chosen node in the file system. In this way, it becomes difficult for potential attackers to infer specific data locations from sequences of memory accesses.
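
The description resembles published “oblivious RAM” schemes. The Python sketch below captures just the access-pattern idea and is not Ascend’s hardware design: a real system also encrypts blocks, buffers them in a stash, and pushes them back down the tree.

```python
import random

DEPTH = 4                 # binary tree with 2**DEPTH leaves
position = {}             # block id -> leaf it is currently mapped to
tree = {}                 # (level, index) -> set of block ids stored there

def path(leaf):
    """All tree nodes from the root down to the given leaf."""
    return [(lvl, leaf >> (DEPTH - lvl)) for lvl in range(DEPTH + 1)]

def access(block):
    leaf = position.setdefault(block, random.randrange(2 ** DEPTH))
    for node in path(leaf):                    # touch the whole path, so an
        tree.setdefault(node, set()).discard(block)   # observer sees only a
                                                      # random-looking path
    position[block] = random.randrange(2 ** DEPTH)    # remap to a fresh leaf
    tree.setdefault((0, 0), set()).add(block)  # write back at the root

access("record-42")
access("record-42")       # the second access touches an unrelated path
```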

Further, Ascend would reportedly protect against timing attacks by sending requests to the memory at regular intervals, meaning a spyware application would be unable to determine the runtime of any other particular application.
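
A sketch of that fixed-rate idea, with an assumed one-millisecond interval and a stand-in `issue` function:

```python
import queue
import time

INTERVAL = 0.001          # assumed: one memory request per millisecond
pending = queue.Queue()   # real requests queued by the processor

def issue(request):
    pass                  # stand-in for performing the actual DRAM access

def memory_loop(ticks):
    """Issue exactly one request per interval; when nothing real is
    pending, send a dummy so request timing reveals nothing."""
    for _ in range(ticks):
        try:
            request = pending.get_nowait()
        except queue.Empty:
            request = "DUMMY_READ"
        issue(request)
        time.sleep(INTERVAL)

memory_loop(5)
```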

The need to keep one’s data secure is always present, even if it varies from application to application. For sensitive workloads, the performance penalty of Ascend, once it is built, may be a price worth paying to keep data out of curious hands.

Intermolecular Lends Genomics Data to Materials Project
(June 26, 2013) http://www.hpcwire.com/2013/06/26/intermolecular_lends_genomics_data_to_materials_project/

The hunt for new and useful materials got a big boost this week when Intermolecular agreed to lend its advanced combinatorial processing technology to the Materials Project, a materials-discovery computing project launched by Lawrence Berkeley National Laboratory and the Massachusetts Institute of Technology (MIT). The blend of data and techniques could speed the discovery of new materials by a factor of 10, researchers say.

The Materials Project was founded in late 2011 with the goal of accelerating the discovery of novel compounds by giving materials scientists and engineers open access to supercomputer resources. It currently takes an average of 18 years to bring a new material, such as a battery compound, a fuel, or a crystalline structure, from lab to commercial production using traditional techniques, the group says. However, using the power of supercomputers, researchers can now predict the properties of materials before they’re even synthesized in the lab.

The pace of material-related innovation should improve now that the privately held, San Jose, California-based company Intermolecular has agreed to lend its proprietary High Productivity Combinatorial (HPC) tools and research data to the Material Project. Intermolecular’s HPC involves using advanced combinatorial processing systems that allow dozens or hundreds of experiments to be conducted in parallel, as opposed to traditional sequential tests. The results are then analyzed, and work continues on the most promising results.

The use of Intermolecular’s trademarked HPC approach and data will be a boon to the Materials Project’s HPC (as in high performance computing) resources at the National Energy Research Scientific Computing Center (NERSC), according to Berkeley Lab scientist Kristin Persson, who is also the co-founder of the Materials Project.

“Access to high-quality experimental data is absolutely essential to benchmark high-throughput computational predictions for any application,” Persson says in a story on the Berkeley Lab website. “We begin every materials discovery project with a comparison to existing data before we venture into the space of undiscovered compounds. This is the first effort to integrate private sector experimental data into the Materials Project, and could form the basis of a general methodology for integrating experimental data inputs from a wide-range of scientific and industrial sources.”

Persson sees Intermolecular’s data helping in two ways. First, if the values generated by the Materials Project are significantly off from the Intermolecular data for a given problem, it will tell researchers they may need to refine their methodologies and models. If the values are close, it will give researchers the confidence that they’re on the right path, she says.

The Materials Project is one of several interrelated projects that fall under the umbrella of the Materials Genome Initiative. The Obama Administration’s Office of Science and Technology Policy launched the Materials Genome Initiative in 2011 to foster cooperation between industry and academic researchers, with the goal of doubling the pace of development of advanced materials, such as those enabling safer and more fuel-efficient vehicles, packaging that keeps food fresher and more nutritious, and vests that better protect soldiers.

What Will the Sequester Mean to HPC (and Federal) Research?
(March 20, 2013) http://www.hpcwire.com/2013/03/20/what_will_the_sequester_mean_to_hpc_and_federal_research_/

On Friday, March 15, President Obama gave a speech at DOE’s Argonne National Laboratory, and light-heartedly expressed his concerns about the effects of sequestration on budgets at the country’s national laboratories.

Noting that some of the employees were standing in the crowded auditorium, he quipped, “I thought [that at] Argonne, one of the effects of the sequester [was that] you had to get rid of chairs!”

People laughed. Outside of that speech, however, nobody in a federal lab is chuckling over the possible impact of sequestration. Prominent heads of national labs, university researchers and technology executives are very concerned about how budget stalemates between the White House and Congress will affect government-funded research across the country.

Sequestration, because it demands cuts in government spending almost across the board, has brought the issue directly to the datacenter. If left in place, it will put federally funded R&D this year at a level $12.5 billion less than the amount spent in 2011 – an 8.7% decrease. Several organizations have already instituted budget cuts to prepare for the decrease in funding. The National Institutes of Health has said it is cutting grant levels by 10 percent and will offer fewer grants. The National Science Foundation says it will eliminate 1,000 grants this year.

Moreover, sequestration has sparked an op-ed debate over the value of government-funded research itself. It’s a debate that could extend well beyond the current stalemate.

Locating the speech at Argonne and putting energy research on the table was itself a strategic move to highlight the importance of funding national labs. President Obama also tried to offer new funding in a palatable way. He did not call for additional taxes or even preventing future cuts, but suggested using a non-tax form of revenue to fund energy research. The approach would take $2 billion over the next 10 years from leases paid by energy companies that develop fossil fuel resources on federal land. That money would fund a very specific type of research: developing electric vehicles, homegrown biofuels, and domestically produced natural gas.

But that still leaves the longer-term question open. Is it a good idea to use tax revenue to fund research that may or may not have future benefits to the country? The heads of government organizations, national labs, universities and other supporters of technology are now defending the concept in hearings and in editorial pages across the country.

William Brinkman, director of the Office of Science at DOE, testified before a House Appropriations Subcommittee on Energy and Water Development on March 5. He said that sequestration would cut this year’s budget for the Office of Science by $215 million from 2012, something the country cannot afford at a time when “other countries around the world are challenging our scientific leadership in essentially all the scientific disciplines that we steward.” HPC research is a big part of that. “Since the inception of high-performance computing, the United States has been a world leader in this field,” Brinkman continued.

But that may no longer be the case. Budget cuts will affect research intended to “accelerate the next generation of supercomputers at a time when international competition in this domain is growing,” he said.

In fact, the US is not the clear leader it once was. In 2011, a 700,000-core Fujitsu K computer installed at the RIKEN Advanced Institute for Computational Science (AICS) hit the summit of the TOP500 list. It dropped to third position on the November 2012 list because of competition from newer machines, but 31 of the 50 most powerful computers on that list are based outside the US. Throughout the world, countries such as China, Japan, the UK, Germany, India and most recently Switzerland are touting the competitive benefits of new supercomputers.

China has joined the competition to become the first country with an exascale computer, as has a European consortium, the Partnership for Advanced Computing in Europe (PRACE). The Indian Institute of Technology Delhi (IIT Delhi) is partnering with NVIDIA to create a research lab to try to reach exascale computing in India by 2017.

Brinkman also argues that federally-funded HPC research is an enormous boon to industry at home. “Growth in computing performance has the potential to advance multiple sectors of our economy, including science, manufacturing, and national defense,” he testified before Congress. As one example, he pointed out that corporations are conducting 15 projects in the Industrial High Performance Computing Partnerships Program at Oak Ridge National Laboratory (ORNL).

Others have also become very vocal in defending federal R&D in general as a boon to the economy. The Washington think tank ITIF estimates that projected cuts in R&D will reduce the GDP by between $203 billion and $860 billion over the next nine years. It also says that sequestration will put the US “$511 billion behind in R&D investment when compared to expected Chinese R&D expenditure growth rates.”

In an editorial in The Atlantic, National Lab Directors Paul Alivisatos (Lawrence Berkeley National Laboratory), Eric D. Isaacs (Argonne) and Thom Mason (ORNL) write that the impact of sequestration “will be felt years – or even decades – in the future, when the nation begins to feel the loss of important new scientific ideas that now will not be explored, and of brilliant young scientists who now will take their talents overseas or perhaps even abandon research entirely.” Federal R&D spending amounts to less than one percent of the federal budget, they argue, and cuts will result in “gaps in the innovation pipeline [that] could cost billions of dollars and hurt the national economy for decades to come.”

In an editorial in The Financial Times, MIT president Rafael Reif and former Intel CEO Craig Barrett argue that “scientific discovery improves life and creates wealth like nothing else. But that notion has essentially been on trial in the US for decades.” They point out that the Commerce Department has estimated that 75 percent of postwar growth came from technological innovation.

Some people, however, dispute those numbers. Roger Pielke, a professor of environmental studies at the Center for Science and Technology Policy Research at the University of Colorado at Boulder, has become something of a de facto spokesman countering the economic arguments. He is also a senior fellow at The Breakthrough Institute, which he describes as a “progressive think tank.” He argues that the numbers claiming economic growth from R&D are bogus. “It would be remarkable if true,” he writes at the organization’s website. “Unfortunately, it is not.” He says that there is no statistical basis for the claims. He also says that early proponents of the theories that economic growth is sparked by “creative destruction” in the economy (Joseph Schumpeter) or “technical change” (Robert Solow), which led to the arguments about the economic impact of R&D, have been misunderstood.

Many fiscal conservatives in Congress are likely to agree. The result so far is that the debate continues and budget cuts may still slice into funding of HPC centers, federal labs, and federal R&D in general. It’s an impact that may be felt for years to come.