Back when the Parallels beta was first announced in April (on the same day that Boot Camp was released), everyone started talking about the possibilities of running Mac and Windows apps side-by-side one day. Parallels made that easier for us, but at the time, we were still limited to this odd, virtual environment that many of us wished to escape.
There is a little-known feature built into the newest Parallels beta, though, that will undoubtedly make everyone's virtualization lives that much better: Coherence. It lets you hide the Windows desktop and run Windows apps, through Parallels, side-by-side with your OS X applications as if they were all running together in one big, happy family.
How is this possible? Why would you want to do that? Adam Pash of Lifehacker thought the same thing.
He goes through a pretty detailed set of instructions for setting it all up, and it's really not all that complex. None of his tips are even required to run Parallels in Coherence mode; they're just recommended to "help keep the line between Windows and your Mac pretty thin."
His tips, after installing Parallels and a copy of Windows, include:
Then, all you have to do is select "Coherence" from the View menu and you're all set to run programs from the two OSes side-by-side. You can even launch Windows apps from the Mac, which he details in his writeup as well. It makes use of a third-party app from VerySimple Dev since Parallels does not yet support this feature, but what Parallels does support now is "seamless drag and drop" between Windows and Mac.
Oh Parallels gods, thank you for bestowing this wonderful feature upon us. If only Apple would stop denying rumors of including such virtualization abilities in Leopard, we could all die happy.

Researchers led by Ashley Weaver, assistant professor at the Virginia Tech-Wake Forest University Center for Injury Biomechanics, have developed a method to compute crash injury metrics and risks as functions of precrash occupant position.
The process allows for quantification of the sensitivity and uncertainty of the injury risk predictions based on occupant position, helping researchers further understand the factors that lead to more severe motor vehicle crash injuries. The modeling results provide details not available from crash test dummies (anthropomorphic test devices, or ATDs).
More than 33,000 Americans die in motor vehicle crashes annually, according to the Centers for Disease Control and Prevention. Modern restraint systems save lives, but some deaths and injuries remain, and restraints themselves can cause some injuries. Although crash-test dummies help engineers design safer cars, they provide only limited information about forces the body experiences during impact.
Computer models of vehicle crashes, on the other hand, can provide more sophisticated information on how to improve restraints and other safety systems. The models also help researchers simulate the effects of thousands of variables that would be far too slow to test in physical crash tests.
The Crash Injury Research and Engineering Network (CIREN) has created a database of real-world vehicle crashes for researchers to test with computer models. Working with Joel Stitzel and graduate students and staff from the Center for Injury Biomechanics, Weaver developed a three-phase real-world motor vehicle crash (MVC) reconstruction method to analyze injury variability as a function of precrash occupant position for two full-frontal CIREN cases.
The researchers used the NSF-supported Blacklight supercomputer at the Pittsburgh Supercomputing Center and the DEAC Cluster at Wake Forest University to run thousands of simulations drawn from hundreds of cases. The simulations used virtual versions of the Toyota Camry and Chevrolet Cobalt.
Weaver worked with members of the Extreme Science and Engineering Discovery Environment (XSEDE) Extended Collaborative Support Service team—staff with expertise in many areas of advanced computing—who helped set up the cyberinfrastructure and workflows needed to run the simulations.
Supported by a five-year, $121-million NSF grant, XSEDE provides a collection of integrated digital resources that scientists can use to access advanced computing resources, data and expertise.
Using the Total Human Model for Safety (THUMS), developed by Toyota Central Research and Development Labs, Weaver and her team showed that simulations can reproduce real-world injury patterns and predict details crash-test dummies can’t provide.
Along the way, they demonstrated how injury-causing stress moves from the foot to the lower leg as a driver’s head comes forward into a frontal airbag, and that more reclined seating positions can lead to a higher risk of head and chest injuries.
Weaver and her colleagues published their findings in October 2015 in an open-access paper in Traffic Injury Prevention.
The reconstruction process allows for quantification of the sensitivity and uncertainty of the injury risk predictions based on occupant position, which is often uncertain in real-world MVCs. This study provides perspective on the injury risk sensitivity of precrash occupant positioning within the vehicle compartment. By studying a variety of potential occupant positions, we can understand important factors that lead to more severe injuries and potentially mitigate these injuries with advanced safety systems to protect occupants in more dangerous positions. Evaluating additional cases in further detail will allow for development of new injury metrics and risk functions from real-world crash data to assess the effectiveness of restraint systems to prevent and mitigate injuries that are not easily studied using postmortem human subjects or ATDs.
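A position sweep of this kind can be sketched in a few lines. The following Monte Carlo sensitivity analysis is purely illustrative: the logistic injury risk coefficients, the stand-in crash function, and the position bounds are hypothetical, not taken from the study, and a real run would invoke a THUMS finite-element simulation instead of a toy formula.

```python
import math
import random

def injury_risk(chest_deflection_mm):
    """Hypothetical logistic injury risk function: maps a simulated
    injury metric (chest deflection, mm) to a probability of injury.
    Coefficients are illustrative only."""
    return 1.0 / (1.0 + math.exp(-(0.1 * chest_deflection_mm - 4.0)))

def simulate_crash(seat_track_mm, seatback_angle_deg):
    """Stand-in for a finite-element crash run: a toy model in which
    sitting farther forward and reclining more both increase the
    chest-deflection metric."""
    return 20.0 + 0.15 * seat_track_mm + 0.5 * seatback_angle_deg

random.seed(0)
risks = []
for _ in range(10_000):
    # Sample a precrash occupant position within plausible bounds.
    seat_track = random.uniform(0.0, 100.0)   # mm forward of rearmost
    seatback = random.uniform(0.0, 30.0)      # degrees of recline
    metric = simulate_crash(seat_track, seatback)
    risks.append(injury_risk(metric))

risks.sort()
mean = sum(risks) / len(risks)
p5, p95 = risks[len(risks) // 20], risks[-(len(risks) // 20)]
print(f"mean risk {mean:.2f}, 90% interval [{p5:.2f}, {p95:.2f}]")
```

The spread between the 5th and 95th percentiles is the kind of position-driven uncertainty band the reconstruction method quantifies.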

"There's really limited information you can get from a crash-test dummy — you get only about 20 data points," says Ashley A. Weaver, an assistant professor at the Virginia Tech-Wake Forest University Center for Injury Biomechanics and a former National Science Foundation (NSF) graduate research fellow. "The human body model gives us much more, predicting injuries in organs that aren't in that dummy, such as lung contusions."

When researchers need to compare complex new genomes, or map new regions of the Arctic in high-resolution detail, or detect signs of dark matter, or make sense of massive amounts of functional MRI data, they turn to the high-performance computing and data analysis systems supported by the National Science Foundation (NSF).
High-performance computing (or HPC) enables discoveries in practically every field of science — not just those typically associated with supercomputers, like chemistry and physics, but also the social sciences, life sciences and humanities.
By combining superfast and secure networks, cutting-edge parallel computing and analytics software, advanced scientific instruments and critical datasets across the U.S., NSF's cyber-ecosystem lets researchers investigate questions that can't otherwise be explored.
NSF has supported advanced computing since its beginning and is constantly expanding access to these resources. This access helps tens of thousands of researchers each year — from high-school students to Nobel Prize winners — expand the frontiers of science and engineering, regardless of whether their institutions are large or small, or where they are located geographically.
Below are 10 examples of research enabled by NSF-supported advanced computing resources from across all of science.
1. Decoding the pineapple genome
Pineapples don't just taste good — they have a juicy evolutionary history. Recent analyses using computing resources that are part of the iPlant Collaborative revealed an important relationship between pineapples and crops like sorghum and rice, allowing scientists to home in on the genes and genetic pathways that allow plants to thrive in water-limited environments.
Led by the University of Arizona, Texas Advanced Computing Center, Cold Spring Harbor Laboratory and University of North Carolina at Wilmington, iPlant was established in 2008 with NSF funding to develop cyberinfrastructure for life sciences research, provide powerful platforms for data storage and bioinformatics and democratize access to U.S. supercomputing capabilities.
This week, iPlant announced it will host a new platform, Digital Imaging of Root Traits (DIRT), that lets scientists in the field measure up to 76 root traits merely by uploading a photograph of a plant's roots.
2. Designing new nanodevices
Software that simulates the effect of an electric charge passing through a transistor — only a few atoms wide — is helping researchers to explore alternative materials that may replace silicon in future nanodevices.
The software simulations designed by Purdue researcher Gerhard Klimeck and his group, available on the nanoHUB portal, provide new information about the limits of current semiconductor technologies and are helping design future generations of nanoelectronic devices.
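The scales involved are easy to underestimate. In a channel only a few atoms wide, conduction is quantum mechanical and is often framed in the textbook Landauer picture, where each fully transmitting mode contributes one quantum of conductance. The sketch below simply evaluates that formula to show the regime these tools model; it is not a representation of nanoHUB's actual simulation methods.

```python
# Landauer picture: G = (2e^2 / h) * T, where T is the transmission
# probability of a conduction mode (T = 1 for a perfect channel).
E_CHARGE = 1.602176634e-19   # elementary charge, coulombs
PLANCK_H = 6.62607015e-34    # Planck constant, joule-seconds

conductance_quantum = 2 * E_CHARGE**2 / PLANCK_H   # siemens

# Current through one perfectly transmitting mode at a 10 mV bias.
bias_v = 0.010
current_a = conductance_quantum * bias_v

print(f"conductance quantum: {conductance_quantum * 1e6:.1f} uS")
print(f"current at 10 mV:    {current_a * 1e6:.3f} uA")
```

A single atomic-scale channel thus carries currents measured in fractions of a microampere, which is why quantum transport, rather than classical circuit theory, governs these devices.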
NanoHUB, supported by NSF, is the first broadly successful, scientific end-to-end cloud computing environment. It provides a library of 3,000 learning resources to 195,000 users worldwide. Its 232 simulation tools are used in the cloud by over 10,800 researchers and students annually.
3. Forecasting earthquakes
Earthquakes originate through complex interactions deep below the surface of the Earth, making them notoriously difficult to predict.
The Southern California Earthquake Center (SCEC) and its lead scientist Thomas Jordan use massive computing power to simulate the dynamics of earthquakes. In doing so, SCEC helps to provide long-term earthquake forecasts and more accurate hazard assessments.
In 2014, the SCEC team investigated the earthquake potential of the Los Angeles Basin, where the Pacific and North American Plates grind past each other along the San Andreas Fault. Their simulations showed that the basin essentially acts like a big bowl of jelly that shakes during earthquakes, producing stronger ground motions than the team expected.
Using the NSF-funded Blue Waters supercomputer at the National Center for Supercomputing Applications and the Department of Energy-funded Titan supercomputer at the Oak Ridge Leadership Computing Facility, the researchers turned their simulations into seismic hazard models. These models describe the probability of an earthquake occurring in a given geographic area, within a given window of time and with ground motion intensity exceeding a given threshold.
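A hazard model of this kind reduces to a hazard curve: for each shaking threshold, the probability that at least one rupture exceeds it within the time window. The sketch below assumes a Poisson occurrence model and a handful of made-up rupture rates and peak ground accelerations (PGA); the actual SCEC models aggregate vastly larger rupture catalogs.

```python
import math

# Hypothetical rupture catalog: (annual rate, peak ground acceleration
# in g). These numbers are illustrative, not from the SCEC study.
ruptures = [
    (0.0200, 0.10),
    (0.0100, 0.20),
    (0.0050, 0.35),
    (0.0010, 0.60),
    (0.0002, 0.90),
]

def annual_exceedance_probability(threshold_g):
    """Poissonian hazard: sum the annual rates of all ruptures whose
    ground motion exceeds the threshold, then convert the total rate
    to a one-year exceedance probability."""
    rate = sum(r for r, pga in ruptures if pga > threshold_g)
    return 1.0 - math.exp(-rate)

for threshold in (0.1, 0.3, 0.5):
    p = annual_exceedance_probability(threshold)
    print(f"P(PGA > {threshold:.1f} g in one year) = {p:.4f}")
```

As expected, the exceedance probability falls as the shaking threshold rises; plotting one against the other gives the hazard curve engineers use to set building-code ground-motion levels.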
4. Making car crashes more survivable
Nearly 33,000 people die in the U.S. each year due to motor vehicle crashes, according to the National Highway Traffic Safety Administration. Modern restraint systems save lives, but some deaths and injuries remain — and restraints themselves can cause injuries.
Researchers from the Center for Injury Biomechanics at Wake Forest University used the Blacklight supercomputer at the Pittsburgh Supercomputing Center to simulate the impacts of car crashes with much greater fidelity than crash-test dummies can.
By studying a variety of potential occupant positions, they're uncovering important factors that lead to more severe injuries, as well as ways to potentially mitigate these injuries, using advanced safety systems.
5. Detecting gravitational waves
Ever since Albert Einstein predicted them, scientists have expected that cataclysmic events like black hole mergers leave a trace in the form of gravitational waves — ripples in the curvature of space-time that travel outward from the source. Advanced LIGO is a project designed to capture signs of these events.
Gravitational waves travel at the speed of light, so a passing signal reaches the project's two observatories, located 1,865 miles apart and working in unison, at slightly different times. Comparing the arrival times lets researchers confirm a detection and constrain the position of the source on the sky.
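The separation matters because it sets the maximum possible difference in arrival time between the two sites, which is what carries the directional information. A quick check of the implied delay:

```python
# Maximum arrival-time difference between the two LIGO sites, from
# their stated separation and the speed of light.
SEPARATION_MILES = 1865
METERS_PER_MILE = 1609.344
SPEED_OF_LIGHT = 299_792_458.0   # m/s

separation_m = SEPARATION_MILES * METERS_PER_MILE
max_delay_ms = separation_m / SPEED_OF_LIGHT * 1000

# A signal arriving along the line joining the detectors is delayed by
# the full light-travel time; one from directly overhead, by zero.
print(f"max inter-site delay: {max_delay_ms:.1f} ms")
```

The result is about 10 milliseconds, so timing the two detectors' signals to sub-millisecond precision is what narrows down where on the sky a wave came from.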
In addition to being an astronomical challenge, Advanced LIGO is also a "big data" problem. The observatories take in huge volumes of data that must be analyzed to determine their meaning. Researchers estimate that Advanced LIGO will generate more than one petabyte of data a year, the equivalent of 13.3 years' worth of high-definition video.
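The equivalence quoted above is easy to sanity-check: spreading one petabyte over 13.3 years of continuous playback implies a bitrate in ordinary HD-video territory. This back-of-the-envelope check assumes decimal petabytes (10^15 bytes).

```python
# Sanity check: one petabyte per year vs. "13.3 years' worth of
# high-definition video".
PETABYTE = 1e15                      # bytes (decimal convention)
SECONDS_PER_YEAR = 365.25 * 86400
video_seconds = 13.3 * SECONDS_PER_YEAR

implied_bitrate = PETABYTE * 8 / video_seconds   # bits per second
print(f"implied video bitrate: {implied_bitrate / 1e6:.0f} Mbit/s")
```

The implied rate, roughly 19 Mbit/s, is indeed consistent with broadcast-quality HD video, so the comparison holds up.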
To achieve accurate and rapid gravitational-wave detection, researchers use the Extreme Science and Engineering Discovery Environment (XSEDE), a powerful collection of advanced digital resources and services, to develop and test new methods for transmitting and analyzing these massive quantities of astronomical data.
Advanced LIGO came online in September, and advanced computing will play an integral part in its future discoveries.
6. Giving a supercomputer a second life
What happens when a supercomputer reaches retirement age? In many cases, it continues to make an impact in the world. The NSF-funded Ranger supercomputer is one such example.
In 2013, after five years as one of NSF's flagship computer systems, the Texas Advanced Computing Center (TACC) disassembled Ranger and shipped it from Austin, TX, to South Africa, Tanzania and Botswana to seed a young and growing supercomputing community.
With funding from NSF, TACC experts led training sessions in South Africa in December 2014. In November 2015, 19 delegates from Africa came to the U.S. to attend a two-day workshop at TACC as well as the Supercomputing 2015 International Conference for High Performance Computing.
The effort is intended, in part, to help provide the technical expertise needed to successfully staff and operate the Square Kilometre Array, a new radio telescope being built in Australia and Africa that will offer the highest-resolution images in all of astronomy.
7. Mapping the Arctic in high resolution
In September 2015, President Obama announced plans to improve maps and elevation models of the Arctic, including Alaska. To that end, NSF and the National Geospatial-Intelligence Agency (NGA) are supporting the development of high-resolution Digital Elevation Models in order to provide consistent coverage of the globally significant region.
The models will allow researchers to see in detail how warming in the region affects the landscape in remote areas, and allow them to compare changes over time.
The project relies, in part, on the computing and data analysis powers of Blue Waters, which will let researchers store, access and analyze large numbers of images and models.
8. Training the next generation of scientists and engineers
To solve some of society's most pressing long-term problems, the U.S. needs to educate and train the next generation of scientists and engineers to use advanced computing effectively. This pipeline of training begins as early as high school and continues throughout the careers of scientists.
Last summer, TACC hosted 50 rising high school juniors and seniors to participate in an innovative new STEM program, CODE@TACC. The program introduced students to high-performance computing, life sciences and robotics.
On the continuing education front, XSEDE offers hundreds of training classes each year to help researchers update their skills and learn new ones.
High-performance computing has another use in education: to assess how students learn and ultimately to provide personalized educational paths. A recent report from the Computing Research Association, "Data-Intensive Research in Education: Current Work and Next Steps," highlights insights from two workshops on data-intensive education initiatives. The LearnSphere project at Carnegie Mellon University, an NSF Data Infrastructure Building Blocks project, is putting these ideas into practice.
9. Experimenting with cloud computing on new platforms
In 2014, NSF invested $20 million to create two cloud computing testbeds that let the academic research community develop and experiment with cloud architectures and pursue new, architecturally enabled applications of cloud computing.
CloudLab (with sites in Utah, Wisconsin and South Carolina) came online in May 2015 and provides researchers with the ability to create custom clouds and test adjustments at all levels of the infrastructure, from the bare metal on up.
Chameleon, a large-scale, reconfigurable experimental environment for cloud research, co-located at the University of Chicago and The University of Texas at Austin, went into production in July 2015. Both serve hundreds of researchers at universities across the U.S. and let computer scientists experiment with unique cloud architectures in ways that weren't available before.
10. Bringing supercomputing to new fields with Comet
The NSF-supported "Comet" system at the San Diego Supercomputer Center (SDSC) was dedicated in October and is already aiding scientists in a number of fields, including domains relatively new for supercomputer integration, such as neuroscience.
SDSC recently received a major grant to expand the Neuroscience Gateway, which provides easy access to advanced cyberinfrastructure tools and resources through a web-based portal, and can significantly improve the productivity of researchers. The gateway will contribute to the national BRAIN Initiative and deepen our understanding of the human brain.