Funding for the U.S. Department of Energy's (DOE) supercomputer efforts would need to grow by at least $400 million annually for the United States to have a chance of building an exascale computer by 2020, according to Argonne National Laboratory's Rick Stevens. At current funding levels, the United States will not have an exascale computer until the middle of the next decade, Stevens adds. The proposed fiscal 2014 budget shows the DOE Office of Science requesting $465.59 million for the Advanced Scientific Computing Research program and the National Nuclear Security Administration requesting $401.04 million for the Advanced Simulation and Computing Campaign. Both Japan and China are making large government investments in exascale development, and Stevens says that at current funding levels, China will develop an exascale computer years ahead of the United States. In addition, achieving exascale capability requires advances in several areas of computing, according to other experts. "Current system architectures today can't simply be scaled up to produce a usable and cost-effective system," says Lawrence Livermore National Laboratory's Dona Crawford.

A crisis map Web page was created by Google's Crisis Response unit to assist the community of Moore, OK, in its cleanup and recovery following its devastation by a tornado on May 20. Featured on the Moore Crisis Map page is a map of the area, the path of the storm, and links to help residents and recovery personnel easily find information in the wake of the disaster. There also are links for people to make Red Cross donations, and a Safe and Well website where affected residents can post information to alert friends and relatives of their safety. Visitors can tailor the crisis map's appearance by adding or removing map layers, viewing it as a standard map or as satellite imagery captured before and after the tornado struck. Damaged areas are color-coded so visitors can see the track of the storm and its aftermath. Locations where public shelters are available for victims are displayed as well, along with the open or closed status of schools and houses of worship. Also viewable on the map is current traffic on area roadways.

The European Laboratory for Particle Physics (Cern) recently launched a public appeal for files, hardware, and software from the Internet's earliest days. Unfortunately, the files and data for many of those first pages have been lost because of the way World Wide Web creators Tim Berners-Lee and Robert Cailliau worked as they were developing the technology. However, Cern's outreach has produced a copy of the Web page demonstrated by Berners-Lee in 1991 as he was trying to build support for the idea of the Web. In those days, Berners-Lee had to carry around a computer with the Web files on it in order to demonstrate its capabilities. One of the people he showed it to, Paul Jones, kept a copy that has survived. There might be more relics from the Web's beginning on that machine, but for the moment they remain hidden because the password for the computer's hard drive has been forgotten, Jones says. Cailliau and Berners-Lee carried out their early research at Cern, which wants to use any early artifacts it finds to create an online exhibit. Cern's Dan Noyes notes the organization's request for material has elicited a huge public response.

Hackers Find China Is Land of Opportunity New York Times (05/22/13) Edward Wong

In China, hacking is openly discussed and promoted at trade shows, inside university classrooms, and on Internet forums, and thrives across official, corporate, and criminal worlds. For example, the Ministry of Education and Chinese universities join companies in sponsoring hacking competitions attended by army talent scouts. Experts say the driving force behind hacking's universal acceptance in China is the government's insistence on maintaining surveillance over anyone deemed suspicious. Although the U.S. government criticizes what it considers state-sponsored attacks, China's hacking attacks tend to be simple, generally occurring only during normal business hours and with little effort made to hide them. "They're using the least amount of sophistication necessary to accomplish their mission," says FireEye's Darien Kindlund. "They have a lot of manpower available, but not necessarily a lot of intelligent manpower to conduct these operations stealthily." Although some hackers pursue criminal activities, there are many legitimate ways skilled hackers can earn generous salaries in China, as an increasing number of cybersecurity companies offer their services to the government, state-owned enterprises, and private companies.

The Khronos Group plans to create an open and royalty-free application programming interface (API) for controlling mobile and embedded cameras and sensors. Developers would gain access to features such as burst modes and flash, according to the organization, which develops standards such as OpenGL. "It enables developers to flexibly control the operation of the camera and sensor to generate innovative sequences of images that can be processed by leading-edge applications," says Khronos president Neil Trevett. The newly established Camera working group will start on the task in June, and interested companies can make contributions, influence the work, and gain early access to draft specifications by joining Khronos. A native API will be developed, but there could be an opportunity to bring the advanced camera control into browsers by building it into JavaScript. The Camera working group will discuss the initiative at the SIGGRAPH 2013 conference, which takes place July 21-25 in Anaheim, Calif.

University researchers in the United Kingdom are working on solutions to the growing problem of power consumption as mainstream processors are expected to contain hundreds of cores in the near future. Power consumption outpaces performance gains when additional cores are added to processors, so that, for example, a 16-core processor in an average smartphone would cut the maximum battery life to three hours. In addition to mobile devices, data centers crammed with server clusters face mounting energy demands due to the rising number of cloud services. Left unchecked, within three processor generations the power problem will force central-processing unit (CPU) designs that can use as little as 50 percent of their circuitry at one time, to limit energy use and the waste heat that would otherwise ruin the chip. The University of Southampton is part of a group of universities and companies joining in the Power-efficient, Reliable, Many-core Embedded systems (PRiME) project to explore ways that processors, operating systems, and applications could be redesigned to enable CPUs to intelligently match power consumption to the needs of specific applications. PRiME is studying a dynamic power management model in which processors work with the operating system kernel to shut down parts of cores or modify the CPU's clock speed and voltage based on exact application needs.
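The kind of decision such a dynamic power manager makes can be sketched as a simple frequency/voltage governor. This is an illustrative sketch, not PRiME code; the operating points and the power model are invented for the example.

```python
# Illustrative sketch (not PRiME's implementation): a governor that maps
# recent CPU utilization to the lowest frequency/voltage operating point
# that can serve it -- the kind of decision PRiME proposes to drive from
# exact application needs.

# Hypothetical (frequency_MHz, voltage_V) operating points, low to high.
OPERATING_POINTS = [(400, 0.8), (800, 0.9), (1200, 1.0), (1600, 1.1)]

def choose_operating_point(utilization):
    """Map a utilization sample in [0, 1] to the cheapest sufficient point."""
    if not 0.0 <= utilization <= 1.0:
        raise ValueError("utilization must be in [0, 1]")
    index = min(int(utilization * len(OPERATING_POINTS)),
                len(OPERATING_POINTS) - 1)
    return OPERATING_POINTS[index]

def relative_dynamic_power(freq_mhz, volt):
    """Dynamic CPU power scales roughly with f * V^2 (arbitrary units)."""
    return freq_mhz * volt ** 2

if __name__ == "__main__":
    for util in (0.1, 0.5, 0.95):
        f, v = choose_operating_point(util)
        print(f"util={util:.2f} -> {f} MHz @ {v} V, "
              f"relative power {relative_dynamic_power(f, v):.0f}")
```

Because dynamic power grows with frequency times voltage squared, running at the lowest point that still meets the application's needs saves disproportionately more energy than the lost clock speed would suggest.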

NSF and SRC to Fund Research to Create Failure-Resistant Systems and Circuits for Tomorrow's Computing Applications National Science Foundation (05/21/13) Lisa-Joy Zgorski

The U.S. National Science Foundation (NSF) and the Semiconductor Research Corp. (SRC) will fund 18 projects involving the design of failure-resistant circuits and systems for future computing applications. NSF and SRC will provide $6 million over three years to support 29 researchers at 18 universities. The aim of the Failure Resistant Systems program is to ensure that systems are designed at the outset to be self-corrective and self-healing with minimal or no external intervention during the entire life of their operation. Cellphones, aircraft flight controls, autonomous vehicles, sophisticated weapon systems, and tiny medical devices are common examples of miniaturized electronics that form part of pervasive, efficient, and complex electronic systems; accurate functioning of these systems is often a matter of life and death. "New fundamental design techniques have the potential to yield major advances in the reliability of electronic systems," says NSF's Farnam Jahanian. "This program builds on more than a decade of successful partnerships with SRC and provides the academic research community a new opportunity to do ground-breaking, long-term, basic research."

Massachusetts Institute of Technology (MIT) researchers who proposed solutions to practical problems with quantum key distribution (QKD) as a method of secure data transmission have now demonstrated their method experimentally, proving all of their theoretical predictions. QKD is practical only for distributing cryptographic keys for use in non-quantum cryptography, not for general-purpose communication, because every bit received requires the transmission of an enormous number of bits. In addition, QKD systems depend on photon properties and thus are highly susceptible to signal loss, especially over large distances, and usually only work across distances of about 100 miles. The MIT team addressed these challenges with a new quantum communication protocol that is far more resilient to signal loss than QKD, and transmits only one bit for every one received. Entanglement, the quantum correlation that links the states of separate particles, is delicate and begins to break down as soon as the particles interact with their immediate environments. With the new protocol, even if the entanglement between two light beams breaks down and correlation returns to classical limits, it can remain much higher than it would be if the beams had started with a classical correlation.
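The inefficiency the MIT team targets can be seen in a toy simulation of standard QKD. The sketch below models the sifting step of the well-known BB84 protocol (not the MIT protocol): sender and receiver each choose measurement bases at random and keep only the bits where their bases happened to match, so roughly half of all transmitted bits are discarded even before error correction.

```python
import secrets

# Illustrative sketch of standard BB84 QKD sifting (not the MIT protocol):
# many transmitted bits yield fewer shared key bits, which is the kind of
# overhead the article says makes QKD unsuitable for general communication.

def bb84_sift(n_bits):
    """Keep only the bits where sender and receiver chose the same basis.

    When bases match, the receiver measures the sender's bit correctly
    (absent noise); when they differ, the result is random and discarded.
    """
    sender_bits = [secrets.randbelow(2) for _ in range(n_bits)]
    sender_bases = [secrets.randbelow(2) for _ in range(n_bits)]
    receiver_bases = [secrets.randbelow(2) for _ in range(n_bits)]
    return [bit for bit, sb, rb
            in zip(sender_bits, sender_bases, receiver_bases)
            if sb == rb]

if __name__ == "__main__":
    key = bb84_sift(1024)
    print(f"transmitted 1024 bits, sifted key length: {len(key)}")
```

On average the sifted key is about half the length of the transmission, before any loss or eavesdropping checks shrink it further; the MIT protocol's one-bit-received-per-bit-sent figure is the contrast the article draws.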

Researchers at Disney Research, Zurich, have developed DuctTake, a technique they say makes video compositing easier, faster, and more affordable. Modern filmmaking includes heavy, labor-intensive tasks such as creating special effects, replacing backgrounds, and combining multiple takes of an actor's performance, but the researchers say DuctTake simplifies the segmentation of elements that are to be added or subtracted from a video. The system uses an algorithm to find a spatiotemporal seam through the video frame that enables two or more videos to be joined together. DuctTake can only combine scenes that have overlapping content, and works well when combining multiple takes of the same shot. The researchers recently demonstrated that the technique can be used across a wide range of video composites. DuctTake also includes tools for creating a composite that will look realistic, such as a feature that adjusts the seam between frames to compensate for camera movement or content movement. Other tools adjust for differences in brightness, contrast, and hue between takes, blend images along seams that are visible in a common background, and boost the blurriness in some video to match blurring that occurs in the video with which it is being combined.
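The core idea of joining two takes along a low-cost seam can be sketched with classic dynamic programming. This is an illustrative single-frame sketch, not Disney's DuctTake implementation, which additionally extends the seam through time.

```python
# Illustrative sketch (not DuctTake's code): find a minimum-cost vertical
# seam through a per-pixel difference map between two overlapping takes.
# Joining the takes along this seam makes the transition least visible.

def min_cost_seam(diff):
    """diff: 2D list (rows x cols) of per-pixel difference costs.
    Returns the seam's column index for each row, top to bottom."""
    rows, cols = len(diff), len(diff[0])
    cost = [row[:] for row in diff]
    # Accumulate: each pixel adds the cheapest of its three upper neighbors.
    for r in range(1, rows):
        for c in range(cols):
            lo, hi = max(c - 1, 0), min(c + 2, cols)
            cost[r][c] += min(cost[r - 1][lo:hi])
    # Backtrack from the cheapest bottom pixel.
    seam = [min(range(cols), key=lambda c: cost[rows - 1][c])]
    for r in range(rows - 2, -1, -1):
        c = seam[-1]
        lo, hi = max(c - 1, 0), min(c + 2, cols)
        seam.append(min(range(lo, hi), key=lambda c2: cost[r][c2]))
    return seam[::-1]

if __name__ == "__main__":
    # The two takes agree (zero difference) along the middle column,
    # so the seam should follow it.
    diff = [[5, 0, 5],
            [5, 0, 5],
            [5, 0, 5]]
    print(min_cost_seam(diff))
```

Extending the same cost-accumulation through consecutive frames yields the spatiotemporal seam the article describes, which is what keeps the join stable as the camera or content moves.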

The application of artificial intelligence (AI) to the law aims to let automated systems handle arguments in which the logic is not clear. Fraunhofer Institute researchers are developing Elterngeld, an AI application designed to make automatic decisions on claims for Germany's parental benefit of the same name. The developers are currently in talks with Germany's Federal Employment Agency about how to deploy the system, although it is not yet ready to replace humans because the text of each law needs to be broken down into a structured, machine-readable format. In the future, new laws could be drafted with machines in mind so that each is built as a structured database containing all of the law's concepts, which would enable AI software to implement legislation on a wide scale. Another Fraunhofer Institute AI tool, called TrademarkNow, measures the similarity of new trademarks to those already in existence, to help avoid potential legal issues. The system works by mining the databases of the European and U.S. trademark registers.
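What a "structured, machine-readable" law might look like can be sketched as a rule set evaluated by a tiny engine. This is an illustrative sketch, not the Fraunhofer system; every rule name and threshold below is invented for the example.

```python
# Illustrative sketch (not Fraunhofer's Elterngeld system): a benefit law
# expressed as a machine-readable rule set rather than prose, the
# transformation the article says every statute currently requires.
# All conditions and thresholds here are invented for illustration.

RULES = [
    ("claimant is a resident",       lambda c: c["resident"]),
    ("child is under 14 months old", lambda c: c["child_age_months"] < 14),
    ("claimant works <= 30 h/week",  lambda c: c["hours_per_week"] <= 30),
]

def decide(claim):
    """Apply every rule; grant the claim only if all conditions hold,
    and report which conditions failed so a human can review the decision."""
    failures = [name for name, cond in RULES if not cond(claim)]
    return {"granted": not failures, "failed_conditions": failures}

if __name__ == "__main__":
    claim = {"resident": True, "child_age_months": 6, "hours_per_week": 20}
    print(decide(claim))
```

Keeping each condition as a named, testable rule is what would let software apply a statute consistently while still producing an explanation a caseworker can audit.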

Technology is playing an increasingly large role in the translation industry, due to advances in machine-aided translation (MAT). The Center for Development of Advanced Computing (C-DAC) soon will release a pattern-directed, rule-based English to Malayalam MAT system called Paribhashika. C-DAC also will unveil a Malayalam book translated from English using the new system. "The key feature of the software is that intelligible translation can be carried out and it shows all possible translation," says C-DAC's Badran V. K. "Text input and file input facilities are provided, also post editing option is available." Translating a chapter from English to Malayalam typically takes three months, but work can be completed within a month using the Paribhashika system, he notes. The MAT system performs the bulk of the translating work, with human translators contributing the final editing. Government departments are expected to use the software to translate reports, and the State Institute of Languages is working with C-DAC to translate its publications. C-DAC will make a Web version of the software available online for users to test and provide feedback.
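The "shows all possible translations" behavior can be sketched with a toy rule-based translator. This is an illustrative sketch, not Paribhashika; the lexicon and the placeholder target words are invented, and a real system would map English words to Malayalam ones and apply reordering rules as well.

```python
from itertools import product

# Illustrative sketch (not Paribhashika): a dictionary-driven translator
# that enumerates every candidate translation of a sentence and leaves the
# final choice to a human post-editor, as the article describes.

# Invented toy lexicon; placeholder tokens stand in for Malayalam words.
LEXICON = {
    "water": ["TARGET_water_1", "TARGET_water_2"],  # ambiguous: two senses
    "is":    ["TARGET_is"],
    "cold":  ["TARGET_cold"],
}

def all_translations(sentence):
    """Return every candidate translation as a list of word sequences.
    Unknown words are passed through unchanged for the editor to handle."""
    options = [LEXICON.get(word, [word]) for word in sentence.split()]
    return [" ".join(choice) for choice in product(*options)]

if __name__ == "__main__":
    for candidate in all_translations("water is cold"):
        print(candidate)
```

Enumerating candidates instead of committing to one is what shifts the human's job from translating to post-editing, which is where the article's three-months-to-one-month speedup comes from.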

Rice University researchers have developed a method for arranging metal nanoparticles in geometric patterns that can act as optical processors that transform incoming light signals into output of a different color. The researchers used the method to create an optical device in which incoming light could be directly controlled with light via a process known as four-wave mixing. The researchers say their disc-patterning method is the first that can produce materials that are tailored to perform four-wave mixing with a wide range of colored inputs and outputs. "That means not only can we send in beams of two different colors and get out a third color, but we can fine-tune the arrangements to create devices that are tailored to accept or produce a broad spectrum of colors," says Rice professor Naomi Halas. She says processing information with light rather than electricity could lead to computers that are both faster and more energy efficient. "The methods used to create this device can be applied to the production of a wide range of nonlinear media, each with tailored optical properties," Halas says.
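The color arithmetic behind four-wave mixing is simple to state: three input light frequencies combine to produce a fourth, for example f4 = f1 + f2 - f3. The sketch below is illustrative arithmetic only, not a model of the Rice device, and the example wavelengths are invented.

```python
# Illustrative arithmetic (not the Rice device model): in four-wave mixing,
# three input frequencies combine into a fourth, e.g. f4 = f1 + f2 - f3,
# so the output color follows directly from the input colors.

C = 299_792_458  # speed of light, m/s

def fwm_output_wavelength(l1_nm, l2_nm, l3_nm):
    """Output wavelength (nm) for the mixing product f4 = f1 + f2 - f3."""
    f1, f2, f3 = (C / (l * 1e-9) for l in (l1_nm, l2_nm, l3_nm))
    f4 = f1 + f2 - f3
    return C / f4 * 1e9

if __name__ == "__main__":
    # Two pump beams at 800 nm and a signal at 1000 nm yield a shorter
    # (bluer) output wavelength of about 667 nm.
    print(f"{fwm_output_wavelength(800, 800, 1000):.1f} nm")
```

Because the output frequency is fixed by the inputs, tailoring the nanoparticle pattern to accept different input colors, as Halas describes, directly tunes which output colors the device can produce.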

IBM has given Rensselaer Polytechnic Institute an open-ended three-year charter to improve the intelligence of IBM's Watson software. Rensselaer professor Jim Hendler says one goal of this initiative is to use Watson to understand vast sets of open, unstructured data. "What we want to do is use Watson's capabilities to put together the descriptive unstructured part, so the thing that says what the data set does, the metadata--so the data about the data set--when it was released, by who, and for what purpose, and some of the things we can find actually in the data," Hendler notes. This challenge is compounded by the lack of real standards for data and the existence of few protocols specific to data transfer and aggregation. Hendler also sees man-machine collaboration as an area where work with Watson could yield potential benefits. He emphasizes that technology such as Watson could help find information beyond the capabilities of humans, which humans could connect to make better decisions. "People are very good at pulling it together once they have the information, but finding those needles across those many haystacks is something Watson can help with," Hendler says.