OWL 2, a new Internet language developed by an international team led by computer scientists from the University of Manchester and Oxford University, is designed to enable computers to understand and interpret the contents of Web pages. "The World Wide Web as we see it today is rather like a collection of linked documents," says Oxford professor Ian Horrocks, who helped develop the language. "Whilst humans are very good at analyzing the data contained in these pages, languages such as HTML do not help computers to 'bridge the meaning gap,' and understand that, for instance, 'paracetamol,' 'acetaminophen,' and 'para-acetylaminophen' are all names for the same thing." One of the initial applications for OWL 2 is helping computers understand and analyze special medical terms. For example, the NCI Cancer Thesaurus has more than 50,000 medical terms, and ensuring that these terms are described, updated, and linked correctly has been a huge task for humans. However, OWL 2 can allow definitions to be written in such a way that computer programs can automatically update terms and identify errors. "The first stage was writing the NCI Thesaurus in the original version of the language, OWL, but now OWL 2 enables computer programs to interpret these terms in a much more human-like way," says Manchester's Bijan Parsia.
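The "meaning gap" Horrocks describes can be sketched in miniature: once synonyms are declared equivalent, a program can resolve any of them to one canonical concept. The sketch below uses a plain union-find structure as a stand-in for the inference an OWL 2 reasoner performs over equivalence axioms; the class name and approach are illustrative, not OWL itself.

```python
# A minimal, hypothetical sketch of the inference OWL 2's equivalence
# axioms enable: declared synonyms resolve to the same concept.
# (The union-find approach is a stand-in, not an OWL reasoner.)

class SynonymStore:
    def __init__(self):
        self.parent = {}

    def _find(self, term):
        self.parent.setdefault(term, term)
        while self.parent[term] != term:
            # Path halving keeps lookups fast as the store grows.
            self.parent[term] = self.parent[self.parent[term]]
            term = self.parent[term]
        return term

    def declare_equivalent(self, a, b):
        self.parent[self._find(a)] = self._find(b)

    def same_concept(self, a, b):
        return self._find(a) == self._find(b)

store = SynonymStore()
store.declare_equivalent("paracetamol", "acetaminophen")
store.declare_equivalent("acetaminophen", "para-acetylaminophen")
print(store.same_concept("paracetamol", "para-acetylaminophen"))  # True
```

Transitivity falls out automatically: the two declarations above are enough for the program to conclude that "paracetamol" and "para-acetylaminophen" name the same thing, even though that pair was never declared directly.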

Testing the Accessibility of Web 2.0 University of Southampton (ECS) (10/27/09) Lewis, Joyce

The University of Southampton's School of Electronics and Computer Science (ECS) is launching a study that will explore how well people with disabilities can access Web services such as blogs, wikis, and social networking sites. The study, led by Mike Wald and E.A. Draffan in ECS' Learning Societies Lab, is based on an accessibility toolkit that will enable users to test the accessibility of Web 2.0 services. The accessibility tools were developed as a result of the LexDis project, which identified strategies learners can use to enhance their e-learning experience. Part of the toolkit, Web2Access, provides an online checking system for any interactive Web-based service such as Facebook. Another key feature of the accessibility kit is Study Bar, which works with all browsers and reads text aloud, checks spelling, provides a dictionary, and can enlarge text or change fonts and colors to make it more readable. "We developed it because nowadays users contribute as well as read information and so you cannot just click on a button to see if Web sites are accessible and easy to use," Draffan says. Wald says it is the first time that there has been a systematic way to evaluate and provide the results of accessibility testing of Web services.
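One simple automated check of the kind an accessibility toolkit might run is flagging images that lack alternative text, which screen readers depend on. The sketch below is illustrative only — Web2Access's actual checks are not described in the article — and uses only Python's standard-library HTML parser.

```python
# Hypothetical sketch of one automated accessibility check: counting
# <img> elements with missing or empty alt text. Illustrative only;
# not a description of Web2Access's actual checking system.
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = 0  # images a screen reader cannot describe
        self.total = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.total += 1
            if not dict(attrs).get("alt"):  # absent or empty alt
                self.missing += 1

page = '<img src="a.png" alt="chart"><img src="b.png"><img src="c.png" alt="">'
checker = AltTextChecker()
checker.feed(page)
print(f"{checker.missing} of {checker.total} images lack alt text")  # 2 of 3
```

As Draffan notes, checks like this only cover reading; testing whether users can *contribute* content accessibly still requires interactive evaluation.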

Cornell University computer scientists have developed a method for generating the crashing and rumbling noises of objects made from thin harmonic shells such as cymbals and garbage can lids. The method, developed by professor Doug James and graduate students Jeffrey Chadwick and Steven An, will be presented at ACM's SIGGRAPH Asia conference, which takes place Dec. 16-19 in Yokohama, Japan. When a thin-shelled object falls or is struck, the metal or plastic slightly deforms and then snaps back into place, creating a vibration. Previous methods of synthesizing these noises did not account for the coupling effect that occurs when energy transfers from one vibration to another and back again, an omission that produced a clean, clear sound more appropriate for a bell or chime. The new method accounts for this interaction and maps how the sound waves radiate to determine how the event will sound to a listener in any particular location. The researchers say that although their method is significantly faster than existing systems, the computations for a simple demonstration still take about an hour on a laptop. However, the researchers are hopeful that the simulation process can be accelerated by making some approximations. Their research is part of a larger project to synthesize various sounds, including dripping and splashing fluids, small clattering objects, and shattering glass.
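The coupling effect described above — energy leaking from one vibration mode into another and back — can be illustrated with two damped oscillators joined by a weak coupling term. All constants below are arbitrary, and the integration is a toy; the researchers' actual method involves far more sophisticated nonlinear thin-shell simulation.

```python
import math

# Two vibration modes as damped oscillators with a weak linear coupling
# term. Mode 1 starts excited; the coupling lets energy reach mode 2.
# All constants are arbitrary, illustrating only the coupling idea.
dt, damping, coupling = 1e-4, 0.5, 40.0
w1, w2 = 2 * math.pi * 440.0, 2 * math.pi * 620.0   # mode frequencies (rad/s)
x1, v1 = 1.0, 0.0                                   # mode 1: struck
x2, v2 = 0.0, 0.0                                   # mode 2: initially at rest

peak2 = 0.0
for _ in range(int(0.5 / dt)):                      # simulate half a second
    a1 = -w1 * w1 * x1 - damping * v1 + coupling * (x2 - x1)
    a2 = -w2 * w2 * x2 - damping * v2 + coupling * (x1 - x2)
    v1 += a1 * dt
    v2 += a2 * dt
    x1 += v1 * dt                                   # semi-implicit Euler
    x2 += v2 * dt
    peak2 = max(peak2, abs(x2))

print(f"energy reached mode 2: peak amplitude {peak2:.3e}")
```

Without the `coupling` terms, mode 2 would stay silent forever — which is exactly the too-clean, bell-like result the earlier synthesis methods produced.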

A severe pandemic could choke the Internet's capacity to handle the surge in traffic caused by a greater number of teleworkers, according to a report issued by the U.S. Government Accountability Office (GAO). "Increased demand during a severe pandemic could exceed the capacities of Internet providers' access networks for residential users and interfere with teleworkers in the securities market and other sectors," says the report, which refers to studies by the U.S. Department of Homeland Security (DHS) and Internet service providers (ISPs). GAO cites ISPs' limited ability to prioritize traffic or take other actions in aid of vital teleworkers, while e-commerce could be adversely affected by actions such as lowering customers' transmission speeds or obstructing popular Web sites. The GAO study cites DHS's failure to devise a strategy to address potential Internet congestion or to cooperate with federal partners to ensure that sufficient legal authorities exist. It also criticizes DHS for not analyzing the feasibility of running a campaign to secure public cooperation in reducing nonessential Internet use, and for not coordinating with other federal and private sector bodies to evaluate other actions that could be taken or ascertain what authorities may be needed to act. DHS's Jerald Levine has written a letter stating that the department will take action to ameliorate the effects of any pandemic-related congestion on the systems the federal government uses to convey critical national security/emergency preparedness information--but he says it is not the department's responsibility to address Internet congestion for other communications.

The programming tool Scratch is the focus of the cover story of the November issue of Communications of the ACM (CACM). Researchers from the Massachusetts Institute of Technology's Media Laboratory and colleagues from a company in Canada and the University of Pennsylvania discuss how "the YouTube of interactive media" has helped to improve the digital, design, and problem-solving skills of young people. Meanwhile, technology writer Leah Hoffmann takes a look at the challenges the health care industry faces in implementing digital record systems. CACM Editor-in-Chief Moshe Y. Vardi tackles computer science's "image crisis." The latest issue of CACM also features an article by Susan Landau of Sun Microsystems and Whitfield Diffie, former chief security officer at Sun and current visiting professor at the University of London, which examines the threat that wiretapping of the modern telecommunications system poses to national security. In another article, Microsoft Research technical fellow Butler Lampson weighs the tradeoffs of computer security versus user privacy. CACM also features the 2007 ACM A.M. Turing Lecture by model checking pioneers Edmund M. Clarke, E. Allen Emerson, and Joseph Sifakis, which was presented at the Design Automation Conference in 2008.

Mathematics, science, and technology must continue to be a priority for U.S. higher education to ensure that the United States remains globally competitive, according to a report by the American Association of State Colleges and Universities. A new year-long study, "Leadership for Challenging Times," found that U.S. residents ages 25 to 34 are less likely to earn degrees in math, science, and technology than their parents' generation. The report addresses issues such as the declining interest in math and science among students, as well as the current state of elementary and secondary math and science education. Moreover, the report says the United States should expect fewer students from other countries such as China and India to attend American colleges now that other nations are spending more on higher education. The report recommends that college presidents emphasize the importance of math, science, and technology, and that colleges encourage students to learn foreign languages and study abroad.

Sequoia Voting Systems, which has been criticized for resisting public examination of its proprietary systems, recently announced plans to make the source code for its new optical-scan voting system available to the public. The new voting system, called Frontier Election System, will be submitted for federal certification and testing in the first quarter of 2010. The system's source code will be released for public review in November, according to Sequoia's Web site. Sequoia's announcement comes five days after a non-profit foundation announced the release of its open source election software for public review, although Sequoia's Michelle Shafer says the timing of the announcements is unrelated. "Fully disclosed source code is the path to true transparency and confidence in the voting process for all involved," says Sequoia's Eric Coomer in a press release. Previously, Sequoia had fought any efforts to examine the source code of its proprietary systems and even threatened to sue Princeton University computer scientists if they disclosed anything they learned during a court-ordered review of its software. The firmware for Sequoia's new Frontier optical-scan machines is written in C# and runs on Linux. "It's good to know the vendors are developing a new transparent optical-scan system," says Verified Voting president Pamela Smith. "That is probably the biggest recognition of the direction that the voting public wants to see the market going."

Researchers at the Massachusetts Institute of Technology's (MIT's) Computer Science and Artificial Intelligence Lab are helping make programmers' move to parallel programming less onerous as computer chip manufacturers produce multicore technology to upgrade performance. "Just writing anything parallel doesn't mean that it's going to run fast," says MIT professor Saman Amarasinghe. "A lot of parallel programs will actually run slower, because you parallelize the wrong place." Amarasinghe also thinks that computers are capable of automatically determining when to parallelize as well as which cores to assign which jobs. His group's multicore computing effort is split along two lines--tools to ease programmers' switch to parallel programming and tools to optimize programs' performance once that switch has been accomplished. Amarasinghe and two graduate students have designed a system that increases the predictability of multicore programs by assigning a core that attempts to access a shared resource a priority based not on the time of its request but on the number of tasks it has performed. Amarasinghe's lab has several projects focusing on parallel program optimization, one of which helps programs adjust to changing conditions on the fly. His group has devised a language that asks the developer to specify different techniques for executing a given computational job. When the program is operational, the computer automatically identifies the method with maximum efficiency.
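The scheduling idea described above can be sketched concretely: when several cores contend for a shared resource, grant access in order of how few tasks each has completed, rather than in order of arrival, so the outcome no longer depends on request timing. The function and names below are hypothetical illustrations of that policy, not the MIT system itself.

```python
import heapq

# Hypothetical sketch of predictability-oriented arbitration: the core
# that has completed the fewest tasks gets the shared resource first,
# regardless of when it asked. Names are illustrative.

def grant_order(requests, completed):
    """requests: core ids waiting for the resource.
    completed: dict mapping core id -> tasks completed so far.
    Returns the order in which access is granted."""
    heap = [(completed[core], core) for core in requests]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

# Core B has done the least work, so it goes first despite asking last.
order = grant_order(requests=["A", "C", "B"],
                    completed={"A": 12, "B": 3, "C": 7})
print(order)  # ['B', 'C', 'A']
```

Because task counts, unlike request times, are the same on every run of a deterministic workload, the grant order is reproducible — which is what makes the program's behavior predictable.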

Columbia University researchers, working with mechanics from the U.S. Marine Corps, have developed an augmented reality (AR) system for performing vehicle maintenance repair tasks. Initial results suggest that AR systems could help users find and begin a maintenance task in half the normal time. Current practices require a Marine mechanic to refer to a technical manual on a laptop when performing vehicle repairs. In the Columbia study, mechanics used a head-worn display that projected three-dimensional (3D) arrows that pointed to relevant components, text instructions, floating labels and warnings, and animated, 3D models of appropriate tools. A smartphone worn on the mechanic's wrist provided touchscreen controls for advancing to the next series of instructions. The AR instructions were created using laser scans and photographs of the inside of the vehicle to create a 3D model of the vehicle's cockpit. The researchers then developed software for directing and instructing users in performing individual maintenance tasks. Ten cameras inside the cockpit were used to track the position of three infrared LEDs on the head-worn display, enabling the system to understand where the mechanic was looking. The researchers say that it may be more practical for future systems to have the cameras or sensors incorporated into the head-worn display.

Roadrunner, housed at Los Alamos National Laboratory (LANL), recently completed its initial shakedown phase while performing accelerated petascale computer modeling and simulations for several unclassified science projects. The completion of the shakedown will allow Roadrunner, the world's fastest supercomputer, to begin its transition to classified computing. Scientists used the 10 unclassified projects to optimize how large codes run on the machine. The 10 test projects were chosen from academic and research institutions across the United States. Some of the projects include research into dark matter and dark energy, creating an HIV evolutionary tree to help researchers focus on potential vaccines, nonlinear physics in high-powered lasers, modeling minuscule nanowires over long time periods, and exploring how shock waves cause materials to fail. Roadrunner, developed by IBM along with LANL and the National Nuclear Security Administration, uses a hybrid design to achieve its record-setting performance. Each compute node in a cluster contains two AMD Opteron dual-core processors and four PowerXCell 8i processors that act as computational accelerators. Roadrunner will now be used to perform classified advanced physics and predictive simulations.

Researchers at Deutsche Telekom and the University of Newcastle have developed a photo-viewing process that uses cell phones as the center of an interaction that resembles passing around prints of photographs. Initially, the researchers tried sitting groups of five people at a table and asked them to swap digital photos one-to-one on their cell phones using Bluetooth, but that resulted in people breaking into pairs. To create an interactive group experience, the researchers explored a method for sharing photos with all members of a group simultaneously. "We came up with the idea of using spatial regions, like auras, around the table," says Newcastle lead researcher Christian Kray. An aura in the middle of the table was used to upload pictures to the entire group, while a concentric outer aura was established for downloading and viewing pictures. Software created for the project displayed a different barcode-like pattern at the top of every phone screen, so an overhead camera could recognize which aura each phone was in. When a phone was in the upload aura, the camera signaled a Bluetooth-enabled PC to broadcast the phone's photos to the other smartphones. "People really enjoyed sharing pictures this way, with everyone getting the photos at the same time and all having something to hold," Kray says.
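The aura logic amounts to classifying each phone by its distance from the table's center: inside the inner region triggers an upload, the concentric ring allows viewing, and anywhere else is inert. The radii, zone names, and function below are illustrative assumptions, not details from the actual system.

```python
import math

# Hypothetical sketch of the "aura" zones: the overhead camera reports a
# phone's tabletop position, and distance from the table center decides
# the behavior. Radii and names are illustrative stand-ins.
UPLOAD_RADIUS = 0.15      # inner aura, metres
DOWNLOAD_RADIUS = 0.40    # concentric outer aura, metres

def aura_for(x, y, cx=0.0, cy=0.0):
    r = math.hypot(x - cx, y - cy)
    if r <= UPLOAD_RADIUS:
        return "upload"      # broadcast this phone's photos to the group
    if r <= DOWNLOAD_RADIUS:
        return "download"    # receive and view the shared photos
    return "outside"         # no sharing behavior

print(aura_for(0.05, 0.05))  # upload
print(aura_for(0.30, 0.10))  # download
```

Making the regions concentric means a natural gesture — sliding a phone toward the middle of the table — is all it takes to move from viewing to sharing.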

Interoperability between data sources is the fundamental challenge of data integration, and NASA computer scientist Richard Keller says that although standards and organizational policies can help to some degree, "data standards can be difficult to legislate and are onerous and expensive to institute." Semantic integration hinges on exercising rigor in the capture of semantic metadata, he says. "If you describe the meaning of the data, then you can automate the process of recognizing connections across data sources and allow them to be married together properly," Keller says. He describes ontology mapping as the next major challenge for semantic integration. An ontology map supplies data to support the translation of the objects, properties, and relations from one ontology model into those of another, and the difficulty arises when the underlying data models differ from a conceptual point of view. "More broadly, I think the challenge for making semantic integration work in the marketplace is to make it quicker and easier to specify data semantics," he says. There are commercially available tools that can streamline the specification process, but Keller says the cost/benefit calculations are not favorable enough to facilitate widespread implementation. The SemanticIntegrator project seeks to develop a framework to support semantic integration of NASA data assets through the integration of information sources using ontologies in combination with explicit integration rules.
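In its simplest form, an ontology map is a translation table from the classes and properties of one vocabulary to those of another. The sketch below illustrates that idea with invented names and a flat mapping; real ontology mapping must also handle the conceptual mismatches Keller describes, where no one-to-one table exists.

```python
# Hypothetical sketch of an ontology map: translating class and property
# names from one data model's vocabulary into another's. All names are
# invented for illustration; real mappings must also bridge structural
# differences between the underlying data models.
CLASS_MAP = {"Spacecraft": "Vehicle", "Instrument": "Sensor"}
PROPERTY_MAP = {"hasMass": "mass_kg", "operatedBy": "operator"}

def translate(record):
    """Re-express a record from the source vocabulary in the target one,
    passing through any terms the map does not cover."""
    return {
        "type": CLASS_MAP.get(record["type"], record["type"]),
        **{PROPERTY_MAP.get(key, key): value
           for key, value in record.items() if key != "type"},
    }

source = {"type": "Spacecraft", "hasMass": 722, "operatedBy": "NASA"}
print(translate(source))
# {'type': 'Vehicle', 'mass_kg': 722, 'operator': 'NASA'}
```

The hard cases begin where such a table breaks down — for instance, when one model treats mass as a property of the spacecraft and the other attaches it to a separate measurement record — which is the conceptual mismatch that makes ontology mapping the next major challenge.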

The Seafood Spoilage and Safety Predictor (SSSP) is a free program developed at the Technical University of Denmark's National Institute for Aquatic Resources and designed to help seafood producers ensure that their products are safe for consumption until the sell-by date. The software can read specific temperature measurements to evaluate the effect of the temperature variation that seafood undergoes throughout the supply chain. SSSP is available in 15 languages because the seafood industry is global and seafood products are often transported long distances. "We had completed extensive laboratory studies, developed mathematical models to predict shelf-life and safety of seafood, and published our findings in all the right places," says SSSP developer Paw Dalgaard. However, the researchers were concerned that the results of their work wouldn't reach the industry, so they created the software to provide access to their data. "The difficult bits, i.e. the mathematical models, have been kept out of sight, and the predictions are easy to obtain and ready to use," Dalgaard says.
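The general shape of such a prediction — folding a temperature profile into remaining shelf life — can be sketched as follows. The square-root-style relative-rate model and every constant below are illustrative stand-ins, not the validated models inside SSSP.

```python
# Hedged sketch: converting a supply-chain temperature profile into the
# fraction of shelf life consumed. The relative-rate model and all
# constants are illustrative assumptions, not SSSP's actual models.
T_MIN = -10.0          # notional minimum growth temperature (deg C)
T_REF = 0.0            # reference storage temperature (deg C)
SHELF_LIFE_REF = 14.0  # days of shelf life at the reference temperature

def relative_rate(temp_c):
    """Spoilage rate relative to storage at T_REF (square-root-style model)."""
    return ((temp_c - T_MIN) / (T_REF - T_MIN)) ** 2

def shelf_life_consumed(profile):
    """profile: list of (hours, temperature) legs of the supply chain.
    Returns the fraction of total shelf life used up."""
    equivalent_days = sum(hours / 24.0 * relative_rate(temp)
                          for hours, temp in profile)
    return equivalent_days / SHELF_LIFE_REF

# Two days chilled at 0 deg C, then a warm 6-hour truck leg at 8 deg C.
used = shelf_life_consumed([(48, 0.0), (6, 8.0)])
print(f"{used:.1%} of shelf life consumed")
```

The point Dalgaard makes holds here in miniature: a producer only needs to supply the temperature log; the model arithmetic stays out of sight.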

Rensselaer to Lead Multimillion-Dollar Center for Social and Cognitive Networks Inside Rensselaer (10/23/09) DeMarco, Gabrielle

The Rensselaer Polytechnic Institute has received $8.6 million from the Army Research Laboratory (ARL) to launch the Center for Social and Cognitive Networks, a new interdisciplinary research center dedicated to the study of social and cognitive networks. The center is part of the ARL's newly created Collaborative Technology Alliance (CTA), which features four centers across the United States focused on different aspects of network science. CTA will receive a total of $16.75 million in funding for the first five years and an additional $18.75 million is anticipated from the ARL for the second phase, bringing the total to $35.5 million over 10 years. Other partners in the program include IBM, Northeastern University, the City University of New York, and collaborators from Harvard University, the Massachusetts Institute of Technology, New York University, Northwestern University, the University of Notre Dame, Indiana University, and the University of Maryland. "Together with other centers of the CTA, we are creating the new discipline of network science," says Rensselaer professor Boleslaw Szymanski, the center's director. The Center for Social and Cognitive Networks will connect social scientists, neuroscientists, cognitive scientists, physicists, mathematicians, engineers, and computer scientists in an effort to uncover, model, understand, and predict the complex social interactions that occur on social networks. The center will focus on dynamic processes in networks and the human interactions and technological infrastructure that underlie them; organizational networks and how knowledge spreads peer to peer in the modern military; adversarial networks and how to deal with terrorists and hidden groups in a society; trust in social networks and how to measure it; and computational systems that predict how human error or bias influences judgment.