European Computer Scientists Seek New Framework for Computation European Science Foundation (10/29/08)

One of the remaining challenges for electronic computation is breaking down large, complex processes into smaller, more manageable components that can be reused in different applications. This goal can be approached in a variety of ways, but none of them handles all processes well. The major problem is that the dependent links, or correlations, that interconnect computer processes or programs cannot themselves be broken down. These dependent links are common to all processes involving computation, including biological systems, quantum computing, and conventional programming. European computer scientists believe that now is the time for a coordinated effort to solve the correlation problem, and the European Science Foundation recently held a workshop to establish a framework for further research. The workshop concluded that correlations represent an important problem common to the entire field of programming, and that the evolution of general-purpose computing has reached a point where the correlation problem will hinder further progress. The workshop also discussed progress in the relatively new field of aspect-oriented software development (AOSD), which is creating new techniques for isolating the correlations bridging software components. AOSD techniques make it possible to modularize those aspects of a system or process that cut across different components, enabling them to be broken down into reusable components or objects.
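The cross-cutting "aspects" AOSD modularizes can be illustrated with a plain Python decorator. This is only an illustrative sketch of the idea, not one of the AOSD tools discussed at the workshop: logging is a concern that would otherwise be tangled through every component, and the decorator isolates it in one place.

```python
import functools

def logged(func):
    """Aspect-style wrapper: the cross-cutting logging concern lives
    here, so individual components stay free of logging code."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"entering {func.__name__}")
        result = func(*args, **kwargs)
        print(f"leaving {func.__name__}")
        return result
    return wrapper

@logged
def process_order(order_id):
    # Core business logic, untangled from the logging concern.
    return f"order {order_id} processed"

print(process_order(42))
```

In full AOSD systems the "weaving" of aspects into components is automatic and far more general, but the separation of concerns is the same.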

University of California, San Diego (UCSD) computer scientists have developed software that can duplicate a key using only a photograph of the key. A key's bumps and depressions represent a numeric code that describes how to open the key's corresponding lock. "We built our key duplication software system to show people that their keys are not inherently secret," says UCSD professor Stefan Savage. "Perhaps this was once a reasonable assumption, but advances in digital imaging and optics have made it easy to duplicate someone's keys from a distance without them even noticing." Savage presented the research at ACM's Conference on Computer and Communications Security, which takes place Oct. 27-31 in Alexandria, Virginia. In one demonstration of the new software, the researchers took pictures of a residential house key with a cell phone camera and ran the image through their software, producing the information needed to create identical copies. In another demonstration, the researchers used a five-inch telephoto lens to take pictures of keys sitting on a cafe table from more than 200 feet away. Savage notes that locksmiths and lock vendors have been able to copy keys by hand from high-resolution photographs for some time. However, the threat has reached a new level: cheap image sensors have made digital cameras readily available, and basic computer vision techniques can now extract a key's information automatically, without requiring any expertise.
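The final decoding step, turning measured cut depths into a key's numeric bitting code, amounts to simple quantization. The sketch below is illustrative only: the 0.023-inch depth increment and 0-7 code range are made-up stand-ins, not the manufacturer specifications the UCSD software targets.

```python
def depths_to_bitting(cut_depths, increment=0.023, max_code=7):
    """Quantize cut depths (inches, measured from the corrected key
    image) into integer bitting codes.  Increment and code range are
    illustrative, not a real manufacturer's spec."""
    codes = []
    for depth in cut_depths:
        code = round(depth / increment)          # nearest depth step
        codes.append(min(max(code, 0), max_code))  # clamp to valid range
    return codes

# e.g. depths recovered from a photo after perspective correction:
print(depths_to_bitting([0.046, 0.115, 0.069, 0.161, 0.023]))
```

The hard part of the UCSD system is the computer vision that normalizes an oblique, low-resolution photo well enough to measure those depths at all; the quantization itself is trivial, which is why photographs are sufficient.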

Multi-core technology has the potential to dramatically change human-computer interaction, making computers more helpful and less invasive at the same time, writes Microsoft Research head Andrew Herbert. The tradeoff of this advance is the massive programming challenge of managing the sophisticated interactions between multiple processors. Herbert observes that although people are highly familiar with the technology supporting human-computer interaction, it is still not easy to use: the continued dominance of the mouse and keyboard, despite the availability of gesture, handwriting, and voice interfaces, is testament to this fact. Multi-core technology promises to make handwriting recognition and other alternative interfaces more accurate and intuitive through a combination of speculative execution and machine learning, Herbert says. "A multi-core computer can learn what I'm like--and what I like--and through speculative execution, start making educated guesses about how I want to travel and what I want to do next," he says. Herbert predicts that speculative execution will allow users to carry out Web searches by asking computers direct questions rather than entering keywords. "Our objective, as computers play a greater role in our lives, is to ensure that they are imbued with human concepts such as ownership, privacy, and personal freedom," he says. "What is important is that as humans, we are aware of technology's implications and given choices on how we interact with it."

Many business method patents could be challenged in court as a result of a ruling by the U.S. Court of Appeals for the Federal Circuit. In rejecting a patent on a method of hedging energy costs, the court said a business method patent must be tied to a machine or lead to a transformation, standards set by the U.S. Supreme Court. Patenting an abstract process, or mere thoughts, was the crux of the matter for the court. Amazon's one-click process for buying goods on the Internet is an example of a business method patent. Bernard Bilski and Rand Warsaw of WeatherWise are expected to appeal the case to the Supreme Court. "Some folks will look at the Bilski decision as a new weapon to attack business method patents," says patent attorney Erika Arner. Software makers and Internet companies have been following the case, and the majority opinion acknowledged the potential challenge that technology presents for the machine-or-transformation test. "Thus, we recognize that the Supreme Court may ultimately decide to alter or perhaps even set aside this test to accommodate emerging technologies," wrote Chief Judge Paul Michel.

Researchers Show Off Advanced Network Control Technology Network World (10/29/08) Greene, Tim

A new, experimental technology called OpenFlow enables researchers to adjust network infrastructure to increase bandwidth, optimize latency, and save power. OpenFlow is a proof-of-concept technology that could someday be used in business networks to engineer traffic, says Stanford University professor Nick McKeown. OpenFlow is part of the Clean Slate Initiative, which is investigating how the Internet could be re-engineered to make it more responsive to how it is actually used. Researchers created OpenFlow as a way to test new network protocols on existing networks without disrupting production applications. OpenFlow allows users to define flows and determine what paths those flows take through a network, without interrupting normal network traffic. Policies can be created to find paths with certain characteristics, such as greater bandwidth, lower latency, or fewer hops, which in turn use less power. The technology behind OpenFlow consists of flow tables installed on switches, a controller, and the OpenFlow protocol, which the controller uses to communicate securely with the switches. The OpenFlow Consortium has created an experimental network between California and Japan to demonstrate the technology at the Global Environment for Network Innovations Engineering Conference.
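The flow-table model described above can be sketched in a few lines. This is a toy illustration of the match/action idea, not the real OpenFlow protocol: the match field names, action strings, and in-memory table are simplified stand-ins for the protocol's actual wire format.

```python
# Toy sketch of the OpenFlow idea: a controller installs flow entries
# (match fields -> actions) on a switch; packets hit the first
# highest-priority matching entry, and misses go to the controller.
flow_table = []

def install_flow(match, actions, priority=0):
    """Controller-side helper: add a flow entry to the switch's table."""
    flow_table.append({"match": match, "actions": actions, "priority": priority})
    flow_table.sort(key=lambda e: -e["priority"])  # highest priority first

def handle_packet(packet):
    """Switch-side lookup: first matching entry wins."""
    for entry in flow_table:
        if all(packet.get(k) == v for k, v in entry["match"].items()):
            return entry["actions"]
    return ["send_to_controller"]  # table miss: punt to the controller

install_flow({"dst_ip": "10.0.0.2"}, ["output:port2"], priority=10)
print(handle_packet({"dst_ip": "10.0.0.2", "src_ip": "10.0.0.1"}))
print(handle_packet({"dst_ip": "10.0.0.99"}))
```

The key design point is the same as in the article: all policy lives in the controller, so researchers can route experimental flows over chosen paths while entries for production traffic continue to forward normally.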

Bandwidth efficiencies have improved enormously thanks to recent progress in satellite technology, and the newly developed digital video broadcast satellite second generation (DVB-S2) protocol is reportedly 30 percent more efficient than its predecessor. "Using satellite resource management tools, based on cross-layer techniques, the IMOSAN project is trying to push that technology even further, in order to make it more attractive not only from the technical aspects, but from the business point of view as well," says IMOSAN project coordinator Anastasios Kourtis. The Shannon Limit represents the theoretical ultimate capacity of a channel for a specific bandwidth and signal-to-noise ratio, and DVB-S2's Adaptive Coding and Modulation (ACM) technology permits a satellite system to adjust, in real time, to varying transmission conditions and service demands, bringing satellite channels very close to their Shannon Limit. "The IMOSAN consortium developed innovative software and hardware modules and protocols, called the Satellite Resource Management System (SRMS) that apply ACM to voice, data, and TV in a clever way, allowing the provision of cost-effective 'triple-play' satellite services to users in rural or isolated areas," Kourtis says. This breakthrough should help address the challenge of making broadband accessible anywhere and resolving the problem of the digital divide. In addition to SRMS, the IMOSAN team devised hardware and software that supports MPEG-2 HDTV, as well as software capable of using both the older Multiprotocol Encapsulation scheme and the newer Ultra Light Encapsulation scheme, both of which have been optimized for IPv4 and IPv6.
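The Shannon Limit mentioned above is directly computable: C = B log2(1 + SNR). A minimal sketch follows; the 36 MHz transponder bandwidth and 10 dB SNR are illustrative numbers, not figures from the IMOSAN project.

```python
import math

def shannon_capacity(bandwidth_hz, snr_db):
    """Shannon limit C = B * log2(1 + SNR), with SNR given in dB."""
    snr_linear = 10 ** (snr_db / 10)   # convert dB to a linear ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

# An illustrative 36 MHz satellite transponder at 10 dB SNR:
c = shannon_capacity(36e6, 10)
print(f"{c / 1e6:.1f} Mbit/s")  # ~124.5 Mbit/s
```

ACM's contribution is to track the channel's actual SNR (which rain fade and other conditions change in real time) and pick the densest modulation and coding that still fits under this limit, rather than provisioning for the worst case.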

Google has agreed to settle two copyright lawsuits over its efforts to digitize books with a $125 million payment, which will let the company make millions of out-of-print volumes available for reading and purchasing online. "I think that it is a stupendous victory for rights holders of the written word, because it has established that we should and must maintain control over the intellectual property that writers create and that we invest in," says Simon & Schuster CEO Carolyn Reidy. Payment from book sales, advertising revenue, and other fees will be channeled to publishers and authors through a new system administered by a digital book registry, with Google taking a percentage. Google has been collaborating with university and research libraries for four years to digitally scan their collections, and as many as 5 million of the roughly 7 million books Google has scanned so far are no longer in print. The content of those books is made available by Google's search service, but only excerpts of text are displayed unless Google has consent from the copyright holder to reveal more. The agreement allows Google to show as much as 20 percent of the text at no charge to users, while the entire volume will be available online for a fee. Some people are concerned that the agreement gives Google an excessive amount of control over books and other content that form the spine of the U.S. library system, and the Internet Archive's Rick Prelinger cautions that "when you start to see a single point of access developing for world culture, by default, it is disturbing." The settlement does not address the issue of whether Google's unsanctioned digitization of copyrighted books is permitted by copyright law.

Computerised Agents to Cope With Disasters University of Southampton (ECS) (10/30/08) Lewis, Joyce

The University of Southampton's Autonomous Learning Agents in Decentralized Data and Information Networks (ALADDIN) research project, led by professor Nick Jennings, has reached the halfway point of a five-year initiative to develop a decentralized information system that can use networks of computerized agents to operate effectively during a disaster situation. "We use computerized agents which can sense, act, and interact in order to achieve individual and collective aims," Jennings says. "Central to this endeavor is the effective coordination of the different actors and, to this end, we've developed a rich series of algorithms for inter-agent cooperation and negotiation." ALADDIN has brought together experts in complex adaptive systems from Southampton as well as the University of Bristol and Imperial College; in fusion, inference, and learning from the University of Oxford and Imperial College; and in decentralized architectures from BAE Systems. The ALADDIN team will seek to gain a collective view on behavior by bringing their expertise in information fusion, inference, decision-making, and machine learning to multi-agent systems, game theory, mechanism design, and mathematical modeling.

Mummy, That Robot Is Making Faces at Me New Scientist (10/29/08) Graham, Flora

University of Bristol robotics engineers have developed facial-expression software for Jules, a robotic head that can mimic the facial expressions and lip movements of a human being. Jules, created by roboticist David Hanson, features flexible rubber skin that is moved by 34 servo motors. The Bristol team developed software to transfer expressions recorded by a video camera into commands to make the servo motors produce similarly realistic facial movements. The researchers filmed an actor making a variety of expressions, and had an animator select 10 frames showing a different variation of each expression and manually set the servos in Jules' face to match the frames. The training was used to create software that can translate a face on video into an equivalent setting on Jules' face. The researchers say that Jules' human appearance makes perfectly matching the expressions extremely important to avoid the "uncanny valley," in which human-like robots or animations that are not quite true-to-life are perceived by people as being unnerving or alarming. "We are really attuned to how a face moves, and if it's slightly wrong, it gives us a feeling that the head is somehow creepy," says lead researcher Neill Campbell. Campbell says reaching the other side of the uncanny valley, or achieving such realism that people react to robots the same as they do to humans, would have significant benefits.
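The training step, pairing animator-set servo positions with expression frames, can be caricatured as a nearest-neighbour lookup. Everything below is invented for illustration: the two-frame "training set", the feature vectors, and the servo names are stand-ins, and the real Bristol system learns a much richer mapping across all 34 of Jules' servos.

```python
import math

# (expression feature vector from video, servo settings set by the
# animator) pairs -- a toy two-frame stand-in for the real training data.
trained = [
    ([0.9, 0.1], {"brow": 120, "mouth_corner": 200}),  # "smile" keyframe
    ([0.1, 0.8], {"brow": 40,  "mouth_corner": 90}),   # "frown" keyframe
]

def servo_settings(features):
    """Map a face feature vector to servo positions via the nearest
    hand-labelled training frame."""
    _, settings = min(trained, key=lambda pair: math.dist(pair[0], features))
    return settings

print(servo_settings([0.85, 0.2]))
```

A lookup this coarse would land squarely in the uncanny valley; the point of learning a continuous mapping, as the Bristol software does, is to reproduce the small in-between movements that make an expression read as natural.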

'Digital Dark Age' May Doom Some Data University of Illinois at Urbana-Champaign (10/27/08) Ciciora, Phil

A "digital dark age" may be an unintended consequence of our rapidly digitizing world, warns University of Illinois at Urbana-Champaign professor Jerome P. McDonough. McDonough says the issue of a potential digital dark age originates from the massive amount of data created by the rise of the information economy, which at last count contained 369 exabytes of data, including electronic records, tax files, email, music, and photos. The concern for archivists and information scientists is that, with ever-changing platforms and file formats, much of the data created today could be lost to inaccessibility. "If we can't keep today's information alive for future generations," McDonough says, "we will lose a lot of our culture." So far, electronic data has been far more volatile than books, journals, and other physical media, with electronic formats such as WordPerfect and the 8-inch floppy disk quickly becoming obsolete. To avoid a digital dark age, McDonough says we need to determine the best way of keeping valuable data alive and accessible by using a multi-pronged approach: migrating data to new formats, devising methods of reviving old software to work on existing platforms, using open source file formats and software, and creating media-independent data. A major part of preserving data will be moving away from proprietary software to open source software, McDonough says.
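The migration prong of that approach can be sketched as a registry of format converters that an archive walks until a document reaches a current, open format. The format names and converters below are illustrative inventions, not real archival tooling.

```python
# Hypothetical converter registry: source format -> (next format, converter).
converters = {
    "wordperfect": ("odt",   lambda data: data + " [converted wp->odt]"),
    "odt":         ("pdf_a", lambda data: data + " [converted odt->pdf/a]"),
}

def migrate(fmt, data, target="pdf_a"):
    """Chain converters until the document reaches the target format."""
    while fmt != target:
        if fmt not in converters:
            raise ValueError(f"no migration path from {fmt}")
        fmt, convert = converters[fmt]
        data = convert(data)
    return fmt, data

fmt, data = migrate("wordperfect", "1988 tax memo")
print(fmt, data)
```

The fragility McDonough warns about shows up as the `ValueError` branch: once no converter exists for an obsolete format, the chain breaks and the content is stranded, which is the argument for open formats in the first place.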

Researchers with the Mining in Semi-Structured Data (MISTA) project, backed by the Netherlands Organization for Scientific Research (NWO), are developing ways to quickly and effectively recognize the behavior of Web surfers. In analyzing large quantities of semi-structured data, NWO researcher Edgar de Graaf says some patterns revealed a quick succession of visits, while others occurred on a weekly basis. De Graaf says the timing issue should be studied further, but adds that different types of information could be presented in certain ways so that the user is able to find it at a single glance. Meanwhile, NWO researcher Jeroen De Knijf says allowing the user to specify in advance the minimum requirements of a pattern would make it easier for data-mining programs to recognize behavior faster. Compressing an entire collection of documents into a single summary document would reduce the number of results, adds De Knijf, who has shown that the essential information would still be part of the summary.
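De Knijf's idea of letting the user set a pattern's minimum requirements in advance corresponds to a minimum-support threshold in pattern mining. The toy sketch below reduces this to counting two-page visit sequences; the session data and bigram-only patterns are illustrative simplifications of the semi-structured data MISTA actually mines.

```python
from collections import Counter

def frequent_patterns(sessions, min_support):
    """Count length-2 page-visit sequences and keep only those meeting
    the user-specified minimum support threshold."""
    counts = Counter()
    for session in sessions:
        for a, b in zip(session, session[1:]):  # consecutive page pairs
            counts[(a, b)] += 1
    return {pat: n for pat, n in counts.items() if n >= min_support}

sessions = [
    ["home", "news", "sports"],
    ["home", "news", "weather"],
    ["home", "shop"],
]
print(frequent_patterns(sessions, min_support=2))
```

Raising `min_support` prunes rare patterns early, which is exactly why a user-supplied threshold speeds up recognition: the miner never has to enumerate the long tail of one-off behavior.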

Intel senior fellow Kevin Kahn does not expect to see spintronics used in computer processors for at least another 10 years. Kahn's comment came after a futurist keynote during the recent Intel Developer Forum in Taipei. Using the angular momentum of electrons, and the direction of that momentum, could boost the performance and capacity of chips. However, Kahn said shrinking chips and making them more efficient will occur in a more incremental and mundane manner because such a pace is better understood, quantifiable, and cost-effective. "We will push existing technology architectures as long and hard as we can because we know how to do them," he said, noting that spintronics is a step-change that would demand significant planning for use with existing products. "No matter how hard you push on the new ones, it'll be a while." Kahn discussed how to make computers smarter than humans during his talk, and also displayed a robot that could sense the location of objects via an electric field.

Victoria University professor Dale Carnegie has led the development of the Mobile Autonomous Robotic Vehicle for Indoor Navigation (Marvin), a robot with emotions. Marvin gets in a bad mood when encountering obstacles. "We've given Marvin the emotion of anger or frustration," Carnegie says. "If he finds that he's trapped and can't get out, he'll become more agitated and more frustrated in his movements." Marvin serves as a security guard in Victoria's engineering faculty building. The robot, which looks like a person and is about the size of an average man, navigates the building using a built-in map, and can ask questions and answer simple queries thanks to voice-recognition software. Marvin also has human-like mannerisms, including looking at people when talking to them, and leering over them aggressively when he believes they do not belong in the building, turning his green eyes to red and using a more demanding and strident voice. Carnegie also is working to teach Marvin new emotions, and helping him learn and adapt so he can act autonomously. Marvin is even able to find faster routes by investigating side paths. When making good progress, Marvin will become happy, and when obstructed Marvin will become sad, which encourages the robot to do something about his predicament, Carnegie says. He says robotic intelligence is currently at about the level of a lizard. However, he says that "if computers carry on with their current trend, we should be hitting human-like intelligence by about 2050."
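The frustration behavior Carnegie describes can be caricatured as a simple scalar affect variable that rises when the robot is blocked and decays when it makes progress. The thresholds, update rule, and mood labels below are invented for illustration and are not Marvin's actual emotion model.

```python
class EmotionState:
    """Toy sketch of Marvin-style affect: frustration accumulates while
    blocked and decays while progressing; the numbers are invented."""

    def __init__(self):
        self.frustration = 0.0

    def update(self, blocked):
        if blocked:
            self.frustration = min(1.0, self.frustration + 0.2)
        else:
            self.frustration = max(0.0, self.frustration - 0.1)

    def mood(self):
        return "agitated" if self.frustration > 0.5 else "content"

robot = EmotionState()
for _ in range(4):          # four consecutive blocked steps
    robot.update(blocked=True)
print(robot.mood())
```

Even this crude mechanism captures the design intent Carnegie mentions: the emotional state is not decoration but a signal that pushes the robot to change strategy, such as investigating side paths, when its current plan is failing.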

Researchers at Brown University have created a computer program that is capable of generating a realistic computerized image of a person's body, without having the person stand still with their clothes off. The three-dimensional body-shape model makes use of digital images or video of a person to determine the gender and calculate the waist size, chest size, height, weight, and other features. The program is based on 2,400 detailed laser range scans of men and women in minimal clothing, combining information from a person in multiple poses, and also integrating the detection of skin in the images. The model can determine what people look like underneath their clothes, and takes into consideration how the clothes fit on different parts of the body as the individual moves. "Each pose gives different constraints on the underlying body shape, so while a person's body pose may change, his or her true shape remains the same," says Brown professor Michael Black. "By analyzing the body in different poses, we can better guess that person's true shape." The program would be helpful for forensics, but also could find use in the fashion, film, gaming, and sports medicine industries. Black and graduate student Alexandru Balan note that the program does not use X-rays, does not see through clothing, and is not invasive.

Epic Games' best-selling Gears of War illustrates the video-game industry's trend toward immersive games that mirror real life: they require tactical thinking, subtle judgments based on scant data, constant awareness of multiple factors as they change over the course of the game, and the spatial sensitivity to control one's movement through a space in which the right direction is not always obvious. The environment of Gears of War stands out from other game environments in that it has a specific mood, while the third-person viewpoint allows the player to assume the dual role of both participant and observer. The central avatar of Gears of War also is more realistic in his behavior, displaying caution and even fear, than most game characters, which are often ciphers without personalities. The game's mechanics are engineered to present a world that has the illusion of internal consistency while also supporting a compelling experience for gamers. For example, the game is designed to punish players who do not seek cover. Games with multiplayer options are very popular, given that single-player games can produce a feeling of isolation.