Friday, April 16, 2010

A combination of simple bio-acoustic sensors and some sophisticated machine learning makes it possible for people to use their fingers or forearms — potentially, any part of their bodies — as touchpads to control smart phones or other mobile devices.

The technology, called Skinput, was developed by Chris Harrison, a third-year Ph.D. student in Carnegie Mellon University's Human-Computer Interaction Institute (HCII), along with Desney Tan and Dan Morris of Microsoft Research.

Skinput could help people take better advantage of the tremendous computing power now available in compact devices that can be easily worn or carried. The diminutive size that makes smart phones, MP3 players and other devices so portable, however, also severely limits the size and utility of the keypads, touchscreens and jog wheels typically used to control them.

"With Skinput, we can use our own skin — the body's largest organ — as an input device," Harrison said "It's kind of crazy to think we could summon interfaces onto our bodies, but it turns out to make a lot of sense. Our skin is always with us, and makes the ultimate interactive touch surface"

In a prototype developed while Harrison was an intern at Microsoft Research last summer, acoustic sensors are attached to the upper arm. These sensors capture sound generated by such actions as flicking or tapping fingers together, or tapping the forearm. This sound is not transmitted through the air, but by transverse waves through the skin and by longitudinal, or compressive, waves through the bones.

Harrison and his colleagues found that the tap of each fingertip, a tap to one of five locations on the arm, or a tap to one of 10 locations on the forearm produces a unique acoustic signature that machine learning programs could learn to identify. These computer programs, which improve with experience, were able to determine the signature of each type of tap by analyzing 186 different features of the acoustic signals, including frequencies and amplitude.
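
The exact recognition pipeline is not reproduced here, but a minimal sketch of the general approach (train a classifier on a feature vector extracted from each tap, then classify new taps the same way) might look like the following Python. The feature extraction, classifier choice and labels are illustrative stand-ins, not the authors' 186-feature system.

```python
# Illustrative sketch only: the features, classifier and labels below are
# stand-ins, not the Skinput team's actual 186-feature pipeline.
import numpy as np
from sklearn.svm import SVC

def extract_features(window, n_bands=32):
    """Reduce one sensor window to a fixed-length feature vector:
    band-averaged spectral magnitudes plus simple amplitude statistics."""
    spectrum = np.abs(np.fft.rfft(window))
    band_energy = np.array([b.mean() for b in np.array_split(spectrum, n_bands)])
    stats = np.array([window.max(), window.min(), window.std()])
    return np.concatenate([band_energy, stats])

# X holds one feature vector per recorded tap; y holds tap-location labels
# (e.g., 0-9). Real training data would come from labeled user taps;
# random windows stand in for them here.
rng = np.random.default_rng(0)
X = np.stack([extract_features(rng.standard_normal(256)) for _ in range(200)])
y = rng.integers(0, 10, size=200)

clf = SVC(kernel="rbf").fit(X, y)   # learn the acoustic signatures
location = clf.predict(X[:1])       # classify a new tap the same way
```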

In a trial involving 20 subjects, the system was able to classify the inputs with 88 percent accuracy overall. Accuracy depended in part on the proximity of the sensors to the input: forearm taps could be identified with 96 percent accuracy when the sensors were attached below the elbow, and with 88 percent accuracy when they were above the elbow. Finger flicks could be identified with 97 percent accuracy.

"There's nothing super sophisticated about the sensor itself," Harrison said, "but it does require some unusual processing. It's sort of like the computer mouse — the device mechanics themselves aren't revolutionary, but are used in a revolutionary way." The sensor is an array of highly tuned vibration sensors — cantilevered piezo films.

The prototype armband includes both the sensor array and a small projector that can superimpose colored buttons onto the wearer's forearm, which can be used to navigate through menus of commands. Additionally, a keypad can be projected on the palm of the hand. Simple devices, such as MP3 players, might be controlled simply by tapping fingertips, without need of superimposed buttons; in fact, Skinput can take advantage of proprioception — a person's sense of body configuration — for eyes-free interaction.

Though the prototype is of substantial size and designed to fit the upper arm, the sensor array could easily be miniaturized so that it could be worn much like a wristwatch, Harrison said.

Testing indicates that the accuracy of Skinput is reduced in heavier, fleshier people, and that age and sex might also affect accuracy. Running or jogging can also generate noise and degrade the signals, the researchers report, but the amount of testing was limited and accuracy likely would improve as the machine learning programs receive more training under such conditions.

Harrison, who delights in "blurring the lines between technology and magic," is a prodigious inventor. Last year, he launched a company, Invynt LLC, to market a technology he calls "Lean and Zoom," which automatically magnifies the image on a computer monitor as the user leans toward the screen. He also has developed a technique to create a pseudo-3D experience for video conferencing using a single webcam at each conference site. Another project explored how touchscreens can be enhanced with tactile buttons that can change shape as virtual interfaces on the touchscreen change.

Skinput is an extension of an earlier invention by Harrison called Scratch Input, which used acoustic microphones to enable users to control cell phones and other devices by tapping or scratching on tables, walls or other surfaces.

"Chris is a rising star," said Scott Hudson, HCII professor and Harrison's faculty adviser. "Even though he's a comparatively new Ph.D. student, the very innovative nature of his work has garnered a lot of attention both in the HCI research community and beyond."

An analysis of dietary data from more than 400,000 men and women found only a weak association between high fruit and vegetable intake and reduced overall cancer risk, according to a study published online April 6, 2010, in the Journal of the National Cancer Institute.

It is widely believed that a diet rich in fruits and vegetables can reduce the risk of cancer. In 1990, the World Health Organization recommended eating five servings of fruits and vegetables a day to prevent cancer and other diseases. But many studies since then have failed to confirm a definitive association between fruit and vegetable intake and cancer risk.

To address the issue, Paolo Boffetta, M.D., M.P.H., of the Mount Sinai School of Medicine in New York, and colleagues analyzed data from the EPIC study (European Prospective Investigation into Cancer and Nutrition), which included 142,605 men and 335,873 women recruited between 1992 and 2000. The participants came from 23 centers in ten Western European countries: Denmark, France, Germany, Greece, Italy, the Netherlands, Norway, Spain, Sweden and the United Kingdom. Detailed information on their dietary habits and lifestyle variables was obtained. After a median follow-up of 8.7 years, more than 30,000 participants had been diagnosed with cancer.

The authors found only a small inverse association between high intake of fruits and vegetables and overall cancer risk. Vegetable consumption on its own also afforded a modest benefit, but one restricted to women. Heavy drinkers who ate many fruits and vegetables had a somewhat reduced risk, but only for cancers caused by smoking and alcohol.

The authors caution against attributing the risk reduction to diet alone, and they conclude that any cancer-protective effect of these foods is likely to be modest, at best.

"In this population, a higher intake of fruits and vegetables was also associated with other lifestyle variables, such as lower intake of alcohol, never-smoking, short duration of tobacco smoking, and higher level of physical activity, which may have contributed to a lower cancer risk," they write.

In an accompanying editorial, Walter C. Willett, M.D., Dr.P.H., of the Harvard School of Public Health, notes that "this study strongly confirms" the findings of other prospective studies that high intake of fruits and vegetables has little or no effect in reducing the incidence of cancer, although it has been shown to affect the risk of cardiovascular disease. He suggests that future research investigate the potential cancer-reducing benefits of specific fruits and vegetables and also study the effects of fruit and vegetable consumption at earlier periods of life.

Researchers at North Carolina State University have developed a new approach to software development that will allow common computer programs to run up to 20 percent faster and possibly incorporate new security measures.

The researchers have found a way to run different parts of some programs – including, for the first time, such widely used programs as word processors and Web browsers – at the same time, which makes the programs operate more efficiently.

To understand how they did it, you have to know a little bit about computers. The workhorses of a computer chip are its processing units, or “cores,” and computing technology has advanced to the point where it is now common to have between four and eight cores on each chip. But for a program to utilize these cores, it has to be broken down into separate “threads,” so that each core can execute a different part of the program simultaneously. The process of breaking a program into threads is called parallelization, and it is what allows multi-core computers to run programs very quickly.
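
As a rough illustration of the idea (not the NC State technique itself), the sketch below splits a made-up, easily divisible workload into one chunk per core. It uses processes rather than threads because standard Python threads cannot execute code on multiple cores at once; the principle of dividing work is the same.

```python
# A rough illustration of parallelization, not the NC State technique:
# a hypothetical, easily divisible workload is split into one chunk per core.
from concurrent.futures import ProcessPoolExecutor

def process_chunk(chunk):
    # Stand-in for one independent piece of a larger computation.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = range(1_000_000)
    chunks = [list(data[i::4]) for i in range(4)]   # four "threads" of work
    with ProcessPoolExecutor(max_workers=4) as pool:
        partial = pool.map(process_chunk, chunks)   # chunks run in parallel
    print(sum(partial))                             # combine the results
```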

However, some programs are difficult to parallelize, including word processors and Web browsers. These programs operate much like a flow chart – with certain program elements dependent on the outcome of others. These programs can only utilize one core at a time, minimizing the benefit of multi-core chips.

But NC State researchers have developed a technique that allows hard-to-parallelize applications to run in parallel, by using nontraditional approaches to break programs into threads.

Every computer program consists of multiple steps. The program performs a computation, then performs a memory-management function – which prepares memory storage to contain data or frees up memory storage that is no longer needed. It repeats these steps over and over again, in a cycle. And, for difficult-to-parallelize programs, both of these steps have traditionally been performed on a single core.

“We’ve removed the memory-management step from the process, running it as a separate thread,” says Dr. Yan Solihin, an associate professor of electrical and computer engineering at NC State, director of this research project, and co-author of a paper describing the research. Under this approach, the computation thread and memory-management thread are executing simultaneously, allowing the computer program to operate more efficiently.

“By running the memory-management functions on a separate thread, these hard-to-parallelize programs can operate approximately 20 percent faster,” Solihin says. “This also opens the door to development of new memory-management functions that could identify anomalies in program behavior, or perform additional security checks. Previously, these functions would have been unduly time-consuming, slowing down the speed of the overall program.”

Using the new technique, when a memory-management function needs to be performed, “the computational thread notifies the memory-management thread – effectively telling it to allocate data storage and to notify the computational thread of where the storage space is located,” says Devesh Tiwari, a Ph.D. student at NC State and lead author of the paper. “By the same token, when the computational thread no longer needs certain data, it informs the memory-management thread that the relevant storage space can be freed.”
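
As a loose model of this division of labor, the sketch below runs a memory-management loop on its own thread and has the computation thread send it allocation and deallocation requests over a queue. The message format and buffer pool are invented for illustration and are not the researchers' implementation.

```python
# A loose model of the split described above; the queue-based protocol and
# buffer pool are invented for illustration, not the researchers' design.
import queue
import threading

requests = queue.Queue()   # computation -> memory manager
grants = queue.Queue()     # memory manager -> computation

def memory_manager():
    pool, next_id = {}, 0
    while True:
        op, arg = requests.get()
        if op == "alloc":
            pool[next_id] = bytearray(arg)   # prepare storage for data
            grants.put(next_id)              # report where the storage is
            next_id += 1
        elif op == "free":
            del pool[arg]                    # reclaim storage no longer needed
        else:                                # "stop"
            return

threading.Thread(target=memory_manager, daemon=True).start()

# The computation thread asks for a buffer, computes while the manager
# handles the bookkeeping, then hands the buffer back.
requests.put(("alloc", 1024))
handle = grants.get()                        # handle to the new storage
# ... computation uses the buffer here ...
requests.put(("free", handle))
requests.put(("stop", None))
```

The point of the arrangement is that the two loops overlap in time: while the manager prepares or reclaims storage, the computation thread is free to keep working.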

A new technique for revealing images of hidden objects may one day allow pilots to peer through fog and doctors to see more precisely into the human body without surgery.

Developed by Princeton engineers, the method relies on the surprising ability to clarify an image using rays of light that would typically make the image unrecognizable, such as those scattered by clouds, human tissue or murky water.

In their experiments, the researchers restored an obscured image into a clear pattern of numbers and lines. The process was akin to improving poor TV reception using the distorted, or “noisy,” part of the broadcast signal.

“Normally, noise is considered a bad thing,” said Jason Fleischer, an assistant professor of electrical engineering at Princeton. “But sometimes noise and signal can interact, and the energy from the noise can be used to amplify the signal. For weak signals, such as distant or dark images, actually adding noise can improve their quality.”

He said the ability to boost signals this way could potentially improve a broad range of signal technologies, including the sonograms doctors use to visualize fetuses and the radar systems pilots use to navigate through storms and turbulence. The method also could potentially be applied in technologies such as night vision goggles, the inspection of underwater structures such as levees and bridge supports, and steganography, the practice of masking signals for security purposes.

The findings were reported online March 14 in Nature Photonics.

In their experiments, Fleischer and co-author Dmitry Dylov, an electrical engineering graduate student, passed a laser beam through a small piece of glass engraved with numbers and lines, similar to the charts used during eye exams. The beam carried the image of the numbers and lines to a receiver connected to a video monitor, which displayed the pattern.

The researchers then placed a translucent piece of plastic similar to cellophane tape between the glass plate and the receiver. The tape-like material scattered the laser light before it arrived at the receiver, making the visual signal so noisy that the number and line pattern became indecipherable on the monitor, similar to the way smoke or fog might obstruct a person’s view.

The crucial portion of the experiment came when Fleischer and Dylov placed another object in the path of the laser beam. Just in front of the receiver, they mounted a crystal of strontium barium niobate (SBN), a material that belongs to a class of substances known as “nonlinear” for their ability to alter the behavior of light in strange ways. In this case, the nonlinear crystal mixed different parts of the picture, allowing signal and noise to interact.

By adjusting an electrical voltage across the piece of SBN, the researchers were able to tune in a clear image on the monitor. The SBN gathered the rays that had been scattered by the translucent plastic and used that energy to clarify the weak image of the lines and numbers.

“We used noise to feed signals,” Dylov said. “It’s as if you took a picture of a person in the dark, and we made the person brighter and the background darker so you could see them. The contrast makes the person stand out.”

The technique, known as “stochastic resonance,” only works for the right amount of noise, as too much can overwhelm the signal. It has been observed in a variety of fields, ranging from neuroscience to energy harvesting, but never has been used this way for imaging.
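
A toy numerical example, separate from the optical experiment, makes the "right amount of noise" point concrete: a signal too weak to trip a threshold detector becomes partially recoverable at a moderate noise level and is drowned out again when the noise dominates. All values here are invented for illustration.

```python
# Toy demonstration of stochastic resonance, separate from the optical
# experiment: all signal and noise values are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 5000)
signal = 0.4 * np.sin(2 * np.pi * t)   # too weak to cross the threshold alone
threshold = 1.0

for noise_level in (0.0, 0.5, 5.0):    # none, moderate, overwhelming
    noisy = signal + noise_level * rng.standard_normal(t.size)
    detected = (noisy > threshold).astype(float)   # crude threshold detector
    # Correlation between what the detector reports and the hidden signal;
    # defined as zero when the detector never fires at all.
    corr = np.corrcoef(detected, signal)[0, 1] if detected.std() > 0 else 0.0
    print(f"noise={noise_level:3.1f}  recovered-signal correlation={corr:.2f}")
```

Run as written, the detector reports nothing without noise, correlates most strongly with the hidden signal at the moderate noise level, and loses it again when the noise dominates.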

Based on the results of their experiment, Fleischer and Dylov developed a new theory for how noisy signals move through nonlinear materials, which combines ideas from the fields of statistical physics, information theory and optics.

Their theory provides a general foundation for nonlinear communication that can be applied to a wide range of technologies. The researchers plan to incorporate other signal processing techniques to further improve the clarity of the images they generate and to apply the concepts they developed to biomedical imaging devices, including those that use sound and ultrasound instead of light.

The research was funded by the National Science Foundation, the U.S. Department of Energy and the U.S. Air Force.

A multidisciplinary research team at the National Institute of Standards and Technology (NIST) has found that an organic semiconductor may be a viable candidate for creating large-area electronics, such as solar cells and displays that can be sprayed onto a surface as easily as paint.

While the electronics will not be ready for market anytime soon, the research team says the material they studied could overcome one of the main cost hurdles blocking the large-scale manufacture of organic thin-film transistors, the development of which also could lead to a host of devices inexpensive enough to be disposable.

Silicon is the iconic material of the electronics industry, the basis for most microprocessors and memory chips. It has proved highly successful because billions of computing elements can be crammed into a tiny area and because the manufacturing process behind these high-performance chips is well established.

But the electronics industry for a long time has been pursuing novel organic materials to create semiconductor products—materials that perhaps could not be packed as densely as state-of-the-art silicon chips, but that would require less power, cost less and do things silicon devices cannot: bend and fold, for example. Proponents predict that organic semiconductors, once perfected, might permit the construction of low-cost solar cells and video displays that could be sprayed onto a surface just as paint is. "At this stage, there is no established best material or manufacturing process for creating low-cost, large-area electronics," says Calvin Chan, an electrical engineer at NIST. "What our team has done is to translate a classic material deposition method, spray painting, to a way of manufacturing cheap electronic devices."

The team's work showed that a commonly used organic semiconductor, poly(3-hexylthiophene), or P3HT, works well as a spray-on transistor material because, like beauty, transistor action is only skin deep. When the material is sprayed onto a flat surface, inhomogeneities leave the P3HT film with a rough and uneven top surface, which causes problems in other applications. But because the transistor effect occurs along the film's lower surface, where it contacts the substrate, the sprayed-on film functions quite well.

Chan says the simplicity of spray-on electronics gives it a potential cost advantage over other manufacturing processes for organic electronics. Other candidate processes, he says, require costly equipment to function or are simply not suitable for use in high-volume manufacturing.

The hunter-gatherers who inhabited the southern coast of Scandinavia 4,000 years ago were lactose intolerant. This has been shown by a new study carried out by researchers at Uppsala University and Stockholm University. The study, which has been published in the journal BMC Evolutionary Biology, supports the researchers' earlier conclusion that today's Scandinavians are not descended from the Stone Age people in question but from a group that arrived later.

"This group of hunter-gatherers differed significantly from modern Swedes in terms of the DNA sequence that we generally associate with a capacity to digest lactose into adulthood," says Anna Linderholm, formerly of the Archaeological Research Laboratory, Stockholm University, presently at University College Cork, Ireland.

According to the researchers, two possible explanations exist for the DNA differences.

"One possibility is that these differences are evidence of a powerful selection process, through which the Stone Age hunter-gatherers' genes were lost due to some significant advantage associated with the capacity to digest milk," says Anna Linderholm. "The other possibility is that we simply are not descended from this group of Stone Age people."

The capacity to consume unprocessed milk into adulthood is regarded as having been of great significance for human prehistory.

"This capacity is closely associated with the transition from hunter-gatherer to agricultural societies," says Anders Götherström of the Department of Evolutionary Biology at Uppsala University.

He serves as coordinator of LeCHE (Lactase persistence and the early Cultural History of Europe), an EU-funded research project focusing on the significance of milk for European prehistory.

"In the present case, we are inclined to believe that the findings are indicative of what we call "gene flow," in other words, migration to the region at some later time of some new group of people, with whom we are genetically similar," he says. "This accords with the results of previous studies."

The researchers' current work involves investigating the genetic makeup of the earliest agriculturalists in Scandinavia, with an eye to answering further questions about the ancestry of modern Scandinavians.