U.S. lawmakers are working toward bipartisan legislation that would offer expedited visas to foreign graduates with advanced technical degrees, amid complaints from companies that the United States is training highly skilled workers only to lose them to other nations. Rep. Lamar Smith (R-Texas) plans to introduce a proposal supplying up to 10,000 visas annually to foreign students graduating from U.S. universities with doctorates in engineering, information technology, and the natural sciences. The White House said in May that the country should be "stapling a green card" to the diplomas of all science, technology, engineering, and math Ph.D. graduates, and to select master's graduates, so that they can "contribute to the American economy and become Americans over time." That proposal was part of a broader immigration agenda that was derailed by strong Republican opposition, and the high-tech community subsequently began lobbying for standalone legislation. Smith's bill would divert visas from initiatives such as the diversity lottery, which allows people from underrepresented countries to vie for 55,000 annual immigrant visas. To be eligible for the new visas, students would need job offers and would have to make a minimum five-year commitment to staying in the United States.

Researchers at Carnegie Mellon University and Disney have developed SideBySide, a mobile phone system that enables animated images from two separate handheld projectors to interact with each other on the same surface. The researchers say the system is suitable for games, education, and other applications, and could lead to a more participatory and intimate style of interaction than is possible with computers or overhead projectors. "Now that handheld projectors have become a reality, we finally have a technology that allows us to create a new way for people to interact in the real world," says Carnegie Mellon Ph.D. candidate Karl D.D. Willis. The handheld projectors emit both visible and infrared light, and contain a camera that can monitor the projected images, a ranging sensor, and an inertial measurement unit. The infrared light helps the system to recognize when the images are moving or overlapping, and to communicate information between the devices.
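The overlap detection that the infrared tracking enables can be illustrated with a toy sketch. The rectangle representation and coordinates below are assumptions for illustration only, not details of the SideBySide system, which recovers each projection's position from infrared fiducials.

```python
# Hypothetical sketch: deciding whether two projected images overlap,
# assuming each device has recovered the other's projection rectangle
# in shared surface coordinates. Rects are (x, y, width, height).

def rects_overlap(a, b):
    """Axis-aligned overlap test for two projection rectangles."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

# Two projections side by side, then one slid over the other.
print(rects_overlap((0, 0, 100, 80), (120, 0, 100, 80)))  # False
print(rects_overlap((0, 0, 100, 80), (60, 10, 100, 80)))  # True
```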

A recent U.S. National Science Foundation (NSF) study found that minority doctorate holders are still poorly represented among faculty at U.S. institutions, even as the number of minority students has climbed over the last 20 years. "Both minority doctorate numbers and minority faculty numbers remain low, especially in the leading research institutions," according to the NSF report. "Data on [science, engineering, and health (SEH)] doctorate recipients show that Blacks, Hispanics, and American Indians/Alaska Natives, as a group, earned about 3,300 SEH doctorates from U.S. universities in 2008, 9 percent of all SEH doctorates." The study also found that minority faculty members are less likely to receive full professorships, less likely to win tenure, and less likely to work at universities with very high research activity than non-minority faculty. In particular, smaller percentages of Black, Hispanic, and Asian faculty with SEH doctorates are full professors, with larger percentages serving as assistant professors. The report highlights the need for minority candidates to navigate the system more wisely in order to reach higher ranks.

Massachusetts Institute of Technology (MIT) professor Patrick Winston and his students are using engineering methodology and computational methods to develop systems that think and comprehend as humans do. The researchers focus on creating systems that draw on previously acquired common sense knowledge and knowledge of plot patterns when presented with story-understanding problems. "My approach to the scientific question of what we humans do differently is to develop computational solutions to behavioral-based problems," Winston says. The current Genesis model relies on the START natural language processing system to comprehend language and on a vision system developed by MIT researcher Sajit Rao to understand visual information. To evaluate a story, the system is first provided with common sense knowledge in English, then inferred knowledge, and finally higher-level knowledge, referred to as the plot pattern level. "It's a kind of amplification of human intelligence by things that won't be as smart as people for a very, very, long time, but which, nevertheless, can sometimes see things through the fog of conflict and urgency that would not otherwise be obvious," Winston says.
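The layering described above (common-sense rules feeding inferred knowledge, with plot patterns matched on top) can be sketched minimally. The rules, predicates, and the `infer` and `detect_revenge` helpers below are invented for illustration and are not taken from the actual Genesis system.

```python
# Hypothetical sketch of layered story inference: common-sense rules
# add inferred events, then a plot-pattern check runs over the result.

RULES = {
    "harms": [("angry_at", 1, 0)],       # if X harms Y, then Y is angry at X
    "angry_at": [("wants_harm", 0, 1)],  # if Y is angry at X, Y wants to harm X
}

def infer(story):
    """Forward-chain the rules until no new events appear."""
    events = set(story)
    frontier = list(events)
    while frontier:
        pred, a, b = frontier.pop()
        for new_pred, i, j in RULES.get(pred, []):
            args = (a, b)
            new = (new_pred, args[i], args[j])
            if new not in events:
                events.add(new)
                frontier.append(new)
    return events

def detect_revenge(events):
    """Plot-pattern level: Y harms X after X harmed Y."""
    harms = [(a, b) for pred, a, b in events if pred == "harms"]
    return any((b, a) in harms for (a, b) in harms)

facts = infer([("harms", "Macbeth", "Duncan")])
print(("angry_at", "Duncan", "Macbeth") in facts)    # True (inferred)
print(detect_revenge(infer([("harms", "A", "B"), ("harms", "B", "A")])))  # True
```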

The Eyes Have It: Computer-Inspired Creativity University of Leeds (10/19/11) Paula Gould

Researchers at the University of Leeds and the Open University have developed the Designing with Vision project, eye-tracking technology that, combined with computer-aided design (CAD) tools, helps designers exploit intuitive elements of the design process that CAD normally suppresses. "The eye-tracking system identifies which part of the design sketch the user is drawn to, making the human-machine interface far more fluid," says Open University professor Steve Garner. Designers usually work with shapes to focus on certain areas in initial sketches, and then use these starting points to develop the rest of the design, but this element of subconscious selection is difficult to replicate with CAD. The researchers address the problem with eye tracking, which gives CAD a more cohesive human-machine interface. "We envisage a future for design that combines creativity and digital technologies, and in this scenario, is able to support designers working with shapes early in the design processes, before the shape has been fixed," says Leeds professor Alison McKay.
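Identifying which part of a sketch the user is drawn to typically means finding gaze fixations, runs of samples that stay within a small region. A standard way to do this is dispersion-threshold identification; the sketch below uses that general technique with invented thresholds, and is not code from the Designing with Vision project.

```python
# Dispersion-threshold fixation detection over (x, y) gaze samples.
# Thresholds (pixels, sample counts) are illustrative assumptions.

def dispersion(window):
    """Spread of a run of gaze points: x-range plus y-range."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def find_fixations(samples, max_dispersion=20, min_length=5):
    """Return (start, end) index pairs of low-dispersion gaze runs."""
    fixations = []
    i = 0
    while i + min_length <= len(samples):
        j = i + min_length
        if dispersion(samples[i:j]) <= max_dispersion:
            # Grow the window while the points stay tightly clustered.
            while j < len(samples) and dispersion(samples[i:j + 1]) <= max_dispersion:
                j += 1
            fixations.append((i, j))
            i = j
        else:
            i += 1
    return fixations

# Five clustered samples (a dwell on one sketch region), then a jump away.
gaze = [(100, 100), (102, 101), (101, 99), (103, 100), (100, 102), (300, 300)]
print(find_fixations(gaze))  # [(0, 5)]
```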

Microsoft Research Cambridge has developed Holodesk, a prototype virtual display that enables users to interact with virtual objects using their hands. Holodesk creates the realistic illusion of directly touching and maneuvering three-dimensional (3D) graphics by using an optical see-through display and a Kinect camera. The display renders a virtual image of a 3D scene through a half-silvered mirror and spatially aligns it with the real world for the viewer. In a video demonstration of the technology, a user looks down through a pane of glass at virtual, realistic-looking balls and other objects, then positions his hands underneath the glass and moves them as if manipulating real objects, scooping a virtual ball into a cup. The project stands out from other 3D experiments because Holodesk uses optical devices called beam splitters and a graphics processing algorithm to provide a lifelike experience. The researchers say Holodesk could be used in gaming as well as in design and research.

Microsoft's Roslyn: Reinventing the Compiler as We Know It InfoWorld (10/20/11) Neil McAllister

Microsoft recently launched Project Roslyn, a compiler-as-a-service technology that aims to bring powerful new features to C#, Visual Basic, and Visual Studio. Roslyn is a complete reengineering of Microsoft's .Net compiler toolchain that exposes each phase of code compilation as a service that can be consumed by other applications, allowing the entire compile-execute process to be invoked from within .Net applications. If the code is put into a loop that accepts input from the user, Roslyn can provide a fully interactive read-eval-print loop console for C#, enabling users to manipulate and experiment with .Net application programming interfaces (APIs) and other objects in real time. The Roslyn APIs also expose syntax and binding data, allowing developers to write their own code-refactoring algorithms in addition to the ones that ship with Visual Studio. Microsoft has made the technology available as a Community Technology Preview, but it has not committed to making it a Visual Studio product. "Roslyn represents not merely a new iteration of the Visual Studio toolchain but a whole new way for developers to interact with their tools," writes InfoWorld's Neil McAllister.
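The interactive console the article describes follows the classic read-eval-print pattern: read an input, evaluate it against persistent state, and report the result. Since the article contains no code, here is a generic toy sketch of that pattern in Python (not Roslyn's C# API); the `repl` helper and its behavior are assumptions for illustration.

```python
# Toy read-eval-print loop: each line is evaluated in a persistent
# namespace, so later inputs can use bindings created by earlier ones,
# much as Roslyn's C# console keeps state between entries.

def repl(lines):
    """Evaluate each input line in a shared namespace; return the results."""
    namespace = {}
    outputs = []
    for line in lines:
        try:
            result = eval(line, namespace)   # expressions yield a value
        except SyntaxError:
            exec(line, namespace)            # statements (e.g. assignments)
            result = None
        outputs.append(result)               # the "print" step, collected here
    return outputs

print(repl(["x = 6 * 7", "x + 1"]))  # [None, 43]
```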

University of Arizona researchers Paul Cohen and Ian Fasel recently received a $3 million U.S. Defense Advanced Research Projects Agency (DARPA) grant for their Robotoddler: Grounded Language Learning for Shared Human-robot Tasks project, which is aimed at developing a robot that can learn simple language and be instructed to perform tasks in that language. Fasel's team used a seedling grant from DARPA last year to build a self-learning robot that "pokes at the world and acts much like a scientist doing little experiments," Fasel says. The researchers say they took a machine-learning approach, giving the robot some general strategies for learning. The robot then must determine what it should do to best learn about the objects around it. The robot can "formulate beliefs about what kind of object it is" while learning concepts and how to conduct experiments at the same time, Fasel says. The next step is to expand what the robot knows about objects and the different ways of describing those items. "Then it can figure out the sequence of actions to solve the task using the objects in the room," according to Fasel.
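The "little experiments" idea, where the robot decides what to probe next in order to learn the most, is the core of active learning. The sketch below illustrates one common strategy, probing whatever the robot is most uncertain about; the objects, beliefs, and helper names are invented for illustration and are not from the Robotoddler project.

```python
# Hypothetical sketch: pick the object whose current belief is most
# uncertain (highest entropy), i.e. the most informative experiment.

import math

def entropy(p):
    """Uncertainty of a binary belief, e.g. P(object is soft) = p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def pick_experiment(beliefs):
    """Choose the object with the most uncertain belief to poke next."""
    return max(beliefs, key=lambda obj: entropy(beliefs[obj]))

# Invented beliefs about three objects in the room.
beliefs = {"ball": 0.5, "cube": 0.9, "cup": 0.7}
print(pick_experiment(beliefs))  # ball (0.5 is maximally uncertain)
```

After each probe, the robot would update the chosen belief from the observed outcome and repeat, so its experiments keep targeting whatever it understands least.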

Gild has issued a plea to improve the way math and computer programming are taught in U.S. schools after its new study found that Chinese developers outscored U.S. developers on math and logic by 20 percent. Nearly 500,000 developers from 150 countries use the social networking and skills-sharing site. Gild conducted the international programming study to assess the programming capabilities and skills of its users. U.S. developers fared much better in core programming languages, scoring 22 percent higher than their Chinese counterparts in C programming, 26 percent higher on C#, and 19 percent higher on C++. U.S. developers also scored 24 percent higher on Java and 24 percent higher in Oracle database programming. China and other developing nations continue to focus on core skills such as math, notes Gild CEO Sheeroy Desai. "Software development remains a bright spot for the U.S., with American programmers the best in the world, but is it sustainable?" Desai asks. "While nothing can replace creativity and ingenuity, the United States cannot afford to ignore the fundamentals."

The genetic sequence-alignment game Phylo, developed by researchers in McGill University's computer science department, has more than 16,000 registered users. Launched last November, Phylo makes the grunt work of geneticists fun, and the players ultimately contribute data the researchers can use to make scientific discoveries. Computers do not always find the optimal alignment of genetic sequences, and the process is also time-consuming and expensive. Humans can solve the visual puzzles more efficiently, but examining raw genetic data can be confusing even for trained researchers. Phylo has players align colored puzzle pieces vertically on a screen, with the pieces corresponding to regions of genomes. No scientific knowledge is required to play Phylo, unlike Foldit, the protein-folding game developed by researchers at the University of Washington. Phylo's users have worked to solve more than 300,000 puzzles. The McGill team says a Phylo app for cell phones and tablets will be available shortly, and the researchers want to make greater use of Facebook to find more players and to keep them playing for longer periods of time. "The success of the project is only valid if we build a large and strong community where everybody participates," says McGill professor Jerome Waldispuhl.
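The optimization Phylo players approximate visually is classical sequence alignment. For two sequences, the standard exact method is Needleman-Wunsch dynamic programming, sketched below with illustrative scores (+1 match, -1 mismatch, -1 gap); this is the textbook algorithm, not Phylo's own scoring scheme.

```python
# Needleman-Wunsch global alignment: fill a dynamic-programming table
# where each cell holds the best score for aligning prefixes of a and b.

def align_score(a, b, match=1, mismatch=-1, gap=-1):
    """Return the optimal global alignment score of sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    score = [[0] * cols for _ in range(rows)]
    for i in range(rows):          # aligning a prefix of a against nothing
        score[i][0] = i * gap
    for j in range(cols):          # aligning a prefix of b against nothing
        score[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag,                 # match/mismatch
                              score[i - 1][j] + gap,  # gap in b
                              score[i][j - 1] + gap)  # gap in a
    return score[-1][-1]

print(align_score("GATTACA", "GCATGCU"))  # 0 (the classic textbook pair)
```

The table has (len(a)+1) x (len(b)+1) cells, so cost grows quadratically per pair, and aligning many sequences at once is far harder still, which is why farming out small alignment puzzles to human players is attractive.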

Mind Reading Computer System May Help People With Locked-in Syndrome National Science Foundation (10/17/11) Miles O'Brien

The U.S. National Science Foundation's Center of Excellence for Learning in Education, Science and Technology is conducting research, led by Boston University's Frank Guenther, into how brain regions interact, with the goal of creating brain-computer interfaces that help people with locked-in syndrome. "People who have no other means of communication can start to control a computer that can produce words for them or they can manipulate what happens in a robot and allow them to interact with the world," Guenther says. In one experiment, a volunteer uses a speech synthesizer to make vowel sounds just by thinking about moving a specific part of the body. Guenther says the technology could benefit many people, not just those with locked-in syndrome. "This sort of thing would allow them to produce synthetic speech, which could be used to talk to the people around them and mention their needs," he says. In another experiment, researchers analyze a volunteer's brain activity as he looks at lights flashing at slightly different frequencies. "The neurons in his visual cortex start firing in synchrony with the checkerboard he's looking at, and so we can pick up the frequency and from that determine which choice he was trying to make, left, right, forward or backward, for example," Guenther says.
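Because the visual cortex fires in synchrony with the flicker the volunteer is watching, the decoding step amounts to asking which stimulus frequency dominates the recorded signal. The sketch below shows that idea with a plain Fourier-magnitude comparison on synthetic data; the sample rate, frequencies, and clean sine-wave "signal" are invented for illustration, not the experiment's actual parameters.

```python
# Pick the candidate flicker frequency with the largest Fourier magnitude
# in a recorded signal (a minimal stand-in for SSVEP-style decoding).

import math

def dominant_frequency(signal, sample_rate, candidates):
    """Return the candidate frequency (Hz) with the largest magnitude."""
    n = len(signal)
    def magnitude(freq):
        re = sum(signal[t] * math.cos(2 * math.pi * freq * t / sample_rate)
                 for t in range(n))
        im = sum(signal[t] * math.sin(2 * math.pi * freq * t / sample_rate)
                 for t in range(n))
        return math.hypot(re, im)
    return max(candidates, key=magnitude)

# Synthetic "visual cortex" signal synchronized to a 12 Hz flicker.
rate = 250  # samples per second (illustrative)
signal = [math.sin(2 * math.pi * 12 * t / rate) for t in range(rate)]
print(dominant_frequency(signal, rate, [8, 10, 12, 15]))  # 12
```

In a real interface, each of the four choices (left, right, forward, backward) would flash at its own frequency, so identifying the dominant frequency identifies the choice.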

Wireless Worries University of Texas at Austin (10/17/11) Aaron Dubrow

University of Texas at Austin researchers have developed one of the highest-resolution electromagnetic human models to date and are using it to study the effects of microwaves from wireless devices on the human body. Because it is difficult to measure the power absorbed by different people under different conditions of use, device manufacturers use a Standard Anthropomorphic Model (SAM) to demonstrate compliance with safety standards. Texas professor Ali Yilmaz and his colleagues instead use computer models and algorithms to estimate the absorbed power. The researchers have developed AustinMan, a publicly available model that represents the human body at one-millimeter-cubed resolution and can support simulations that predict the electromagnetic power absorbed by the body. "The resolution of these images is higher than what you can get with even the latest [magnetic resonance imaging] and [computed tomography] scans," Yilmaz says. The AustinMan model contains 30 types of tissue, each with its own electromagnetic properties. In testing, the researchers found that the total power absorbed by AustinMan from an antenna near the ear varies by less than 1 percent.
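Once a simulation has produced an electric field in every voxel, totaling the absorbed power is straightforward bookkeeping: each voxel dissipates P = sigma * |E|^2 / 2 times its volume, where sigma is the tissue conductivity and |E| the peak field magnitude. The sketch below shows that sum; the tissue conductivities and field values are invented placeholders, not AustinMan data.

```python
# Sum per-voxel dissipated power: 0.5 * sigma * |E|^2 * volume, the
# standard time-averaged absorption formula for a sinusoidal field.

def absorbed_power(voxels, voxel_volume):
    """Total power (W) absorbed, given (sigma [S/m], peak |E| [V/m])
    per voxel and a uniform voxel volume in cubic meters."""
    return sum(0.5 * sigma * e_mag ** 2 * voxel_volume
               for sigma, e_mag in voxels)

# Two 1 mm^3 voxels with illustrative "skin-like" and "muscle-like" sigmas.
voxels = [(0.9, 50.0), (1.0, 30.0)]
mm3 = 1e-9  # one cubic millimeter in cubic meters
print(absorbed_power(voxels, mm3))  # about 1.575e-06 W
```

A model like AustinMan simply runs this accumulation over tens of millions of voxels, with sigma looked up per tissue type from the model's 30 tissue classes.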