In a New Age of Impatience, Cutting Computer Start Time New York Times (10/26/08) P. 1; Richtel, Matt; Vance, Ashlee

Over the next few months, major PC manufacturers will introduce a new generation of quick-start computers. Hewlett-Packard, Dell, and Lenovo are launching machines that give people access to basic functions such as email and a Web browser in 30 seconds or less. Meanwhile, Microsoft has pledged to improve Windows' start-up times, saying on a company blog that a very good system should boot up in less than 15 seconds. Today, only 35 percent of machines running Windows Vista boot in 30 seconds or less. Hewlett-Packard research shows that when start-up takes more than a few minutes, users perceive the wait as even longer than it actually is. "Our brains have become impatient with the boot-up process," says Gary Small, a professor at the Semel Institute for Neuroscience and Human Behavior at the University of California, Los Angeles. "We have been spoiled by the hand-held devices." Start-up time frustration is nothing new, but the agitation seems more intense now than in the pre-Internet era, as minor delays in today's Internet-dependent society become huge irritants. Beyond boosting customer satisfaction, computer makers are looking to fast start times as a competitive edge. They say the race for the best start-up time could resemble the auto industry's efforts to have the best time going from zero to 60 mph.

Forget Ringtones; Cell Phones Could Reach Out and Tap You San Diego Union-Tribune (10/27/08) LaFee, Scott

Vibrotactile feedback such as tapping and rubbing would allow cell phones or computer games to deliver more information faster and more intuitively, according to experts at the University of California, San Diego (UCSD) and Microsoft. The researchers discussed the possibilities of using new tactile sensations to improve communication between people and machines last week at ACM's symposium on user interface software and technology in Monterey, where they presented the paper "Tapping and Rubbing: Exploring New Dimensions of Tactile Feedback with Voice Coil Motors." Lead author Kevin Li, 26, a doctoral candidate in UCSD's department of computer science and engineering, suggests that a cell phone user would immediately know that a tap on the shoulder means the boss is trying to reach them, or that a gentle rub on the thigh means a spouse misses them. He notes that generic vibration alerts generate an audible buzz rather than a truly silent cue. Developing the technology is less of a concern than making the case that there is a need for cell phones that tap and rub. "Our goal was to get people thinking about this," Li says. "Now we're hoping that others will take the idea further, though obviously the technology is going to have to get much smaller to fit in a cell phone."
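
The paper's title suggests that different sensations are produced by driving a voice coil motor with different waveforms. A minimal sketch of that idea, assuming (hypothetically, not from the paper) that a "tap" is a brief, sharp, high-amplitude burst and a "rub" is a longer, gentler, low-frequency oscillation:

```python
import math

SAMPLE_RATE = 1000  # driver samples per second (assumed value)

def tap(duration_ms=20, freq_hz=250, amplitude=1.0):
    """A brief, sharp burst: short duration, high frequency, full amplitude."""
    n = SAMPLE_RATE * duration_ms // 1000
    return [amplitude * math.sin(2 * math.pi * freq_hz * t / SAMPLE_RATE)
            for t in range(n)]

def rub(duration_ms=500, freq_hz=30, amplitude=0.4):
    """A sustained, gentle oscillation: long duration, low frequency and amplitude."""
    n = SAMPLE_RATE * duration_ms // 1000
    return [amplitude * math.sin(2 * math.pi * freq_hz * t / SAMPLE_RATE)
            for t in range(n)]

tap_wave = tap()
rub_wave = rub()
```

The contrast between the two signals — a 20-millisecond spike versus a half-second low-amplitude wave — is what would let a user distinguish a "tap" alert from a "rub" alert without any sound.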

The European Union-funded ENABLED project has 17 prototype devices and software platforms in development that could help the visually impaired gain greater independence. ENABLED project coordinator Wai Yu, a researcher at the Virtual Engineering Centre at Queen's University Belfast, says the project's goal was to give the visually impaired more independence by helping to bridge the information gap with the sighted. To accomplish this, the researchers developed software applications with tactile, haptic, and audio feedback devices that let the visually impaired feel and hear digital maps of where they want to go, as well as haptic and tactile devices to guide them once they are out. One of the devices, called VITAL, enables users to access a tactile map of an area. Using a device similar to a computer mouse, users can move a cursor around the map and small pins will create shapes under the user's hand to recreate the shape of a city block or a building. Another device, called the Trekker, uses global positioning system (GPS) technology to guide users as they walk around, similar to GPS devices in cars. The Trekker replaces spoken directions with tactile and haptic feedback for users who do not want to draw attention to themselves.
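
The VITAL idea — raising pins under the hand to match the map under the cursor — can be sketched as a windowing operation over a binary map. The map data, grid size, and function names below are all hypothetical stand-ins, not the actual VITAL design:

```python
# Hypothetical 1-bit city map: '1' = building/obstacle, '0' = open space.
CITY_MAP = [
    "0000000000",
    "0111000110",
    "0111000110",
    "0000000000",
    "0011111100",
]

PIN_ROWS, PIN_COLS = 3, 3  # size of the pin array under the user's hand

def pins_under_cursor(city_map, row, col):
    """Return the pin pattern (True = raised) for the map window whose
    top-left corner sits at (row, col)."""
    pattern = []
    for r in range(row, row + PIN_ROWS):
        pattern.append([city_map[r][c] == "1"
                        for c in range(col, col + PIN_COLS)])
    return pattern

# Moving the cursor over the upper-left building raises the matching pins.
pattern = pins_under_cursor(CITY_MAP, 1, 1)
```

As the cursor moves, the window slides across the map and the pin heights are refreshed, so the user traces building outlines by touch.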

Cobol programming may be one of the most secure and steady jobs in IT. Analysts report that Cobol salaries are rising due to a healthy demand for Cobol skills, and there are few offshore Cobol programmers. The troubled economy also bodes well for Cobol programmers, says Interop Systems director of research Jeff Gould, as long as they are working for an organization that intends to keep its legacy Cobol applications. "Many mainframe customers with large mission-critical Cobol apps are locked into the mainframe platform," Gould says. "Often there is no equivalent packaged app, and it proves to be just too expensive to port the legacy Cobol to newer platforms like Intel or AMD servers." Deloitte's William Conner says salaries for Cobol programmers are rising because many Cobol programmers are reaching retirement age and colleges are focusing on Java, XML, and other modern languages instead of Cobol. Dextrys CEO Brian Keane says Cobol programmers are less likely to have their jobs outsourced because the Chinese do not have mainframe experience and recent Chinese computer science graduates have focused on the latest architectures and systems and do not have experience with legacy languages and systems. Meanwhile, warnings that mainframes would disappear have proven to be untrue, particularly because mainframes are very reliable at handling high-volume transaction processing, and companies are increasingly benefiting from integrating legacy mainframe Cobol applications with the rest of their enterprise.

A University of Twente student has created a technique for protecting photographs stored on mobile phones. Ileana Buhan, who recently received her doctorate from the Faculty of Electrical Engineering, Mathematics, and Computer Science, uses a photo of the face of the user of the mobile device to create a biometric record, relying on a mathematical method to store the facial recognition data securely. Her system is capable of recognizing the user even after a change of hair style. Buhan went a step further in also making the system capable of securely transferring photos from the device owner to another mobile phone user. Her approach is to construct a password from two photos by having two users save their own photos on their PDAs, then take photos of each other, and have the device compare the two photos and generate a security code for a safe connection. Photos are exchanged using this connection, and the photos are stored as a template that contains the key features for recognition. Other biometric recognition systems would be able to apply this safe template transfer.
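
The pairing step — two users photograph each other and both devices derive the same security code — can be sketched as follows. This toy version quantizes feature vectors and hashes them; Buhan's actual scheme uses more sophisticated mathematics (fuzzy-extractor-style constructions tolerate biometric noise far more robustly than coarse rounding), and the feature vectors below are invented:

```python
import hashlib

def quantize(features, step=10):
    """Coarsely quantize facial features so small variations
    (lighting, pose) map to the same code word."""
    return tuple(round(f / step) for f in features)

def pairing_key(own_template, photo_of_peer):
    """Derive a connection key from one's own stored template plus the
    photo just taken of the other user. Sorting the two codes lets both
    devices derive the same key regardless of argument order."""
    codes = sorted([quantize(own_template), quantize(photo_of_peer)])
    return hashlib.sha256(repr(codes).encode()).hexdigest()

# Invented feature vectors standing in for facial measurements.
alice = [102.0, 55.3, 87.1]
bob = [98.4, 61.0, 73.5]

# Each side combines its own stored template with a slightly noisy
# photo of the other user; both arrive at the same key.
key_a = pairing_key(alice, [97.9, 60.6, 73.9])   # Bob, as photographed by Alice
key_b = pairing_key(bob, [101.6, 55.4, 87.4])    # Alice, as photographed by Bob
```

Because the quantization absorbs the small differences between a stored template and a fresh photo, both devices compute identical codes and can open an encrypted channel without ever transmitting the key.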

A nine-person team at Microsoft Research India is focusing on novel technology for use in developing countries. The computer scientists say they have the freedom to forget about PCs and software, freeing them to focus on other problems. For example, the Digital Green project was created through Microsoft's "Farmer Idol" effort, a variation on "American Idol" that featured local farmers. Digital Green researchers distributed training DVDs to farmers in a dozen Indian villages. Microsoft focused on providing training for changes that would result in quick, valuable returns, such as promoting the use of azolla, a fast-growing aquatic fern that can cover the top of a water tank in about a week and lead to much higher milk production from cows. Microsoft also is extending Digital Green ideas to another effort called Featherweight Computing, an experiment with electronic posters and cards that can be sent to farmers and could include reminders about techniques seen in the videos. Another project, Warana Unwired, aimed to help villagers update records and receive pricing data on crops. A network of about 55 villages had been relying on PCs to collect information on fertilizer purchases, water bills, and inventory, but the PCs often broke down or were unavailable. Warana Unwired turned one PC into a hub for collecting short messages sent over cell phones, making the network more manageable.
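
The Warana Unwired hub — one PC collecting short messages from cell phones — amounts to a small message-parsing loop. The message formats, crop prices, and field names below are invented for illustration; the actual protocol is not described in the article:

```python
# Hypothetical SMS formats the hub might accept (not the real
# Warana Unwired protocol): "PRICE <crop>" queries a crop price,
# "WATER <village-id> <amount>" records a water bill payment.
CROP_PRICES = {"SUGARCANE": 2150, "WHEAT": 1080}  # invented figures

def handle_sms(message, records):
    """Parse one incoming text message and reply with a short answer."""
    parts = message.strip().upper().split()
    if parts and parts[0] == "PRICE" and len(parts) > 1 and parts[1] in CROP_PRICES:
        return f"{parts[1]} {CROP_PRICES[parts[1]]}"
    if parts and parts[0] == "WATER" and len(parts) == 3:
        records.append({"village": parts[1], "amount": int(parts[2])})
        return "RECORDED"
    return "UNKNOWN"

records = []
reply1 = handle_sms("price sugarcane", records)
reply2 = handle_sms("WATER V12 340", records)
```

Replacing many fragile village PCs with one hub plus ubiquitous phones is the design point: the phones do only input and display, while the single maintained machine holds the records.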

As computers become increasingly smarter, they could surpass human intelligence, leading to an event dubbed the Singularity. Although technologists are still debating that possibility, even expecting such an event could significantly change human behavior, said Smith College economics professor James Miller at the recent Singularity Summit in San Jose, California. Miller said people will be expecting a singularity long before it happens, which could change how people make choices in life, education, investment, and retirement. The most significant choice people will face will be on extending life, Miller said. "If you think there will be a machine-driven future then your top priority is to survive long enough to make it to the singularity," he said. "Believers will also want to spend more money to increase their chances of making it to the singularity with things such as safer cars and machines that make jobs such as construction safer." Another emerging field could be cryonics, which allows for freezing a body on the belief that it can be resuscitated in the future when the technology is available. As more people believe that the future could be drastically different from today, they are more likely to try to be a part of it. The belief that intelligent machines will dominate also may lead to less spending on education, Miller said.

The Massachusetts Institute of Technology's Media Lab has developed iSet, a prototype device that could help people with neurological conditions such as autism and Asperger's Syndrome better understand facial expressions. ISet, which stands for interactive social emotional toolkit, resembles an oversized cell phone with a camera on one side and a screen on the other. It takes a picture of a person's face and relays information about that person's facial expression to the user. ISet identifies the person's emotional state by placing a colored dot above the corresponding emotion. The dot grows bigger as iSet becomes more confident it has identified the correct emotion. ISet's software combines commercially available face-recognition programs with machine-learning algorithms to enable it to compare new facial expressions to ones it has already seen, and to calculate probabilities that a certain facial expression may mean a certain thing. ISet is being tested on a group of teenagers with Asperger's Syndrome who meet once a month at a school for children with autism and other disabilities. The children are still better at recognizing facial expressions than iSet, and none of them like the idea of carrying around a bulky tablet that they believe would only stigmatize them more. However, the children say they like the idea of a small, discreet device that could act as a pocket translator for emotions.
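
The confidence dot maps naturally onto classifier probabilities: pick the most likely emotion and scale the dot by its probability. A minimal sketch, assuming (hypothetically) that the underlying model emits raw scores that are normalized with a softmax — the score values and emotion labels are invented:

```python
import math

def softmax(scores):
    """Turn raw classifier scores into probabilities that sum to 1."""
    exps = {label: math.exp(s) for label, s in scores.items()}
    total = sum(exps.values())
    return {label: e / total for label, e in exps.items()}

def emotion_dot(scores, max_radius=20):
    """Pick the most likely emotion and size the dot by confidence."""
    probs = softmax(scores)
    emotion = max(probs, key=probs.get)
    return emotion, max_radius * probs[emotion]

# Invented raw scores from a face-analysis model for one frame.
emotion, radius = emotion_dot({"happy": 2.1, "sad": 0.3, "neutral": 1.0})
```

As evidence for one emotion accumulates across frames, its probability — and the dot — grows, which is the behavior the article describes.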

Cloud computing is expected to have a dramatic impact on campus technology and colleges. Web-based computing is encouraging virtual collaboration by making the projects students or faculty are working on shareable from anywhere. Another advantage cloud computing is bringing to the campus is supercharged research facilitated by access to supercomputing resources through grid computing or Internet connections to supercomputers. A third trend being shaped by cloud computing is joint efforts between institutions to roll out services, an example being the Virginia Virtual Computing Lab supported by a consortium of more than 12 Virginia colleges. The facility will enable students or professors at the various schools to use their own computers to access three-dimensional modeling programs and other specialized software, with the goal of making programs typically found in college computer labs available to students wherever they may be. The biggest challenge to such initiatives is privacy. George Washington University law professor Daniel J. Solove says storing research notes on Google's servers, for example, may make the material easier for government agencies or others to subpoena than if the data were on personal computers, owing to inconsistencies in current law. Another impediment to virtual collaboration is a lack of consensus among partners on operational matters.

Georgia Institute of Technology researchers are designing a robot that mimics the actions of service dogs at a fraction of the cost. The service robot responds to voice commands from the user, who only has to point a laser at the desired location of action. The service robot can open doors and drawers and retrieve medication. "It's a road to get robots out there helping people sooner," says Georgia Tech professor Charlie Kemp. "Service dogs have a great history of helping people, but there's a multi-year waiting list. It's a very expensive thing to have. We think robots will eventually help to meet those needs." Kemp and graduate student Hai Nguyen worked with a team of trainers at Georgia Canines for Independence to research the command categories and interaction that are core to the relationship between individuals and service dogs. For example, service dogs are taught to open doors by biting and pulling a towel tied to a door handle, so the robot was programmed to use the towel in a similar manner. So far, the robot is able to replicate 10 tasks and commands taught to service dogs, including opening a microwave oven and retrieving an object and placing it on a table.
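
The interaction model — a spoken command names the action, the laser dot supplies the target — suggests a simple command-dispatch structure. The function names and string formats below are hypothetical, not the Georgia Tech software:

```python
# Each hypothetical handler takes the laser-designated target location.
def open_door(target):
    return f"opening door at {target}"

def open_drawer(target):
    return f"opening drawer at {target}"

def fetch(target):
    return f"fetching object at {target}"

# Voice-command vocabulary mapped to handlers, mirroring the discrete
# command categories taught to service dogs.
COMMANDS = {"open door": open_door, "open drawer": open_drawer,
            "fetch": fetch}

def handle_command(phrase, laser_target):
    """Dispatch a recognized voice command to the matching action."""
    action = COMMANDS.get(phrase.lower())
    if action is None:
        return "unknown command"
    return action(laser_target)
```

Keeping the vocabulary small and discrete is what makes the dog-training analogy work: like a service dog, the robot learns a fixed set of named behaviors, and the laser pointer removes the hard perception problem of guessing which object the user means.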

Computer Experts Assist at CERN's Large Hadron Collider University of the West of England, Bristol (10/21/08)

Computer scientists in the Complex Cooperative Systems (CCS) research center at the University of the West of England have developed a tool to control complex workflow procedures. Co-operating Repositories and Information Systems for Tracking Assembly Lifecycles (CRISTAL) is the result of work done by CCS on the management of the processes involved with the construction of the Compact Muon Solenoid (CMS), one of CERN's new generation of experiments at the Large Hadron Collider. CRISTAL has enabled scientists to track thousands of complex activities over CMS' extended 10-year construction period. CCS director Richard McClatchey says CRISTAL is designed to deliver distributed workflow and data management infrastructures and supporting technologies. "It provides ready solutions to many of the problems in commercial production management," McClatchey says. "The beauty of the CRISTAL system is that it has been tried and tested on one of the world's most complex, demanding, and high-profile environments at CERN, but can be customized for use by companies and organizations in the manufacturing, telecommunications, and many other sectors." CRISTAL also can be used as a management tool in areas such as bio-informatics and health informatics by tracking research findings or for monitoring the efficiency of different treatment programs, McClatchey says.
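
The core CRISTAL capability — tracking thousands of assembly activities per item over a decade so any step can be audited later — can be sketched as an item that carries its own workflow history. The class, step names, and item identifier below are invented illustrations, not CRISTAL's actual data model:

```python
from datetime import date

class TrackedItem:
    """Toy sketch of assembly-lifecycle tracking: each item records who
    completed which workflow step, and when, in the required order."""

    def __init__(self, item_id, steps):
        self.item_id = item_id
        self.pending = list(steps)   # steps still to be performed, in order
        self.history = []            # (step, date, operator) audit trail

    def complete(self, step, when, operator):
        if not self.pending or step != self.pending[0]:
            raise ValueError(f"{step!r} is out of order for {self.item_id}")
        self.pending.pop(0)
        self.history.append((step, when, operator))

# Invented example: one detector module moving through its lifecycle.
module = TrackedItem("CMS-MODULE-0042", ["assemble", "calibrate", "install"])
module.complete("assemble", date(2001, 5, 14), "team-A")
module.complete("calibrate", date(2003, 2, 3), "team-B")
```

Multiplied across thousands of components and ten years of construction, this kind of enforced ordering plus a permanent audit trail is what lets a project like CMS reconstruct exactly how any part was built.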

Business writer Stephen Baker says in an interview that our casual disclosure of personal data in our daily lives provides grist for computer scientists, mathematicians, programmers, and others who are mining the data to make sense of it. These people, whom Baker dubs the Numerati, are modeling and predicting our behavior, and the applications born of their efforts are dramatically influencing our lives. Baker argues, for instance, that the Numerati are to a certain degree accountable for the global financial crisis because banks' investment decisions are based on algorithms that the Numerati designed according to their understanding of risks, or lack thereof. "It's important to understand that you can have the best math in the world, but if you don't understand human behavior, then you cannot calculate risk when it comes to market behavior," Baker asserts. The use of data mining for predicting terrorism is a particularly thorny issue, as the Numerati usually perform best in scenarios where errors carry a minimal cost. Baker notes that there is a lack of solid data on the day-to-day behavior of potential terrorists, and this absence is compounded by the urgency of locating terrorists, which puts the Numerati in a bind when coming up with effective terrorist-spotting methods that could instead infringe on people's privacy and freedom. Baker predicts the emergence of a market in which "all kinds of companies are going to sell us software that helps us keep control of our data, furnish our data to those who will use it responsibly, and keep it from those who won't."

Jams on the Superhighway? Not for Long New Scientist (10/04/08) Vol. 200, No. 2676, P. 24; Graham-Rowe, Duncan

Network coding is a technique that could reduce bottlenecks and raise efficiency along the information superhighway by encoding separate data packets heading for the same destination into a single packet and then decoding them back into their original forms at the receiving end. Raymond Yeung of the Chinese University of Hong Kong developed such a technique by substituting coding devices for the routers that read packets and forward them to their destinations. Seven years ago Yeung and colleagues at the Chinese University of Hong Kong and Germany's University of Bielefeld applied this method to the butterfly network information flow challenge, while researchers at the Massachusetts Institute of Technology have demonstrated that a network coding-based system can transmit video through a 20-node Wi-Fi network with five times the efficiency of existing router-based systems. Despite such breakthroughs, experts such as Matthias Grossglauser of Nokia Research Center's Internet Laboratory in Helsinki doubt that the replacement of existing routers by coders will happen, given the massive cost and processing requirements. However, network coding is finding favor as a technique for efficiently organizing wireless and peer-to-peer networks through the use of multicasting, in which all coded packets are transmitted to all the nodes in range without clogging the system as packets cross each other's paths.
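
The butterfly network mentioned above has a standard worked example: two sinks each receive one packet directly, while the shared bottleneck link carries the XOR of both packets in a single transmission; each sink XORs the coded packet with the one it already has to recover the other. A minimal sketch (the packet contents are arbitrary):

```python
def xor_packets(a, b):
    """Combine two equal-length packets into one coded packet."""
    return bytes(x ^ y for x, y in zip(a, b))

p1, p2 = b"ALPHA", b"BRAVO"      # two equal-length source packets
coded = xor_packets(p1, p2)      # one transmission over the bottleneck link

# Sink 1 received p1 on a side link; sink 2 received p2. Each recovers
# the missing packet from the single coded transmission.
recovered_p2 = xor_packets(coded, p1)
recovered_p1 = xor_packets(coded, p2)
```

Without coding, the bottleneck link would have to carry p1 and p2 in two separate transmissions; XORing them halves the traffic on that link, which is the efficiency gain the technique scales up in wireless and peer-to-peer multicast.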

Several organizations are working to commercialize augmented-reality (AR) technology, creating applications that combine virtual information with the real world. New Zealand's Human Interface Technology Laboratory (HIT Lab NZ) director Mark Billinghurst says AR incorporates three key features--virtual information that is tightly registered or aligned with the real world, the ability to deliver information and interactivity in real time, and the seamless integration of virtual information with the real world. AR has existed primarily in labs, with the exception of heads-up displays in military aircraft, but the recent emergence of highly capable mobile devices is creating a surge of interest. "I think we're on the cusp of widespread application of AR technology, perhaps in a year or two," Billinghurst says. AR-like technology is already being incorporated into industrial manufacturing. InterSense offers process-verification systems that use sensors and cameras to track the positions and motions of tools as workers perform their jobs. Computers then compare actual tool movement with ideal procedures to detect errors or confirm correct completion. Billinghurst estimates that about 40 academic labs spend a combined $50 million to $60 million every year on AR research, and commercial firms are spending two to three times that amount. Advancements in AR depend on new display technologies, such as virtual eyeglasses, tracking systems, cameras, and processors and graphics chips for mobile devices, as well as the ability to deliver wireless AR services whenever and wherever users need them.
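
The "tight registration" requirement Billinghurst names comes down to projecting a 3D world point into the 2D camera image so a virtual label lands exactly on the real object. A minimal sketch using a pinhole camera model with invented intrinsics (focal length and principal point):

```python
def project(point_camera, focal=800.0, cx=320.0, cy=240.0):
    """Project a 3D point, given in camera coordinates (meters, z forward),
    to pixel coordinates on a 640x480 image. Intrinsics are invented."""
    x, y, z = point_camera
    if z <= 0:
        raise ValueError("point is behind the camera")
    return (cx + focal * x / z, cy + focal * y / z)

# A virtual label anchored 2 m in front of the camera, 0.5 m right,
# 0.25 m up (image y grows downward, so up is negative y).
u, v = project((0.5, -0.25, 2.0))
```

Real AR systems chain this projection with a continuously updated camera pose from the tracking system; registration stays "tight" only if that pose estimate keeps up with the camera's motion in real time, which is why Billinghurst ties AR progress to better tracking hardware.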