ACM TechNews is published every Monday, Wednesday, and Friday.

ACM TechNews is intended as an objective news digest for busy IT professionals. Views expressed are not necessarily those of either AutoChoice Advisor or ACM.
To send comments, please write to technews@hq.acm.org.

The decline of AT&T is reflected in the defection of top talent at the AT&T Labs facility, which was foreshadowed by the corporate downsizing of 200 researchers in 2002. Many former AT&T scientists cite slashed long-range research, travel restrictions, publication of fewer scholarly articles, repeal of perks such as free coffee and bottled water, and the loss of highly respected peers as reasons for leaving, and some researchers believe the brain drain is indicative of an even more serious trend: that America may be losing its innovative edge. All but gone from the facility are specialists noted for their long-term research into machine learning, artificial intelligence, cryptography and network security, statistics, theoretical computer science, and algorithms. Maria Klawe, president of the ACM and dean of engineering at Princeton University, says, "If you're focusing on research that's short-term, to impact products in a year or two, there are all kinds of world-changing discoveries that you simply miss." Recent defectors include Internet privacy expert Lorrie Faith Cranor, who took a teaching position at Carnegie Mellon University; quantum computing guru and MacArthur Fellow Peter Shor, now an MIT math professor; and cryptographer Matt Blaze, who left AT&T for the University of Pennsylvania. Other institutions that have benefited from the erosion of AT&T Labs' talent pool include Microsoft Research, the Weizmann Institute of Science in Israel, Princeton University, and the Defense Advanced Research Projects Agency. Scientists say AT&T Labs' slump underscores the dissolution of "blue-sky" research at other facilities, including IBM, Bell Labs, and the Xerox Palo Alto Research Center. The Institute for the Future's Paul Saffo, a former advisor to AT&T Labs, laments that U.S. industry's "relentless race for short-term value is killing our future...AT&T Labs was a national crown jewel--and it's been terribly devalued."
The National Science Foundation also reports shrinkage in federal support for basic science since 1980.
Click Here to View Full Article

President Bush and his Democratic rival in the 2004 election, John Kerry, have placed little emphasis on technology-related issues in their campaigns apart from an intense debate on offshore outsourcing, which American University adjunct professor David Holtzman is unsure will expand beyond rhetoric. Kerry and other Democrats castigated Bush in response to presidential economic advisor Greg Mankiw's statement that offshore outsourcing was a "plus for the economy in the long run," while Bush retorted that Democrats were espousing economic isolationist policies that could cut America off from vital markets. Computer Systems Policy Project executive director Bruce Mehlman says the Republicans are having a hard time effectively communicating offshore outsourcing's alleged benefits to the public. Rich DeMillo of the Georgia Institute of Technology adds that many tech issues are difficult to relate clearly and concisely to voters, and are not the stuff of vigorous debates. The Kerry for President Web site maintains a page of mostly nonspecific tech policy goals, including universal Internet access, enforcing trade statutes against nations exploiting the rules, and boosting math and science education; the Bush campaign Web site does not boast such a page. Holtzman points out that cybersecurity could become a hot topic of debate should a major cyberattack be launched against the Internet in the months leading up to the election, while the Patriot Act, which Kerry has criticized, could also rise in importance if any abuses are recorded before the election. NetworkedPolitics.com partner Craig Ullman believes technology will be embedded within the economic presidential debate, noting that "Any government plan that involves stimulating the economy is going to involve the tech sector." He reports that broadband access and e-voting could become political hot potatoes as early as 2006.
Click Here to View Full Article

There is a growing movement to make both computer games and the computer game industry more inclusive: Southern Methodist University students Ryan Champ and Derrick Levy are enrolled in a digital game development graduate program so they can help give minorities better representation in games, as well as put more minorities in game industry studios and boardrooms. One of the major problems the industry faces is a lack of diversity in game environments--a trend that could dampen sales in the long run. Champ and Levy see a golden opportunity for diversifying the game industry, given the enormous revenues games are bringing in, along with growing audience awareness through advertising. For now, however, most games are designed according to a white, middle-class paradigm: Several years ago, the Children Now research organization reported that black and Latino men were represented in best-selling video games primarily as athletes, 86 percent of black female characters were portrayed as victims of violence, and almost all heroic characters were Caucasian males. Levy is worried that these portrayals have a chilling effect on the self-image of minority users, particularly the youngest and most impressionable. He envisions a game in which the hero is a child in a ghetto, and the hero's goal is to effect positive change. Meanwhile, Champ thinks game developers offer a more realistic role model for lower-income, minority kids than the fabulously wealthy comedians, athletes, and musicians they often look up to; not only does such a role model emphasize education and offer a more practicable career path, but it also provides the game industry with an opportunity to benefit from diverse multicultural viewpoints. Champ would like to open technology centers in lower-income areas where kids can access video games, as well as take courses in game design.
Click Here to View Full Article

NASA scientists have developed a software program that can analyze the nerve signals in the mouth and throat to read a person's mind. "Biological signals arise when reading or speaking to oneself with or without actual lip or facial movement," says Chuck Jorgensen, a neuroengineer at NASA's Ames Research Center in Moffett Field, Calif., who heads the research effort. The NASA researchers have used the computer program to perform simple searches on the Web. Button-sized sensors, placed under the chin and on either side of the Adam's apple, were able to pick up nerve signals from the tongue, throat, and vocal cords. NASA researchers trained the software program to recognize six words, and the system was able to pick up the word thought by participants hooked up to sensors 92 percent of the time. Moreover, they programmed the system with a matrix of the letters of the alphabet with each column and row assigned a single-digit number, then had participants silently spell "NASA" into a Web search engine, which enabled them to browse the Web without using a keyboard. The system eventually could be used to send commands to rovers on other planets, or help an injured astronaut to control machines. The handicapped, including people who are unable to talk, might be able to benefit from the system as well. University of Sheffield computer scientist Phil Green says the technology is "interesting and novel," and although he says it needs more testing, it could be coupled with existing voice-recognition technology for greater effect.
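The letter-matrix spelling scheme described above can be sketched in a few lines. The article does not give the exact grid layout, so the 5x6 arrangement below (rows and columns each assigned a single-digit number) is an assumption for illustration only:

```python
import string

# Hypothetical 5x6 alphabet matrix: 26 letters fill the grid, 4 cells unused.
# Each row and column gets a single-digit number, as the article describes.
ROWS, COLS = 5, 6
LETTERS = string.ascii_uppercase

def coordinates(letter):
    """Return the (row, column) digit pair for a letter, 1-indexed."""
    index = LETTERS.index(letter.upper())
    return index // COLS + 1, index % COLS + 1

def spell(word):
    """Encode a word as the sequence of digit pairs a user would silently think."""
    return [coordinates(ch) for ch in word]

print(spell("NASA"))  # [(3, 2), (1, 1), (4, 1), (1, 1)]
```

In this sketch, a participant would "think" each row digit and column digit in turn, and the recognizer would only ever need to distinguish ten digit words rather than 26 letters.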
Click Here to View Full Article

Graphical search interfaces use data visualization and relationship analysis methods to provide a different view of available Web data than the list of links typical of most search results. Human Factors International director Phil H. Goddard contends that such tools exploit "the effect that we see patterns and learn patterns and parse patterns faster than we process text." The Gnod.net Web site offers a range of graphical search services--Gnoovies (movies), Gnoosic (music), and Gnooks (books and authors)--whose results are based on users' preference data. For example, a user lists three of their favorite authors, musicians, or films, and Gnod displays a series of choices and asks the user if he or she likes or dislikes them; Gnod fine-tunes its suggestions as it compiles user preference data. Gnod conveys its recommendations through visual representation, using the distance between choices to symbolize how strong those recommendations are. Understanding data linkages is key to providing the best graphical representation of data, and at the core of this understanding is metadata. Anacubis general manager Greg Coyle reports that data overload is the biggest hurdle that visualization faces, adding, "When the data sets get large, it's a challenge to usefully visually represent that and not scare the hell out of the user." In its demo iteration, Anacubis transfers search requests to Amazon.com and delivers the results as icons that users can click to request a related search on Google, which only delivers metadata-free text results; Anacubis refines the results by analyzing the metadata it has compiled from the Amazon search. However, graphical search interfaces may not be as effective or as fast a tool as a text-and-Boolean-command search.
Click Here to View Full Article

The Pentagon believes more effective warfare could be conducted if the identification of underground enemy facilities is fine-tuned before smart bombs are deployed; the Defense Advanced Research Projects Agency's (DARPA) Counter Underground Facilities (CUGF) program seeks to develop passive-sensor systems that read electromagnetic currents, acoustic waves, seismic activities, and other factors to better determine the location, size, and purpose of enemy installations, ascertain the best time to strike them, and monitor the results of a bombing. The sensors will be distributed near suspected facilities or be deployed as a Low-Altitude Airborne Sensor System (LAASS) carried by a drone airplane, with their readings uploaded to an intelligence-collating aircraft. Upon verification, the data collected by each individual sensor will be examined to infer how much electricity the underground facility is consuming and other basic information. A report will be furnished by software with proprietary algorithms that combine both the raw sensor data and information gleaned from individual analysis of that data. CUGF planners believe at least one version of the report will closely resemble the results page of a Web search, in the form of a probabilistic "hit" list. The biggest hurdle LAASS developers need to overcome is enabling the sensors to filter through electromagnetic interference and radio-frequency emissions generated by the unmanned drone, but current-generation Defense Department computers may not be able to meet such a challenge. "We're trying to figure out whether we need teraflops [trillions of floating-point operations per second] or a whole new computer architecture," explains Joe Guerci of DARPA's Special Projects Office.
Click Here to View Full Article

Nokia has joined Sony and Philips in their effort to develop Near Field Communication (NFC) wireless technology, which would enable electronic devices to use radio-frequency identification (RFID) to interact when touched together. Sony and Philips began the NFC initiative in 2002 (and have working products), but the addition of Nokia should lead to a wider acceptance of the technology in the mobile arena and other sectors, considering the goal is to integrate it in all kinds of products, including PCs, PDAs, video cameras, car navigation systems, TVs, and audio systems. NFC will make it easier for devices to send and receive data, according to the companies. Devices that would be able to transfer data by touching would have an RFID chip and software for overseeing the connection to another piece of NFC kit. The companies say people would be able to connect one handheld device to another to swap music, wave a phone at a smart film poster to automatically purchase a ticket, and turn their mobile phone into an e-wallet. For example, a business traveler would be able to use an NFC phone to check in at an airport, pick up a digital key at a hotel, and pay the bill electronically when it comes time to check out. Devices would use NFC to identify users via RFID, but still use wireless technologies such as Bluetooth or Wi-Fi to transfer data. "This is a new paradigm based on touching, and it will complement these existing wireless technologies," according to an NFC spokesman.
Click Here to View Full Article

The University of Geneva's Group of Applied Physics and its Computer Science department have partnered with id Quantique, a company spun off from the university, to launch the first Web site where true random numbers can be generated and downloaded on demand. Users could request a specific sequence of random numbers from the site by specifying the sequence's parameters, and the numbers would be created by a quantum random number generator linked to the server. The first practical quantum random number generator was developed six years ago by the Group of Applied Physics, and the technology was commercialized by id Quantique. The device produces binary random numbers by harnessing the reflection or transmission of a photon on a semi-transparent mirror. "Quantum physics is the only physical theory predicting that the outcome of certain phenomena is random," explains Group of Applied Physics director Nicolas Gisin. "It is thus a natural choice to use it to generate true random numbers." The University of Geneva's Computer Science department has devised a server/client application that would allow researchers worldwide to download random numbers in the C, C++, Java, or Fortran code employed for their simulations. The Web site, www.randomnumbers.info, is expected to evolve into the standard online resource on randomness and random numbers. Random numbers are essential in applications ranging from secure encryption of electronic communications to lotteries to scientific calculations, but generating them has always been a formidable challenge.
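One practical detail a consumer of such a service faces is turning raw random bytes into unbiased integers in a desired range. The site's download format is not specified in the article, so the sketch below uses `os.urandom` as a stand-in for the quantum byte stream and shows the standard rejection-sampling approach:

```python
import os

def random_int(upper, source=os.urandom):
    """Uniform integer in [0, upper) from raw random bytes (upper <= 256).

    Rejection sampling avoids modulo bias: bytes at or above the largest
    multiple of `upper` below 256 are discarded rather than wrapped.
    """
    limit = (256 // upper) * upper
    while True:
        byte = source(1)[0]
        if byte < limit:
            return byte % upper

# Simulate 1,000 die rolls from the byte stream.
rolls = [random_int(6) for _ in range(1000)]
print(min(rolls), max(rolls))
```

The `source` parameter is a placeholder: in real use it would be a function returning bytes fetched from the hardware generator, not the operating system's pseudorandom pool.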
Click Here to View Full Article

The Taiwanese government is counting on academic research and development to serve as a catalyst for the growth of the nation's high-tech sector. The government has partnered with the National Central University through the Institute for Information Industry to provide the support necessary to turn its R&D into applicable technologies. The government and the university are working together to establish the Advanced Software Technology Center, and initially will recruit as many as 50 researchers to staff the facility. "The center's research and development areas cover service-oriented software technology, e-learning, and education programs in the future," says Victor Tsan, director of the Institute's Market Intelligence Center. The development of a third-generation information service for automobiles and a digital learning center will be among the new center's projects during its first year. For example, the information service would make use of advanced software communicating with control centers to notify drivers of road congestion and traffic jams, and help taxi companies manage their taxicabs. National Central is the same university that partnered with Sun Microsystems in February to develop grid-computing technology. The Institute teamed up with two other universities earlier in the year to set up similar software research facilities.
Click Here to View Full Article

Security experts wonder whether the software industry has enough incentive or drive to find new ways to stop malicious software, pointing to the recent rash of nasty computer viruses as an example. Some say the antivirus industry is content with a business model that is burdensome for users but profitable for the companies, given that most solutions require annual paid subscriptions. Packetattack owner Mike Sweeny says the signature model of virus protection is outdated but profitable; alternatives include integrity checking, which builds a database of uninfected programs and flags attempts to alter them, and heuristic scanners, which anticipate malicious intentions by evaluating program code. "All technologies outside of signature-based scanning were effectively driven from the market in the last decade, as far as the average person or company is concerned," says GlobalSecurity.org senior fellow George Smith. Capital IQ chief security officer Ken Pfeil believes that customers have not made enough fuss for antivirus vendors to move from reactive to preventive methods. Antivirus vendor representatives counter that the signature-file model is the simplest for users, requiring little expertise or technical knowledge, and Trend Micro director Joe Hartmann notes that it is also convenient. Some independent security researchers say that signature-based solutions do protect well against known viruses. McAfee Avert research fellow Jimmy Kuo points out that most users will choose functionality over security anyway. Others blame feature-rich email programs that make it easy for viruses to proliferate. Kuo says, "The simplest way (to prevent viruses) is a text-only email system," and says one should be provided with every new computer.
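The integrity-checking alternative mentioned above is simple to sketch: record a baseline hash of each known-clean program, then flag any file whose contents no longer match. This is an illustration of the general technique, not any vendor's product; a real implementation would also have to protect the baseline database itself from tampering:

```python
import hashlib

def file_hash(path):
    """SHA-256 digest of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def build_baseline(paths):
    """Database mapping each uninfected program to its known-clean hash."""
    return {path: file_hash(path) for path in paths}

def altered_files(baseline):
    """Paths whose contents have changed since the baseline was recorded."""
    return [path for path, digest in baseline.items()
            if file_hash(path) != digest]
```

Unlike signature scanning, this approach needs no knowledge of any particular virus: any unauthorized modification to a monitored program shows up as a hash mismatch.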
Click Here to View Full Article

DaimlerChrysler has developed a prototype alert system that produces a vibration in the accelerator to suggest when drivers should ease off the gas, a signal motorists are likely to react to faster than dashboard lights. This approach is designed to save fuel and thus money by weaning drivers off the habit of braking for a traffic light or other obstacle at the last minute. The device employs global positioning system technology to pinpoint the vehicle's location on the road network, and supplements this information with data on speed limits, gradients, and road curve radii. Meanwhile, a radar component tells the motorist when the vehicle is coming too close to cars ahead of it. These various measurements are compiled on an in-vehicle PC that anticipates deceleration and computes the best time for the driver to ease off, triggering the vibratory signal at the appropriate moment. "It simply alerts the motorist to the situation and gives him or her a choice about what to do next," explains Klaus-Peter Kuhn of DaimlerChrysler's human-machine interaction group. He believes that it would be easier to advertise the device's promised fuel savings to consumers if the tests used to obtain efficiency figures are revamped. In their current form, the tests do not consider the effects of corners, hills, or traffic on fuel consumption.
Click Here to View Full Article

Haskell is a purely functional programming language, and the book "The Haskell School of Expression: Learning Functional Programming Through Multimedia" is a great introduction to the platform now being used for cutting-edge programming language research. The Haskell community is still rather small, but it is working on tremendously innovative research such as the meta-programming environment Template Haskell implemented by the Glasgow Haskell Compiler, while Haskell can also be extended with Embedded Domain-Specific Languages for Web authoring, parsing, and controlling humanoid robots without changing the language itself. Paul Hudak's book prepares the reader to begin writing tools or participate in ongoing programming language research. Haskell prepares readers for new developments such as the arrows concept used by the Yale Haskell group; Yale-developed Yampa, a "Functional Reactive Programming" framework, shares concepts with the animation described in "The Haskell School of Expression." The book is written in a tutorial style with alternating conceptual and application chapters. The provided code examples may be insufficient for those who want to jump ahead, but the Web site provides fill-in code. Readers completing the examples given in the book will also need the "November 2002" version of the Hugs interpreter and the User's Guide. "The Haskell School of Expression" eases programmers who are unfamiliar with functional programming into complex concepts such as recursion; falling back on a non-functional style is not allowed with Haskell. The book also tackles the difficult issue of monads by getting readers to use them very early on and explaining the concepts behind them later in the book. Although there is a lot of drawing and music involved in "The Haskell School of Expression," the author includes mathematical background such as the algebraic reasoning for shapes and music, as well as a chapter on proof by induction.
Click Here to View Full Article

Leading security researchers have published a book that teaches how to write hacker code exploiting software security holes. "The Shellcoder's Handbook: Discovering and Exploiting Security Holes," set for release next week, is intended for network administrators, but includes working examples of code and some previously published attack techniques. Malicious hackers frequently use shellcode in their attacks on computer systems. The book has chapters on stack overflows, format-string bugs, and heap overflows, among other topics, but co-author Dave Aitel says the information is necessary for administrators who want to secure their systems. "People who know how to write exploits make better strategic decisions," he adds. Co-authors Chris Anley and David Litchfield say the book has information that can already be obtained online from discussion groups, or from university courses. The book has intensified debate over whether researchers should publicly expose software flaws, especially since it contains previously unknown information about how to launch kernel attacks, for example. Previously unknown hacking techniques used for the first time are called "zero day" exploits. Anley says the book is designed to defend against hackers, not instruct them. He says, "This isn't a collection of exploits. It's a book that tells you how to find the bugs and understand what the impact of the bugs is." Despite the controversy, SANS Institute director of research Alan Paller says the book will benefit those working to defend their networks against attack more than it will hackers, since it provides advice that makes sense.
Click Here to View Full Article

Voice over Internet protocol (VoIP) deployments have been announced by enterprises wishing to connect voice to data, slash costs, and boost efficiency, which represents the technology's first significant step out of limbo. Dan Marmion, CIO of hat manufacturer New Era Cap, elected to implement a digital private branch exchange (PBX) and VoIP scheme to solve a "telecommunications nightmare" stemming from the need to link five facilities and cut long-distance costs. He expects the resulting system, composed of numerous AltiGen components, to save the company about $50,000 annually. Nevada County, Calif., Desktop Services Manager Bill Miller decided to replace the county's antiquated PBX with VoIP, a decision that has resulted in $1.5 million saved in hardware and long-distance costs, and swelled Miller's budget by thousands of dollars that would otherwise be spent on telephone technicians. By the start of 2002, 10 different facilities in Nevada County were piping voice traffic and data along the same connections in a network composed of 1,200 VoIP phones. The Consani Seims dental consultancy employs a VoIP solution in an effort to keep phone tag between consultants and clients to a minimum, according to President Paul Consani. With the system, consultants can talk to customers by plugging IP phones into any Ethernet Internet connection; rapidly program the devices to forward calls to cell phones; and send voice and data over the same connection on the road. Consani reckons that the VoIP system has returned a quarter of a million dollars on its $60,000 investment, and he boasts, "Now that we [can always] answer our phones, our clients get the impression that we're working harder, that we're in the office a lot more, and that there are more of us."
Click Here to View Full Article

With funding from the CIA, Systems Research and Development founder Jeff Jonas is developing a data-mining system for weeding out terrorists and other criminals without compromising people's personal privacy. The CIA first became interested in the anti-terrorist potential of Jonas' work when he developed Non-Obvious Relationship Awareness (NORA), a software system that can quickly sift through a massive amount of data to infer whether people are connected to shady characters; NORA was created as a tool Las Vegas casinos could use to protect themselves from mobsters or scam artists. Whereas other systems attempt to deduce the behavior of hypothetical terrorists and root them out from population behavior studies, NORA uses real-world suspects and the people they are connected to. ANNA, a follow-up system Jonas developed after 9/11, is a still untested scheme to cross-index terrorist watch lists while protecting privacy by "anonymizing" data via a one-way cryptographic method known as hashing. This approach allows private records to be shared with the government and secret watch lists disseminated to private entities. Encrypted records can be matched, after which judicially-approved authorities can request database owners to supply the suspects' identities. Marc Rotenberg of the Electronic Privacy Information Center skeptically observes, "A switch to anonymize can be set to de-anonymize." However, others say ANNA could help bridge the privacy versus security divide, while audit trails as well as existing law should help check potential abuse of ANNA. Jonas himself admits that data-mining alone cannot ensure total safety: "The real problem is hundreds of thousands of people who are brought up to hate us," he concludes.
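The anonymized-matching idea behind ANNA can be illustrated in miniature: both parties hash identifying fields the same way and compare only the digests, so a match can be flagged without either side exchanging raw names. The actual ANNA scheme is not public, so the salt, record format, and normalization below are assumptions made for the sake of the example:

```python
import hashlib

# Both parties must normalize and hash records identically for digests to match.
SHARED_SALT = b"agreed-upon-salt"  # assumed; a real scheme would manage this carefully

def anonymize(record):
    """One-way digest of an identifying record, normalized before hashing."""
    normalized = record.strip().lower().encode("utf-8")
    return hashlib.sha256(SHARED_SALT + normalized).hexdigest()

# The watch-list holder shares only digests, never names.
watch_list = {anonymize(name) for name in ["Jane Doe", "John Roe"]}

# The database owner hashes its own records and reports which ones match.
customer_records = ["Alice Smith", "John Roe", "Bob Lee"]
matches = [r for r in customer_records if anonymize(r) in watch_list]
print(matches)  # ['John Roe']
```

The sketch also makes Rotenberg's objection concrete: whoever controls the salt and the hashing code can hash candidate names at will, so the "anonymization" is only as trustworthy as the party operating it.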
Click Here to View Full Article

A combination of new technologies and regulatory reforms could spell the end of spectrum scarcity and the beginning of spectrum abundance, changing the face of the communications industry. The notion that spectrum is a physically limited resource is false--what is limited is the right to set up transmitters and receivers that function in specific ways; so-called spectrum scarcity is determined not by the availability of frequencies but by the technologies that can be implemented. The inefficiencies of current spectrum utilization could be overcome by emerging radio transmission and networking technologies, which are being partly driven by the transition from analog to digital transmission. Spread spectrum, for example, will allow spectrum to be "shared" by multiple systems by leveraging the computational intelligence of modern wireless devices, while smart antennas can lower effective interference with other transmitters through their ability to "lock into" a directional signal rather than emitting an omnidirectional signal. Meanwhile, mesh network architecture will allow more devices to be added to networks without apparently raising interference levels, and permit these devices to operate with less power. The FCC is instituting dramatic changes to spectrum management policy on three fronts: The agency is reallocating bandwidth from government and other well-entrenched users to new services; loosening up the technical and commercial strictures on existing spectrum licenses so that more spectrum can be licensed to third parties; and apportioning an unusual amount of spectrum to be employed for unlicensed or shared services. The commoditization of spectrum access over the next decade will likely lead to many industry changes.
It will add to the competitive pressure faced by incumbent mobile operators and broadcasters, and benefit manufacturers; investors will chiefly devote their funds to content and service providers rather than enterprises focused on protecting spectrum; and economic power will migrate from government spectrum gatekeepers to consumers.
Click Here to View Full Article

Software errors that led to the deaths of Panamanian cancer patients from overexposure to radiation--and criminal prosecution against the technicians who used the software--illustrate the vital need to anticipate and remove glitches before they become a problem with potentially fatal consequences. Other deadly incidents attributed to buggy software include the 2000 crash of a Marine Corps Osprey tilt-rotor aircraft that left no survivors; the shooting down of friendly aircraft by the Patriot Missile System in Operation Iraqi Freedom; and at least three fatalities resulting from last summer's East Coast power outage. Among the reasons given for bad software are a flawed programming model, poorly thought-out designs, lax testing procedures, and the unpredictability of program interaction. The FDA distributes "guidance" documents suggesting that software manufacturers comply with generally-accepted software development specifications, keep tabs on design specs, and formally review and test the code they create--but without making any specific recommendations. The Panama incident is causing some industry experts to consider the possibility that more stringent regulation of software development is necessary. Observers note that not only are software development regulatory agencies few in number, but existing agencies such as the FDA do not go far enough to ensure quality software. For instance, the FDA approves products under either premarket approval or premarket notification. The former mechanism applies to dramatically unique technologies that are subjected to rigorous testing, while the latter applies to products that fit into existing device categories, and do not require FDA or corporate trials to be approved; the Multidata Systems International software responsible for the radiation overdoses in Panama was certified under the premarket notification process.
SCC director William Guttman estimates that there could be 20 to 30 bugs for every 1,000 lines of code generated by corporate programmers or commercial software manufacturers.
Click Here to View Full Article