ACM President David Patterson warns that reductions in federal funding for long-term, high-risk IT research could cost the U.S. its status as the world's leading IT innovator. He says the National Science Foundation (NSF) and the Defense Advanced Research Projects Agency (DARPA) underwrite most U.S. academic IT research, but both agencies have significantly changed their funding policies and priorities over the last decade in ways that discourage long-term research and the full engagement of the country's top minds. DARPA cut academic IT research funding in half in fiscal year 2004, classified many programs that were formerly open to academics, and shortened deadlines for reaching research milestones; NSF, meanwhile, has been inundated with research proposals over the last five years, forcing the agency to withhold funding from many worthwhile projects. In addition, other agencies such as NASA and the Homeland Security Department are not taking up the slack. Patterson also points to the outcome of the recent ACM International Collegiate Programming Contest--in which Asian teams took top honors while the U.S. turned in an all-time-low performance--as evidence of Asia's rapid IT ascendancy, which could translate into dominance of the global marketplace if current U.S. government policies are not changed. Patterson offers two questions he thinks should be asked at a May 12 hearing of the House Science Committee: whether a surrender of the nation's technological leadership would serve as a wake-up call similar to the launch of Sputnik, and whether the country would be better served by luring top talent to American universities to help nurture the U.S. economy.
For information regarding Thursday's hearing of the House Science Committee, visit http://www.acm.org/usacm/weblog/index.php?p=267

The Real ID Act met with House approval last week and is expected to pass through the Senate without difficulty this week, despite fierce opposition by civil liberties groups, government associations, and others. Critics say the legislation was slipped into a larger, relatively uncontroversial spending bill so that a congressional debate could be avoided. The Real ID Act requires U.S. states to produce and distribute standard, tamper-proof driver's licenses that carry machine-readable, encoded data, which opponents claim would create a de facto national ID card, add unnecessary costs and bureaucracy, and endanger privacy. Under the law, all drivers would need to supply four types of documentation to confirm their identity when acquiring or renewing a license, and state motor vehicle departments would have to verify these documents against federal databases as well as store them in a database, along with a digital photo of the card holder. The Real ID Act does not specify what kind of technology the cards must use to be machine-readable, but the ACLU's Barry Steinhardt believes the most likely candidate is contactless RFID chips, which can allow people equipped with readers to collect data stored on the chips without the bearer's awareness. He also says a standardized license would permit the tracking of people by government and industry and lead to the creation of a single national database. The Congressional Budget Office estimates that state costs for the licensing system switchover and worker training will total $100 million over five years, but the National Council of State Legislatures and other critics think a more likely estimate is $500 million to $700 million. The Real ID Act was drafted based on recommendations from the 9/11 Commission, which said tighter controls on identification documents were needed.
For information on ACM's activities regarding the Real ID Act, visit http://www.acm.org/usacm/weblog/index.php?cat=17

The Apache Software Foundation (ASF) Incubator panel is considering whether to sponsor an open source Java 2, Standard Edition (J2SE) runtime platform. The Harmony project will build on J2SE version 5.0 (Tiger) and be licensed under the Apache License 2.0. The proposal includes a community-developed modular runtime and interoperability test suite, and backers say programmers could begin coding immediately. An open source J2SE would appeal to developers who want to avoid restrictions associated with Sun Microsystems' license, which affect contributions such as some bindings for OpenOffice.org. The Harmony project came together after open source Java advocates discussed the issue in November in Cambridge, England; open source expert Danese Cooper says that after the finalization of J2SE version 5.0, the groups agreed the next step for open source Java was a technology compatibility kit (TCK). Jakarta Project Chairman Geir Magnusson says other solutions are in development, including other open source Java projects and alternative approaches to executing Java bytecode, but all are limited in one way or another. Sun Java Software chief technologist Graham Hamilton wrote in his blog that Harmony is complementary to Sun's own J2SE work and approved of the project's commitment to Java Community Process specifications; he said Sun would likely participate in Harmony, but would focus mainly on finishing its own J2SE reference implementation.

Federal officials and computer security investigators recently revealed that a 2004 penetration of a Cisco Systems network, which led to the theft of software for many of the computers that regulate the flow of the Internet, was one salvo in an extensive series of breaches demonstrating how easily even sophisticated Internet-connected computers can be compromised. The primary intruder used a corrupted version of the SSH program as a Trojan horse, installed on Internet-linked computers whose security software had not been upgraded. The Cisco intrusion captured passwords to Cisco computers sent from an infected system by a legitimate user who was oblivious to the Trojan horse's presence. An investigator believes the hacker was trying to gain credibility in online chat rooms with the Cisco theft. Around the time the attacks were first noted last April, University of California, Berkeley, researcher Wren Montgomery's computer was compromised, and she began receiving email taunts from a hacker going by the name of "Stakkato," who also claimed to have penetrated military systems as well as NASA computers. Surveillance software installed on the University of Minnesota computer network by security investigators last May revealed the network was being used to launch hundreds of Internet attacks, and about half of more than 100 attempted break-ins over a two-day period were successful. A nearly year-long probe into attacks on computer systems, including those mentioned by Stakkato, has yielded a Swedish teenager as the primary suspect.

Intelligence officers stand to benefit from new visualization tools that generate unique representations of digital communications, which could help map out terrorist activity. The Pacific Northwest National Laboratory (PNNL) has developed Starlight 3.0 for the Homeland Security Department, a new generation of software that graphically displays the relationships and interactions among documents containing text, images, video, and audio. Starlight is a redesign of earlier software and permits interactive analysis of larger datasets, the jettisoning of irrelevant content, and the addition of new data streams as they arrive, according to PNNL chief scientist John Risch. The software enables a fourfold increase in the volume of documents that can be analyzed simultaneously, and allows multiple visualizations to be open concurrently, which Risch says lets users see the time as well as the location of an activity's occurrence. Another PNNL effort involves the continual augmentation of IN-SPIRE, software for deriving meaning from large datasets and letting users explore the likelihood of alternative hypotheses, says National Visualization and Analytics Center (NVAC) director Jim Thomas, who adds that the software can search documents in multiple languages at the same time and permits the "discovery of the unexpected." Both Starlight and IN-SPIRE generate visualizations that graphically depict relationships between content by displaying them in multiple formats. Other organizations working on analytical software for the federal government include Intelligenxia, whose IxReveal software can track online message threads and provide answers to unasked queries, says Intelligenxia CTO Ren Mohan.

Georgia Institute of Technology and Palo Alto Research Center researchers conducted a study revealing that music playlists can yield clues about a person's character, and that strong group identities can form around digital music sharing. The research examines computer "discovery capabilities," and lead investigator and Georgia Tech doctoral candidate Amy Voida says such research is currently "focused on one technology finding another technology, but we wanted to understand what the social impact of discovery might be." The researchers studied the playlists of white-collar professionals in a midsized company equipped with iTunes, and also analyzed their reactions to each other's playlists and how those reactions resonated throughout the office's social structure. Voida says the study repeatedly showed the cultivation of personal, cultural, and social identities through music sharing. Office workers were presented with a puzzle in which they had to deduce the identity of the owner of an unidentified playlist, and solving the puzzle was usually facilitated by the provision of age- and ethnicity-related clues. However, the guessers' own musical knowledge and expertise frequently led to mistaken assumptions. In addition, the office manager's participation in the study prompted many workers to change their lists to present a more "balanced" image of themselves. The study was detailed in a paper presented at the Computer-Human Interaction conference (CHI 2005) in April.

Thomas Sterling, a faculty associate at the California Institute of Technology's Center for Advanced Computing, discusses in an interview how future high-end computing (HEC) is perceived and what its prospects are. His argument is that general-purpose petascale computing will remain elusive until the causes of performance degradation--latency, overhead, contention for shared resources, and starvation--are dealt with via new architectures. Sterling reasons that perhaps the biggest obstacle is the complacency that seems to have infected the HEC community. He says, "The real problem, in my mind, is that we are programming the wrong computer structures: Trying to force piles of sequential components to pretend they are a parallel computing engine." He expects a system that can deliver peak performance in excess of a petaflop around the end of the decade, given sufficient funding and infrastructure to satisfy electricity and cooling demands. Sterling attributes the confusion regarding user satisfaction with future HEC to a series of divergent user-workload properties, their highly variable resource requirements, and conventional techniques' widely inconsistent level of suitability; among the innovative system architectures that could address the sources of degradation are Cray's X1 family of architectures, Steve Keckler's TRIPS architecture at the University of Texas at Austin, Caltech and the Jet Propulsion Laboratory's MIND architecture, and Stanford's streaming architecture. Sterling points to government and community studies that evaluate the United States' commitment to and involvement in HEC, and that make recommendations for prospective R&D programs, as excellent resources for members of Congress to consider. Issues that have hindered the political influence of the HEC community include cost, resistance to change, inadequate community mobilization, and a lack of public awareness of how HEC could affect people's lives in a dramatic and positive way.

DNA computing is better suited to algorithmic assembly and other massively parallel problems than traditional computer processing, says New York University chemistry chair Nadrian Seeman, who has spent his career investigating nucleic acid structure, topology, and nanotechnology. Despite its genetic connotations, DNA as a nanotechnology tool belongs not to the biological sciences but to chemistry, where it serves as a useful molecule. Research surrounding DNA nanotechnology is expanding rapidly in many directions, though much of the underlying interest is in using DNA as a bottom-up method for organizing nanoelectronic components. Seeman says he first suggested 25 years ago that DNA could be used as a lattice for academic and drug discovery purposes. The DNA molecule's benefits are numerous: It is stiff, with a persistence length of about 50 nanometers; is easily synthesized; has commercially available modifying enzymes; is physically robust and can withstand temperatures up to 90 degrees Celsius; is compatible with biotechnology approaches; features a readable code; and has hundreds of derivative structures with various properties. Seeman sees DNA computing applied to problems that take advantage of its massively parallel nature, such as algorithmic assembly and biologically oriented applications, though he admits many other researchers believe DNA computing will cover a broader scope. The most difficult problem facing the field is the lack of easy methods for purifying synthetic DNA strands, and Seeman says more government support is needed before nanorobotic breakthroughs, ranging from nanotherapy to novel materials, will be realized. Seeman also advocates more public education about the value of science, and says government needs to be prepared to make tough decisions concerning technology's societal impact.

Ecologists are planning to set up more than $1 billion worth of sensor web technology to study diverse environments with an eye toward saving the planet. Dr. Deborah Estrin with UCLA's Center for Embedded Network Sensing says the goal of such deployments is to create the ecological equivalent of MRI or CAT scans, while Dr. Alexandra Isern with the National Science Foundation (NSF) says sensor web technology is helping scientists understand "how different processes in the environment operate at different frequencies." Factors driving the sensor web wave include the support of institutions such as the NSF and the Defense Department, which have respectively financed planning and research into new sensor network deployments and the miniaturization of electronics to yield technologies such as motes and smart dust. Over 100 wireless motes, robots, computers, and cameras are linked into a network in California's wooded James Reserve to measure temperature, humidity, rainfall, soil moisture, and light levels, as well as track wildlife, plant growth, nesting activity, and the production of carbon dioxide in the soil. Other sensor web projects include RiverNet, which will use floating robots, wireless sensors, and distributed computers to track and improve the water quality of the Hudson River; EarthScope, an effort to study North America's continental formation and evolution to gain better insight into fault systems, earthquakes, mineral deposits, and volcanic activity; and Neptune, which involves the deployment of almost 2,000 miles of sensor, camera, and robot-equipped cables under the Pacific to study the ocean environment. Meanwhile, the National Ecological Observatory Network (NEON) initiative's goal is to chart the spread of invasive species and predict shifts in the biosphere to augment land use and restoration strategies.

The GCC compiler is used to build nearly all software in the open source movement, and the latest version, GCC 4.0, features a new optimization architecture designed to improve the translation of human-written source code into computer-readable binary code. Programmers are currently working to debug and boost the performance of GCC 4.0, which was released on April 22 by lead programmer Mark Mitchell. The KDE graphical interface software initially refused to compile with GCC 4.0, although Mitchell says the bug responsible has been fixed. Another knock against the compiler comes from a recent review by programmer Scott Ladd, who says GCC 4.0 came up short when compared with GCC 3.4.3, citing bulkier, slower-running code that often took longer to generate. However, Ladd says "no one should expect a 'point-oh-point-oh' release to deliver the full potential of a product." Meanwhile, Mitchell says GCC 4.0 patches hundreds of bugs, can generate software for previously unsupported processors, and can compile software written in the highly popular Fortran 95 programming language. Mitchell expects GCC 4.2 to improve the storage of data in registers; the larger number of registers to choose from, thanks to the advent of 64-bit x86 chips, makes it harder to decide which data to store in them. Still, Mitchell says greater numbers of registers can speed up software, provided the right data is kept in them.

UCLA electrical engineering professor William Kaiser's Individualized Interactive Instruction (3I) computer program enables real-time anonymous interaction between professors and students, eliminating students' reluctance to ask or answer questions for fear of embarrassment at their lack of knowledge. The open source software allows students to solve problems and ask the professor questions using laptops with wireless Internet links, while also permitting real-time monitoring of students' keystrokes by the professor. 3I resembles Discourse software from the Educational Testing Service, but it is free and more adaptable to users' needs. Greg Chung, a senior research associate in UCLA's National Center for Research on Evaluation, Standards, and Student Testing who worked on 3I with Kaiser, says the software's simple design and ease of use should make the program particularly helpful in large classes with primarily lecture-based instruction. Second-year computer science and engineering student Adam Wright thinks a system similar to 3I will one day be used in most university classes. Kaiser says the program could be easily deployed in a wide variety of disciplines, and could eventually run on PDAs or tablet computers. Kaiser recently won the 2005 Brian P. Copenhaver Award, which honors faculty who promote innovation in education via technology.

Efforts to improve the Internet's stability and security through software design could be significantly aided by a project coordinated by Tel Aviv University computer scientist Yuval Shavitt, which seeks to map out the Internet via a distributed computing model. Scientists from the University of California, San Diego's Cooperative Association for Internet Data Analysis produced a rough map of the Internet two years ago that charted the location of over 12,000 "autonomous systems," some of which function as organizing "hubs." However, all mapping initiatives up to now have begun with a fairly small number of sites, which introduces a bias. Shavitt's strategy involves enlisting volunteers to download a software agent that sends out probing packets to map local links in and around the autonomous system in which the computer resides, using a very small percentage of the host computer's processing power. "We hope that the more places we have presence in, the more accurate our maps will be," says Shavitt. Approximately 800 agents have been downloaded from 50 countries since the project began in late 2004, and Shavitt says roughly 40,000 links between about 15,000 autonomous systems have been mapped out thus far, revealing that the Internet is approximately 25 percent denser than previously believed. He says a complete map of the Internet at the autonomous-system level could be generated in less than two hours once about 2,000 software agents are in operation. The ultimate goal is to map the Internet at the individual-router level, an effort Shavitt claims would require "about 20,000 agents distributed uniformly over the globe."
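Those figures imply a simple density measure for the autonomous-system graph. A back-of-the-envelope sketch in Python (the link and node counts and the 25 percent comparison come from the summary above; the calculation itself is illustrative and is not the project's methodology):

```python
# Rough check of the reported topology figures: about 40,000 links
# among about 15,000 autonomous systems, and a map said to be 25
# percent denser than earlier estimates.

def average_degree(num_links: int, num_nodes: int) -> float:
    """Mean number of neighboring systems per node in an undirected graph."""
    return 2 * num_links / num_nodes

new_degree = average_degree(40_000, 15_000)
print(round(new_degree, 2))   # about 5.33 links per autonomous system

# If the new map is 25 percent denser, the earlier maps implied:
old_degree = new_degree / 1.25
print(round(old_degree, 2))   # about 4.27
```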

The MIT Media Laboratory was the site of last week's Symposium on Personal Entertainment, which focused on how electronic entertainment will be transformed by current and nascent technologies and trends. Moderator and CELab director Michael Bove said devices have become smarter and consumers have come to expect better quality and performance, but establishing compatibility among heterogeneous devices remains a persistent issue. NBC Universal's Glenn Reitmeier predicted that DVDs will download in 12 minutes by 2010 at 40 Mbps broadband speeds, while consumers will be able to view full-feature DVDs on cheap media players by 2015 thanks to the availability of 120,000 Mips of processing power; a DVD movie will also cost just 4 cents to store, given the 300 TB of storage expected by 2015. Reitmeier said piracy is impeding the transition to this new entertainment landscape, but was hopeful that "the reliability of legitimate distribution of content will offset the illegitimate downloading, with all its unreliable attachments, to garner a majority of 'honest' consumers." Charles Swartz, CEO of the University of Southern California's Entertainment Technology Center, said his facility is attempting to discover how consumer demands for quality content can be fulfilled through the intersection of art, technology, and business. He documented the emergence of a "'what I want, when I want it' consumer society," and reasoned that satisfying such desires may require integrating new technologies to deepen content access and cultivate interactivity. Swartz said distributing films as digital files could conceivably be cheaper, more secure, and more flexible than the traditional system of motion picture distribution and exhibition, as well as offer better clarity and quality of image. User interface issues must be resolved before devices and services for the digital home can be conceived, said Majesco Entertainment CEO Carl Yankowski.
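Reitmeier's 12-minute projection is straightforward bandwidth arithmetic; a quick illustrative check (the file sizes below are assumptions, since the DVD size underlying his figure was not reported):

```python
def download_minutes(size_gb: float, speed_mbps: float) -> float:
    """Transfer time in minutes, ignoring protocol overhead."""
    bits = size_gb * 1e9 * 8              # decimal gigabytes to bits
    return bits / (speed_mbps * 1e6) / 60

# 12 minutes at 40 Mbps corresponds to a 3.6 GB file; a full
# 4.7 GB single-layer DVD would take closer to 16 minutes.
print(round(download_minutes(3.6, 40), 1))   # 12.0
print(round(download_minutes(4.7, 40), 1))   # 15.7
```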

The entrance of utility companies into the broadband space could help speed up broadband penetration and the emergence of more advanced services in the United States. Although the domestic broadband sector has been expanding by about $3 billion yearly over the past six years and is expected to produce $19 billion in revenue this year, according to PricewaterhouseCoopers, broadband penetration has reached only 36 percent of homes. The U.S. trails markets such as South Korea, which has an 80 percent penetration rate and offers access speeds at least four times those in American homes, and Japan, which also offers blazing speeds at prices much lower than what Americans pay. Industry observers say the new competition from utility companies could spur phone and cable companies to be more aggressive in laying more fiber lines and making more of their existing pipe available for broadband. Utility companies will be able to roll out broadband much faster and cheaper than their competitors because they will provide Web access over their existing power lines, which offer access speeds of 3 Mbps to 4 Mbps. The average cable or DSL connection speed is about 2 Mbps, but new modem chips could push power line service speeds to about 10 Mbps, and potentially to more than 30 Mbps depending on usage. Broadband power lines would be fast enough for full delivery of movies and video programming over the Web, and for advanced services such as remote control of home appliances.

Biometric identification technology is caught between an eager public-sector market, fueled by post-Sept. 11 security fears, and immature standards and unproven system scalability. Before Sept. 11, 2001, there was already growing interest in biometric security solutions in the private sector, but afterwards vendors leapt at new agency-wide deployment opportunities; experts now say they do not expect widespread biometrics adoption in the private sector until the end of this decade, owing to the lack of application programming interface standards, common file formats, and data interchange standards. Biometrics is already providing tangible payback for some companies, however, such as Telesis Community Credit Union, which uses biometrics for network and application access instead of passwords. Telesis' Phil Fowler said he decided to abandon password-only authentication when a network password cracker used during a security audit broke roughly 80 percent of employees' passwords in just 30 minutes; biometrics won out over single sign-on solutions because Fowler felt uncomfortable using just one gateway into all applications. The Telesis system stores fingerprint profiles in encrypted form directly in Microsoft Active Directory. Marriott International has also seen significant benefits from its voice-identification biometric system, which automates the password resets required every 90 days for about 40,000 employees. Given costs of up to $31 per manual password reset, as estimated by Gartner, the automated option has resulted in huge savings, says Marriott's Al Sample. U.S. Navy CIO David Wennergren says large government biometrics projects such as the Defense Biometric Identification System will drive standards; the Defense Department system must not only be interoperable among different military branches, but also work with FBI systems to enable data sharing, for instance.
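The Marriott figures permit a rough upper bound on the potential savings (illustrative arithmetic only; it assumes every 90-day reset would otherwise be a $31 manual call, which overstates real-world costs):

```python
def annual_reset_cost(employees: int, resets_per_year: int,
                      cost_per_reset: int) -> int:
    """Yearly help-desk cost if every password reset were handled manually."""
    return employees * resets_per_year * cost_per_reset

# 90-day expiration means roughly four resets per employee per year;
# at Gartner's $31-per-reset estimate, the ceiling is about $5 million.
upper_bound = annual_reset_cost(40_000, 4, 31)
print(upper_bound)   # 4960000
```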

Weblogs (blogs) are penetrating corporate culture, and are more widespread than IT managers may realize. Interest in business blogging has primarily concentrated on high-profile blogs open to the public as a pipeline for company/customer communications; meanwhile, internal corporate use of blogs and wikis offers employees a friendlier and cheaper alternative to email as well as unpopular project and knowledge management systems. Such tools are being used without the official endorsement of senior technology managers in some cases, thanks to the increasing availability of inexpensive and simple publishing and collaboration software. "We believe this kind of communication creates community, and that a solid community around a company is not a threat--it's an ideal," says Sun Microsystems President Jonathan Schwartz. "There's an immediacy of interaction you can get with your audience through blogging that's hard to get any other way, except by face-to-face communication." Opening up blogs to public comment can allow a company to draw insights on what kinds of partners, customers, employees, and developers are visiting their sites, and respond to them. Sun technology director Tim Bray says companies can use this communications channel to improve themselves, corporate morale, and market position. Managers must work out the best strategy for adapting the corporate environment to blogs and wikis, taking into account such factors as the considerable time and energy commitment blog and wiki maintenance requires, and the potential disclosure of sensitive information.

In a conversation with James Hamilton of Microsoft's SQL Server team, IBM Fellow and Database Technology Institute founder Pat Selinger discusses how relational database management technology has progressed. She notes that relational database products have transitioned from a rules-based model to a cost-based query optimization model, which is essential to boosting application productivity and reducing the total cost of ownership. Selinger says database administrators are being pressured to organize, administer, and search unstructured data (which comprises about 85 percent of all data, by her estimate), and she sees IBM's autonomic computing effort as critical to meeting this challenge. In her current role as IBM Research's VP of area strategy, information, and interaction, Selinger works to integrate and manage both structured and unstructured information. "The system's understanding of what's in that data is not very deep, so researchers get more involved in semantics and speech understanding and ontologies and categorizations and various other kinds of analytics to be able to understand what's in that data and derive information from it," Selinger explains. She says metadata will be an essential element of this effort, and feels that greater research in mapping and discovering relationships in such data is warranted; metadata's accumulation, updateability, and management are additional research areas she highlights. Selinger sees open source as an important vehicle for bringing the advantages "of the reliability, the recoverability, the set-oriented query capabilities to another class of applications--small businesses--and the ability to exploit the wonderful characteristics of database systems across a much richer set of applications." She expects file systems to prevail, as a centralized repository for unstructured data is impractical and economically infeasible.
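The cost-based optimization Selinger describes can be sketched in miniature: the optimizer estimates the cost of alternative query plans from table statistics and keeps the cheapest, rather than applying fixed syntactic rules. All table statistics, the selectivity value, and the cost formula below are invented for illustration:

```python
from itertools import permutations

# Invented statistics: row counts for three tables, plus a single
# uniform join selectivity (fraction of row pairs that match).
stats = {"orders": 1_000_000, "customers": 50_000, "regions": 50}
selectivity = 0.0001

def plan_cost(order):
    """Cost of a left-deep join plan, approximated as the total size
    of the intermediate results it materializes."""
    rows = stats[order[0]]
    cost = 0.0
    for table in order[1:]:
        rows = rows * stats[table] * selectivity   # estimated join output
        cost += rows                               # pay for each intermediate
    return cost

# Enumerate candidate join orders and keep the cheapest.
best = min(permutations(stats), key=plan_cost)
print(best, plan_cost(best))   # ('customers', 'regions', 'orders') 25250.0
```

Starting with the small tables keeps intermediate results tiny, which is exactly the kind of decision a rules-based optimizer, blind to table sizes, cannot make reliably.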