ACM TechNews is published three times a week, on Mondays, Wednesdays, and Fridays.

ACM TechNews is intended as an objective news digest for busy IT professionals. Views expressed are not necessarily those of either AutoChoice Advisor or ACM.
To send comments, please write to technews@hq.acm.org.

Newly elected Association for Computing Machinery (ACM) President David Patterson, a University of California, Berkeley professor and developer of the Reduced Instruction Set Computer, promised to make ACM more effective by enlisting new IT professionals, improving high-school students' perception of computing, and bringing "big idea" papers to ACM meetings. He said the appeal of computing can be widened through local ACM-certified events, radical idea sessions at Special Interest Group (SIG) conferences, and more programs for underserved constituencies. ACM Secretary-Treasurer Laura Hill of Sun Microsystems Laboratories believes these underrepresented groups could be bolstered through new community activities facilitated by digital technology. ACM Vice President-elect Stuart Feldman, IBM's VP for Internet Technology, has called for the provision of additional services for overseas computing professionals and increased visibility to policy-makers and prospective ACM members. Other newly elected ACM leaders include Laboratoire de Recherche en Informatique director Michel Beaudouin-Lafon, who thinks the computer industry should be better informed on the effects of new technologies on end users; British Computer Society President Wendy Hall, who intends to bring more women into computer science and boost international collaboration in ACM; Indiana University professor and former ACM VP David Wise, who wants to ensure that scientists, practitioners, and libraries have ample and affordable access to ACM's resources; Genentech systems architect John Morris, who has made it a priority to emphasize the relationship between ACM and SIGs; and Rutgers University professor and ACM Fellow Barbara Ryder, who wants ACM to strengthen its presence in public policy and technology issues.

Universities and corporate research laboratories are furiously filing nanotechnology patents, hoping to get in at the ground level of what many experts say will be the next transformational technology, worth $1 trillion by 2015, according to government figures. But the rapid pace of nanotech patent filings also worries many people who say extensive patenting could stifle innovation in the emerging field and even give rise to a dot-com-like bubble. IBM physical science director Thomas Theis summarizes general sentiment by noting that nanotech is a seminal breakthrough made possible by the miniaturization of many technologies. Patents covering nanotech, which deals with materials and processes smaller than 100nm in size, have tripled since 1996, with some sectors seeing tenfold increases in patent filings in the last three years. The National Science Foundation says IBM leads the way in number of nanotechnology patents won in 2003, with other top-10 notables including Micron Technology, 3M, the University of California, and Canon. But patents that cover fundamental aspects of nanotech, such as the one for carbon nanotubes owned by Japanese firm NEC, stir fears that entire markets could be subject to onerous licensing; NEC's carbon nanotubes, for example, can be used in a number of nanotech applications, including transistors, TV displays, sensors, and fuel-cell batteries. Sabety + Associates' Ted Sabety says that nanotech holds promise similar to that of IT, except that the formative stages of the IT market were marked by a number of freely licensed innovations: "The patent thicket is going to be a bigger problem in nanotechnology than it was in computing," he says. But the popularity of the patents shows no signs of waning, with one patent attorney describing nanotech as "biotechnology on steroids."

It is doubtful that a bill sponsored by Rep. Rick Boucher (D-Va.) that seeks to overturn the Digital Millennium Copyright Act's (DMCA) ban on the circumvention of digital copyright controls will be passed by Congress this year, but its support in the House of Representatives is giving advocates a sense of hope. Many legislators now regret passing the DMCA, which they fear has limited Americans' fair use rights while increasing the power of copyright holders. The situation has sparked concerns of people being charged as criminals for committing otherwise innocent acts, such as copying a CD they have bought for personal use. "The DMCA has supplanted the balance of the Copyright Act over the last century," argues Electronic Frontier Foundation attorney Fred von Lohmann. Boucher's bill currently has 19 co-sponsors in the House, including House Commerce Committee Chairman Rep. Joe Barton (R-Texas). Entertainment industry representatives deny that the DMCA negatively affects fair use: David Green with the Motion Picture Association of America (MPAA), for one, contends that there have never been any fair use provisions permitting people to make full backup copies of movies, an argument that many backers of Boucher's bill dispute. Meanwhile, MPAA President Jack Valenti testified before Congress last month that the bill, if passed, would essentially legalize hacking and remove the barriers to rampant piracy of copyrighted digital material. Selverne, Mandelbaum & Mintz entertainment attorney Whitney Broussard admits that fair use lacks a clear legal definition, but thinks that other safeguards already in place for copyright holders may make the DMCA's anti-decryption provisions redundant.

Computers are becoming so deeply embedded into Formula One racing that the result is practically a cyborg, but this trend is provoking intense debate among regulatory bodies as to how big a role computing should play in the sport, especially since it encourages the wealthiest racing teams to avail themselves of cutting-edge technology in order to gain a competitive advantage. The leading Formula One team, Scuderia Ferrari, employs computers and computer simulation extensively: The team's champion driver, Michael Schumacher, drives a car whose steering wheel is equipped with a computer. The device allows drivers to check their status via a display, and wirelessly transmits data about the race to the Ferrari team every time the cars pass the pits; at the same time, the data is sent over the Internet to Italy for more sophisticated analysis in order to enhance the team boss's ability to work out a racing strategy with the drivers. The system the Ferrari team uses can track more than 500 performance aspects. Silicon Valley computer chip manufacturer AMD, a Ferrari sponsor, is expected to provide the Sauber Petronas racing team with a supercomputer that will be employed for aerodynamic simulation; the machine's processing power is supposed to be more or less equal to that of the 10th most powerful machine in the world. Formula One teams are constantly looking for a technological edge despite the controversy this engenders within the International Automobile Federation. When the federation banned the use of turbocharged engines out of a desire to control racing car speeds, car designers turned to computerized systems such as two-way telemetry. The federation subsequently prohibited two-way telemetry, but Formula One racing refuses to let go of computing.

University of Southern California (USC) computer science researcher Mathieu Desbrun has developed an algorithm that compresses 3D files far more effectively than before; although video, audio, and other file formats have long been compressed successfully, effective 3D file compression has eluded computer scientists. 3D file compression is increasingly important as more fields rely on 3D images, including industrial design, video games, online retailing, animation, and museum display. Desbrun's approach consolidates the small triangles that make up the surface of a 3D image into larger triangles and then into polygons while maintaining the shape and compatibility with different varieties of display and 3D applications. USC computer science Professor Gerard Medioni says the technique has a solid formal basis and works well with ordinary 3D shapes: "This is not a hack," he says, referring to an isolated work-around. Desbrun, who won the "Significant New Researcher" Award at last year's ACM SIGGRAPH conference, will present his "Variational Shape Approximation" solution at this year's conference in Los Angeles. He says the method eliminates unnecessary complexity in the way the 3D shape is stored, such as extraneous triangle mesh components that make up flat surfaces; those triangles would be consolidated into an appropriate polygon under Desbrun's technique. Compressing 3D image files has proven problematic for computer scientists because finding the optimal mix of small and large elements is computationally intractable, a problem theoreticians have shown to be "NP-hard," meaning no algorithm is known that can find the best answer in a reasonable amount of time as shapes grow. Desbrun used a machine-learning technique called "Lloyd clustering" to automatically divide the surface area of an object into regions and test those groupings. Users can also intervene in the grouping process to fine-tune parts that need to be detailed.
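The Lloyd-clustering step described above can be sketched as a k-means-style iteration over 3D points such as triangle centroids. This is a simplification for illustration: Desbrun's actual method minimizes a shape-approximation error over mesh faces, not plain Euclidean distance between centroids.

```python
import random

def lloyd_clusters(points, k, iters=20, seed=0):
    """Partition 3D points (e.g., triangle centroids of a mesh) into k
    regions by alternating nearest-center assignment and centroid
    updates (Lloyd's algorithm)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # pick k distinct points as seeds
    for _ in range(iters):
        # Assign each point to its nearest center.
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centers[j])))
            groups[i].append(p)
        # Move each center to the mean of its group.
        for i, g in enumerate(groups):
            if g:
                centers[i] = tuple(sum(c) / len(g) for c in zip(*g))
    return centers, groups
```

Run on two well-separated clumps of points, the iteration settles into one region per clump; in the real algorithm, each region is then replaced by a single approximating polygon.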

Unearthing information that could signal a threat to national security by combing the Internet, satellite images, newspapers, and electronic databases is the job of thousands of intelligent software "agents" created by researchers at Oak Ridge National Laboratory (ORNL). Thomas Potok of the lab's Computational Sciences & Engineering Division likens the agents' task to "having a stack of 100,000 pages and having to find the 20 pages that contain information critical to national security." He explains that the behavior of intelligent agents is being patterned after natural and biological models, such as flocks of birds, schools of fish, and natural selection. Potok envisions military scenarios for the agents in which they collate and instantly analyze information from multiple sources, rank the data according to importance, and relay it to commanders or command centers. Potential homeland security applications include connecting surveillance cameras and agents in order to detect threats, as represented by changes to the scene or objects that seem out of place. Issues that still need to be addressed include developing agents that can simulate brain functions more closely and devising a way for vast numbers of agents to communicate with each other and with people. Potok expects the agents to become more skilled and sophisticated as ORNL's computing power grows. "Ultimately, our goal is to be able to detect an imminent threat that no one has been able to see with conventional methods," he notes.

Computer game graphics technology is improving at an accelerated rate, but with audiences craving more photorealism--spurred in part by advanced special effects in motion pictures--the race is on to develop even more lifelike renderings of people in games. Epic Games CEO Tim Sweeney calculates that the quality of computer graphics has improved 10,000 times since his firm started making games over a decade ago but reckons that another 15 to 20 years will pass before graphics are sophisticated enough to completely mimic the look and movement of real-world objects and people. Such an achievement will require a significant upgrade in artificial-intelligence programming. "Artificial intelligence, or the way the character behaves, is the weak link in the chain," explains Electronic Arts designer Will Wright. EA's upcoming "Sims 2" is considerably more advanced than the original 2000 version: Characters exhibit more realistic behavior, such as looking and smiling at other characters designated as friends, but communication is restricted to gestures and dialogue bubbles. Meanwhile, the upcoming "Half-Life 2" will reportedly animate speech by synchronizing characters' voice and mouth movements via lip-synching. Jordan Mechner, who co-created Ubi Soft's best-selling "Prince of Persia: The Sands of Time," notes that "With each new level of technology, games took a step backward." He illustrates his point by harking back to the introduction of 3D graphics technology, when many game designers failed to develop a way for players to navigate 3D space without becoming disoriented.

The nanotechnology industry has reached a turning point, where public perception of nanotech is highly susceptible to potentially industry-killing mass hysteria generated by overzealous media coverage. David Rejeski, director of the Woodrow Wilson International Center for Scholars' Foresight and Governance Project, estimates that 130 nano-based products have been released onto the American market, while the U.S. government reckons that the nanotech industry will be worth $1 trillion by 2012. However, these predictions are tempered by concerns that nanoparticles released from coatings or other products will act as pollutants. "We know very little about the health and environmental impacts [of nanomaterials] and virtually nothing about their synergistic impacts," notes Rejeski. A number of toxicology reports suggest that nanomaterials could pose health risks to various organisms, but their exact effects remain undetermined because exposure to such substances has been very low, observes National Nanotechnology Coordination Office director Clayton Teague. Still, U.S. National Nanotechnology Initiative (NNI) director Mihail Roco believes new nanomaterials should be handled with caution, and the NNI, the National Science Foundation (NSF), and the EPA are among the organizations underwriting studies of nanomaterials' environmental impact. Firms with significant nanotech investments are also scrambling to determine how safe nanomaterials are, but underlying such initiatives are fears of declining public confidence in nanotech. NSF advisor Julia Moore says companies should inform consumers up front about the presence of nanomaterials in products, while David Goldston of the U.S. House of Representatives Science Committee says researchers should not assume that the public is uninformed or unrealistic about nanotech concerns.

The open-source Linux operating system is being touted as a new development platform by makers of embedded software, who are promising gadget manufacturers lower costs, faster development, and no dependence on Microsoft. However, most gadgets' memory, power, and storage capacity are paltry compared to those of the full-fledged computers on which Linux runs optimally, and there are doubts that the consumer electronics market will be a significant revenue stream for software makers. "I don't think there's any money to be made in the embedded software business," argued Embedded Systems Programming editor-in-chief Jim Turley at a recent panel discussion. The concept of a standard OS can be a hard sell for gadget producers, most of whom prefer to completely control the software development process by opting for in-house development. Nevertheless, Linux is available for free, which means there is no need to purchase developer tools in order to enjoy its advantages. A clear indication of Linux's move into the embedded software market is a joint venture between former Linux adversary Wind River Systems and Linux vendor Red Hat to develop an embedded Linux product geared for large companies that already employ Linux on servers and PCs. Still, Linux works easily on full-fledged computers because most use chips based on a quarter-century-old Intel design, while gadgets still lack a similar standard.

The video equivalent of the Holy Grail is a system capable of real-time 3D image rendering and viewer-controlled perspective shifts, but the network bandwidth required to transmit the massive volume of data contained in the video stream is formidable. Swiss Federal Institute of Technology (ETH) scientists have developed a system that converts 2D pixels from multiple cameras into a set of independent points in space and reduces information flow to a tolerable 3Mbps. ETH researcher Stephan Wurmlin says the system permits image rendering on a wide array of devices--handhelds, TV screens, and smart phones, for example--in addition to projection walls. The system can update 3D images by adding, removing, and updating only the changed sections of a video frame, obviating the need to recalculate the image's 3D geometry for each individual frame and shaving off significant computational burden and network bandwidth usage. Conventional 3D graphic images are constructed out of linked triangles, and Wurmlin notes that aligning the triangles of an image derived from multiple cameras is a difficult task; image updating and data compression can therefore become more efficient by eliminating the need for connectivity data. "Our system is capable of capturing arbitrary and multiple objects [and] dealing with wide baselines--currently two to three meters--and arbitrary [camera] setups," Wurmlin declares. He points out that more flexibility needs to be built into the 3D video stream because of its high susceptibility to network errors. He adds that practical uses for the system could be introduced within several years and notes that the technology has been adopted for the next iteration of the Animation Framework Extension of the MPEG 4 specification for interactive and 3D video.
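The incremental-update scheme described above, which touches only changed points rather than rebuilding the frame's geometry, can be sketched with a small delta-application routine. The delta record's field names ("add", "remove", "update") are illustrative assumptions, not the ETH system's actual stream format.

```python
def apply_delta(points, delta):
    """Update a frame's 3D point samples in place from a delta record.

    'points' maps a point ID to its attributes (position, color, ...);
    only the points named in the delta are touched, so per-frame work
    scales with the amount of change, not with the scene size."""
    # New points entering the scene.
    for pid, attrs in delta.get("add", {}).items():
        points[pid] = attrs
    # Points no longer visible.
    for pid in delta.get("remove", []):
        points.pop(pid, None)
    # Points whose attributes changed since the last frame.
    for pid, attrs in delta.get("update", {}).items():
        points[pid].update(attrs)
    return points
```

Because points are independent samples rather than linked triangles, no connectivity data has to be repaired after an update, which is the efficiency gain Wurmlin describes.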

Northwestern University computer science doctoral candidate Ayman Shamma and Intelligent Information Laboratory director Kristian J. Hammond have collaborated on an art project entitled "Imagination Environment" that explores a possible link between free association and Internet search. The multimedia environment consists of nine wall-mounted computer monitors: The central monitor displays a live television news broadcast, while software scans the transmission's closed-caption stream and chooses keywords that trigger online searches for images, which are screened on the neighboring monitors shortly after the live audio is heard. The software taps images from two resources: the Web and Index Stock Imagery's commercial photograph database. Shamma had to tweak the software to eliminate undesired search results, such as when the environment gathered dirty pictures in response to the mention of female names. When Shamma presented Imagination Environment at last month's World Wide Web Conference, he declared that the project reflects both how search engines are used on the Internet and the Web's massive depiction of human knowledge. "This is the Web as a storehouse of cultural connections," he exclaimed. In April, the environment was set up in Chicago's Piper's Alley, home of the Second City comedy troupe, whose members appreciate the project because it parallels the free-associative process of improvisational comedy. However, some researchers are unconvinced that the project has any real value, especially since it employs systems that do not truly understand the text they are retrieving images from: "Although this system may entertain some people, and even give them some new ideas, to my mind, nothing justifies the use of inadequate systems by labeling them as art," asserts MIT professor Marvin Minsky.
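The keyword-selection step described above can be sketched as a simple filter over caption text. The article does not document the real system's selection criteria, so the stopword list and length threshold here are assumptions for illustration only.

```python
# A tiny stopword list; the real system's filtering rules are unknown.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "are", "that", "it"}

def caption_keywords(caption, max_terms=3):
    """Pull candidate image-search keywords from one closed-caption
    line by dropping stopwords and very short tokens, keeping the
    first few distinct terms in the order they were spoken."""
    words = [w.strip(".,!?\"'").lower() for w in caption.split()]
    terms = [w for w in words if len(w) > 3 and w not in STOPWORDS]
    seen, out = set(), []
    for w in terms:  # de-duplicate while preserving order
        if w not in seen:
            seen.add(w)
            out.append(w)
    return out[:max_terms]
```

Each returned term would then seed an image query, with the results shown on the surrounding monitors.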

Software engineer Scott Collins believes that the Mozilla browser will flourish on Linux, and the chief driver of its prosperity will be the fall of Microsoft, due to the company's overbearing pride and belief that it cannot make mistakes, even as marketplace backlash grows. Collins acknowledges that Mozilla itself has blundered in the past: He blames much of the erosion of Mozilla's market share on a bad executive who caused the company to miss a critical release; he also admits his complicity in allowing XPCOM to be used too deeply and says the company waited too long to roll out native controls. Mozilla successes, Collins mentions, include fulfilling its promise to release the code as open source on March 31, 1998, issuing a 1.0 release, and letting go of preconceived notions of Mozilla, which was essential to the development of applications such as the Firefox and Camino browsers and the Thunderbird mail client. The software engineer does not believe that having Microsoft as an adversary is driving people to do better; he contends that people are motivated by their own unique goals, with hatred of Microsoft, the desire to support one's family, and a love of coding being a few examples. Collins says, "The main thing we've been looking out for is to not build our browser in a way that encourages special standards that no one can keep up with or features that encourage people to do things the wrong way or something that blocks out another browser." He explains that two Mac OS X browsers, Camino and Firefox, are set up to cater to users' individual preferences, as well as maintain the widest reasonable spectrum of clients available. Collins says the Mozilla code base project received much-needed funding from Mitch Kapor of the Open Source Applications Foundation, who donated over $1 million from his own pocket.

A March story in New Scientist focusing on conversational software that uses artificial intelligence to uncover pedophiles in Internet chat rooms provoked criticism that prompted a demonstration to more firmly establish whether the program, "ChatNannies," was indeed as good as its creator, Jim Wightman, purported it to be. The test involved the participation of several AI researchers and New Scientist, who would "talk" to the program, but Wightman pointed out that the system's performance would be less than optimal in order to meet certain testing conditions. The software had to be separated from the Internet to ensure that it was not a person, but its database AI component resided at a secure server location, which would restrict its capabilities. Wightman therefore transferred the server to his home, but ChatNannies would still be unable to tap the Internet for information. The AI researchers--Andy Pryke of the University of Birmingham and Nick Webb of the University of Sheffield--reported that the quality of ChatNannies' responses to their inquiries was not as good as that seen on transcripts supplied by Wightman as proof of the software's conversational sophistication, which Wightman explained was because of his inability to move the database to his home. Pryke and Webb also noticed that many of ChatNannies' responses precisely matched those of Alice, a Loebner Prize-winning conversational program that is freely available for download on the Internet; in fact, ChatNannies and Alice seemed to produce the same grammatical errors. Wightman admitted that he had little choice but to "grab and generate as much knowledge as possible" by tapping knowledge bases from Alice and other programs when the demonstration of his AI database proved unworkable.

Nuala O'Connor Kelly, the chief privacy officer of the Homeland Security Department, says the United States has to work out its response to terrorism without devastating citizens' privacy rights. She is responsible for the agency's privacy policies as it identifies airplane passengers and tracks foreign visitors, for example, and says her challenge is to help create a system "that allows for people to pass through their ordinary daily life in the way they want to but still has some level of security for all of us." O'Connor Kelly previously worked at Internet advertising firm DoubleClick and was tasked with privacy protection at that company after it was sued for sharing personal data on Internet users. In her current job, she has written a report critical of the Transportation Security Administration that deals with that group's handing over of JetBlue passenger information to the Pentagon. Government officials are struggling with the problem of keeping would-be terrorists out of the country without violating civil liberties and privacy rights. The controversial CAPPS II program would require travel agents and airlines to give the government each passenger's itinerary, name, birth date, address, and phone number to be cross-checked against public and commercial databases. Once a person's identity is confirmed, they would be assigned a risk level. The program has been delayed for months due to privacy concerns. "I think we can achieve security with privacy in mind at all times...we just have to make intelligent choices," says O'Connor Kelly. Still, critics question whether O'Connor Kelly will go far enough to protect Americans' privacy. The American Civil Liberties Union's Barry Steinhardt says, "The question really is: Is her presence a fig leaf, or does it provide some genuine oversight? I think the jury's still out on that."

Carnegie Mellon University's (CMU) CyLab initiative involves the participation of government and industry to help secure the Internet and telecommunications infrastructure, shield the personal privacy and identity of all computer users, and thwart malware; the brainpower behind CyLab includes CMU professor and Dean of Engineering Pradeep Khosla and 10 other Indian-born faculty members and researchers. CMU's Computer Emergency Response Team recorded 82,094 cyber-attacks two years ago, while last year the number of cyber-attacks exceeded 114,000 and caused over $140 billion in damages worldwide. CyLab co-director Khosla is in charge of the effort to build a cyberspace immune system, and agencies such as the FAA, the Homeland Security Department, and the U.S. Secret Service are taking a great interest in his students. Areas of CyLab research include the use of biometric identifiers, such as signatures, iris patterns, fingerprints, voice scans, and face recognition technology, to verify the identity of computer users; Khosla thinks future security and ID authentication will involve some blend of these measures. Another project focuses on enhancing computer components, such as disk drives, with artificial intelligence to make them capable of taking countermeasures in an attack. CyLab research efforts will be partly funded by the private sector, with especially generous underwriters getting rights to intellectual property developed at the facility. The lab recently received a congressional allocation of $6 million for security research, in return for which it will give the government the rights to use CyLab research for national security. Khosla says it is the goal of the center to educate 100,000 security professionals and 10 million computer users worldwide about cyber-security threats within three years; "The vision I have is making Pittsburgh the cyber-security capital of the country," he explains.

The 2004 Kyoto Prize for Advanced Technology has been bestowed upon UCLA adjunct professor of computer science Alan C. Kay for "creating the concept of personal computing and contributing to its realization." The prize is the third major scientific award Kay has received this year: In February, Kay shared the National Academy of Engineering's 2004 Charles Stark Draper Prize with three colleagues for creating the first practical networked PC at Xerox's Palo Alto Research Center in the 1970s, while earlier in June he was honored with the Association for Computing Machinery's 2003 Turing Award for his precedent-setting notions of personal computing and his leadership of the research team that created Smalltalk, the first full-fledged dynamic object-oriented programming language. During his tenure at Xerox, Kay's team developed the graphical user interface (GUI), and his research on learning and creative processes led him to include icons as representations of computer functions. Kay employed Smalltalk as a tool for teaching computing to elementary school students based on his discovery that children learn better through a blend of tactile input, symbols, imagery, and plain text. During his student days at the University of Utah, Kay created dynamic object-oriented programming, and was part of the research group that devised continuous tone 3D graphics for the Advanced Research Projects Agency. He also helped integrate the GUI with an object-oriented operating system in an early desktop computer, and conceived the Dynabook, a forerunner of the laptop PC. UCLA computer science department Chair Milos Ercegovac declared that Kay rightfully earned the Kyoto award for his "tremendous contributions" to the education and computing fields.

Adidas, VectraSense Technologies, and MIT are among the outfits developing embedded intelligence for footwear, and analyst Rob Enderle believes it marks the beginning of a trend to incorporate microcontrollers into apparel, which has the potential to generate a multibillion-dollar market in five to seven years. Adidas opted for electronic intelligence to solve the "adaptable shoe" problem, and the resulting product, scheduled to hit the market this December, can change its cushioning level in response to the runner's size, speed, and fatigue, according to compression measurements recorded by a Hall-effect sensor. Adidas reports that the 20MHz microprocessor can adjust the shoe's cushioning faster than a human knee-jerk reaction by sampling the sensor 1,000 times per second. The software algorithm the microcontroller uses to decide whether to change the cushioning was based on a sole-compression database Adidas compiled in-house. MIT's Media Lab has several intelligent shoe projects, among them a shoe that allows wearers to create musical streams through foot movement, microelectronics-based systems that generate power for other wearable components, and a recently completed Ph.D. project that embeds a wearable sensor package in footwear; this last product is designed to act as a physical therapy tool. VectraSense has been rolling out intelligent shoes since 2001, when it debuted the ThinkShoe. ThinkShoe consisted of a Motorola microcontroller meshed with an integrated air pressure system to keep cushioning optimal, and VectraSense is now hinting that upcoming products will enable shoe-to-shoe communication and information sharing.
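The sensor-driven adjustment loop described above can be sketched as a simple threshold controller over recent compression samples. The normalized scale, target band, and decision rules here are illustrative assumptions, not Adidas's actual (proprietary) algorithm.

```python
def adjust_cushioning(samples, target=0.5, band=0.1):
    """Decide a cushioning change from a window of Hall-effect
    compression samples, normalized to 0..1 (0 = no compression,
    1 = fully compressed). Averaging smooths out single footstrikes."""
    avg = sum(samples) / len(samples)
    if avg > target + band:
        return "firmer"  # sole compressing too much: stiffen it
    if avg < target - band:
        return "softer"  # sole barely compressing: relax it
    return "hold"        # within the comfort band: no change
```

At a 1,000 Hz sampling rate, a controller like this would be invoked on each fresh window of readings, which is what lets the shoe react faster than a knee-jerk reflex.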

The 2004 InfoWorld Compensation Survey of approximately 1,100 IT professionals indicates a slowdown in the downward trend in salary and overall compensation, and most respondents anticipate stabilized or increased IT expenditures from their companies. Twenty-one percent of respondents expect to face a hiring freeze in the next 12 months, 21 percent fewer than actually faced one in the past 12 months and 28 percent fewer than reported a freeze in the mid-2003 survey. Eighteen percent expect to face a salary freeze in the coming year, compared to 28 percent who faced one in the past 12 months and 30 percent in the mid-2003 survey. The Pacific Firm CEO Stacie Blair reports a general feeling that the economy is starting to rekindle demand for IT pros, which will consequently lead to higher salaries. Meanwhile, the number of people satisfied with their salaries has fallen slightly, while the ranks of those dissatisfied with their salaries have increased: 51 percent of respondents agree that they were fairly compensated, down from 53 percent last year and 56 percent two years ago, while 41 percent report dissatisfaction with their compensation, up from 40 percent in 2003 and 37 percent in 2002. Forty percent of polled IT pros insist that they are not seeking another job, 35 percent say they are passively looking, 13 percent admit they are looking at another company, 6 percent are seeking opportunities within their company, 3 percent are passively looking at opportunities outside of IT, and 2 percent are actively seeking work outside of IT. Just 22 percent of respondents express fears of offshore outsourcing, a far cry from the percentage of those who feel their jobs may be threatened by budgetary constraints, lower demand for their companies' products and services, their senior-level status or overcompensation, or issues with co-workers or management.
Fifty-one percent feel their jobs are secure, and 23 percent say their jobs are "absolutely secure."

Former Global/Pacific Electronic University Consortium VP Parker Rossman envisions a Cosmopedia--a multimedia resource containing the sum total of human knowledge--that revolutionizes the research, recording, and exchange of information, and fosters new forms of learning and communication that transcend boundaries between experts and amateurs in all fields. The author contends that the Cosmopedia will help people learn when no human mentor is accessible, though he cautions that the tool should not completely substitute for the human element; the Cosmopedia could also help erode people's growing unwillingness to read. Technology consultant Pierre Levy notes the emergence of learners raised on computer games who are becoming comfortable with "a continuously updated, distributed knowledge base maintained by a sprawling community of players," and Rossman believes this generation will play a key role in the Cosmopedia's evolution. He also foresees the Cosmopedia becoming a fully immersive experience with the advent of advanced virtual reality technology. However, the Cosmopedia's success could be hindered if prospective users face overwhelming volumes of material, and Rossman points to the need to monitor content and continuously test links to information in order to prevent the resource from being sullied by malicious, exploitive, or ignorant users. It is most probable that Cosmopedia users will pay for the privilege of accessing its data, though this toll should be quite acceptable if billions of people worldwide use it on a regular basis. Rossman anticipates a time many decades from now in which a resource such as the Cosmopedia is used to tap the collective brainpower of mankind and place it in a collaborative relationship with artificial intelligence to solve global ills such as crime, disease, famine, and environmental degradation.
He expects every section of the Cosmopedia to contain material appropriate for all age levels in order to interest children and spur the public to be more active and political.