ACM TechNews is published three times a week, on Monday, Wednesday, and Friday.

ACM TechNews is intended as an objective news digest for busy IT professionals. Views expressed are not necessarily those of either AutoChoice Advisor or ACM.
To send comments, please write to technews@hq.acm.org.

The use of paperless electronic voting machines in the upcoming presidential election is generating concern about the accuracy of recounts, despite advocates' claims that the systems are accurate and reliable. Ballots cast on touch-screen machines are recorded and stored in a cartridge that a central computer reads to count the votes; should a recount become necessary, election officials print out the results from each machine and recalculate the total manually, but this does not verify that each vote was recorded accurately in the first place. Furthermore, the e-voting system's software code cannot be divulged except by court order, which means candidates must assume that the machines were correctly programmed and properly tested before the election. Computer scientists argue that the complexity of computer code makes e-voting machines particularly susceptible to programming bugs or deliberate tampering, while the practice of sending machines home with election workers is an open invitation to fraud. Proposals that e-voting systems supply a paper trail, such as the one sponsored by Rep. Rush Holt (D-N.J.), have gained credibility in light of recent election snafus: Votes cast electronically by 134 Broward County, Fla., citizens during a special state election in January were not recorded, while a probe of e-voting machines used in a November 2003 election in Fairfax County, Va., uncovered evidence that votes for one candidate were subtracted. Candidates who have experienced digital recounts firsthand warn that the legitimacy of an election could be compromised. California Secretary of State Kevin Shelley has mandated that all e-voting systems comply with strict security requirements by November, but three California counties have objected to the directive, while election officials in one county are openly refusing to comply.
Click Here to View Full Article

A bill to relax the strictures of the Digital Millennium Copyright Act (DMCA), which critics perceive as overly broad, was a lightning rod for controversy at a May 12 hearing of the House Energy and Commerce Committee's subcommittee on commerce, trade, and consumer protection. Entertainment industry representatives maintained that the law, as it currently stands, is needed to curb digital piracy, even if it bans the copying of digital content for personal use. Witnesses who took the opposite view included 321 Studios CEO Robert Moore, who claimed that his company is "on the brink of annihilation" because a federal judge prohibited the sale of DVD-copying software developed by 321. Rep. Rick Boucher (D-Va.), who is sponsoring the proposed DMCA amendment along with 15 other House members, charged that the law tramples on consumers' fair-use rights. Other witnesses testified that the DMCA is limiting the availability of material that libraries and other educational institutions can provide for long-distance learning. Those in favor of revising the DMCA posited that anti-piracy efforts should concentrate on the conduct of pirates rather than the technology itself. Speaking for the entertainment industry, Motion Picture Association of America President Jack Valenti contended that current copy-protection technology is incapable of telling the difference between digital pirates and people who copy content for personal use. "Once you allow one person to break [protection technology], you allow everyone to do it," he declared. So far, the House bill has no companion legislation in the Senate.
Click Here to View Full Article (Access to this site is free; however, first-time visitors must register.)

A scarcity of gender diversity in the technology industry is a source of vexation, particularly for the few women who are able to attain the vaunted status of CEO. "Although three of the eight women CEOs in the 500 largest U.S. corporations are in technology companies, in fact most women at technology companies say they still run into a glass ceiling," notes Institute for Women in Technology President and CEO Telle Whitney. A 2003 Catalyst survey found that only 9.3 percent of board seats and 11 percent of corporate-officer positions at technology companies were held by women, compared to 12.4 percent of board seats and 15.7 percent of corporate-officer jobs at other companies. Whitney and other advocates for women in technology argue that the shortage of women tech executives should be blamed not so much on gender discrimination as on a lack of mentors, role models, and guides. The pressures of balancing work and family life often discourage women from pursuing corporate advancement beyond the midlevel echelon. Some women leave to launch their own companies so that they can achieve a better work/life balance and have greater autonomy than they would in larger enterprises. However, VentureOne estimates that women founded only 6.16 percent of all startups last year, a rise of just 1.98 percentage points over 1997. Christian & Timbers CEO Steven Mader reports that corporations have made it a priority to fill board seats with women so they will not be criticized for overlooking a segment of the population that is an even greater tech consumer than men.
Click Here to View Full Article

Wireless sensors that can network into smart clusters could raise environmental awareness to a new level once economic and technical hurdles are overcome. The central technology of wireless sensor networks is the "mote," a tiny device that comes equipped with a processor, computer memory, a battery, and a low-power radio transceiver. Harbor Research predicts that hundreds of millions of such devices will be linked to the Internet and other networks by 2010, while wireless sensor network equipment and services could generate over $1 billion in revenues. Harbor President Glen Allmendinger says, "There is clearly a groundswell of activity going on." Working to bring this vision about are startups such as Dust Networks, Intel and other major corporations, venture capitalists, academic institutions, and government agencies such as the CIA and the Defense Advanced Research Projects Agency. Motes are currently too costly and too large to perform many of the services inventors have conceived, but researchers expect the devices will cost as little as $1 each and shrink to the size of an aspirin or a grain of rice within the next several years. Other challenges developers are working on include extending the life of mote batteries to keep maintenance to a minimum, an eventual transition to solar- and kinetic-energy-based power sources, operating and network standards, and data filtering and analysis tools. Initial applications for wireless sensor networks envisioned by developers include battlefield operations enhancement, building automation, and industrial equipment maintenance.
Click Here to View Full Article
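The mote described above bundles a processor, memory, a battery, and a radio into one package, and the point of clustering is to aggregate readings in-network rather than radio every sample upstream. A toy simulation of that idea follows; all class names, battery figures, and sensor values are illustrative, not drawn from any real mote platform:

```python
import random
from dataclasses import dataclass

@dataclass
class Mote:
    """One simulated sensor node: processor + memory + battery + low-power radio."""
    node_id: int
    battery_mah: float = 1000.0  # hypothetical capacity

    def sample(self) -> float:
        # Pretend to read a temperature sensor; each reading drains a little power.
        self.battery_mah -= 0.05
        return 20.0 + random.uniform(-2.0, 2.0)

def cluster_average(motes):
    """Aggregate in-network so only one summary value need be radioed upstream."""
    readings = [m.sample() for m in motes if m.battery_mah > 0]
    return sum(readings) / len(readings) if readings else None

motes = [Mote(node_id=i) for i in range(10)]
avg = cluster_average(motes)
```

Aggregating before transmitting is one of the standard tricks for stretching mote battery life, since the radio is typically the most power-hungry component.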

The underwhelming performance of autonomous vehicles in the Defense Advanced Research Projects Agency's (DARPA) Grand Challenge was attributed to poor vision systems and the vehicles' failure to learn from their navigation experiences. Most robot vehicles currently employ laser range finders and stereo cameras to map out their immediate path, while path-planning software selects the route that avoids all danger areas designated in the map; however, the software is usually programmed to identify specific impediments, not unexpected obstacles. DARPA plans to fix this problem with its three-year Learning Applied to Ground Robots (LAGR) initiative, which will reportedly represent a milestone in robot navigation across uneven terrain. Under LAGR, two 70-cm-long robots--one with intelligent code and one without--will negotiate an obstacle course every month for 18 months, at the end of which the intelligent robot should be completing the course 10 percent faster than its dumb counterpart. An additional 18 months of testing should result in the intelligent robot outracing the other by a factor of 100. Roboticists who participated in the Grand Challenge think DARPA is on the wrong track, insisting that the technology their vehicles used in the race was adequate, and only needed a few more months of refinement to work properly. "We had developed an adaptive 'learning' vision algorithm for finding the road that actually worked very well, but we ran out of time," argues David Armstrong, leader of the University of Florida's Team CIMAR. Carnegie Mellon University research engineer Bryon Smith claims the school's Sandstorm robot would have performed well if its navigation system had not been damaged in an accident just days before the race.
Click Here to View Full Article
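The map-and-avoid scheme described above — sensors produce a map with designated danger areas, and path-planning software picks a route around them — can be sketched with a simple breadth-first search over a grid. The grid and coordinates below are invented for illustration and do not represent any Grand Challenge or LAGR system:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over an occupancy grid; cells marked 1 are danger areas."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        (r, c), path = frontier.popleft()
        if (r, c) == goal:
            return path  # shortest obstacle-free route
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), path + [(nr, nc)]))
    return None  # no safe route found

grid = [
    [0, 0, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 0, 0],
    [0, 1, 1, 1],
    [0, 0, 0, 0],
]
route = plan_path(grid, (0, 0), (4, 3))
```

Note what this planner cannot do: the obstacle map must already be correct. That is exactly the weakness DARPA is targeting — a vision system that misclassifies an unexpected obstacle leaves even a perfect planner routing through danger, which is why LAGR emphasizes learning rather than better search.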

The U.S. Department of Energy (DOE) has announced plans for a 50-teraflop supercomputer to be housed at Oak Ridge National Laboratory (ORNL) and used for open, peer-reviewed scientific research. ORNL won the $25 million project with its collaborative proposal; partners include Cray, IBM, and Silicon Graphics, while research collaborators will include Argonne National Laboratory, other DOE laboratories, and universities. Energy Secretary Spencer Abraham said the new supercomputer, which will likely be the fastest computer in the world once built, would "revitalize the U.S. effort in high-end computing." The aim of the project is to support existing users while exploring next-generation supercomputing architecture. Besides the new 50-teraflop machine, ORNL's current Cray system will be upgraded to 20 teraflops in 2005, and a new 100-teraflop system is planned for 2006. ORNL's proposal involves constructing a 170,000-square-foot facility with 40,000 square feet devoted to computers and data storage; the computing infrastructure will be supported by 400 staff and 12 megawatts of power from the Tennessee Valley Authority. Once the machine is built, scientists from all over the world will be able to access it on a competitive, peer-reviewed basis. The DOE Office of Science solicited its 10 national laboratories and received four responses that were analyzed by six external reviewers and then decided upon by panelists. The Office of Science also supports a longer-term UltraScale Scientific Computing Capability that will be located at multiple sites and increase overall computing capacity 100-fold.
Click Here to View Full Article

The NextFest Exposition in San Francisco this weekend showcases the type of fantastic technology nearly forgotten during the economic slump. Among the 100 exhibits at the show are a Web browser that is controlled through brain waves, a device that allows people to "see through" solid objects, and a small flying machine slated to sell for less than $50,000. The exposition is hosted by Wired magazine and focuses on extraordinary technology. Georgia State University offers one of the more intriguing innovations with its brain-wave-controlled Web browser: Sensors attached to a person's scalp detect carefully shaped thoughts and perform commands accordingly, including "click on link" and "go back"; researchers plan to refine the system with severely paralyzed testers, who will have strong motivation to harness their thoughts. The University of Tokyo presents another new technology that allows people to camouflage objects using images of what is behind those objects. A person looking through a viewfinder sees real-world images combined with images taken from a video camera and projected in front of the user; because the images behind a solid object, which must be covered by a reflective gray material, are projected onto that object, the object appears invisible. The AirScooter II is a one-person flying vehicle smaller than a normal car that uses two counter-rotating 14-foot blades to fly for up to two hours at 60 mph. Hewlett-Packard is also showcasing a camera-equipped PDA that can read and translate written foreign text, and George Washington University researchers bring a wired glove that translates sign language into computer-readable signals for on-screen display or speech.
Click Here to View Full Article

Mono, a Linux version of Microsoft's .Net Framework developed by Novell, allows .Net developers to build applications that are interoperable with the Linux and Unix platforms, while permitting Linux developers to work with a greater assortment of programming tools such as the C# language. Included within Mono is an implementation of the .Net Common Language Infrastructure and a second suite of native Linux application programming interfaces. Forrester VP Randy Heffner is concerned that technical hurdles could impede Mono's deployment within the enterprise, particularly because of a lack of support for major Microsoft application components: "It may be interesting in some cases [using Mono] to develop on Linux and then deploy on Windows, but at this point you can't go the other direction without doing a detailed analysis of which components are available and which are not," he notes. Heffner adds that Mono might become more credible if Novell details the role the project plays in the company's product development scheme. Mono founder and Ximian CEO Miguel de Icaza insists that Mono's support for Web services and Web application development is sufficient, and says that Mono plans to address most of the remaining interoperability holes over the next year. Meanwhile, open source consultant Bruce Perens is concerned that Mono might be waylaid by patent licensing issues, even though Microsoft has agreed to license .Net components submitted to the ECMA standards group under non-discriminatory terms. "There will be other class libraries and components licensed under royalty terms, and that's a problem for open source developers because there's no one to pay the royalties," Perens explains. De Icaza counters that the royalty issue has been greatly exaggerated by open source proponents, but concedes that Mono could run into trouble with Microsoft.
Click Here to View Full Article

Jim Herbsleb, a professor at Carnegie Mellon University's School of Computer Science, says the open source project development model is not flourishing in the industrial sector because open source programmers mainly design the software's functionality for themselves rather than for the mainstream user. Herbsleb explains that commercial software developers, by contrast, hardly ever use the software they build; project managers are the ones who define the product's functionality. In addition, companies can develop software in one location in about half the time it takes to develop it across disparate locations. Meanwhile, Nancy Frishberg of Sun Microsystems points to the deceptiveness of the open source credo that "everyone can contribute," because there is little to contribute to open source beyond code, bugs, and patches. Furthermore, contributors are usually limited to programming experts, which shuts a lot of creative people out of the loop. In commercial projects, managers assign the work to programmers, while the open source model allows programmers to choose their own assignments; this leads to a situation in which open source developers work in their personal areas of expertise and can focus on their individual interests. Herbsleb observes that open source development projects inspire open technical discussion, while participation by all is often frowned upon in commercial projects. The disparities between these two development models have led to a situation in which "it is sometimes said [that lack of] usability is the Achilles' heel of open source," according to Steve Easterbrook of the University of Toronto's Knowledge and Media Institute.
Click Here to View Full Article

Presenters at MIT's Designing Bits & Pieces symposium on May 10 discussed the future trends they anticipate for consumer electronic gadgets, with simplicity and ease of use foremost. CELab director Michael Bove explained that giving gadgets these qualities, as well as the ability to be aware of each other, will be key challenges, while MIT Media Lab Chairman Nicholas Negroponte agreed that simplicity is vital in order to combat "featuritis," the tendency for designers to stuff gadgets with features for no reason other than to have them. Among the gadgets touted at the symposium was a "do-it-all" cell phone from Motorola due out this summer: The device features an MP3 player with stereo surround sound speakers, removable flash memory, and an integrated camera. Software radio was also emphasized as a technology to keep an eye on, though Vanu CEO Vanu Bose did not expect consumer products to emerge for at least five years. Motorola's Dave DeMuro said that batteries are in no danger of being phased out by alternative technologies such as fuel cells, which have yet to reach the prototype phase and would be subject to usage regulations because they contain flammable liquid. DeMuro added that users and vendors would balk at increasing the size of batteries in order to boost their power capacity. Among the technologies he foresees are wireless and device-to-device recharging, curved batteries, and a transition from embedded batteries to removable batteries. The MIT event concluded with an open house where students showcased and discussed various projects they were working on, such as memory improvement technologies and software that can identify conference attendees who are participating too much or too little.
Click Here to View Full Article

Wearable wireless displays are moving steadily closer to reality thanks to progress in the thriving microdisplay industry. Researchers and startups have devised a way to build minuscule displays with magnifying optics that give users the illusion of a large monitor suspended a few feet in front of them, one that moves with them as they change the position of their heads. "The technology is here and it has the right price point," says Kopin CEO John Fan, whose company makes microdisplays embedded in products from Interactive Imaging Systems, MicroOptical, JVS, and Matsushita. MicroOptical incorporates the Kopin technology into small clip-on screens used to enhance surgical and military operations, while Interactive Imaging Systems' Second Sight head-mounted display is designed primarily to let industrial engineers perform data checks and repair logs as they work. These vendors are partnering with firms such as Essilor International to shrink the eyewear and embed the technology more deeply in the eyewear itself. "In the future, electronic eyewear will have to be like sunglasses, which perform a function but also look cool," predicts MicroOptical's Mark Basler, who adds that these glasses will eventually be wirelessly linked to handheld devices. Thad Starner of the Georgia Institute of Technology anticipates that mobile computers' usability will be amplified as floating displays penetrate the mass market, while wearable computers will help usher in the age of "augmented reality." The assistant computing professor says perfecting the design of mobile personal displays remains a key challenge.
Click Here to View Full Article

Howard Anderson writes that scientific researchers and businesspeople rely on pattern recognition to solve problems and tackle the various challenges they put their minds to. He attests that both business practices and scientific methodology follow the same formula--they construct hypotheses and test the hypotheses by gathering data, and pattern recognition helps them to read data points better or more quickly. Anderson contends that breakthroughs usually result when scientists or businesspeople are able to stack developments on top of each other. He cites as an example the experience of his friend Gordon Matthews, whose development of voice mail revolved around three patterns he recognized: That eight out of 10 business phone calls were not completed in real time because there was no actual communication between the caller and the responder; that managers were increasingly placing and answering their own calls because the secretary pool was shrinking; and that email demonstrated that enormous productivity could be gained from a store-and-forward technology. Anderson also points to neural networks as a tool for inferring non-obvious patterns and relationships that both businesspeople and scientists can use by blending neural computer technology with data mining. Anderson is William Porter Distinguished Lecturer at MIT's Sloan School of Management and the senior managing director at YankeeTek Ventures.
Click Here to View Full Article

Federal science funding agencies anticipate tough times in coming years, as the president's proposed 2005 budget and five-year plan decrease the amount of money for nondefense research; many agency budgets will remain flat or increase at less than the rate of inflation, while rising research costs and salaries mean an extra burden for those agencies. Keeping research spending level basically means less science gets done, noted one official. The federal research community examined the budget proposal at three events: the IEEE's Engineering R&D Symposium, the 29th Annual Forum on Science and Technology Policy held by the American Association for the Advancement of Science (AAAS), and the release of the National Science Board's Science & Engineering Indicators report. Overall, the proposed 2005 budget increases slightly, with the majority of that increase going toward defense-related research. Hopes had been high for the 2005 budget, given the recent doubling of National Institutes of Health funding between 1998 and 2003; the expectation was that the largesse would eventually be extended to other agencies. Robert Richardson of the National Science Board noted that the budget is not yet finalized and said that in his non-governmental roles he is pushing hard for increases. The National Science Foundation is slated for a slight increase next year, though relatively flat spending until 2009 will leave it with less money than in 2004, adjusting for inflation. Other groups set to see decreases are the Department of Energy's Office of Science, the Commerce Department, and the National Institute of Standards and Technology, which will have its Advanced Technology Program eliminated entirely.
AAAS R&D budget and policy programs director Kei Koizumi said the cutbacks were due largely to increased military spending and other domestic priorities, but Richardson countered that science is critical to advancing other presidential priorities, including defense, homeland security, and education.
Click Here to View Full Article
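The "flat budget, adjusted for inflation, means less money" arithmetic above is easy to make concrete. The figures below are hypothetical round numbers chosen for illustration, not actual agency budgets or official inflation forecasts:

```python
def real_value(nominal, inflation_rate, years):
    """Purchasing power of a flat nominal budget after `years` of inflation."""
    return nominal / (1 + inflation_rate) ** years

# Hypothetical: a budget held flat at $5.0 billion through 2009
# with 2.5% annual inflation.
budget_2004 = 5.0  # billions of dollars
real_2009 = real_value(budget_2004, 0.025, 5)
loss_pct = (1 - real_2009 / budget_2004) * 100  # purchasing power lost
```

Under these assumptions the flat $5.0 billion is worth only about $4.4 billion in 2004 dollars by 2009 — a loss of over 11 percent of purchasing power with no nominal cut at all, which is the squeeze the article describes.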

University of Tulsa computer science professor John Hale and doctoral student Gavin Manes have patented a technique for deterring unlawful peer-to-peer file sharing by inundating the P2P network with spoofed files that resemble stolen music. Software developed by the researchers produces counterfeit files with attributes designed to make them look legitimate, when in actuality they are white noise, ads, or poor-quality recordings. The software further thwarts P2P users by sending out decoys by the thousands. Hale adds that artists who want to share their music on P2P networks would still be able to do so, since content owners could tag only specific files for spoofing. Hale and Manes are working on a way to commercialize and market their invention in collaboration with their university. Entertainment companies are teaming up with content-protection firms such as Overpeer to swamp P2P networks with faked material, but Hale does not know if their methods are truly dissimilar to his, since the companies are not keen to disclose their secrets. Hale claims his technology could be used to shield all kinds of sensitive or confidential data, and says the method is a less-intrusive anti-piracy measure than interdiction.
Click Here to View Full Article
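The core of the decoy idea above is a file whose payload is worthless noise but whose attributes look like a legitimate recording. A minimal sketch of the payload side follows, assuming nothing about Hale and Manes' actual patented method; spoofing the displayed attributes (file name, size, bit rate) is left out:

```python
import io
import random
import struct
import wave

def make_decoy(seconds=1, sample_rate=8000):
    """Synthesize an in-memory WAV file containing pure white noise.

    A deployed system would generate these by the thousands and give each
    a plausible track name so it sorts alongside real files on the network.
    """
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)          # mono
        w.setsampwidth(2)          # 16-bit samples
        w.setframerate(sample_rate)
        frames = b"".join(
            struct.pack("<h", random.randint(-32768, 32767))
            for _ in range(seconds * sample_rate)
        )
        w.writeframes(frames)
    return buf.getvalue()

decoy = make_decoy()
```

The result is a structurally valid audio file that any player will open, which is what lets decoys survive casual inspection until someone actually listens to them.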

Open-source veteran Brian Behlendorf says the developer community and companies continue to strengthen open-source software in the legal, commercial, and technical realms. The Apache Web server continues to grow in popularity, largely because of its low cost, he says. Apache-related efforts such as mod_perl, Tomcat, and PHP all bolster the case for using Apache even for mission-critical Web functions. Behlendorf says the SCO lawsuit is nearly dead, but the ordeal strengthened open source in that it forced developers and organizations to take intellectual property and license issues more seriously: Now, for example, developers pay much more attention to the history of the code they are using. Going forward, the SCO lawsuit will also push more people toward Apache, BSD, and MIT licenses instead of the GPL, which Behlendorf says uses a stick rather than a carrot to ensure compliance. Behlendorf currently serves as CTO of CollabNet, which he co-founded, and he says the managed online collaboration services group is pioneering a revolutionary new business paradigm: CollabNet offers a hosted infrastructure for online workplace collaboration, and recently merged with Enlite Technologies, which has a collaborative project management tool and keeps its engineering team in India. Tightly integrated online platforms allow collaborative international teams to overcome traditional barriers, Behlendorf says. He explains that as long as the open-source community allows for a Darwinian-type environment, the concept will continue to grow in terms of market share; companies will also depend on the open-source community to research new technologies faster. Although the system is inefficient in some areas, it works well when viewed from a larger scope, and Behlendorf says some open-source projects have worked better than others to eliminate development overlap, including OpenOffice, Gnome, and the Mono project.
Click Here to View Full Article

Quantum computers promise to upend a gamut of research fields, including cryptography and weather forecasting, and the Defense Advanced Research Projects Agency (DARPA) plans to inject millions of dollars into quantum computing research in an effort that could transform the computer industry. Potential revolutionary benefits of quantum computing include exponentially faster database searches and calculations, unbreakable encryption, and the ability to solve problems beyond the capacities of cutting-edge silicon-based computers. DARPA's proposed Focused Quantum Systems (Foqus) program has set as its goal the development of a quantum computer that can factor a 128-bit number in half a minute with 99.99 percent accuracy; expected Foqus participants such as IBM and MIT's Lincoln Laboratory will be tasked with defining the computer's design, along with error correction and read/write data parameters. IBM Research's Nabil Amer explains, "This will be a highly coordinated effort with the serious goal of bringing us to a go/no-go point: Will we be able to build this computer or not?" There is no consensus about the best methodology for constructing a quantum computer: Possibilities being explored include spintronics, the generation of quantum bits by superconducting materials, and the containment of atoms or charged ions in electromagnetic traps. It remains unclear whether practical quantum computers can even be built, and the theoretical constructs that serve as the foundation for quantum computing remain unsettled. The progression of quantum computing experiments is a matter of debate as well, with researchers such as Hewlett-Packard's Stan Williams arguing that scientists "need to take baby steps and bootstrap an industry out of quantum computing." A functional quantum computer is not expected to be ready for two or more decades.
Click Here to View Full Article

Confidence is high that mobile enterprise technology is poised for a resurgence thanks to better security, faster wireless data services, and hardware improvements. The premier providers of mobile email solutions partly owe their popularity to their products' ability to withstand frequent disconnections, while PalmSource's upcoming Cobalt operating system will support a comparable security model by allowing enterprises to "plug in" to security solutions, and multitask as well. New software may also enable Pocket PCs to support secure email. AMR Research's Dennis Gaughan cautions that companies seeking returns and success in implementing mobile enterprise technology should not rely on email; enterprise software is the more obvious choice, since it can boost mobile worker productivity. The Yankee Group's Adam Zawel explains that developers must determine how applications already deployed in the enterprise can be adapted to preclude the need for expensive one-off development projects for mobile devices: "The real challenge is to decide what needs to be on the device all the time and what's needed in real time--and to take an existing application and separate those components," he says. Gaughan notes that a major impediment to the deployment of wireless mobility in the United States is carrier networks, which are still trying to come up with the best approach to sell enterprise applications. On the other hand, Gartner Dataquest's Todd Kort observes that "The convergence of better processors, better displays, and better operating systems is allowing enterprise applications to become more acceptable for use on PDAs"; he anticipates that enterprise mobile application development will be encouraged by the emergence of hybrid phone/PDAs as well as new smart phones.
Click Here to View Full Article

Homeland Security Department National Cyber Security Division (NCSD) director Amit Yoran says the federal government has been working to implement its strategy to secure cyberspace since it was first released, noting the creation of the National Cyber Alert System, the development of public-private partnerships, and better planning and communication for crises. Some technology executives believe that Yoran has done well in promoting cybersecurity issues, but some security experts do not think the government is doing enough. Yoran says the alert system is a major step in increasing IT security awareness, adding that over a quarter of a million users subscribed to it within a week of its launch. He also says that NCSD is working with companies operating critical infrastructures so that they can better protect their systems and develop security measures, but he contends that more work is needed in the private sector. Democrats on the House Homeland Security Select Committee have reported a number of shortcomings in federal cybersecurity, including the lack of a coordinating structure for public and private agencies should a major electronic terrorist attack take place, and they say that some of the Homeland Security Department's moves seem to be repeats of previous efforts. Counterpane Internet Security CTO Bruce Schneier thinks that the NCSD lacks authority because the government is afraid of harming business. Schneier suggests regulation, such as requiring ISPs to provide their customers with personal firewalls and requiring software vendors to release only secure code. Yoran does not rule out regulatory measures, but admits the current national strategy does not advocate such action.
Click Here to View Full Article

A successful speech application's development and deployment calls for an approach that mixes science, art, and an emphasis on the basics of software development. Designers must choose the best application "persona" for prompts, customized through careful selection of age, gender, and a domain-appropriate demeanor; the best dialog style, be it directed dialog, natural language dialog, mixed initiative, or form filling; and support for new and repeat users, who bring their own individual quirks regarding help, grammar, and prompts. Most speech applications implemented over the last 10 years employ grammar-constrained recognizers, but statistical language models built from the collection of many sample responses, though an expensive measure, allow recognition of more varied responses. Integration of the voice user interface front end with the back-end system is critical, and there are several approaches supported by the Internet client/server architecture: The common gateway interface strategy works well if the application is coded with static markup language, while code executed on the server at runtime interacts with the client via dynamically generated markup and with the back end or legacy process via an open API. Designers are advised, before starting any application development project, to look for reusable components or templates to avoid the time and expense of building from scratch. Coding applications in markup or in high-level languages comes with its own set of advantages: The first approach can result in more efficiency and speedier response times, while the second usually entails easier application development and maintenance. Usability testing confirms that the prompts provoke the correct understanding in users, and that their responses are understood by the computer. Different testing tools are needed once the prototype or pilot phase is reached, as glitches are likely to appear with the application's exposure to a wider user community.
Click Here to View Full Article
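The distinction above between grammar-constrained recognizers and statistical language models can be illustrated with a toy directed-dialog grammar: the application accepts only utterances its grammar enumerates and re-prompts on anything else, which is precisely why handling varied phrasings requires the costlier sample-collection route. The phrases and semantic tags below are invented for illustration:

```python
# A tiny directed-dialog grammar: each accepted word sequence maps to a
# semantic tag the application logic can act on.
GRAMMAR = {
    ("check", "balance"): "CHECK_BALANCE",
    ("transfer", "funds"): "TRANSFER",
    ("speak", "to", "an", "agent"): "AGENT",
}

def recognize(utterance):
    """Grammar-constrained 'recognition': exact grammar hit or a re-prompt."""
    words = tuple(utterance.lower().split())
    if words in GRAMMAR:
        return GRAMMAR[words]
    # Anything off-grammar ("what's my balance?") is unrecognizable here;
    # a statistical language model would be needed to cover such variants.
    return "REPROMPT"
```

For example, `recognize("Check balance")` succeeds, while the semantically identical "what's my balance" falls through to a re-prompt — the usability gap that usability testing with real callers is meant to catch.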