ACM TechNews is published three times a week, on Monday, Wednesday, and Friday.

ACM TechNews is intended as an objective news digest for busy IT professionals. Views expressed are not necessarily those of either AutoChoice Advisor or ACM.
To send comments, please write to technews@hq.acm.org.

C++ programming language inventor and Texas A&M University professor Bjarne Stroustrup said at the ACCU Conference that a backlash against newer languages such as C# and Java has sparked a resurgence in C++ usage, claiming there are now upwards of 3 million C++ programmers. He said the lack of a "propaganda campaign" is the chief reason why people are unaware of this trend: Sun Microsystems aggressively hyped Java's role in the Mars Rover program, for example, while Stroustrup said C++ was employed for scene analysis and route planning in the vehicle's autonomous driving system. Evans Data appears to challenge Stroustrup's claim of C++ growth with data indicating a 30 percent decline in the percentage of developers using C++ between spring 1998 and fall 2004, although the firm expects the decrease to slow considerably over the next several years. A recent Forrester Research survey of over 100 companies found that 59 percent of respondents used C/C++ in production systems, 61 percent used Visual Basic, and 66 percent used Java, leading Forrester analyst John Rymer to conclude that Stroustrup's assertion of approximately 3 million C++ developers is "plausible." RedMonk analyst James Governor said the assumption that Java and Microsoft languages such as Visual Basic and C# are the primary languages used by developers is erroneous. "C++ still has a role and dynamic scripting languages, such as PHP and Python, are growing, not shrinking, in importance," he declared.
Click Here to View Full Article

University of North Carolina at Chapel Hill professor Dan Reed, University of Tennessee professor and ACM Fellow Jack Dongarra, and director of Rice University's Center for High Performance Software Research Ken Kennedy contend that "the prospects for continued deployment and support of high-end facilities for open scientific research are in more serious doubt than they have been in decades." They describe the National Science Foundation's supercomputing centers as being on "life support," with sparse funding for significant upgrades; meanwhile, the Energy Department's open high-end computing initiative is still awaiting funding. The authors say new-generation high-performance computing systems are necessary for critical scientific and industrial breakthroughs, including higher resolution simulations for cosmological studies and experiments, personalized medicine enabled through drug chemistry models, and integrated vehicle designs with lifetime warranties. The authors cite Reed's argument before the House Science Committee on the future of high-end computing at a 2004 hearing, in which he proposed an interagency high-performance computing program. This initiative would be founded on a long-term strategic plan that details each agency's responsibilities in terms of scope, financial scale, and obligation; regular implementation and maintenance of the world's highest performance computing centers for open scientific use; management and support for national science, engineering, security, and economic competitiveness priorities; guaranteed technology transfer and economic leverage via vendor participation; confirmable metrics of interagency cooperation, community participation, and technology progress linked to agency funding; and active enlistment of computational researchers and promotion of cross-disciplinary education and cooperation.
Click Here to View Full Article

E-voting trials in the United Kingdom have shown that such programs can increase voter turnout among young people while protecting security, but concerns about cost and scalability remain. The results come amid news of widespread voter apathy: Political research group YouGov found that nearly half of first-time voters in the United Kingdom had voted for a contestant in the Big Brother TV reality show, but just 40 percent intended to cast a ballot in the upcoming general election. The U.K. government funded e-voting trials in Sheffield in 2003, giving 174,000 residents the option of voting via traditional paper ballot, text message, touch-tone telephone, or touch-screen Internet kiosk at a polling station; the trial was technically successful, with only one minor incident in which a line disconnect briefly prevented kiosks from functioning. Partly due to creative marketing, 40 percent of Sheffield voters chose an electronic method of voting; in another trial area, the Chorley Borough Council spread the word about its new text-message voting capability by printing instructions on beer mats. Though voter turnout among young people increased, overall turnout was only minimally higher, reflecting e-voting results from other efforts, according to St. Andrews University computer science researcher Tim Storer. Meanwhile, adding electronic channels doubled the cost per voter in the Sheffield trial, though e-voting was crucial for citizens with disabilities who could not access some polling stations. Chorley Borough Council official Martin O'Loughlan said most of the costs came from network capacity, security, and system administration, but noted the Sheffield system could have handled millions more voters, which would have spread out the cost. Officials involved in the e-voting trials also said social issues and politicians, not technology alone, were mainly responsible for increased voter turnout.
Click Here to View Full Article

Quantum computing is making the transition from impractical science fiction concept to a subject taught in computer science classes, even though the science is still mostly theoretical. Most quantum researchers believe it will be impossible for computer chip manufacturers to keep pace with Moore's Law without quantum computing, although Intel co-founder Gordon Moore, who formulated the law, doubts that a practical quantum computer will ever be constructed given the complexity of the physics involved. The quantum computing equivalent of bits of information, qubits, can represent a 0, a 1, or, while unobserved, a superposition of both simultaneously; in addition, qubits can be entangled so that they experience shared effects. Researchers have shown that a quantum computer could decode in a few seconds an encrypted message that would take an ordinary computer an astronomically long time to decipher, though the quantum system would require thousands of qubits to accomplish this feat. Lawrence Berkeley Lab physicist Thomas Schenkel intends to fabricate a qubit from a phosphorus atom precisely deposited on a pure silicon wafer; the up or down "spin" of such an atom corresponds to the 0 or 1 necessary for computing. Schenkel has successfully deposited one or two atoms on the wafer, although communicating with the qubit, coaxing a reply out of it, or linking it to other qubits is further down the road. Quantum computing research such as Schenkel's is the focus of a class taught at the University of California, Berkeley.
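The qubit behavior described above can be illustrated with a toy simulation. The NumPy sketch below (all names are our own; it illustrates the concept, not Schenkel's hardware) models a single qubit in an equal superposition and the statistics of repeated measurement.

```python
import numpy as np

rng = np.random.default_rng(0)

# A qubit state is a 2-element complex vector of amplitudes for |0> and |1>.
# This is an equal superposition: both outcomes are equally likely on measurement.
state = np.array([1, 1], dtype=complex) / np.sqrt(2)

def measure(state, rng, shots=10000):
    """Simulate repeated measurement: outcome probabilities are squared amplitudes."""
    probs = np.abs(state) ** 2
    return rng.choice([0, 1], size=shots, p=probs)

outcomes = measure(state, rng)
print(outcomes.mean())  # close to 0.5: the qubit yields 0 or 1 with equal probability
```

Observing the qubit "collapses" it to one definite value per shot; only the statistics over many shots reveal the underlying superposition.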

Panelists at the Association for Computing Technology's Intellectual Property and Technology Summit discussed intellectual property (IP) as an enabler or inhibitor of innovation. Anthony Colarulli of the Intellectual Property Owners Association supported IP as a promoter of innovation, and called for the establishment of a balanced patent system and the exploration of investment strategies. Colarulli criticized the U.S. Patent and Trademark Office for churning out technology patents of dubious quality, a sentiment shared by University of Colorado economics professor Keith Maskus. Former Open Source Initiative general counsel Larry Rosen argued that patents--particularly software patents--do not serve as an incentive to innovation; rather, they have the opposite effect. He said software development cycles usually last 18 months, and argued that "the tiny amount of time it takes to do the programming is nothing like the time it takes to build a business out of it, and there's no reason the patent system should have to protect that." Rosen said the open-source community has petitioned the Organization for the Advancement of Structured Information Standards (OASIS) and the World Wide Web Consortium (W3C) to make software standards freely available, objecting in OASIS' case to a policy permitting companies to hold reasonable and non-discriminatory (RAND) patent licenses on patents submitted to the organization. OASIS did not jettison the RAND provisions, but Rosen said the open-source community still triumphed because IBM and the other major companies will not pursue RAND licensing in the standards body. At the Linux on Wall Street conference, Black Duck Software general counsel Karen Copenhaver predicted that "a lot of community involvement" will be infused into the patent process.
Click Here to View Full Article

Mobile devices could play a role in closing the digital divide for people who are blind or visually impaired, according to researchers involved in the Enabled initiative in Europe. Researchers from Queen's University's Virtual Engineering Centre and Sonic Arts Research Centre are leading the project to make the Web more accessible to people with visual disabilities, assisted by 13 other universities and organizations across the continent. The researchers envision embedding devices in public areas, such as shopping malls, where they could serve as audio guides: As a blind person with an Enabled personal data device walks through a mall, the positions of stores would be announced. "If you have embedded devices they could advertise what the shop is, by saying 'I'm a butchers' through a mobile device," says Queen's University professor Alan Marshall. The scheme could also include tactile display screens that act as maps for blind users as they navigate unfamiliar buildings. The project has received 3.8 million euros in funding from the EU.
Click Here to View Full Article

A federally funded "smart highway" project headed by Rensselaer Polytechnic Institute's Center for Infrastructure and Transportation Studies seeks to address gridlock by tracking traffic via a wireless network of cars equipped with global positioning system (GPS) devices. Motorists participating in a pilot project receive continuous feedback from in-vehicle computers. Each vehicle transmits drive-time data to a server once a minute; the server processes this information and extracts a picture of traffic within a 40-mile radius, while speed is computed by monitoring a vehicle's progress between virtual checkpoints. Updates are relayed by the in-car computers, which give the driver directions and warnings via a synthesized voice. Rensselaer Center research director Al Wallace believes the system could be especially beneficial for small and mid-sized cities bedeviled by rush-hour traffic, noting that its deployment would be less costly than setting up pole-mounted cameras or road sensors. The collection of data from road cameras, "black box" computer chips, and electronic toll tags has provoked fears of exploitation among privacy proponents. Rensselaer Center director George List says deactivating the GPS units is a simple way to avoid monitoring. Intelligent Transportation Society President Neil Schuster says transportation officials and private companies are investigating GPS and other technologies for upgrading traffic systems, while the auto industry is considering a wireless network for moving cars that could be hosted on federally dedicated spectrum.
Click Here to View Full Article
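The speed computation described above, which estimates speed from a vehicle's progress between virtual checkpoints, can be sketched in a few lines. This hypothetical Python example (coordinates and function names are our own, not the Rensselaer system's) derives average speed from two GPS fixes and the elapsed time between them.

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two GPS fixes."""
    r = 3958.8  # mean Earth radius, miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def speed_mph(checkpoint_a, checkpoint_b, elapsed_seconds):
    """Average speed implied by the travel time between two virtual checkpoints."""
    (lat1, lon1), (lat2, lon2) = checkpoint_a, checkpoint_b
    distance = haversine_miles(lat1, lon1, lat2, lon2)
    return distance / (elapsed_seconds / 3600.0)

# Two illustrative checkpoints roughly a mile apart, traversed in two minutes.
print(round(speed_mph((42.728, -73.692), (42.742, -73.690), 120), 1))
```

A server collecting one such fix per vehicle per minute can derive segment speeds this way without any roadside cameras or sensors.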

Carnegie Mellon University in Qatar (CMUQ) is the site of promising research in computer-controlled vehicles for automated and assisted driving. CMUQ teaching assistants David L. Duke and Justin Carlson, who hail from CMU's Pittsburgh-based Robotics Institute, are experimenting with two-wheeled Segway Human Transporters modified for robotic operation. "One of the many goals in robotics down the line is to have robots that really operate efficiently and autonomously among humans," notes Carlson; such capabilities can be enabled either by giving the robot highly accurate maps of its area of operations, or by making the machine intelligent enough to determine the position of people and obstacles by processing input from cameras and other sensors in order to formulate proper responses. Carlson says his area of concentration is the construction of high-resolution maps to facilitate self-navigation, while Duke will focus on making the robots "smart" enough to exhibit appropriate behavior. The robotic Segways are outfitted with computers, laser scanners, and global positioning system (GPS) units, with lead-acid batteries serving as their power source. The vehicles are currently directed by video game control pads, and Duke and Carlson plan to familiarize the machines with Carlson's maps by driving them around campus. Duke cites the Graduate Robot Attending a ConferencE (GRACE) and Minerva as examples of robots that operate among humans: GRACE is a social robot that can communicate via a speech synthesizer and comprehend responses with a microphone and speech recognition software, while Minerva is an autonomous Smithsonian tour guide that actively approaches and interacts with people.
Click Here to View Full Article

At the Association of C & C++ Users (ACCU) conference in Oxford, England, Cambridge University security engineering professor Ross Anderson petitioned software developers for access to their bug databases so that empirical research could be conducted on development methodologies. He wanted data that would enable software researchers to study development methodologies' role in security and quality, similar to how medical researchers use data from controlled clinical trials. Such research would provide insight into whether open source or closed source software was more secure, or how much of an impact methodologies such as extreme programming had on software quality. The data would also shed light on the best method for issuing and applying security patches. Anderson said software code was now large enough that statistical methods could be applied successfully, and that theoretical computer science research had reached its limits. Cambridge University research student Andy Ozment already conducted statistical research into bugs found in OpenBSD software between 1997 and 2000, concluding that the open source operating system benefited from the publishing of vulnerabilities and quick release of software patches. Previously, security expert Eric Rescorla contended that publishing software vulnerabilities helped hackers who exploited the holes faster than administrators could apply fixes.
Click Here to View Full Article

A Compuware-Forrester Research survey of software quality assurance practices examined the QA strategies of 305 U.S. and European senior IT executives from large companies, as well as the approach companies take to application quality and the best practices for improving the rollout of high-quality applications. Eighty-five percent of respondents listed application quality as either critical or extremely critical to the effective demonstration of business value, while 63 percent launched application quality improvement initiatives over three years ago. Fifty-four percent made investments in application development quality testing tools, but only 29 percent of this segment reported substantial gains. The respondents listed a dearth of standardized quality procedures as the No. 1 obstacle to improving application quality, yet fewer than 50 percent of polled execs said their application quality improvement efforts were based on a formal methodology. The survey also found that about two-thirds of the 32 percent of respondents who saw massive gains in application quality consistently apply a formal QA plan, and 45 percent of the 117 execs who regularly use a formal plan experienced major application quality improvements. More than half of the 129 execs who rigorously adhere to a formal QA discipline said such a strategy was very effective at winnowing out defects prior to implementation.
Click Here to View Full Article
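The overlapping subgroup figures in the survey are easier to follow with a quick back-of-the-envelope calculation. The percentages below come from the survey as reported; the rounded head counts are our own arithmetic.

```python
# Back-of-the-envelope check of the survey subgroups (figures from the article).
respondents = 305

# 32 percent reported massive quality gains; about two-thirds of those
# consistently apply a formal QA plan.
massive_gainers = round(0.32 * respondents)            # ~98 executives
formal_among_gainers = round(massive_gainers * 2 / 3)  # ~65 executives

# 45 percent of the 117 execs who regularly use a formal plan saw
# major application quality improvements.
formal_plan_users = 117
improvers = round(0.45 * formal_plan_users)            # ~53 executives

print(massive_gainers, formal_among_gainers, improvers)
```

The arithmetic makes the survey's point concrete: the group reporting the biggest gains overlaps heavily with the group following a formal QA discipline.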

The Monterey Bay Aquarium Research Institute (MBARI), founded by Hewlett-Packard co-founder David Packard in 1987, has built a pioneering video archive system for its 11,000 hours of deep-sea videotape recordings that catalogue animals and geological features on the Monterey Bay seabed. The center makes over 300 dives per year with remotely operated vehicles, which require painstaking annotation by technicians to tag the type of animal, location, environmental conditions, and other data that support cross-reference research. Tape media is used to preserve video details that might otherwise be lost through digital compression, says MBARI video lab manager Nancy Jacobsen Stout. The annotation data is entered into an online knowledge base that anyone can access free of charge via a custom-built query system. The knowledge base uses over 3,500 terms in its lexicon, allowing researchers to easily pull up information about a particular species, its interaction with another animal, or activity at a specific location. MBARI is also developing a neural network-based technology to automatically locate and identify creatures on new videotapes to lessen the burden on human annotators, who are limited to four hours' work per day because of eyestrain and fatigue. Custom feature- and motion-detection chips will use algorithms that mimic biological vision systems. The technology will be specifically tailored for mid-water and deep-sea animals.
Click Here to View Full Article
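The kind of cross-referenced lookup the knowledge base supports can be sketched as a simple filter over tagged annotations. The field names and records below are hypothetical stand-ins, not MBARI's actual schema.

```python
# Hypothetical sketch of querying dive annotations by species, depth, or behavior.
annotations = [
    {"species": "Vampyroteuthis infernalis", "depth_m": 750, "behavior": "swimming"},
    {"species": "Vampyroteuthis infernalis", "depth_m": 900, "behavior": "feeding"},
    {"species": "Grimpoteuthis", "depth_m": 2100, "behavior": "resting"},
]

def query(records, **criteria):
    """Return annotations matching every given field=value pair."""
    return [r for r in records if all(r.get(k) == v for k, v in criteria.items())]

matches = query(annotations, species="Vampyroteuthis infernalis")
print(len(matches))  # 2
```

A controlled lexicon, like MBARI's 3,500-term vocabulary, keeps such queries reliable by ensuring every annotator tags the same animal or behavior with the same term.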

Researchers from Carnegie Mellon University say their new robotic easy chair is just as big on design as it is on technology. Designers played a major role in the development of the SenseChair prototype to avoid creating something that would intimidate seniors. "We feel that for elders, who are our first audience, the metal man is probably not the right model," says Jodi Forlizzi, assistant professor of design and human-computer interaction and head of the SenseChair team. Although the SenseChair is embedded with sensors, motors, sounds, lights, a computer, and wireless technology, the therapeutic chair has a contemporary shape, and its fabric is welcoming and modern. The SenseChair features 12 sensors that monitor vital signs, sleep patterns, and normal activity levels; 14 motors that gently prompt users to shift out of positions they have remained in too long; sounds and voices to wake users from naps; and eight lights to illuminate a room when users get up during the evening. In-home trials are set for the summer for the chair, which can also alert caregivers and medical personnel when vital signs and activity patterns fall below normal levels.
Click Here to View Full Article

Yahoo! senior VP Jeff Weiner disagrees with assertions that the Web could soon drive traditional mass-media outlets such as newspapers and TV into obscurity, predicting at the recent Wharton Technology Conference that consumer personalization is the media wave of the future. This personalization trend poses challenges in commercialization, cost containment, and complexity management: Weiner says the increase in "unique," narrowly focused search engine queries makes it harder for search companies to sell sponsorship of broad searches. Weiner predicts search companies will keep tweaking their software to make search engines capable of "recognizing" users and producing increasingly relevant results. Still, he acknowledges the Web is problematic, citing the presence of shady individuals or "black hats" who try to trick search engines into listing Web sites that may have no bearing on the user's search. Yahoo! is trying to address this problem with automated techniques, but Weiner says the best current strategy is "tapping into all of us and forming self-policing communities." Increased product customization is an area of focus for software companies that cater to small and mid-sized businesses, but the challenges they face may be even more formidable than those facing search companies. Small and mid-sized companies usually cannot afford IT personnel to customize software to their unique business requirements, and Earth Sun Moon Trading technology director Tim Levine said at the conference that more attention should be paid to individual small firms' needs. Customization adds complexity, which has historically led to cost increases, but Microsoft's Taylor Collyer believes Web services could solve the problem.
Click Here to View Full Article (Access to this site is free; however, first-time visitors must register.)

The recent intrusion into Carnegie Mellon University (CMU) business school computers illustrates that not even top IT security institutions can completely guard themselves against cyberthreats, and that an entirely new way of designing systems is needed, according to security and privacy experts. The CMU hack left personal information of about 20,000 applicants, graduate students, and staff open to misuse, though there is no evidence identity thieves have tried to use the data. The incident is similar to other high-profile cases at well-known organizations. University systems are especially vulnerable to hacking because of their interconnectivity and their mission as providers of information. University of California, Berkeley, computer science professor and cybersecurity expert Doug Tygar called the CMU incident unlucky and did not think the problem was due to poor computer security practices. UC Berkeley suffered a serious privacy breach in March when an administrative laptop was stolen, and the school has launched an extensive audit of network and information security, including a review of policy and user access. Cornell University computer science professor Kenneth Birman says news about major privacy breaches emerges every few hours nowadays, and notes that the recently funded TRUST center will bring together academic research groups to pursue a more permanent solution. "We can try to tackle problems when they happen and apply the latest patch, or we can design trustworthy computers from the get-go," he says. The new $19 million TRUST effort is funded by the National Science Foundation and will investigate ways to build fundamentally secure systems.
Click Here to View Full Article

The researchers detail an analytical methodology for uncovering key trust issues when designing pervasive computing systems, based on a systematic examination of plausible scenarios using a Trust Analysis Grid comprising 11 trust issue categories split into three groups. Personal responsibility, reasoning, usability, and harm belong to the subjective trust issues group; audit trail, authorization, identification, availability, and reliability fit into the system group; and source vs. interpretation and accuracy fall within the data group. The authors divide the trust analysis methodology into five steps, the first being the provision of a pervasive computing scenario that illustrates system use and is confirmed by experts, and that evolves throughout the trust analysis process to fulfill system designer and user requirements. The second step is trust analysis via the Trust Analysis Grid, in which the trust issue categories are checked against vignettes in the scenario, while the third step involves peer review and cross-checking of the initial trust issue analysis. The scenario is refined in the fourth step with the addition or exclusion of text and vignettes, and is also subjected to peer review and a second trust analysis; the final step employs the Trust Analysis Grid to extract guidelines that aid pervasive system design. The authors list two possible approaches to Trust Analysis Grid examination: the identification of significant areas, and the matching of technologies against the scenario. They hope to formally model the pervasive system's design and to check those models against the Trust Analysis Grid. "The final models should integrate a model of the particular agent technologies used to implement the system and will enable to have a higher confidence that the trust analysis is correct than the one obtained with semi-formal methodologies," the researchers conclude.
Click Here to View Full Article
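The grid's 11 categories and three groups can be represented as a simple lookup structure. In this sketch the taxonomy mirrors the article's description, while the matching function is a hypothetical stand-in for the manual step of marking which categories a scenario vignette raises.

```python
# The Trust Analysis Grid's 11 trust issue categories in their three groups,
# as described in the article.
TRUST_ANALYSIS_GRID = {
    "subjective": ["personal responsibility", "reasoning", "usability", "harm"],
    "system": ["audit trail", "authorization", "identification",
               "availability", "reliability"],
    "data": ["source vs. interpretation", "accuracy"],
}

def categories_raised(vignette_keywords):
    """Return the grid categories whose names match keywords flagged for a vignette."""
    hits = {}
    for group, categories in TRUST_ANALYSIS_GRID.items():
        matched = [c for c in categories if any(k in c for k in vignette_keywords)]
        if matched:
            hits[group] = matched
    return hits

# Example: a vignette flagged for accuracy and reliability concerns.
print(categories_raised(["accuracy", "reliability"]))
```

Checking each vignette against all 11 categories in turn is the second step of the five-step methodology; a structure like this makes the cross-checking and peer-review steps easy to record.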

The United Nations' Working Group on Internet Governance is meeting for the third time since it was set up in December 2003 to address issues related to Internet governance ahead of the second phase of the World Summit on the Information Society (WSIS), to be held in Tunisia in November. The working group's latest meeting, which began April 20 in Geneva, is centered on current regulatory arrangements and their potential for improvement. A number of issues are on the agenda, including spam and cyber crime, network security, the responsibilities of the Internet's governing bodies, dispute resolution, and the administration of Internet Protocol addresses and the domain name system. The 40-member panel is also expected to address more general issues pertaining to Internet governance and development, such as Internet access and growth in developing countries, privatization, infrastructure development at the national level, cultural and linguistic differences within the Internet community, and open-source software. UN Secretary-General Kofi Annan expects to receive a final report from the working group in July.
Click Here to View Full Article

The L-1 Visa and H-1B Visa Reform Act recently signed by President Bush curbs abuses while protecting the legitimate movement of skilled workers into the United States, according to experts. The legislation was drafted in response to complaints about foreign companies that essentially acted as recruiting agencies for foreign contract workers, hiring overseas professionals for off-site work in the United States. Under the new law, employees petitioning for an L-1B visa (which covers workers with specialized skills and is distinct from the L-1A used for management) must have been employed for 12 months prior to coming to the United States, compared with just six months under previous rules. In addition, global companies cannot move L-1 workers to a third-party site unless their work is directly related to their area of expertise and they are managed by someone from their employing firm. Finally, L-1B workers assigned to third-party sites will be subject to increased scrutiny from U.S. immigration authorities. Immigration lawyer Frida Glucoft says the new restrictions close loopholes in the L-1 visa law, which was never intended to facilitate the employment of foreign professionals as contract labor in the United States. Business immigration attorney Peter Yost says the L-1 Visa and H-1B Visa Reform Act is a good deal for technology companies in that it does not restrict their ability to bring in foreign expertise for legitimate purposes, especially considering the current soft job market and concern about foreign workers displacing nationals.
Click Here to View Full Article

Desktop videoconferencing has become a more affordable option for enterprises thanks to the deployment of Voice over IP and improvements in PC hardware, but bandwidth consumption remains a sticking point. Niche applications will inevitably proliferate among midlevel management, remote workers, and other market segments where video has little presence, and these implementations will be one component of a wider transition to converged conferencing applications. Former Marconi VP Brian Rosen says videoconferencing systems must support video of sufficient quality to clearly relay human emotions--otherwise, they are just a plaything. The further integration of video with IP telephony will facilitate greater ease of use, a concept that is especially applicable to Session Initiation Protocol (SIP)-based solutions. However, SIP does not support the video sophistication typical of offerings based on the ITU's H.323 standard. The enablement of video calls between corporations is the next threshold IP telephony systems must cross. The industry made a move in this direction with the rollout of four firewall and NAT traversal solutions, but further challenges remain; aside from Internet bandwidth issues, users must be able to locate one another before they can place a video call. Video has a strong business case, as its deployment can support productivity improvements, more effective screening of prospective hires, better exploitation of companies' subject matter and technology experts, and enhanced sales presentations and customer visits.
Click Here to View Full Article
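To make the SIP-video point concrete, the sketch below builds the kind of SDP session description (following RFC 4566 conventions) that a SIP endpoint might offer to add a video stream to a call. The addresses, ports, and payload types here are illustrative placeholders, not any vendor's actual signaling.

```python
def build_sdp_offer(ip, audio_port, video_port):
    """Build a minimal SDP offer carrying one audio and one video media line."""
    lines = [
        "v=0",
        f"o=alice 2890844526 2890844526 IN IP4 {ip}",
        "s=Desktop video call",
        f"c=IN IP4 {ip}",
        "t=0 0",
        f"m=audio {audio_port} RTP/AVP 0",
        "a=rtpmap:0 PCMU/8000",
        f"m=video {video_port} RTP/AVP 96",
        "a=rtpmap:96 H264/90000",
    ]
    return "\r\n".join(lines) + "\r\n"

offer = build_sdp_offer("192.0.2.10", 49170, 49172)
print("m=video" in offer)  # True: the offer includes a video media line
```

In SIP, such a body travels inside an INVITE; the NAT traversal problem mentioned above arises because the IP address and ports advertised in lines like `c=` and `m=` may be private addresses unreachable from the far end.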

The digital reconstruction of the interior of Ottawa's Chapel of the Convent of Our Lady of the Sacred Heart shows how incorporating 3D imaging methods into existing workflows can benefit architectural design proposals, supporting appropriate accuracy and visual sophistication while retaining usability for museological and cultural-heritage presentation. The project's central objective was to devise a 3D imaging and modeling protocol that combines multi-sensor technologies with modeling and rendering methods via interpolation among a composite set of existing photographic, physical, and 2D records. Factors that must be weighed to determine the optimal sensor technologies and methodologies for such a project include visual-fidelity needs, metric accuracy, integration of multiple media types, manipulation efficiency, and high-performance visualization capabilities. A multilayered, hybrid methodology that integrates conventional user-dependent 3D modeling, photogrammetry, and even orthorectified photo mapping takes into account the interactions between human scale and perception, data visualization and abstraction, and geometric accuracy. The project protocol takes a three-layer modeling approach: The primary layer provides overall geometry using survey data, the secondary layer yields more refined and accurate geometries, and the tertiary layer captures sub-millimetric geometries via laser-scan data. The methodology allows work to be divided easily among several groups and milestones to be defined throughout the digitization process. Close-range scanning enables more precise documentation of culturally relevant or historically significant details so that they can be restored even though the knowledge of manual fabrication has been lost.
Click Here to View Full Article

The newly formed World Wide Consortium for the Grid (W2COG) is holding an inaugural working symposium at George Mason University on May 24-26. As conference chair Peter J. Denning explains: In the 1980s, the need for connectivity drove the proliferation of the Internet. In the 1990s, the need for information sharing drove the proliferation of the Web. Now the need for effective distributed collaboration is emerging as a major driver for the next generation of the global information grid. This driver is called network-centric, or distributed, operations. Denning invites fellow pioneers to help plan the technical agenda for the consortium and to develop links with the ACM research and development community. For a look at the W2COG Symposium brochure, visit http://www.w2cog.org/cog_sym/files/W2COG_5_05_Symposium_Brochure_2.4.pdf.