Welcome to the March 24, 2006 edition of ACM TechNews,
providing timely information for IT professionals three times a
week.

Sponsored by

Learn more about Texis, the text-oriented database that provides high-performance
search engine features combined with SQL operations and a development toolkit,
and that powers many diverse applications, including Webinator and the
Thunderstone Search Appliance.

IBM researchers have built an electronic circuit using a single carbon
nanotube molecule in a development that could eventually lead to
microscopic circuitry produced through conventional techniques. Molecular
electronics aims to create circuits that would be less than one-tenth the
size of the most sophisticated components available today. Scientists say
that molecular-level technologies could sustain the scaling process beyond
the middle of next decade, when fundamental limitations are expected to
halt the scaling of current technologies. Carbon nanotubes contain
numerous properties that have promise in electrical applications. "This is
the first time that a single carbon nanotube has been used to make an
integrated electronic circuit," said Dimitri Antoniadis, a professor of
electrical engineering at MIT. Antoniadis said that while the discovery
shows great promise, carbon nanotubes are still a long way from supplanting
silicon as the staple material in electronic circuitry. The IBM developers
also reported that they achieved megahertz-level circuit speeds, a first
for molecular computing. Previous molecular electronic switching speeds
capped out in the kilohertz range, switching thousands of times each
second, while commercial microprocessors run at billions of cycles per
second. IBM reported speeds of 52 MHz, though IBM's Zhihong
Chen, who authored the study, believes that speeds on the order of
trillions of operations per second will be possible with molecular devices.
Carbon nanotubes are especially exciting because they appear to be able to
transmit more current without wasting additional heat, which in recent
years has become the major impediment to high-speed computing.

Hackers have begun using DNS servers to magnify the scope of Internet
attacks and disrupt online commerce in a variation on the traditional
distributed denial-of-service (DDOS) attack. Last year, VeriSign sustained
attacks on a larger scale than it had ever seen; rather than coming from the
typical botnet, the traffic was arriving from domain name system servers.
"DNS is now a major vector for DDOS," said security researcher Dan
Kaminsky. "The bar has been lowered. People with fewer resources can now
launch potentially crippling attacks." DNS-based DDOS attacks follow the
familiar pattern of inundating a system with traffic in an effort to bring
it to a halt, though the hackers responsible for the attacks are more
likely to be professional criminals looking to extort money than teenagers
simply pulling off a prank. In a DNS-based DDOS attack, the attacker would
likely dispatch a botnet to flood open DNS servers with queries whose source
addresses are spoofed to point at the victim.
DNS servers appeal to hackers because they conceal their systems, but also
because relaying an attack via a DNS server amplifies the effect by as much
as 73 times. DNS inventor Paul Mockapetris likens the DNS reflector and
amplification attack to clogging up someone's mailbox. Writing and mailing
letters to that person would be traceable and time-consuming, while filling
out the person's address on numerous response request cards from magazines
will cause large quantities of mail to pile up quickly without divulging
the responsible party's identity. In a bot-delivered attack, defenders can
block traffic by identifying the attacking machines, though blocking a DNS
server could disrupt the online activities of large numbers of legitimate users. The
DNS servers that permit recursive queries from anyone on the Internet, known as
open recursive name servers, are at the core of the problem. Mockapetris called
the operators of these open servers the "Typhoid Marys of the Internet,"
and said "they need to clean up their act."Click Here to View Full Articleto the top

ACM will honor four computer scientists who provided the foundation for
formal verification tools for hardware and software systems with the 2005
Paris Kanellakis Theory and Practice Award. The Kanellakis Award, which
carries a $5,000 prize, will go to Gerard J. Holzmann, a researcher at
NASA's Jet Propulsion Laboratory; Robert P. Kurshan, a fellow at Cadence
Design Systems; Moshe Y. Vardi, a computer science professor at Rice
University; and Pierre Wolper, a computer science professor at the
Universite de Liege, Belgium. The researchers have shown that mathematical
analysis of formal models can be used to check the correctness of systems
that interact with their environments, such as digital systems and
communications protocols. Finding ways to check whether hardware and
software designs meet their specifications has been problematic in the
field of computer science. However, the honorees' work is regularly used
commercially in "control-intensive" computer programs. They will be
honored at the ACM Annual Awards Banquet on May 20, 2006, in San Francisco.
For more on the Kanellakis Award and its 2005 recipients, visit
http://campus.acm.org/public/pressroom/press_releases/3_2006/kanellakis.cfm

Microsoft Research scientists say that schools and colleges are not
training the next generation of scientists with the necessary computer
skills. The Microsoft researchers, as part of their 2020 report, said that
computer science will support the natural sciences in much the same way as
mathematics underpins the physical sciences. "This means that tomorrow's
scientists will need to be highly computationally literate as well as being
highly scientifically literate," said Microsoft Research Cambridge director
Stephen Emmott. Andrew Parker, director of the Cambridge eScience Center,
noted that while students come to him with solid skills in mathematics and
physics, they are novices at processing and analyzing data. "They don't
need IT courses on how to read their email and do word processing; they
need computational science courses which are relevant to analyzing large
data collections, searching, making hypotheses, doing simulations," Parker
said. Others at Microsoft agreed that the focus of education is tilted
toward basic computing skills rather than real computer science. Parker
also criticized poor teaching standards for driving students away from
computer science, noting that the material too often is presented in a
sterile and uninteresting manner. The researchers also bemoaned the
culture of idolizing fame, claiming that in a world where students can more
readily identify a star from reality television than a groundbreaking
scientist, the appeal of a discipline perceived as staid and utilitarian is
compromised.

Stanford University's computer science department celebrated its 40th
anniversary on Tuesday, bringing together current and former students and
teachers to recognize the achievements of the celebrated program that has
long been the golden nugget of Silicon Valley. In attendance were Andy
Bechtolsheim, co-founder of Sun Microsystems, Jim Clark, founder of Silicon
Graphics and Netscape, D.E. Shaw's David Shaw, and Yahoo! co-founders David
Filo and Jerry Yang. Venture capitalists look to Stanford's computer
science department as a seedbed for research that could turn into a
marketable product or company, though veterans from other departments are
responsible for companies such as Cisco Systems, Varian, and
Hewlett-Packard. Many panelists argued that the innovative environment of
Stanford and Silicon Valley is imperiled by government funding cuts and an
increasing tendency to frame policy matters around religious considerations
rather than science. "We have a unique environment here, and I hope it can
survive eight years of bad government," said Clark. Computer science and
electrical engineering professor Mark Horowitz, founder of Rambus, said
immigration restrictions could "kill the golden goose." He said, "If you
make it difficult for foreign students to come here...you will have a
dramatic effect on the quality of the technology industry in the United
States." Stanford is working to counter increased competition from China,
India, and other nations for international students by integrating its
programs in science, medicine, engineering, business, and law and allowing
students to take classes spanning disciplinary boundaries.

Pitting robotic dogs against each other, the RoboCup soccer tournament has
a cult appeal to computer scientists around the world, but the competitions
between soccer robots are also the proving grounds for artificial
intelligence technologies that could have a substantial impact in the
future. Soccer robots are far more primitive than supercomputer chess
champions, and have a tendency to wander out of bounds, have difficulty
seeing the ball, and collapse mid-game due to depleted batteries. Most
robots are dogs called Aibos, though some entrants field teams of two-legged,
human-like robots that are prone to falling over after kicking the ball.
RoboCup has the lofty goal of creating a humanoid robotic soccer team by
2050 capable of defeating the champions of the World Cup. This June,
coinciding with the World Cup, more than 100 teams will vie for the RoboCup
World Championship in Bremen, Germany. The genesis of robotic soccer comes
from a paper published in 1993 by University of British Columbia computer
science professor Alan Mackworth, who thought the interactive element of
soccer would make a more interesting challenge for robots than chess.
Japanese scientists launched the first RoboCup, which has now evolved into
an international event with eight independent categories with designations
such as "small-size" and "four-legged." Alexi Lalas, a player on the 1994
U.S. World Cup team, believes the researchers will have difficulty
incorporating the subtleties of the game into robots, noting that what
makes players great is instinct and innate ability, as well as skill and
strategy. Sony's Aibos are equipped with infrared sensors, video cameras,
and wireless Ethernet cards that they use to process 30 images a second to
create a virtual topography. The players are directed by a computer chip
that relays instructions devised by complex algorithms.

Microsoft Research has announced plans to provide Brown University with
$1.2 million and its computing expertise over the next three years so that
its researchers can continue their work in getting computers to understand
and process complex pen input, from handwritten chemical equations to
artistic sketches. Officials from Microsoft joined university representatives
for a news conference this week to announce the creation of the Microsoft Center
for Research on Pen-Centric Computing, which will be the first research center
devoted solely to pen-centric computing research, according to Brown. For
the past four years, Microsoft Research has provided about $150,000
annually to pursue pen-specific research, according to Andries van Dam,
vice president for research at the university. Van Dam, who serves on the
technical advisory board of Microsoft Research, will also be the director
of the pen-based computer research center. Chemistry students at Brown are
currently using a program that turns stylus-sketched molecules into a
three-dimensional, moving model. Researchers have also developed a program
that enables musicians to write musical notations on a screen, save the
music, and manipulate it without using a pen and paper.

Out of concern that Google's control of the online search market is
growing unchecked, Alex Chudnovsky has developed Majestic-12, a
community-based endeavor that uses distributed computing to index Web
pages. "Because of their success, they have effectively created a monopoly
in the virtual world. Monopolies never end up well for consumers," said
Chudnovsky about Google's ascendancy. In the United Kingdom, Google has a
market share of more than 60 percent, well ahead of Yahoo! and MSN.
Majestic-12 already has 1 billion pages in its index, though Google reached
the 8 billion mark four months ago and estimates that its index is three times
larger than its closest competitor's. Mark Levene, a computer
science professor at Birkbeck College, has written that Google has more
than 15,000 servers that crawl 3,000 URLs per second. Harnessing
distributed resources, Chudnovsky claims that fewer than 10,000 people
could crawl the entirety of Google's database each day. So far,
Majestic-12 has around 60 volunteers who crawl about 50 million pages a day
with unlimited broadband connectivity and software running in the
background. While the index holds around 1 billion pages, Majestic-12 has
crawled through 7 billion pages in the few months since its inception.
Though it would invite the creation of duplicate blocks of information
within the index, Chudnovsky would ultimately like the index to be
distributed. "Many search engines do this to reduce the traffic load
returning to a single central site--distributing the index itself is okay,
so long as you have an efficient mechanism to search the index," he
said.
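Chudnovsky's crawling claim can be checked with simple arithmetic from the
figures in the article; the per-volunteer rate below is derived, not reported:

    # Figures cited in the article.
    current_volunteers = 60
    pages_per_day_total = 50_000_000      # pages the 60 volunteers crawl per day
    google_index_pages = 8_000_000_000    # Google's reported index size

    # Derived per-volunteer throughput.
    pages_per_volunteer = pages_per_day_total / current_volunteers
    print(f"~{pages_per_volunteer:,.0f} pages per volunteer per day")  # ~833,333

    # Volunteers needed to crawl an 8-billion-page index every day at that rate.
    needed = google_index_pages / pages_per_volunteer
    print(f"~{needed:,.0f} volunteers needed")  # ~9,600, consistent with "fewer than 10,000"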

Most of the weather maps on the Real-Time Weather Data Web site of the
National Center for Atmospheric Research (NCAR), which has gotten up to 2
million hits in a single day, are developed in the NCAR Command Language,
known as NCL. The language enables scientists to access and visualize
geoscientific data in such far-reaching applications as forestry, aircraft
analysis, genetics, and renewable energy, though it was designed mainly for
climate and weather research, writes NCAR's Lynda Lester. Available for
download on the Web, NCL runs on laptops and supercomputers, and is involved
in scientific endeavors in 51 countries. NCL simplifies the data
collection process through its compatibility with numerous file formats.
"Data formats that contain metadata are converted to a uniform-variable
interface similar to netCDF, which is quite convenient for scientists,"
said NCAR's Mary Haley, who added that NCL can also write output to
different data formats. NCL has numerous data processing and manipulation
functions built in, and it can also produce high-quality two-dimensional
visualizations suitable for journal publication in PDF, PostScript, X11, and
NCGM formats. NCL has two new modules that enable Python users to harness
NCL's visualization and I/O capabilities.
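The article does not name the two Python modules, but assuming they are NCAR's
PyNIO (format-neutral file I/O) and PyNGL (NCL-style graphics), a minimal usage
sketch might look like the following; the file name and variable name are
hypothetical:

    import Nio    # PyNIO: NCL's format-neutral file I/O (assumed to be one of the modules)
    import Ngl    # PyNGL: NCL-style 2-D graphics from Python

    # Open a dataset; PyNIO presents netCDF, GRIB, HDF, and other formats
    # through one uniform-variable interface, as described above.
    f = Nio.open_file("temperature.nc")
    temp = f.variables["T"][0, :, :]   # first time step of a 2-D field

    # Draw a simple contour plot to a PostScript workstation.
    wks = Ngl.open_wks("ps", "temperature_plot")
    Ngl.contour(wks, temp)
    Ngl.end()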

The battle over net neutrality is set to enter a new phase as the U.S.
Congress decides if and how to impose new regulations on the Internet. On
one side of the debate are broadband carriers such as AT&T and Verizon, who
want to charge higher fees to content providers who need higher speeds. By
doing so, broadband carriers say they will be able to invest the money
necessary to build the bigger and better broadband networks. However,
their efforts are being opposed by Google, eBay, Amazon, and others, as
well as many in the world of academia, who have written to Congress saying
that it is dangerous to give one form of Internet content an advantage over
another. They claim Internet startups will have no hope of competing with
large companies such as Google, who have the resources to pay for faster
speeds. The genius of the Internet has been to allow "innovation without
permission," says Vinton Cerf, the "chief Internet evangelist" for Google.
If those who control the network are allowed to discriminate between
different kinds of content--if they can deliver one company's videos faster
than another's, for example--then net freedom will be shackled and
innovation hobbled, according to Lawrence Lessig, professor at Stanford
University Law School and chief academic theorist of the Internet.
Although broadband carriers have so far not given preferential treatment to
one particular Web site over another, that could change since most U.S.
households have just two broadband providers to choose from, and sometimes
no choice, Lessig says. He adds that if network operators begin choking
off some content, consumers will have little power to fight them.

In a recent interview, the Free Software Foundation's (FSF) Eben Moglen
discussed his thoughts on the update to the GPL, free software, and the
recently launched Software Freedom Law Center. Moglen began representing
Richard Stallman, founder of the FSF, in the early 1990s while working at
Columbia Law School, and spent roughly one-fifth of his time helping him
pro bono. Last year, Moglen helped with the launch of the Software Freedom
Law Center, which provides free software projects with free legal advice.
Much of the firm's time is spent on the update to the GPL at the moment,
Moglen says, adding that the attorneys will also provide advice to the One
Laptop per Child project. Moglen has been working extensively with
Stallman on the GPL, and expects a second discussion draft by mid-June.
Moglen prefers to characterize his feelings toward the GPL as principled
conviction, rather than as something closer to religious zealotry, as Linux
creator Linus Torvalds has asserted. Moglen says that all the lawyers at the
center have a high level of technical expertise and can understand and
write code, but that it will have to grow over the next few years to keep
up with the demand for its services. The center was established with a two-year
grant from the Open Source Development Labs consortium. While he believes
that the patent system is in need of reform, Moglen admits that the status
quo will be preserved as long as the pharmaceutical industry has as much
political clout as it does today, but that simply prohibiting software
patents would at least help the IT industry. "Programming is an
incremental process, so I would say almost nothing could be argued to be
novel," Moglen said, restating the main argument against software patents.
Moglen applauds the European Parliament for rejecting the directive on
software patents and politicizing what had been a fringe issue.

The IMS Learning Design e-learning specification is being embraced by
providers of Open Source course management systems and e-learning
applications, including the developers of Moodle and .LRN. The adoption of
the educational modeling language with XML binding comes as teachers show
more interest in and awareness of the specification as a facilitator of a more
flexible and efficient e-learning environment. The e-learning
specification has its roots in a 2003 initiative by the Open University of
the Netherlands to offer courses online. Teachers can use IMS Learning
Design to create lesson plans for a single class or an entire course--a
Unit of Learning (UoL)--on one application, and share them with students
and colleagues who use different applications. "The issue of
interoperability of educational materials has already been addressed, but
until now there was no sophisticated solution to the interoperability of
educational activities--that is the problem IMS Learning Design solves,"
says Dai Griffiths, coordinator of UNFOLD, which promoted the
specification. Over the past two years, the number of applications, tools,
and UoLs that are able to use the specification has risen substantially,
and the momentum has caught the attention of commercial providers of
proprietary software.

Wiki creator Ward Cunningham believes that open-source software will
continue to drive innovation through collaborative development, which has
only begun to realize its potential. "I'm betting on open source being a
big trend," said Cunningham, who is the director of community development
at the Eclipse Foundation. "And it's not just because of cost, but because
of end-user innovation. No end user wants to be a programmer; they just
want to get their jobs done." Advanced tools and languages will continue
to bring users together, much the way communities have formed around wikis.
Cunningham arrived at the wiki form of Web development after using the
HyperCard system to create a database with links in the 1980s. Throughout
his career, he has been interested in the methods of communication within a
large group of people. Cunningham worked on community development at
Microsoft before coming to the Eclipse Foundation, and he gives the
software giant full marks for balancing the interests of stockholders with
the push toward community development. Cunningham is also a strong
proponent of agile development because it encourages collaboration and has
"the ability to track radically changing business needs." By exposing
programmers to more different tasks, Cunningham says that agile development
also produces expert programmers more quickly than conventional development
models.

In a significant advance in quantum cryptography, a team of international
researchers has developed a photon detector capable of creating and
exchanging cryptographic keys at 100 Mbps, a peak speed 20 times faster
than previous technologies. Built mainly from off-the-shelf pieces, the
equipment runs on DARPA's Quantum Key Distribution test bed system.
Because reading a photon changes its state, quantum keys created by photons
cannot be intercepted by eavesdroppers without detection. Accelerating the process of creating
keys is critical to the swift deployment of one-time pads, the lists of
random cryptography keys transmitted among senders and receivers that are
considered to be the most secure form of cryptography. As computing power
continues to advance, quantum cryptography will enjoy a growing number of
applications, such as securing a video stream with the rapid production and
resetting of keys. Quantum cryptography will cross the threshold of
justifiable expense once the cost of deployment is eclipsed by the value of
transmitting information with added security, said Carl Williams of the
National Institute of Standards and Technology. MagiQ already offers a
quantum cryptography package, though CEO Robert Gelfond admits that it is
not yet ready for widespread deployment. The new detector is based on a
modified radio astronomy receiver that is a major departure from existing
technologies. "This is a fundamentally new type of detector," said BBN
Technologies' Jonathan Habif. "The old one is solid state circuitry. This
is superconducting technology." A closed-cycle refrigerator cools the
detector to 3 Kelvin, though Habif admits that
it is not very efficient. Connecting to DARPA's network that links BBN,
Harvard University, and Boston University, the system operates at a
sustained rate of 100 million pulses per second.
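The one-time pad mentioned above is easy to illustrate: XORing a message with an
equally long, truly random, never-reused key is provably secure, and the hard
part, which quantum key distribution addresses, is delivering that key material
to both ends. A minimal Python sketch, with key handling simplified for
illustration only:

    import secrets

    def otp_encrypt(message: bytes, key: bytes) -> bytes:
        """XOR each message byte with the corresponding key byte."""
        assert len(key) >= len(message), "one-time pad key must be at least as long as the message"
        return bytes(m ^ k for m, k in zip(message, key))

    # Decryption is the same XOR operation applied again with the same key.
    otp_decrypt = otp_encrypt

    plaintext = b"video frame bytes go here"
    key = secrets.token_bytes(len(plaintext))   # in QKD, photons would carry this key material

    ciphertext = otp_encrypt(plaintext, key)
    assert otp_decrypt(ciphertext, key) == plaintext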

While software development is still plagued by inefficiencies, busted
budgets, and product failures, a host of new development tools is helping
to automate the process, generating code from sophisticated
machine-readable schemes or domain-specific languages with the help of
advanced compilers. University of Texas computer science professor Gordon
Novak is developing high-level automatic programming applications that use
stockpiles of generic code to sort and find items in a list. Users create
views that direct the organization of the data, building sophisticated
flowcharts that are compiled with the generic algorithms, producing custom
code in languages such as C, C++, or Java. Novak claims that his system
can produce 250 lines of code for an indexing application in 90 seconds by
describing the program at a higher level, while it would take a programmer
a week using an ordinary language. The Kestrel Institute's Douglas Smith
is developing a system to automatically import knowledge into the computer
using abstract templates, the generic components of high-level knowledge
about algorithms and data structures that form a reference library in
Smith's application. Smith says that the system, called Specware, can also
prove that the code produced meets the user's requirements. By producing many more lines of
code than the user actually has to write, Specware is essentially an
efficiency tool for programmers, though Smith has also developed Planware,
a language at an even higher level that has been used by the Air Force to
develop an aircraft scheduling application. "It's a language for writing
down problem requirements, a high-level statement of what a solution should
be, without saying how to solve the problem," Smith said. "We think it's
the ultimate frontier in software engineering."
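Novak's idea of compiling user-supplied "views" against stockpiles of generic
algorithms can be miniaturized into a toy sketch. The Python below is only an
illustration of the concept, not Novak's system, Specware, or Planware, and the
data and field names are hypothetical:

    from operator import itemgetter

    def make_index_tools(view_key: str):
        """Generate sort/find routines specialized to one field of a record,
        in the spirit of compiling a 'view' against generic algorithms."""
        key = itemgetter(view_key)

        def sort_records(records):
            return sorted(records, key=key)          # generic sort, specialized by the view

        def find_record(records, value):
            return next((r for r in records if key(r) == value), None)

        return sort_records, find_record

    # "View": index employee records by their id field.
    sort_by_id, find_by_id = make_index_tools("id")
    employees = [{"id": 3, "name": "Ada"}, {"id": 1, "name": "Grace"}]
    print(sort_by_id(employees))
    print(find_by_id(employees, 1))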

The limitations of the open-source approach are becoming evident as the
methodology branches out of the software sector and into other areas. The
approach's most attractive attribute--its openness to anyone--is also its
Achilles heel, leaving projects vulnerable to either unintentional or
deliberate abuse that can only be deterred through continuous
self-policing. Indeed, only a few hundred of the approximately 130,000
open-source projects on SourceForge.net are active because the others are
unable to overcome open source's shortcomings. The success of
open-source projects often hinges on the degree of similarity between the
projects' management practices and those of the companies they are trying
to surpass, and most projects' core component is a close-knit group rather
than a wide-ranging community. Many open-source initiatives have set up a
formal and hierarchical system of governance to guarantee quality.
However, while open source provides tools for very productive online
collaboration, ways to "identify and deploy not just manpower, but
expertise" are still lacking, according to New York University Law School's
Beth Noveck. The model permits elitism in the acceptance of contributions,
despite the egalitarian system of contribution. There is also speculation
that open source's ability to sustain its innovation as well as the
enthusiasm of contributors is limited.

Studies by NASA and Carnegie Mellon University researchers imply that
portable electronic devices (PEDs) carried onto aircraft by consumers emit
radiation while in use that can potentially interfere with critical aircraft
instruments. The Carnegie Mellon researchers monitored the radio
frequency (RF) environment on 37 passenger flights in the eastern United
States between September and November 2003, and successfully identified
emissions from cell phones as well as other consumer devices. The study
led to the conclusion that there is a regular occurrence of cell phone
calls made from commercial aircraft, in clear violation of FCC and FAA
regulations, and also suggested that at least one passenger does not turn
off his or her cell phone on most flights. The researchers found not only
a profound lack of awareness among passengers of the reasons behind current
PED policies, but disbelief that the use of such devices on flights
constitutes a major safety risk. The Carnegie Mellon and NASA studies
indicate a clear and present danger that cell phones can make GPS
instrumentation useless for landings, and support the theory that cell
phone emissions may have contributed to accidents. Beyond an outright ban
on PED use in aircraft cabins, which is unlikely, the authors recommend
that airlines, regulators, and aircraft and equipment makers practice
risk analysis and nurture the development of adaptive management and
control via five strategies. There must be a joint industry-government
initiative for assessing, testing, and promoting improved communications
between aviation professionals and the public; NASA's Aviation Safety
Reporting System must be enhanced to once again support statistically
meaningful time-series event analyses; in-flight RF spectrum measurements
should continue; real-time RF emission monitoring by flight crews must be
facilitated; and the FCC and the FAA must collaborate on harmonized
electronic device emission and vulnerability standards for avionics.

Organizations can deploy location technologies to address staff mobility
issues, as demonstrated by a prototype real-time location sensing (RTLS)
system implemented with radio frequency identification (RFID), Wi-Fi, and
digital mapping. The prototype helps technical-support team members at the
Business School of the Universite de Sherbrooke manage their work priorities
and better evaluate how location technologies affect and enhance their daily
activities. The experiment also highlights the need
to carefully monitor changes in work processes as they are introduced to
prevent misemployment and resolve "soft" human issues stemming from the
introduction of innovative techniques that run counter to entrenched
assumptions. MicroGeomatics, the augmentation of an organization's
decision-making processes through the optimized use of indoor location
systems, has the potential to help managers comprehend the circumstances
under which emerging technologies can benefit organizations, but its
acceptance relies on the proper accommodation of various fundamental
reorganization issues. The experiment focused on fulfilling the
organization's need to effect faster and improved communication among
technicians during assignments; record problems and solutions for reuse by
other team members; enable the improvement of staff mobility and response
time through real-time tracking; and permit a map visualization of computer
inventory in the building. The prototype consisted of a single interface
combining GIS, Java programming tools, a relational database,
and an Internet-based messaging service. The test showed that
MicroGeomatics technologies produced spatial data of relatively good quality
and accuracy, though location results were heavily influenced by the
number and placement of antennas. The system facilitated simpler
information sharing, optimized moves via cartographic visualization, and
more appropriate data through real-time database updates.