The United States is doing away with old voting systems because they are
prone to error, but problems should still be expected in the November
elections because so many voters will be using unfamiliar equipment. "You
throw that many people in on something new, you're always bound to see
something go wrong," says Kimball Brace, president of Election Data
Services, which tracks election equipment. According to a new survey from
the political consulting firm, this fall at least 80 percent of voters will
use new machines that are either ATM-style touchscreen units or devices
that ask users to fill in the blanks. Ten percent of voters will use a
lever machine, and 3 percent will use punch cards, which were the subject
of the contested votes in Florida during the 2000 presidential election.
At that time, about 20 percent of voters used levers, and approximately 17
percent used punch cards. Meanwhile, critics of the new voting systems
maintain that they can be manipulated, and the charges have prompted 25
states to pass laws that require the equipment to verify votes and to yield
paper receipts. After the 2006 elections, approximately 48 percent of the
nation's 170 million registered voters will have used a new voting system.

San Antonio will host the 30th annual World Finals of the ACM
International Collegiate Programming Contest from April 9-13, drawing the
best students from around the world in the highest profile university
competition for computing science and engineering. Last fall's competition
included more than 5,600 teams from 84 countries and 1,733 universities,
which produced the 83 finalists that will compete in San Antonio. They
will be given at least eight sophisticated programming problems drawn from
real life, such as determining optimal travel routes or creating a network
strategy for the ideal placement of cell phone towers, to solve within five
hours. The winning team will receive scholarships and awards from IBM,
which sponsors the program as part of its academic outreach initiative,
ultimately aiming to further open-source development and innovation. "This
event offers collegiate programmers the opportunity to become familiar with
Java, Linux, Eclipse, and other open computing platforms being adopted by
industries around the world," said IBM's Doug Heintzman. "Open source and
open standards are driving the next great innovations in the industry, and
this contest challenges students who will be responsible for that
innovation for decades to come." North America will send 22 teams to the
contest finals, with 17 coming from the United States. Three teams will
come from Africa/Middle East, 22 from Russia and Europe, 29 from Asia and
the South Pacific region, and seven from Latin America.
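
As a rough illustration of the "optimal travel routes" style of problem
mentioned above, the sketch below runs Dijkstra's shortest-path
algorithm over a made-up city graph; the graph data and function names
are invented for illustration and are not drawn from the actual contest
problem set.

    # Dijkstra's algorithm over a hypothetical city graph.
    import heapq

    def shortest_route(graph, start, goal):
        # graph: {city: [(neighbor, travel_time), ...]}
        dist = {start: 0}
        queue = [(0, start)]
        while queue:
            d, city = heapq.heappop(queue)
            if city == goal:
                return d
            if d > dist.get(city, float("inf")):
                continue  # stale queue entry
            for nxt, cost in graph.get(city, []):
                nd = d + cost
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt] = nd
                    heapq.heappush(queue, (nd, nxt))
        return None  # goal unreachable

    cities = {"A": [("B", 4), ("C", 2)], "C": [("B", 1)], "B": []}
    print(shortest_route(cities, "A", "B"))  # -> 3 (via C)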

Google and several other Internet companies are urging Congress to pass
a law that would bar telecommunications networks from charging consumers
more for some services and from controlling what they can access on the
Internet. The "net neutrality" debate has been heating up since some
phone companies indicated that they plan to bill content providers for
delivery of specific Internet services, and as Congress considers
amending the 1996 telecommunications act. Net neutrality is the idea
that network operators
should be neutral providers of Internet content and consumers should have
the option of accessing whatever they want on the Internet. "There are
250,000 networks that make up the Internet," says Google's Vinton Cerf.
"They are compensated by its users. Allowing broadband carriers to control
what people see and do online would fundamentally undermine the principles
that have made the Internet such a success." Cerf, a strong advocate of
net neutrality, also says the openness of the Internet is being
threatened and that a new law would protect consumers by limiting the
ways carriers can interfere with the choices of their Internet users.
National Cable and Telecommunications Association CEO Kyle McSlarrow
disagrees with Cerf and is asking that lawmakers refrain from making
premature legislative decisions. Senate Commerce Committee Chairman Sen.
Ted Stevens (R-Alaska) plans to introduce net neutrality legislation in the
beginning of March.

LISP and other development tools that are friendly to programmers but
translate poorly to end-user hardware have typically been reserved for
research-oriented arenas such as artificial intelligence, though Web-facing
applications may bring them more into the mainstream. Because programmers
are more of a rarity than hardware, LISP and other languages designed for
the humans involved in the development process deserve another look. An
increasing portion of the computing environment is based on distributed
networks of inexpensive PCs, where the workload and cost are shared.
Processing speed is increasing at a faster rate than developers' skills,
arguing for a more programmer-friendly language, even if it makes things a
little harder for the machine. LISP presents both its data and programs as
a connected list of symbols. While LISP grew out of the artificial
intelligence and machine learning environment, it can be used for designing
a host of other customized systems, such as the AutoCAD drafting tool.
Symbolic reasoning schemes such as OPS5 and PROLOG also rely heavily on
LISP in their development, and both appear in Franz's Allegro Common Lisp
8.0. In addition to Franz's commercial product, LISP also appears in many
open-source applications, though, as with any old language, it is dogged by
outdated misconceptions of what it can and cannot do. Although it
labors under the perception of being slow, LISP can produce functions
with run-time speeds comparable to those of C and C++ applications. The
fastest C and C++ applications can still outperform the best that LISP
has to offer, but the median speed of LISP programs is double that of
their C-family counterparts.
LISP is more memory-intensive than C and C++, though less so than Java,
which it can also outperform.
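
To make the code-as-data point concrete, here is a minimal sketch,
written in Python rather than LISP, of a program represented as nested
lists of symbols and evaluated directly; it illustrates the idea only
and is not an example from the article or from Allegro Common Lisp.

    # A program is just a nested list: (* 2 (+ 3 4)) becomes
    # ["*", 2, ["+", 3, 4]], so code can be built and inspected as data.
    def evaluate(expr):
        if isinstance(expr, (int, float)):
            return expr          # numbers evaluate to themselves
        op, *args = expr
        values = [evaluate(a) for a in args]
        if op == "+":
            return sum(values)
        if op == "*":
            product = 1
            for v in values:
                product *= v
            return product
        raise ValueError(f"unknown operator: {op}")

    program = ["*", 2, ["+", 3, 4]]
    print(evaluate(program))  # -> 14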

The Bush administration's 2007 budget includes $137.2 billion for spending
on research and development, an increase of just 2.6 percent, or $3.4
billion, from fiscal 2006. Basic research spending would rise only 1.3
percent, or $357 million, to $28.2 billion. Although White House science
advisor John Marburger said science funding in 2007 would remain flat, he
noted that non-defense R&D would rise 1.9 percent, while agencies also
stand to benefit from $50 billion in funding over the next 10 years that's
called for by the new American Competitiveness Initiative, including $1.3
billion in new funding and $4.6 billion for the R&D tax credit next year.
The National Science Foundation would receive a $349 million increase to
$4.5 billion for R&D, the National Institute of Standards and Technology
would obtain a $104 million boost to $535 million, and the Energy
Department would get a $595 million increase to $9.2 billion. Spending on
networking and information technology would climb 9.4 percent, or $239
million, to $2.78 billion. "[The] increase in support for advanced
networking research in 2007, primarily by NSF, the Defense Advanced
Research Projects Agency and [Energy] will ensure that large-scale
networking technologies will keep pace with the rapid developments in
petascale computing systems," says the budget. Funding for the National
Nanotechnology Initiative would jump about $77 million to $1.3 billion, and
spending for Homeland Security would be about $4.8 billion, with $535
million going toward Pentagon projects on cybersecurity, domestic nuclear
detection, explosives research, and food- and livestock-protection.

While the Senate is investigating the legality of the Bush
administration's warrantless surveillance program, University of
Pennsylvania computer scientists have developed simple, inexpensive methods
for eluding the eavesdropping net. Phone taps commonly rely on the absence
of a C-tone, the sound conveyed when a receiver is on the hook, to trigger
recording. C-tones can be created by playing two frequencies in tandem,
tricking the wiretap by simulating the noise that a phone makes when the
receiver is idle. Military phones with C-tone buttons can be found on
eBay, or, alternatively, the parts to generate the sound can be purchased
at Radio Shack. UPenn computer scientist Matt Blaze tested a variety of
wiretapping devices, and found that the older loop extender systems were
especially susceptible to the C-tone trick. The government more commonly
uses CALEA systems now, which the FBI claims are nearly impervious to the
C-tone defense--a claim that Blaze disputes. In presenting his findings at
the International Federation for Information Processing Conference on
Digital Forensics last week, Blaze also presented tricks that can stymie
software intended to intercept email, Web traffic, and file sharing. Since
all the information that travels over the Internet is contained in packets,
Blaze dispatched decoy packets, carrying bogus information and packaged in
such a way as to ensure that only the eavesdropper would receive them, not the
original recipient. Blaze took advantage of the different ways of routing
and processing packets, ensuring that the eavesdropper and the intended
recipient would receive different versions of the same message.
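
The two-frequencies-in-tandem trick described above can be sketched as
follows; the frequency pair, sample rate, and function below are
placeholders chosen for illustration, not the actual C-tone
specification or Blaze's test setup.

    # Mix two sine waves to synthesize an idle-line tone.
    import numpy as np

    def two_tone(freq1_hz, freq2_hz, seconds=1.0, sample_rate=8000):
        t = np.linspace(0, seconds, int(sample_rate * seconds),
                        endpoint=False)
        signal = (np.sin(2 * np.pi * freq1_hz * t) +
                  np.sin(2 * np.pi * freq2_hz * t))
        return signal / np.max(np.abs(signal))  # normalize to [-1, 1]

    samples = two_tone(1000, 2000)  # hypothetical frequency pair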

The Belgian research group IMEC has demonstrated the results of the first
stages of its Human++ initiative, aiming to develop wearable wireless body
area networks. IMEC unveiled plans for a completely integrated low-power
ultrawideband (UWB) receiver designed for low-data-rate applications at
the International Solid State Circuits Conference in San Francisco, as
well as research detailing a high-speed analog-to-digital converter
(ADC) with record-low power per conversion step. The UWB receiver runs
between 3 GHz
and 5 GHz, with a variable channel filter that enables pulse processing at
bandwidths of up to 2 GHz. The front end of the receiver is ideally suited
for carrier-based impulse radio, offering a flexibly defined spectrum of
minimal complexity. IMEC expects the device to yield practical
applications for low-data-rate sensor networks. The ADC project,
undertaken as a part of IMEC's 90 nm RF CMOS project, will figure
prominently in reducing the power consumption of wireless devices. The ADC
has an oxide thickness of 1.5 nm and a 70 nm physical gate length. Another
group detailed a read-out front-end created through a 0.5 micron CMOS
process to extract bio-potential signals emanating from portable
electroencephalography, electrocardiography, and electromyography.

While TopCoder's weekly contests among programmers from all over the world
occupy a relatively obscure place on the Internet, and only occasionally
draw media attention when they pit contestants against each other in the
finals, the program is nonetheless reflective of the shift under way in the
broader international computing scene. Just three years ago, all but 10
percent of the top 50 programmers in the TopCoder program were American.
Today, Americans account for just 12 percent of the best TopCoders, and the
United States has been surpassed by Russia, Poland, and Canada.
Participants must solve three complex problems of increasing difficulty in
75 minutes in the first round of each contest. In Round 2, contestants can
challenge each other's code, gaining points if they cause an opponent's
program to fail, and losing points if their own sabotage is unsuccessful.
Contestant Oded Wurman, a recent Stanford graduate now employed by Nvidia,
notes that players will often systematically attack the solutions that
novice entrants offer up to the most difficult problems, assuming that
there must be a flaw that they can exploit to earn points. Although
each unsuccessful attempt at busting another programmer's code costs
points, Wurman says that many contestants will spend much of the first
round devising attack strategies rather than solving the problems
themselves.
This stiff competition is about more than just pride, though, as TopCoder
contestants can win prizes and job interviews with the sponsoring
companies.
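
A toy sketch of the challenge-phase scoring described above, in which a
challenger gains points if an opponent's submission fails on a chosen
input and loses points otherwise; the point values and the buggy
submission are invented for illustration and are not TopCoder's actual
rules.

    SUCCESSFUL_CHALLENGE = 50   # hypothetical award when the target fails
    FAILED_CHALLENGE = -25      # hypothetical penalty for a bad challenge

    def challenge(solution, test_input, expected, score):
        try:
            result = solution(test_input)
        except Exception:
            result = None       # the challenged program crashed
        if result != expected:
            return score + SUCCESSFUL_CHALLENGE
        return score + FAILED_CHALLENGE

    def buggy_max(values):
        return max(values)      # crashes on an empty list

    print(challenge(buggy_max, [], 0, score=0))  # -> 50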

Princeton University Dean of Engineering Maria Klawe warned against the
negative myths and stereotypes that discourage women from pursuing computer
science in her recent speech, "Gender, Lies and Video Games: the Truth
about Females and Computing." Klawe, a former president of ACM, hopes to
boost female participation in computer science by helping women overcome
the myths that computers are built for men and that women are inherently
less capable of understanding technology. While women do spend more time
on the Internet than men, the misguided perception that computer scientists
toil endlessly at an isolated terminal has curtailed female enrollment in
computer science courses and kept women out of computing careers. Further,
Klawe notes, "computer science majors are snatched up first by employers,
and they are being paid $10,000 more starting than many other majors
looking for work." Klawe traces the disparity in interest back to
adolescence, where video games shoulder much of the blame, captivating
the attention of boys while leaving girls essentially uninterested.
High school teachers often favor boys in computer science courses,
inviting them to help teach the class while ignoring the girls. Before
they get to college, women show a
preference for disciplines such as the arts and psychology, while boys
gravitate more toward computer science, physics, and engineering. Klawe
hopes to draw more women to computing with humanizing elements such as
games, media, and outreach programs, placing an emphasis on the
applications of computing, rather than the technical aspects of
programming.

When Free Software Foundation founder Richard Stallman voiced his concerns
about software patents undermining innovation in 1991, he was largely
ignored and branded an alarmist. With the draft update to the GPL, he is
now seeking to limit the growth of patent-protected digital content and
proprietary software. While Stallman readies for an embittered struggle
between open-source advocates and defenders of the proprietary software
model, IBM is leading a coalition of open-source groups, including Red Hat
and Open Source Development Labs, to improve the quality of patents and
guard against attempts to patent work already in use. The group will start
by compiling a list of prior art, which runs counter to Stallman's strategy
of completely sheltering GPL code from the patent process. The draft
update also restricts GPL code from being used to protect movies and music.
Stallman's vision would provide universal access to free software,
effectively undermining the current patent protections, and it has
sparked staunch opposition from the entertainment industry: content
distributors use open-source code to safeguard their digital property
rights, and Linux powers a growing number of devices, such as the TiVo
digital video recorder. Linus Torvalds has already repudiated GPLv3,
setting the stage for what could be a showdown between Stallman and the
free-software ideologues on one side and the larger, more pragmatically
minded open-source community on the other. The looming confrontation
could undermine the availability of
GPL software, which has many observers hoping that IBM will be able to
broker a solution that continues the flow of innovation.

Yahoo! and America Online have announced that they will soon start
offering companies the voluntary option of paying for ensured delivery of
emails in their subscribers' inboxes, a move that SpamCop founder Julian
Haight called "another nail in the coffin of email in general." He said
the concept "kills the whole openness of the email system on the Internet,"
while AOL's Nicholas Graham said the idea is to provide a choice "for
people who simply want to have their email delivered in a different way."
He added that AOL is providing this service in response to subscriber
complaints that they have no way of telling if their emails are legitimate
or a ruse by con artists. Emails sent through the new service will be
accompanied by a seal of certification to establish confidence among
recipients that the messages are authentic. Companies using the service
will pay a cent or less per message to send; Goodmail Systems will
handle email sent via the program, and those messages will bypass the
spam filters that AOL applies to most subscriber email.
Anti-Spam Research Group Chairman John Levine finds the prospect of paid
email to be both "depressing and inevitable," while Heller Information
Services President Paul Heller said a lot of people are unsettled by the
Yahoo! and AOL programs because Web users have always looked upon email as
a free and open service. "Logically, it's just an extension of advertising
that you see on the page when you log on to AOL," he noted. AOL is slated
to roll out its paid email service in the next few months, while Yahoo!
remains mute about its program.

The Silicon Valley startup Krugle has developed a search engine designed
to help developers find source code on the Internet, parsing and indexing
the code and presenting it in a user-friendly interface. Krugle combines
open-source and proprietary elements in its own technology, drawing heavily
on the Apache Software Foundation's Nutch and Lucene and the ANTLR parser
generator. "Today, programming is more about efficiently assembling and
integrating code, than it is about writing new code from scratch," said CEO
and co-founder Steve Larsen. "The problem is, finding and evaluating the
available code takes too much time. That's the problem Krugle solves."
Co-founder Ken Krugler added that while existing search engines can crawl
the Web to retrieve individual sites, they are unable to mine repositories
of source code. Krugle's founders also claim that the tool can help
developers negotiate issues such as licensing and documentation, as well as
providing advice on which code to use. Developers can augment Krugle with
tags and commentary on top of the code, similar to the way that wiki
users supplement content with metadata. Krugle expects to go live with the
search engine on March 8 at the O'Reilly Emerging Technology Conference.
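
The core idea of indexing source code so that identifiers can be
searched directly might be sketched as a toy inverted index like the
one below; this is an assumption-laden illustration, not a description
of Krugle's actual parsing or indexing pipeline.

    # Map every identifier-like token to the files that contain it.
    import re
    from collections import defaultdict

    def build_index(files):
        # files: {filename: source_text}
        index = defaultdict(set)
        for name, source in files.items():
            for token in re.findall(r"[A-Za-z_][A-Za-z0-9_]*", source):
                index[token.lower()].add(name)
        return index

    def search(index, query):
        return index.get(query.lower(), set())

    repo = {"parser.py": "def parse_config(path): ...",
            "net.py": "def open_socket(host, port): ..."}
    print(search(build_index(repo), "parse_config"))  # -> {'parser.py'}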

The widespread adoption of the open-source Linux operating system as an
embedded platform for innovative Internet-enabled applications is being
driven by its emphasis on fairness, progress, and resource sharing in
basic design decisions, notes Sven Thorsten-Dietrich of MontaVista
Software, and applications developed on the platform offer greater
reliability, versatility, and cost efficiency, as well as faster
time-to-market, than competing products. The Linux community is working
to improve Linux so
that it can accommodate the real-time performance needs of embedded
systems; one such advance is MontaVista's O(1) scheduler in the Linux 2.6
kernel, which mimics the behavior of previous Linux schedulers while
also scheduling in bounded (constant) time. Thorsten-Dietrich writes
that a
reevaluation of the original Linux design principles was necessitated by
Linux's expanded versatility in the embedded segment, along with the
incessantly increasing demand for time-critical functionality. A real-time
operating system must be designed to recognize assigned task priorities
and to respond to time-critical events by switching to the
corresponding tasks within a bounded period. The occurrence of a
time-critical event in a real-time
system requires preemption, in which the currently running task is
suspended and the task responsible for processing the event is scheduled.
Making Linux fully preemptible involved the development of the mutex,
an alternative locking mechanism that lets the kernel permit preemption
even while executing in most critical sections. "Overall, the real-time
effort
is helping to expose existing problem areas in the kernel, stimulating
discussions about efficiency and optimization, in addition to guiding
development efforts towards a continually higher standard of implementation
and performance for the evolving Linux ecosystem," notes
Thorsten-Dietrich.
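
A highly simplified simulation of the priority preemption described
above: when a more urgent task becomes ready, the currently running
task is suspended and the urgent task is scheduled in its place. This
is an illustrative sketch, not the Linux 2.6 scheduler or MontaVista's
implementation.

    import heapq

    class Scheduler:
        def __init__(self):
            self.ready = []      # min-heap of (priority, name); 0 is most urgent
            self.running = None

        def make_ready(self, priority, name):
            heapq.heappush(self.ready, (priority, name))
            self.preempt_if_needed()

        def preempt_if_needed(self):
            if not self.ready:
                return
            if self.running is None or self.ready[0][0] < self.running[0]:
                if self.running is not None:
                    heapq.heappush(self.ready, self.running)  # suspend it
                self.running = heapq.heappop(self.ready)
                print("running:", self.running[1])

    sched = Scheduler()
    sched.make_ready(5, "logging")           # background work starts
    sched.make_ready(1, "sensor interrupt")  # time-critical event preempts it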

Profits from cyber crime were higher than profits from the sale of illegal
drugs for the first time last year, according to Valerie McNiven, a
U.S. Treasury Department advisor. "Cyber crime is moving at such a high
speed that law enforcement cannot catch up with it," McNiven says.
Cyber crime is now driven by profit, with an estimated 85 percent of
malware created specifically for financial gain. The FBI lists fighting
cyber and technology crime
at number three on its list of top 10 priorities. Since cyber criminals
are becoming more organized, experts say a new approach to fighting cyber
crime is needed in three key areas: people, policies, and technology. The
"people factor" aspect of the solution is figuring out how hackers work and
what makes them tick. Behavioral insight will help fight intrusions
into the network as well as data extrusion out of it. Policy is another
issue organizations must address by establishing expectations for
behaviors and outcomes in order to create a secure business
environment. The
implementation of security policies allows companies to protect their data.
More than 40 organizations recently came together to form the Data
Governance Council, a group designed to go beyond the traditional
approaches to security, privacy, compliance, and operational-risk policy.
Technology such as encryption is another challenging issue: companies
must learn how to extend it to every touchpoint on the network. It is
estimated that more than half of all corporate data is on someone's PC,
PDA, or cellular phone. Cyber crime is now the crime of the 21st century,
but with the right people, policies, and technology in place, it can be
fought, writes IBM Research vice president Paul Horn.

One of the most exciting areas in the field of robotics is "autonomous
mobile manipulation," which focuses on the development of machines whose
manual dexterity matches that of humans. Projects in the field include
NASA's Human-Robot Technology program, which seeks to create robots that
are as dexterous as a six-year-old child within two years. Breakthroughs
in robotic dexterity are now possible because sensor, actuator, and
computing technology has advanced to the point where robots can more
accurately sense their surroundings, improve their fine motor skills, and
interact with objects in a more natural manner. Such advances enable
robots' movements to be controlled according to the exertion of force
instead of the absolute position of each limb or digit. The arms of NASA's
Robonaut, the most dexterous machine in the world, boast 150 sensors
programmed to detect such variables as joint positions, contact forces,
stresses and strains on the limb, and heat flow; an on-board computer
analyzes sensor readings and transmits commands to the electric motors in
the arm. However, Robonaut is a remote-controlled machine rather than
fully autonomous, and achieving full automation requires teaching the robot
to use tools, keep track of objects, and recognize speech and gestures.
Robots with autonomous mobile manipulation can carry out tasks that are too
dangerous or just undesirable for humans, and although current technology
is likely to be restricted to menial chores such as garbage collection
rather than more complex tasks such as repairs in space, the advances
necessary for such operations could lead to important milestones in
prosthetics, automated elderly care, and surgical tools.
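
Force-based control, as contrasted with absolute position control, can
be sketched with a simple proportional loop that nudges a joint toward
a desired contact force; the gain, stiffness stand-in, and target
values below are invented for illustration and are not taken from
NASA's Robonaut.

    def force_control_step(desired_force, measured_force, position,
                           gain=0.01):
        # Move slightly in the direction that reduces the force error,
        # instead of commanding an absolute position.
        error = desired_force - measured_force
        return position + gain * error

    position, measured = 0.0, 0.0
    for _ in range(5):
        position = force_control_step(2.0, measured, position)
        measured = 50.0 * position   # crude stand-in for contact stiffness
        print(round(position, 4), round(measured, 3))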

Improved algorithms and faster computers are taking machine learning
beyond the realm of speech recognition and fraud detection and placing it
more into the mainstream, says Stanford computer science professor
Sebastian Thrun. As applications become larger and less manageable,
and as their commercial appeal grows, the ability to self-adapt is fast
becoming an imperative. Thrun applied some of the new
methods of machine learning to Stanley, the self-guided robotic car that
won the top prize of $2 million in the recent DARPA contest. Machine
learning helps automated devices perform tasks such as image and speech
recognition that are simple for humans, but difficult to explicate in
computer code. Tom Mitchell, director of the Center for Automated Learning
and Discovery at Carnegie Mellon University, is exploring the idea of
pairing two learning algorithms to train each other, each approaching a
problem from a different perspective and comparing notes. The approach
has yielded a twofold reduction in errors, and Mitchell's research is
significant for deploying algorithms that learn from test cases labeled
by other software rather than by humans. University of California,
Berkeley computer science
professor Stuart Russell is researching the application of machine learning
to gaps in areas of otherwise sound human knowledge, an application that he
calls partial programming. Russell writes his applications in Alisp, a
variation of Lisp, allowing the computer to decide how best to fill in
knowledge gaps in instruction sequences such as driving directions to the
airport. Researchers are also developing an area of machine learning known
as genetic programming, where many, often thousands of versions of a
program are dispatched to solve a problem, and a form of natural selection
takes hold and the strongest applications beget progeny in a process that
can last for generations without human guidance.Click Here to View Full Articleto the top
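
Mitchell's idea of two learners labeling data for each other resembles
co-training, which might be sketched as follows; the synthetic data,
feature views, confidence threshold, and use of scikit-learn are
assumptions made for illustration, not Mitchell's actual experimental
setup.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] + X[:, 2] > 0).astype(int)
    labeled, unlabeled, labels = X[:20], X[20:], y[:20]

    view_a, view_b = [0, 1], [2, 3]   # each learner sees different features
    model_a = LogisticRegression().fit(labeled[:, view_a], labels)
    model_b = LogisticRegression().fit(labeled[:, view_b], labels)

    for _ in range(3):
        # Each model labels the unlabeled pool; its confident guesses are
        # added to the other model's training data.
        pseudo_a = model_a.predict(unlabeled[:, view_a])
        conf_a = model_a.predict_proba(unlabeled[:, view_a]).max(axis=1) > 0.9
        pseudo_b = model_b.predict(unlabeled[:, view_b])
        conf_b = model_b.predict_proba(unlabeled[:, view_b]).max(axis=1) > 0.9
        model_b.fit(np.vstack([labeled[:, view_b], unlabeled[conf_a][:, view_b]]),
                    np.concatenate([labels, pseudo_a[conf_a]]))
        model_a.fit(np.vstack([labeled[:, view_a], unlabeled[conf_b][:, view_a]]),
                    np.concatenate([labels, pseudo_b[conf_b]]))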

IBM, Sony, and Toshiba have jointly developed the Cell Broadband Engine
Architecture, or Cell, as a multicore microprocessor whose support of
graphics and multimedia trumps all others. Developed at a cost of $400
million, Cell is being incorporated into game consoles, televisions, and
other broadband-linked consumer items in an attempt to control the "digital
living room." Cell runs 36 times faster than the PlayStation 2's processor
with a peak speed of 192 gigaflops. The transition to multicore
architectures is being driven by the technical limitations of shrinking
processors and raising clock speeds, a strategy that eventually becomes unworkable
because of heat output. Unlike most multicore architectures currently on
the market, Cell's architecture is asymmetrical, with two varieties of
cores: A Power processing element that runs the Linux operating system,
and eight Synergistic processing elements that perform tasks distributed
among them by the Power element. The Synergistic elements are designed to
manage multimedia applications such as video compression/decompression,
encryption and decryption of copyrighted content, and graphics rendering
and modification. A Synergistic element works only on data kept in its own
256 KB of memory accessible via a high-bandwidth link, while Cell's engines
for managing memory can be programmed to maintain the flow of data through
the processor. Cell's ability to fragment problems into pieces that can be
done in parallel also plays an important role in the processor's speed
advantages. Software tools that can exploit Cell's benefits to the fullest
are critical to the processor's commercial success.
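
The control-core/worker-core split can be loosely illustrated with one
coordinating process that partitions a data-parallel job into chunks
and farms them out to a pool of workers, much as the Power element
dispatches work to the Synergistic elements; this Python
multiprocessing sketch is only an analogy, not the Cell programming
model or its SDK.

    from multiprocessing import Pool

    def transform_chunk(chunk):
        # Stand-in for a media kernel (compression, encryption, rendering).
        return [value * value for value in chunk]

    def run(data, workers=8, chunk_size=256):
        chunks = [data[i:i + chunk_size]
                  for i in range(0, len(data), chunk_size)]
        with Pool(processes=workers) as pool:
            results = pool.map(transform_chunk, chunks)  # parallel fan-out
        return [v for chunk in results for v in chunk]

    if __name__ == "__main__":
        print(sum(run(list(range(10_000)))))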

Progressive Policy Institute vice president Robert Atkinson says
"neo-Schumpeterian" analysis, which posits that productivity gains
become harder to generate as an old economy reaches the edge of its
technology innovation and diffusion capacity, has played a role in
economists' view that productivity fueled by IT-based technology will
slacken and stagnate fairly soon. Atkinson disputes this assumption,
arguing that the diffusion
of technology to other adopters besides the primary ones has historically
kept productivity climbing. He also takes issue with Stanford University
historian Paul David's postulation that the IT system's impact on overall
productivity statistics was slow in coming because learning how to use new
technology is a time-consuming process. Atkinson says David's theory does
not take into account the fact that IT technologies are relatively easy to
learn and are never fully-formed when they are first introduced. The
author attributes the dramatic resurgence in productivity growth between Q4
1996 and Q4 2004 to the advent of an IT system whose affordability, power,
and networking capability was enough to improve efficiency and productivity
of services to a vast degree, primarily through automation. Atkinson
writes that at least four needs must be met for the digital revolution to
reach its full potential: Technology's ease-of-use and reliability must be
improved; many devices must be converged and integrated; better
technologies (intelligent agents, expert system software, voice
recognition, etc.) must be introduced; and more ubiquitous adoption must be
facilitated. The author thinks technologies stemming from nanoscale
advances or the need to increase productivity in human-service functions
are likely candidates for the core drivers of the next economic wave.