Weizmann Institute of Science professor Adi Shamir recently warned of a
hypothetical incident in which a math error in a commonly used computing
chip could endanger the security of the global electronic commerce system.
Shamir, one of the designers of the RSA public key algorithm, says the
increasing complexity of microprocessor chips will almost certainly lead to
undetected errors. Similar errors have already been found in older
systems, such as the discovery of an obscure division bug in Intel's
Pentium microprocessor in 1994 and a multiplication bug found in
Microsoft's Excel spreadsheet. A subtle math error would let an attacker
break the public key cryptography technique: after discovering the error in
a widely used chip, the attacker could send a "poisoned" encrypted message
to a target computer and, from the faulty result, compute the secret key
used by the targeted system. Shamir says such an error would allow millions
of PCs to be attacked at once, without having to manipulate the operating
environment of each machine individually. Shamir notes that laws governing trade secrets
that protect the exact workings of microprocessor chips make it almost
impossible to verify that the chips have been designed correctly. "Even if
we assume that Intel had learned its lesson and meticulously verified the
correctness of its multipliers," he says, "there are many smaller
manufacturers of microprocessors who may be less careful with their
design."Click Here to View Full Articleto the top

Chao Wang of North Carolina State University, Mark Hoemmen of the
University of California, Berkeley, and Arpith Chacko Jacob of Washington
University of St. Louis are the winners of the ACM/IEEE Computer Society
HPC Ph.D. Fellowship Award. Wang, Hoemmen, and Jacob were honored Thursday
at ACM's SC07 conference in Reno, Nev. Other awards presented included the
Seymour Cray Award, which went to Kenneth Batcher of Kent State University,
and the Sidney Fernbach Award, which went to David Keyes of Columbia
University. The Gordon Bell Prize went to James Glosli, Kyle Caspersen,
David Richards, Robert Rudd, Frederick Streitz (Lawrence Livermore National
Laboratory), and John Gunnels (IBM) for research entitled "Extending
Stability Beyond CPU-Millennium: Micron-Scale Atomistic Simulation of
Kelvin-Helmholtz Instability." Dennis Abts, Abdulla Bataineh, Steve Scott,
Greg Faanes, James Schwarzmeier, Eric Lundberg, Tim Johnson, Mike Bye, and
Gerald Schwoerer (Cray) won the Best Paper Award for "The Cray BlackWidow:
A Highly Scalable Vector Multiprocessor." The winners of the Best Student
Paper Award, Best Poster Award, ACM Student Paper Award, Analytics
Challenge, Bandwidth Challenge, Cluster Challenge and Storage Challenge
were also announced.

Stephen Wolfram says people building complex computers and writing
complicated software may achieve more by studying nature. Wolfram says his
company is exploring the "computational universe" to find simpler
solutions to complex problems that are currently handled by complex
software. "Nature has a secret it uses to make this complicated stuff,"
Wolfram says. "Traditionally, we're not taking advantage of that secret.
We create things that go around things nature is doing." Wolfram believes
that nature has created a molecule that could be used as a computer if
people ever manage to isolate and program the molecule. University of
Chicago Department of Computer Science Chairman Stuart Kurtz says a lot of
computer scientists are fascinated by finding simple systems capable of
producing complex results. For example, a University of Southern
California professor has proposed using recombinant DNA for computing.
While DNA computers are largely theoretical, computer scientists take them
quite seriously, Kurtz says. "People are used to the idea that making
computers is hard," Wolfram says. "But we're saying you can make computers
out of small numbers of components, with very simple rules."
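
Wolfram's best-known illustration of simple rules producing complexity is
the elementary cellular automaton: a one-dimensional row of cells updated
by a three-cell rule. A minimal Python sketch of rule 30, the example he
highlights in "A New Kind of Science" (row width and step count are
arbitrary):

    def step(cells, rule=30):
        # each new cell is the rule's output bit for its 3-cell
        # neighborhood; edges wrap around
        n = len(cells)
        return [(rule >> (4 * cells[(i - 1) % n] + 2 * cells[i]
                          + cells[(i + 1) % n])) & 1
                for i in range(n)]

    row = [0] * 31
    row[15] = 1                     # start from a single live cell
    for _ in range(16):
        print("".join("#" if c else "." for c in row))
        row = step(row)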

Lancaster University researchers have developed software that allows two
cell phones to establish a wireless connection by holding the devices
together and vigorously shaking them. The shaking movement is measured by
built-in accelerometers. When the two devices are held together tightly,
their accelerometer readings match, enabling the system to make a secure
connection either by transmitting an open stream of accelerometer data and
searching for matching data, or by establishing a secure connection
automatically and then using accelerometer measurements to confirm it. The
researchers say this technique is both easier and more secure than
selecting a device from a list or entering a security code, and could make
it easier to connect cell phone peripherals such as wireless headsets.
Currently, users have to select the device from a list and enter a PIN
supplied with the device. However, about 95 percent of headsets have
"0000" as their default PIN code, creating a security weakness. Lancaster
University's Rene Mayrhofer says some cell phones already include
accelerometers and adding the software needed for shake-to-connect should
be relatively simple. Eventually, shake-to-connect could be used for more
sensitive transactions such as transferring money between credit cards.
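
The article does not describe the matching step in detail, but a common
baseline for this kind of pairing is to compare the two devices'
acceleration traces with a correlation measure and accept the pairing only
above a similarity threshold. A toy Python sketch (the sample traces and
the 0.8 threshold are illustrative, not taken from the Lancaster system):

    import math

    def correlation(a, b):
        # Pearson correlation of two equal-length accelerometer traces
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        sa = math.sqrt(sum((x - ma) ** 2 for x in a))
        sb = math.sqrt(sum((y - mb) ** 2 for y in b))
        return cov / (sa * sb)

    # devices shaken together record nearly identical motion
    phone   = [0.1, 0.9, -0.4, 1.2, -1.1, 0.3, 0.8, -0.7]
    headset = [0.2, 1.0, -0.5, 1.1, -1.0, 0.4, 0.7, -0.6]
    print("pair" if correlation(phone, headset) > 0.8 else "reject")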

Realizing the potential contained in DNA, scientists are working on
creating a programmable, systematic way of combining computers with
chemistry. "Programming chemical systems needs to be thought about," says
California Institute of Technology computer science professor Erik Winfree.
"The meeting of computer science and chemistry hasn't happened yet, but is
right around the corner. There's nothing logically, chemically or
physically impossible about computing with molecular systems." Winfree
says that programming with DNA is possible but extremely basic.
"Programming in a DNA world offers a variety of applications," he says.
"We can design a range of structures capable of creating complex patterns,
circuits, and motors." Winfree's research has led to a variety of DNA
patterns and structures formed in test tubes, including squares, stars, and
smiley faces made by programming DNA sequences. The sequences are encoded
into a strand of DNA that is heated and cooled, causing it to
self-assemble into the desired shape. "Imagine a new virus that infects the
world's agricultural crops," says University of Washington electrical
engineering professor Eric Klavins. "If we understand the language, we
could develop a biological response through reprogramming, no different
than a remedy pushed out by Norton AntiVirus on a computer." Chemical
computing has several hurdles to overcome, as cells and DNA do not behave
as predictably as electronic systems, and improving reliability is key to
making chemical computing more powerful.

Georgia Institute of Technology researchers have developed a
communications system for Internet servers modeled on the dance-based
signaling bees use to divide limited resources. The new
computer system allows servers that would normally be reserved for a single
task to move between tasks as needed, reducing the chances that a Web site
will be overwhelmed and lock out potential users. The new system helped
servers improve service by 4 percent to 25 percent in real Internet traffic
tests. Because bees have a limited number of workers to send out to
collect pollen, scout bees are sent to find lucrative spots. These scout
bees return to the hive and perform a dance to tell other bees where to
find the nectar. The forager bees then dance behind the scout until they
learn the right steps. Forager bees continue to follow the scout bee's
dance until the nectar runs out or they find a more attractive dance. The
system allows the bees to seamlessly shift from one source to another
without a leader or central command to slow the decision process. Most
server systems are theoretically optimized for "normal" conditions, which
frequently change due to human nature. If demand for one site surges,
servers not assigned to that site may remain idle while users are put into
a queue that forces them to wait for the server assigned to the site to
become available. When the bee server system receives a request for a
site, the system places an internal advertisement, the equivalent of the
bee's dance, to attract any available servers. The ad's duration is
determined by demand for the site and how much revenue the site's users may
generate. The longer an ad remains active, the more available servers it
attracts to serve that Web site's requests.

Research reported in Science describes how European scientists were able
to use tiny robots placed in a colony of laboratory cockroaches to
manipulate the actions of the insects. The robots, using behavioral
modification methods, were able to convince the real insects to follow them
into bright areas, a significant achievement considering cockroaches are
known for hiding in dark areas. The significance of the research is that
even simple robots can significantly influence group behavior. Some
scientists believe that it is inevitable that advances in robotics and
technology will ultimately alter the fundamental relationship between
humanity and technology, and many analysts say now is the time to seriously
consider the ethical implications of technological advances. In several
Asian countries with highly advanced robotics research, laws are being
considered that would regulate how much independence programmers should
give robots, and even what "rights" robots should be granted. One issue
of particular interest is whether robots will be given the ability to make
life-or-death decisions involving humans, for example in a hospital or
battlefield. Only two months ago, an unmanned aircraft deployed by U.S.
forces in Iraq made its first "kill." While the drone was remote
controlled, the action highlights the possibility of robotic warriors
capable of making their own decisions. "We are embarking on the process of
creating the first intelligent species to share the earth with humans since
the time of the Neanderthals," says renowned science fiction author Robert
Sawyer, who wrote an essay that accompanied the report in Science. "We're
racing past the era of robo-vacuum cleaners into someplace quite different
and more complex."Click Here to View Full Articleto the top

World Wide Web Consortium director Tim Berners-Lee is concerned that
restricting access to Internet content is causing the mobile Internet to
separate from the regular Internet, and says the W3C is working to prevent
that separation from happening. The W3C recently launched a new tool
developers can use to test their Web sites for compatibility on mobile
platforms to ensure their site does not cause a mobile device to crash.
Berners-Lee says the overarching goal is to keep content available
regardless of the device a person chooses to use. "I like being able to
choose my hardware separately from choosing my software, and separately
from choosing my content," Berners-Lee says. Numerous Web sites are
inaccessible from mobile devices as developers choose not to make a mobile
version of their site due to the extra technical complications. However,
in some parts of the world mobile phones are the primary method people use
to access the Internet. W3C's Matt Womer also says that mobile-device
users should not be forced to download large images or be redirected
through several different pages, because mobile Web users often pay by the
kilobyte. Mobile
sites can also be hard to find as there is no standard for creating mobile
domain names. Some sites replace "www" with "mobile" or "wap," but Womer
says the result can be confusing for users who may not know to enter
special prefixes. He says the W3C recommends that Web site developers
separate information about how to present content from the actual content.
Content can be described through hypertext markup language while the
presentation can be handled with separate style sheets.

Gartner research analyst Adam Sarner predicts that by 2015, 2 percent of
U.S. citizens will get married in virtual worlds to people they have never
met, and may never meet, even after marriage. The online virtual marriages
will have all the same legal implications as real-world marriages,
including joint property rights. Sarner also predicts that companies will
spend more money marketing and advertising products and services in the
virtual world than in the real world by 2020, and that at least one city
will elect a "virtual anonymous persona" to be the city's mayor. Although
marriages already occur in Second Life, they currently have no legal
implications. Sarner believes that people who choose a virtual marriage
with someone they have never met will feel connections powerful enough to
transcend the physical aspects of matrimony. "I think the online
connection is powerful enough to have these legal marriages online," Sarner
says. "The point is the emotional connection they have will be strong
enough that they want to make it forever." Sarner acknowledges that his
theory is a "maverick" prediction but insists that he will be proven
correct.

Most of the information that hinted at possible trouble prior to the 9-11
attacks was buried under massive amounts of data being collected faster
than analysts could handle. A single day's collection would fill more than
6 million 160-gigabyte iPods (roughly an exabyte of data), and some of the
data conflicted with other
pieces of information. To prevent such pieces of information from being
missed again, researchers at the DHS Science and Technology Directorate are
developing ways of viewing such data as a 3D picture where important clues
are more easily identified. Mathematicians, logicians, and linguists are
collaborating to make the massive amounts of data form a meaningful shape,
assigning brightness, color, texture, and size to billions of known and
apparent facts. For example, a day's worth of video, cell phone calls,
photos, bank records, chat rooms, and emails may be displayed as a
blue-gray cloud with links to corresponding cities. "We're not looking for
'meaning' per se," says Dr. Joseph Kielman, Basic Research Lead for the
Directorate's Command, Control and Interoperability Division, "but for
patterns that will let us detect the expected and discover the unexpected."
Kielman says it will still be several years before visual analytics can
automatically create connections from fuzzy data such as video.

Engineers at Penn State have teamed up with cable manufacturer NEXANS in
an effort to determine whether digital data could be sent over 100 meters
of Category-7 copper cables at a rate of 100 Gbps. "A rate of 100 gigabit
over 70 meters is definitely possible, and we are working on extending that
to 100 meters, or about 328 feet," says Ali Enteshari, graduate student in
electrical engineering. "However, the design of a 100-gigabit modem might
not be physically realizable at this time as it is technology limited."
Chip circuitry will need to improve before such modems can be built, and
that improvement will likely take two to three chip generations. Glass
fiber-optic cables are very fast and are used as long distance lines for
most Internet systems, but copper cable is used for shorter distances.
Fiber-optic cabling costs too much for home networks, and Penn State
researchers believe their approach would be more affordable and easier to
build. Moving 100 gigabits of data per second over 100 meters is the
equivalent of moving 12.5 Encyclopedia Britannica sets per second; 100
gigabits is 12.5 gigabytes, so the comparison values each set at roughly a
gigabyte.
Enteshari presented his team's research Wednesday at the IEEE High Speed
Study Group in Atlanta.

A gradual fall-off in the number of people applying to earn degrees in
computer science since the implosion of the first dot-com bubble has
fostered a perception that the field is expiring, but David Chisnall
questions this assertion. The idea that computer science is dying is
muddled by the fact that few people know what truly constitutes computer
science, with most people viewing it as a vocational course that focuses on
programming. "A computer scientist may not fabricate her own ICs, and may
not write her own compiler and operating system ... But the computer
scientist definitely will understand what's happening in the compiler,
operating system, and CPU when a program is compiled and run," Chisnall
writes. From his perspective, computer science lies at the convergence of
mathematics, engineering, and psychology, the last of which is critical to
how humans instruct computers. Psychology plays a
part not only in human/computer interaction, but also in the development
and assessment of computer intelligence, according to Chisnall. He
maintains that a lot of unhappiness with computer science stems from the
mistaken assumption that computer science graduates will also be expert
programmers, and notes that a lot of people appear to confuse computer
science and software engineering. Chisnall points out that "computer
science is first and foremost a branch of applied mathematics, so a
computer scientist should be expected to understand the principles of
mathematical reasoning" However, he notes that computer science has the
added distinction of its concentration on efficiency and concurrent
thinking at different levels of abstraction.

There are currently tens of thousands of Web sites devoted to spreading
the beliefs and methods of terrorist organizations, and tracking and
monitoring these Web sites and chat rooms is an extremely difficult task
for government agencies. While the sites may not appear to reveal any
information about their creators, programmers and writers leave digital
clues that can be used to find them, such as their word choice,
punctuation, syntax, and the way they code multimedia attachments and Web
links.
University of Arizona researchers are working on a tool that would use
these clues to automate the analysis of online jihadism. The Dark Web
project aims to search sites, forums, and chat rooms to find the Internet's
most influential jihadists and learn how they attract new recruits.
Artificial Intelligence lab director Hsinchun Chen hopes Dark Web will
cripple the online terrorist recruitment and education effort, as many
potential terrorists learn how to make explosives and plan attacks online.
"Our tool will help [U.S. authorities] ID the high-risk, radical opinion
leaders in cyberspace," Chen says. Former FBI counterterror chief Dale
Watson says the ability to sort through massive amounts of data
automatically would be of great value, as terrorist Web sites and
communications are currently analyzed manually. "It would greatly enhance
the speed and capability to sort through a large amount of data," Watson
says. "The issue will be where is the Web site originating and where are
the tentacles going?"Click Here to View Full Articleto the top

Nielsen Norman Group principal Bruce Tognazzini, who founded Apple's Human
Interface team and developed the company's first usability guidelines,
notes that there are more user-unfriendly than user-friendly Web sites on
the Internet, and the worst single gaffe they make is discarding the user's
work. He cites 27-year-old guidelines on human interface design that still
hold up: an intuitive interface that anticipates user needs as much as
possible and is designed with the end user in mind; simple screens that
eliminate unneeded verbiage and superfluous graphics; and tolerant,
forgiving inputs. Tognazzini
says that most companies have still failed to grasp even the most
rudimentary concepts of human-computer interaction, and he says that "such
ignorance and laziness ensures full employment for HCI designers for the
foreseeable future, and also ensures that the original promise of the Web,
with its sweeping aside of 'bricks and mortar stores,' will continue
unfulfilled." Tognazzini remarks that video ad formats offer a terrible
user experience, but this situation will not change as long as users
tolerate it. Tognazzini projects that a lot of iPhone knockoffs will come
out, but hopefully a gestural interface will take root and promote a new
form of power and simplicity across many devices besides phones. In his
opinion, phones, faxes, and computers offer the most irritation in terms of
use, because "powerful applications have outstripped the capabilities of
all three," while the devices' interfaces are uniformly lousy.Click Here to View Full Articleto the top

Fortran is considered the most popular programming language for
high-performance computing applications, and enhancing the language with
additional features such as explicitly parallel constructs is the goal of
Fortran 2008, a proposed new standard. Craig E. Rasmussen of Los Alamos
National Laboratory writes that programmers will soon be allowed to start a
line in free format form with a semicolon, a feature added to boost the
internal consistency of Fortran. Co-arrays were introduced to handle
parallelism by enabling programmers to directly read or write to memory on
a remote processor using the square bracket notation. Co-arrays can
facilitate barrier synchronization with a subset of program images with the
SYNC TEAM or SYNC IMAGES statements or with all of the images through the
SYNC ALL statement, and Rasmussen notes that a CRITICAL ... END CRITICAL
construct was introduced to restrict the execution of a block of code to
one image at a time. The programmer can also explicitly notify the program
that individual iterations of a loop body may be performed concurrently in
any order with the addition of a DO CONCURRENT ... END DO loop construct.
Perhaps Fortran 2008's most convenient enhancement lets the compiler choose
and return an IO unit number when a file is opened with the OPEN statement,
relieving programmers of the burden of ensuring that the selected unit
number does not conflict with a unit number already being used. "In my
opinion, in the future Fortran should look at ways to introduce some of the
work being done in Chapel and X10," Rasmussen says. "In particular, we
should allow for multiple tasks (e.g., Ocean and Atmosphere) to run
concurrently and should provide mechanisms for individual threads and data
to be placed on particular hardware units, not just the one program SPMD
model adopted by co-arrays."
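
Since the constructs above are Fortran-specific, any rendering in another
language is only an analogy, but the contract behind DO CONCURRENT, that
each iteration touches only its own data and may therefore run in any
order, maps directly onto a parallel map. A rough Python analogue of that
contract (the loop body is a made-up example):

    from concurrent.futures import ThreadPoolExecutor

    def body(i):
        # depends only on its own index: the independence a programmer
        # asserts when writing DO CONCURRENT in Fortran 2008
        return i * i

    with ThreadPoolExecutor() as pool:
        # iterations may execute in any order; results return in order
        squares = list(pool.map(body, range(16)))
    print(squares)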

A precipitous decline in computer science enrollments in the United States
is spurring academicians to turn to robotics to restore the field's allure
and engage students' flagging interest. "The classic way to teach computer
science is to [give] a really dry assignment like 'Write a program to print
the Fibonacci sequence," says Georgia Institute of Technology computer
scientist Tucker Balch. "Students don't get turned on by this." Bryn Mawr
College professor Douglas Blank says robots have an inherent "sexiness"
that his school is using to make computer science more appealing, and for
years robots have been employed by science, technology, engineering, and
mathematics proponents to interest elementary and high school students,
frequently through competitive leagues. One example is the KISS (Keep It
Simple, Stupid) Institute for Practical Robotics' Botball tournaments in
which teams of secondary school students build and program robots that
compete against each other in games. Michelle Medeiros, founder of the
STEM community club that won this year's Hawaiian Botball regional
tournament, says robots can help remove some of the intimidating or boring
aura surrounding science. Even more challenging than Botball is the FIRST
Robotics Competition, in which remote-controlled machines aggressively compete
in a contest that emphasizes engineering. A study by Brandeis University
found that FIRST participants were twice as likely as non-FIRST
participants who took similar high school math and science courses to
choose college science or engineering majors. The National Science Foundation
recently allocated $6 million to overhaul computing education at 25 U.S.
schools.

Futurist Ray Kurzweil's book, "The Singularity Is Near: When Humans
Transcend Biology," predicts that advances in computing technologies and
biological research over the next 40 years will result in the merger of
biological and nonbiological intelligence. Kurzweil says technology
advances exponentially, not linearly, a fact that is often overlooked and
one of the reasons long-term forecasts generally fall short of the
eventual reality. Kurzweil also predicts that over the next 10 years
computers will look very different from today's desktop and laptop
computers. "They're
going to be extremely tiny," Kurzweil says. "They're going to be
everywhere. There's going to be pervasive computing. It's going to be
embedded in the environment, in our clothing. It's going to be
self-organizing." Technology will also advance to the point of augmented
reality, with computers watching and listening to humans and helping. "The
computers will be watching what you watch, listening to what you're saying,
and they'll be helping. So if you look at someone, little pop-ups will
appear in your field of view, reminding you of who that is, giving you
information about them, reminding you that it's their birthday next
Tuesday." Such pervasive computers will provide similar information when
looking at buildings and other objects. "If it hears you stumbling over
some information that you can't quite think of, it will just pop up without
you having to ask," Kurzweil says.

University of Maryland, College Park computer science professors Victor R.
Basili and Marvin V. Zelkowitz are investigating the application of an
empirical approach toward the understanding of key software development
problems. "Computer science involves people solving problems, so computer
scientists must perform empirical studies that involve developers and users
alike," they write. "They must understand products, processes, and the
relationships among them. They must experiment (human-based studies),
analyze, and synthesize the resulting knowledge." Subsequently, this
knowledge must be modeled or packaged for additional development. Basili
and Zelkowitz attest that industrial, government, and academic
organizations must interact in order for the experimentation that is so
vital for empirical studies to be successful. Experimentation must involve
the study of human activities, given the role such activities play in
software development; the assessment of quantitative and qualitative data
to comprehend and improve the development staff's operations is central to
this experimentation. Basili and Zelkowitz say it is imperative that
experimental concepts are applied across a diverse array of computer
science environments, such as high-end computing. "Understanding,
predicting, and improving development time requires empirical methods to
properly evaluate programmer, as well as machine, performance," they write.
"We need theories, hypotheses, and guidelines that allow us to
characterize, evaluate, predict, and improve how an HEC
environment--hardware, software, developer--affects development of these
high-end computing codes."