Microsoft researchers believe analyzing the manner in which Web users
browse and click through content from a search results page could help
search engines rank and retrieve results. The researchers will present two
papers at the 29th Annual International Association for Computing
Machinery's Special Interest Group on Information Retrieval (ACM SIGIR
2006), which got underway in Seattle on Aug. 6. Eugene Agichtein, an
expert in the company's Mining, Search, and Navigation Group, says most
search engines are two-dimensional in that they match queries with the
content and link structure of a Web page to return results. "Using the
'wisdom of crowds' can give us an accurate interpretation of user
interactions, even in the inherently noisy Web search setting," the
researchers say in one paper. "Our techniques allow us to automatically
predict relevance preferences for Web search results with accuracy greater
than the previously published methods." The second paper details how such
user information can boost the accuracy of algorithms used to rank search
results by 31 percent. Thirteen groups are scheduled to present papers at ACM SIGIR,
which runs through Aug. 11, on topics ranging from making vast amounts of
content more digestible to improving the presentation of news summaries for
mobile devices.
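The general approach the papers describe, aggregating the click behavior of many users into a relevance signal that reranks results, can be sketched in a few lines of Python. The function names and the simple 50/50 blend of original rank and clickthrough rate below are illustrative assumptions, not the method from the Microsoft papers:

```python
from collections import defaultdict

def rerank_with_clicks(results, click_log):
    """Rerank a result list by blending the engine's original order with
    clickthrough rates aggregated over many users ("wisdom of crowds").

    results: list of URLs in the engine's original order.
    click_log: list of (url, clicked) observations, where clicked is
    True if a user clicked that result when it was shown.
    """
    shown = defaultdict(int)
    clicked = defaultdict(int)
    for url, was_clicked in click_log:
        shown[url] += 1
        if was_clicked:
            clicked[url] += 1

    def score(pair):
        rank, url = pair
        ctr = clicked[url] / shown[url] if shown[url] else 0.0
        # Blend: a high original position and a high clickthrough
        # rate both raise the score (weights are arbitrary here).
        return 0.5 * (1.0 / (rank + 1)) + 0.5 * ctr

    indexed = list(enumerate(results))
    indexed.sort(key=score, reverse=True)
    return [url for _, url in indexed]
```

A result that users click far more often than its position predicts will float above results the engine originally ranked higher.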

The Anita Borg Institute (ABI) for Women and Technology has unveiled its
roster of speakers for the upcoming Grace Hopper Celebration (GHC) of Women
in Computing Conference, sponsored jointly by ACM and ABI. The GHC,
inspired by the legacy of Admiral Grace Hopper, is the world's largest
technical conference for women in the profession of computer science. The
keynote speakers will be iRobot co-founder and Chairman Helen Greiner,
Princeton University President Shirley Tilghman, and Sally Ride, the first
U.S. woman to travel in outer space. "We are honored to welcome this
remarkable group of technical women to this year's GHC program. Each of
them is an extraordinary role model and a positive example of how technical
women are seizing opportunity and changing the face of technology," said
Telle Whitney, president and CEO of ABI. This year's event, open to all
women from the collegiate to professional levels, is expected to draw more
than 1,200 attendees. Participants will present technical papers and hold
workshops, and the winners of the Anita Borg Technical Leadership and
Social Impact Awards will be announced. The NSF and numerous universities
and businesses, including Microsoft, Google, and Intel, have provided
funding for a record number of attendance scholarships. The conference
will be held in San Diego from Oct. 4-7.

A new lottery-style scratch-and-vote card that voters could verify might
put to rest the security concerns that have long plagued electronic voting
systems. With current touch-screen systems, "there is no way for an
individual voter to know that his or her vote has been properly counted,"
said Microsoft's Josh Benaloh. "Even election officials cannot be certain
that the systems are free of errors." Even with paper receipts, voters are
still relying on other people and procedures to count their votes. While
encryption-based systems can be audited to verify their accuracy, it is
important to ensure that voting remains anonymous, says Ben Adida of MIT's
Computer Science and Artificial Intelligence Laboratory. Paper-based
systems produce a unique number that can be traced back to identify a
voter's name. S&V schemes can be used with existing election systems,
including one recently developed by University of Newcastle-upon-Tyne
cryptographer Peter Ryan. Ryan's system places candidates' names on one
side of the ballot in random order, with the tick boxes on the other. The
voter tears the ticket in half after placing his vote, and a cryptographic
code then matches the sequence of candidates on each side of the ballot.
The challenge that Ryan's system faces is verifying that the encrypted
information accurately correlates the order of candidates' names, but the
S&V approach would secure the auditing process because it furnishes a paper
ballot that would not pass through an election official's hands. A voter
could simply scratch off the surface of the ticket to reveal a number that,
when combined with a number that corresponds with the sequence of
candidates and a public encryption key, would determine whether a ballot
has been rigged. Voters could also use S&V cards to check to make sure
that their votes have been counted after the election by verifying that the
ballot code on their paper receipt matches the encryption code. Though new
systems like these will be difficult to adopt on a widespread basis, they
could represent a significant step forward in ensuring voting security,
says Michael Shamos, co-director of Carnegie Mellon University's Institute
for eCommerce.
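The verification step described above can be illustrated with a simple commitment check: a value printed on the ballot binds the hidden candidate ordering to a secret revealed by scratching. A plain SHA-256 hash stands in here for the public-key encryption an actual scratch-and-vote scheme would use, and the function names are illustrative:

```python
import hashlib

def commit(candidate_order, nonce):
    """Commitment printed on the ballot: a hash binding the (hidden)
    candidate ordering to a random nonce concealed under the
    scratch-off surface."""
    data = ",".join(candidate_order) + "|" + nonce
    return hashlib.sha256(data.encode()).hexdigest()

def audit_ballot(printed_commitment, candidate_order, revealed_nonce):
    """After scratching off the nonce, anyone can recompute the
    commitment and compare it with the printed value, without the
    ballot ever passing through an election official's hands."""
    return commit(candidate_order, revealed_nonce) == printed_commitment
```

If the printed commitment does not match the recomputed one, the ordering on the ballot was not the one that was encrypted, which is exactly the kind of rigging the audit is meant to catch.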

Researchers at the University of Toronto and MIT have developed a new
image-processing technique that could prevent the blurriness that results
from photographs taken with an unsteady hand. Their method is based on an
algorithm that calculates the path taken by a shaky camera when the picture
was shot, and then traces that path back to reverse the blurring. "This is
the first time that the natural image statistics have been used
successfully in deblurring images," said MIT's Rob Fergus, the project's
lead researcher who demonstrated the technology at ACM's SIGGRAPH
conference last week. Each image takes 10 to 15 minutes to process using
the technique, which employs a universal statistical property that
characterizes transitions from light to dark. There are numerous products
currently available that aim to counteract the effects of photos taken with
unsteady hands, but they only eliminate blurriness to a limited degree,
while the researchers' work addresses more complex patterns of motion. The
statistical property that Fergus' technique uses is a combined measurement
of the variations in brightness between neighboring pixels. Real images
have a similar distribution of gradients, while randomly generated computer
images vary widely, Fergus says. Blurry images have their own
characteristic gradient distribution, which Fergus' technique uses to
estimate how the camera moved. The process generates what is called a blur
kernel, which reveals the path the camera traced while the image was taken.
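The role of a blur kernel can be illustrated with a toy example: motion blur is a convolution of the sharp image with the kernel, and deblurring amounts to estimating that kernel and inverting the operation. The pure-Python, one-dimensional sketch below is not the researchers' algorithm, just the underlying model:

```python
def convolve(signal, kernel):
    """Blur a 1-D row of pixels with a blur kernel: each output pixel
    is a kernel-weighted sum of neighboring input pixels."""
    n, k = len(signal), len(kernel)
    out = [0.0] * n
    for i in range(n):
        for j in range(k):
            idx = i - j
            if 0 <= idx < n:
                out[i] += signal[idx] * kernel[j]
    return out

# A camera that slid sideways during the exposure averages three
# neighboring pixels: a box-shaped blur kernel.
sharp = [0, 0, 9, 0, 0, 0]
kernel = [1 / 3, 1 / 3, 1 / 3]
blurred = convolve(sharp, kernel)
# The single bright pixel is smeared across three positions.
```

Deblurring methods like the one described work in the other direction: given only `blurred`, estimate `kernel` (using statistics of natural images to decide which candidate kernels are plausible) and then recover `sharp`.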

Search engines contain no inherent bias toward popular Web sites,
researchers at Indiana University claim. Their study contends that search
engines actually have an egalitarian effect on the Web, disputing the
"Googlearchy" theory that search engines funnel traffic to the best-known
sites, creating an effective monopoly over their smaller competitors.
"Empirical data do not support the idea of a vicious cycle amplifying the
rich-get-richer dynamic of the Web," said Filippo Menczer, associate
professor of informatics and computer science at IU. "Our study
demonstrates that popular sites receive on average far less traffic than
predicted by the Googlearchy theory and that the playing field is more
even." Drawing on their collective expertise in Web mining and networks,
the Indiana researchers set up experiments where users alternately browsed
the Web by clicking only on random links or by visiting only pages in the
results listings produced by search engines. In explaining the general
impact of search engines on the Internet, the researchers describe a
"long-tail structure" in which a few nodes receive the vast majority of
connections while most receive only a few. The rich-get-richer notion
that is commonly invoked to explain this phenomenon is flawed because it
requires advance knowledge of the prestige of each network node, a
characteristic of Web sites that is often
unknown, the researchers claim. All that is required to create the long
tail, the researchers claim, is that the nodes are sorted by any measure of
prestige, regardless of whether the precise values are known. "By sorting
results, search engines give us a simple mechanism to interpret how the Web
grows and how traffic is distributed among Web sites," Menczer said.
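The rich-get-richer mechanism at issue can be simulated in a few lines: grow a network where each new node links to an existing node with probability proportional to that node's current degree. This is a standard preferential-attachment toy model, not the Indiana researchers' experiment:

```python
import random

def grow_network(n, seed=42):
    """Grow an n-node network by preferential attachment: each new
    node links to an existing node chosen with probability
    proportional to that node's current degree."""
    random.seed(seed)
    # Each edge contributes both endpoints to this list, so sampling
    # uniformly from it is degree-proportional sampling.
    endpoints = [0, 1]
    degree = {0: 1, 1: 1}
    for new in range(2, n):
        target = random.choice(endpoints)
        degree[new] = 1
        degree[target] += 1
        endpoints += [new, target]
    return degree

deg = grow_network(2000)
top = sorted(deg.values(), reverse=True)
# A few hub nodes accumulate a disproportionate share of the links,
# while most nodes keep only one: the long-tail structure.
```

The researchers' point is that this mechanism is not strictly necessary: sorting nodes by any prestige measure, even one whose exact values are unknown, suffices to produce the same long tail.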

Peripheral devices such as keyboards, microphones, and mice could pose an
entirely new computer vulnerability, researchers at the University of
Pennsylvania have found. Using a device known as a JitterBug, the
researchers found that a hacker could physically bug a peripheral device
and steal chunks of data by creating an all-but-imperceptible processing
delay after a keystroke. The researchers built a functional JitterBug
keyboard as proof of concept. "This is spy stuff. Someone would need
physical access to your keyboard to place a JitterBug device, but it could
be quite easy to hide such a bug in plain sight among cables or even
replace a keyboard with a bugged version," said Gaurav Shah, a graduate
student in Penn's Department of Computer and Information Science.
"Although we do not have evidence that anyone has actually been using
JitterBugs, our message is that if we were able to build one, so could
other, less scrupulous people." Unlike keystroke loggers, which have to be
physically installed and then retrieved to collect data, the JitterBug
needs only to be installed. The device can use any interactive
network-related software application such as email or instant messaging to
relay the data, leaking it through split-second keystroke delays. Limited
storage space on the device would prevent the JitterBug from recording
every keystroke, but it could be programmed to record a certain type of
activity triggered by a specific keystroke. "For example, one could pre-program a
JitterBug with the user name of the target as a trigger on the assumption
that the following keystrokes would include the user's password," Shah
said. In one particularly alarming scenario, a manufacturer of peripheral
devices could be compromised, inundating the market with JitterBugged
devices. Shah's initial research suggests that cryptography could be used
to protect against JitterBugged devices.
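The timing channel the JitterBug exploits can be sketched simply: the bug nudges each inter-keystroke delay into one of two timing windows, and an eavesdropper watching packet timing reads the bits back out. The base delay and window size below are illustrative; the real device operates at a scale small enough to be imperceptible:

```python
def jitter_delays(bits, base_delay_ms=80, window_ms=20):
    """Encode covert bits in keystroke timing: a '1' bit adds a small
    extra delay to the keystroke, a '0' bit adds none."""
    return [base_delay_ms + (window_ms if b else 0) for b in bits]

def recover_bits(delays, base_delay_ms=80, window_ms=20):
    """An observer of the interactive network traffic decodes the
    bits by checking which timing window each delay falls into."""
    return [1 if d - base_delay_ms >= window_ms / 2 else 0
            for d in delays]
```

Shah's suggestion that cryptography could help fits this picture: if keystroke timing is deliberately randomized or quantized before transmission, the covert windows are destroyed.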

Sens. Orrin Hatch (R-Utah) and Patrick Leahy (D-Vt.) have introduced a
bill that would have the United States determine who should receive a
patent based on who files a patent first. The Patent Reform Act would
bring the nation in line with the first-to-file patent systems of most
other countries, but advocates of small inventors say the patent reform
bill would hurt individuals who are unable to afford the patent process.
Although an inventor can file a patent for as little as $100, fees and
legal representation associated with a patent application cost on average
about $15,000. Moreover, a legal dispute over who invented something first
can cost at least $100,000. "Weakening patent protection at a time when
America's incredible inventiveness is the one edge we have in a low-wage
global economy is incredibly poor public policy," says Ronald Riley,
president of the Professional Inventors Alliance. Also under the patent
reform proposal, judges making patent infringement awards would have to
consider the value of the patented item in relation to the entire product,
and patents could be challenged in a post-grant review process. The
Hatch-Leahy bill differs from legislation Rep. Lamar Smith (R-Texas)
unveiled in the House last year in that injunctions against
patent-infringing companies would not be restricted.

The University of Pittsburgh will serve as a University Affiliate Center
(UAC) that will aid the U.S. Department of Homeland Security (DHS) in its
effort to gain the capability to analyze free text for
potential terrorist activity. DHS will provide Pitt with $2.4 million over
the next three years to develop advanced computing technology that can find
common patterns in a wide range of information sources. "The goals of the
work will be to identify facts and entities, as well as beliefs and
motivations, expressed in text, and to create new methods for linking
events and beliefs across documents, and tracking them over time," explains
Janyce Wiebe, lead researcher and a computer science professor at Pitt.
Cornell University and the University of Utah will participate in the Pitt
UAC, which will work closely with the Institute for Discrete Science, the
joint initiative of DHS and several National Laboratories that is working
to improve the software algorithms and architectures used in a variety of
computing applications. "The biggest challenge facing this critical area
is the need for improved methods to quickly and accurately analyze,
organize, and make sense of vast amounts of changing data," adds Jeffrey W.
Runge, acting under secretary for Science and Technology. Rutgers
University, the University of Illinois at Urbana-Champaign, and the
University of Southern California are also serving as UACs, with each
focusing on a specific area of research identified by Congress.

Researchers at Rensselaer Polytechnic Institute are developing technology
that simulates surgery based on sophisticated, tactile computer-generated
models of organs, inviting the possibility that surgeons could eventually
train in virtual reality. The surgery simulator, which enables surgeons to
work with virtual organs in real time, is similar to the flight simulators
that pilots use to train. Ultimately, Rensselaer researcher Suvranu De and
his team are working toward the goal of developing a virtual human--an
expansive database of human anatomy that would appear in every sense
exactly like a flesh-and-blood human that surgeons could manipulate with
various types of haptic interfaces. "A virtual human can be pushed and
prodded pretty much as you would a real human," De says. Most existing
simulators are unpopular because they are not realistic enough and the
haptic technology is not developed to the point where doctors can feel how
tissue reacts when it is prodded or cut. In current simulators, haptic
interfaces convert the movement of a surgeon's hands into the motions of
computer tools that interact with virtual organs. Computer monitors render
the scene, and no existing application can realistically render the
behavior of soft tissues. If the entire body remains still or appears like
a cartoon, students are unlikely to feel immersed in the experience, says
Dan Morris, a student at Stanford University's Artificial Intelligence
Laboratory. Discovering how human tissue responds to direct contact with
surgical instruments is an essential part of the learning process. De and
his team believe they have a solution in their point-associated finite
field approach, which uses complex software to produce real-time
simulations of any form of matter.

As mobile phone makers work to cram more features into their devices,
usability is becoming an increasingly important issue, as almost a quarter
of all phones returned as defective work fine, according to a recent
survey. The problem is that people have a hard time getting them to work.
The operating experience is complicated by the increasing depth of menus
and lists, and more and more befuddled users are only looking at the top
few items. One popular solution that developers are exploring to simplify
the experience is animation. "Some people think animation is just for eye
candy, to make things look good, but it can actually enhance usability,"
said Next Device's Geoff Kendall. Some applications are building menu
options around the keypad, while other manufacturers are considering
redesigns of the keypad altogether. Of all the variations of swiveling and
sliding keyboards that manufacturers have developed, the wheel on Apple's
iPod is one of the few interface innovations that both looks cool and
simplifies the user's experience. The iPod has become the benchmark for
successful interface innovation, and many other manufacturers are
developing products under the premise that the controls must be round, and
some are leaving all the functions up to the scroll wheel. But the scroll
wheel alone is not the answer to every design problem, Kendall says. "The
problem with mobile phones, for example, is that they do much more than
just show lists of albums and artists and so on--we have to take pictures,
send messages, take calls, etc." The future could lie in embedding all the
controls in a touch-sensitive LCD screen, a sort of virtual keypad that
could include the functionality of a scroll wheel, a joystick, or any
number of keys. Some users have had difficulties with virtual keypads
because they cannot feel what they are doing, though it might be possible
to simulate the feeling of pressing a button through audible clicks and a
highly sensitive vibrate function.

The University of Technology Sydney (UTS) and RMIT University have added
more flexibility to their IT undergraduate programs for 2007 that could
encourage more students in Australia to pursue a career in IT. At UTS, the
Bachelor of Science in Information Technology (BScIT) program will offer
majors in Business Information Systems Management, for students looking for
work in technology implementation and governance; Enterprise Systems
Development, which emphasizes technology building; and Internetworking and
Applications and Computing and Data Analytics, two majors that could lead
to a career in technology servicing. More electives will be available, and
students will be able to combine their major with a number of other
disciplines. RMIT is also offering students the opportunity to pursue
combined degrees, and to obtain an IT degree with a minor in a subject not
related to technology. Industry needs IT professionals who have some
background in other areas, says RMIT senior lecturer Saied Tahaghoghi,
adding that many people are unaware that there remains a great demand for
IT professionals. "The fact is business needs IT more than ever and only a
limited set of operations can be outsourced overseas," says Tahaghoghi. "I
believe the flexibility of the program will attract students who would
previously have not considered doing much study in IT."

The Senate Appropriations Committee (SAC) has approved its version of the
Defense Appropriations bill for fiscal 2007, with deep cuts to DARPA's
Cognitive Computing program for the second year in a row. Senate-wide
debate on the bill will resume in September after the August recess. SAC
also approved reductions in the Information and Communications Technology
(ICT) account and the activities of DARPA's Computer Science Study Group.
The House had granted the president's request for a $47 million funding
increase for ICT, bringing its total allotment to $243 million, but the
Senate cut $13.4 million from the request for an approved total of $229
million. Similarly, the House approved the $220 million allotment for
Cognitive Computing Systems that the president had requested, but SAC only
approved $149 million. Targeted programs included "Integrated Cognitive
Systems," "Learning Locomotion and Navigation," and "Improved Warfighter
Information Processing." SAC also cut funding for DARPA's Computer Science
Study Group, which was created this year to introduce young faculty to
computer science problems that affect the Defense Department. While the
ICT cut simply scales back the rate of increase, the other cuts are real
losses to the affected programs.

The dream of cognitive computing is to enable machines to learn as people
do and respond to unanticipated events instead of tapping existing
knowledge or employing preprogrammed logical threads. Advocates say this
will nurture reasoning abilities, and perhaps intelligence and
consciousness, that can be harnessed to generate profits. It is theorized
that cognitive computing could exceed the goals of research into artificial
intelligence thanks to our growing knowledge of the human brain and the
advent of supercomputers and other tools that can model and eventually
replicate the brain. In keeping with current neuroscience reasoning, a
brain-like computer must be capable of constructing neural nets that store
past experiences. This goal may not be so elusive: Swiss researchers at
the Ecole Polytechnique Federale de Lausanne's Brain Institute have used
supercomputing systems to simulate neocortical columns. Cognitive computing
technologies are expected to soon make a splash in such markets as
automotive systems, medical devices, and personal robots. In fact, belief
in the technology is so strong that James Albus, a senior fellow at the
National Institute of Standards and Technology, says the agency has plans
for a new project called "Decade of the Mind" that calls for awarding $4
billion in funding to researchers working on mind-based computing. Albus
says the project could get underway next year. Although some scientists
still believe that traditional AI concepts using software-based systems are
the best ways to achieve smart computers, others say that only machines
that attempt to mimic the way the brain works can ultimately replicate the
human brain. IBM cognitive computing leader Dharmendra Modha says, "The
brain is a machine. It's biological hardware. If a program is not
biologically feasible, it's not consistent with the brain."

Working with a pair of researchers from the University of Washington,
Microsoft's Richard Szeliski has developed technology that converts digital
images into 3D models, enabling users to create the effect of walking or
flying through a scene from any angle. Due to be presented at ACM's
SIGGRAPH conference in Boston, Photosynth compiles unique features from
different photographs and cross-references them against other images,
looking for similarities. That enables it to isolate a specific 3D
position and then calculate the camera's location when each picture
was taken. "Then basically, it is just a geometry problem," Szeliski
said. "You are simultaneously adjusting the position of the camera and
where those little pieces of images are until everything basically snaps
together." The system can be used with as few as two images, though it is
much more interesting when several dozen images are combined, Szeliski
said. The technology will facilitate a higher level of interaction with
photographs, as users will be able to look at them from any angle,
zoom in on specific features, and identify where one image was shot in
relation to another. Photo-sharing sites will likely be early adopters of
the technology, Szeliski says. Cities or tourism boards could also use the
technology to provide a virtual tour.
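The cross-referencing step Szeliski describes, tying the same feature to multiple photographs before solving the geometry, boils down to matching feature descriptors between images by nearest neighbor in descriptor space. The sketch below uses toy descriptors (plain number lists standing in for the high-dimensional vectors real systems extract) and is an illustration, not Photosynth's implementation:

```python
def match_features(desc_a, desc_b):
    """For each feature descriptor in photo A, find the most similar
    descriptor in photo B by smallest squared Euclidean distance.
    Returns a list of (index_in_a, index_in_b) matches."""
    def dist(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v))

    matches = []
    for i, d in enumerate(desc_a):
        j = min(range(len(desc_b)), key=lambda k: dist(d, desc_b[k]))
        matches.append((i, j))
    return matches
```

Once enough features are matched across enough photos, the "geometry problem" is to adjust the camera positions and 3D point positions simultaneously until all the matches are consistent.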

The Department of Energy's Oak Ridge National Laboratory is partnering
with Cray in a $200 million project to develop the world's most powerful
supercomputer by 2008, with a peak speed of 1 petaflop. Researchers at IBM
and the DOE's Lawrence Livermore National Laboratory have announced that
they have created record-setting software for Blue Gene, the world's most
powerful supercomputer, that will run complex simulations
critical to national security. The race among the DOE labs for ever-faster
supercomputing creates a healthy atmosphere of intellectual competition,
researchers say. The DOE's weapons research demands considerable computing
capacity to develop large, detailed models. Other, unclassified programs
include climate modeling, quantum chemistry and physics, and materials
science. Software for high-performance computing has been slow to develop,
however, allowing supercomputers only to perform calculations at a fraction
of their peak capacity. Petascale systems will broaden the horizons for
research possibilities, though creating systems that can sustain
performance at that level is still a challenge, according to Dan Reed,
director of the Renaissance Computing Institute. The Cray computer will
support Oak Ridge scientists' research activities in biology, energy, and
nanotechnology, and corporate and academic researchers working under a DOE
program will also get some time on the system. "There's an almost
insatiable demand for computing power," said Cray's Steve Conway. "The
more they can get, the better off the science is going to be." Running
Lawrence Livermore's Qbox code, meanwhile, the classified Blue Gene is
helping the National Nuclear Security Administration (NNSA) develop
predictive simulations of nuclear weapons and ensure the safety and
reliability of the United States' existing nuclear stockpile.

At the World Economic Forum in January 2005, MIT Media Lab cofounder
Nicholas Negroponte unveiled the One Laptop Per Child project, an
initiative to design and distribute an ultracheap, lightweight, and
intuitive portable PC to poor children throughout the world. The project called for
a highly ruggedized machine equipped with radio antennas for networking in
the absence of satellites or towers; a dual-mode display that shifts to
monochrome in bright light; and a way for generating power that facilitates
indefinite operation without an electrical outlet. Among those invited to
design the laptop was fuseproject owner Yves Behar, who suggested a compact
and sealable form factor that, in his words, "shouldn't look like something
for business that's been colored for kids." An earlier version of the
laptop featured a handcrank to generate power, but this was eliminated
after it was determined that gripping the crank with one hand and the
laptop with the other would cause the machine to shake, placing excessive
strain on the hardware. The latest version of the laptop, priced at about
$140, features a kid-friendly design and colors that deter theft; a hollow
handle that holds a shoulder strap; built-in VoIP and Skype; 802.11b/g
antennas with a range of half a mile; custom batteries with a five-year
lifespan; LEDs in place of a fluorescent backlight; a rubberized plastic
shell to absorb shocks; 512 MB of flash memory and 200 GB of storage
through a mesh-networked server; a 366 MHz processor and 128 MB of RAM; a
bare-bones version of Red Hat Linux; a seamless touchpad that allows
handwriting and drawing; and the ability to swivel to ebook mode. Behar
designed every laptop component to be multifunctional: For instance, the
computer's antennas are movable "ears" that can swivel down to shield the
laptop's ports.

The proliferation of programming languages stems from the desire to
improve the language rather than create a wholly new language for the sake
of doing so, but while many programmers subscribe to the idea of a true
programming language, few can agree on what that language is, writes Brian
Hayes. Among the petty feuds associated with programming languages is what
role the semicolon should play: In Algol and Pascal, semicolons are used
to separate program statements, while in C they terminate statements.
Though nearly every programming language is built atop a platform of
context-free grammar, there are several families into which languages can
be categorized, with different appearances, audiences, and areas of
application for each category. Imperative or command-based languages are
languages in which the commands act on stored data and tweak the general
state of the system; functional languages modeled after the concept of a
mathematical function use arguments as input and values as output; in
object-oriented languages, imperative commands and the data they act on are
tied together into encapsulated objects, and the data structure can be
"taught" to perform operations on itself; and logic, relational, or
declarative languages distinguish themselves by having the statement of
facts or relations be paramount. Languages can also be labeled as
"low-level" or "high-level," with the former notable for permitting more
direct access to pieces of the underlying hardware, and the latter offering
a protective abstraction layer. Supporters of specific languages are less
inclined nowadays to bad-mouth other languages, and more focused on
"converting" users of rival languages over to their language.
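The paradigm families described above can be contrasted concretely. The sketch below performs the same computation three ways; Python is used for all three only because it happens to support each style, not because Hayes singles it out:

```python
# Imperative style: commands act on stored data and tweak the
# general state of the system, one step at a time.
def sum_squares_imperative(numbers):
    total = 0
    for n in numbers:
        total += n * n  # each command mutates the running state
    return total

# Functional style: modeled on a mathematical function, taking
# arguments as input and producing values as output, no mutation.
def sum_squares_functional(numbers):
    return sum(map(lambda n: n * n, numbers))

# Object-oriented style: the data and the operations on it are
# encapsulated together, so the structure can be "taught" to
# perform operations on itself.
class Numbers:
    def __init__(self, values):
        self.values = list(values)

    def sum_squares(self):
        return sum(v * v for v in self.values)
```

All three produce the same answer; the feuds Hayes describes are largely about which of these shapes a program should be forced, or freed, to take.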