The Federated Computing Research Conference (FCRC) will offer more than 16
affiliated research conferences. Sponsored by ACM, FCRC is scheduled for
June 9-16, 2007, at the Town and Country Resort and Convention Center in
San Diego, Calif. The speaker schedule includes Bjarne Stroustrup, from
Texas A&M, on June 9, in an address on C++: Evolving a Language for the
Real World; and the following day ACM 2006 Turing Award winner Fran Allen
will discuss Compilers for High Performance Computing. On June 11, Chuck
Moore of Advanced Micro Devices will give an address on A Framework for
Innovation, and Christos Papadimitriou of UC Berkeley will present
Algorithmic Lens: How the Sciences are Being Transformed by the
Computational Perspective. David Culler of UC Berkeley and Deborah Estrin
of UCLA will give a speech on Wireless Sensing as the Internet's Front-Tier
on June 12, a day that will also offer a presentation on the Future of
Computer Architecture by independent consultant Bob Colwell. Other
speakers include Princeton's Avi Wigderson on the Art of Reduction, Guy
Steele of Sun Microsystems Laboratories on Designing by Accident, and the University
of Washington's Ed Lazowska on Computer Science: Past, Present, and Future.
FCRC will also feature the ACM Student Research Competition.

Although the electronic design automation (EDA) industry is growing, it
needs to help solve the software development crisis, said analyst Gary
Smith during a presentation on the eve of the Design Automation Conference.
Smith said EDA tools need to adopt parallel programming, and many tools
need to be completely rewritten. Smith said that while the industry looks
strong, it is "software challenged": semiconductor vendors want EDA
vendors to provide tools for all the design challenges they face, and
the biggest of those challenges is software. "They're now looking for EDA vendors to
provide a total hardware and software solution," Smith said. "If you want
to stay providing the stuff you've always provided, you probably won't be
around in five years." Smith said that EDA vendors need to parallelize
their algorithms to handle designs at and beyond 100 million gates, and
while some vendors have done "surface rewrites" to include parallelism in
their tools, most will have to be completely replaced. Objective Analysis
analyst Tom Starnes said particularly challenging areas include task
partitioning, memory hierarchy, bus structure, power domains, process
technology, verification, simulation, and power dynamics.
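Parallelizing an EDA algorithm typically starts with partitioning independent work units (gates, nets, cells) across cores. The sketch below is purely illustrative, assuming a hypothetical per-gate timing check; real EDA kernels are far more complex and are usually parallelized in C++ with native threads.

```python
from concurrent.futures import ThreadPoolExecutor

def check_gate_timing(gate):
    # Hypothetical check: flag any gate whose delay exceeds a 1 ns budget.
    return gate["name"], gate["delay_ns"] > 1.0

def run_checks(gates, workers=4):
    # Independent per-gate checks are farmed out across workers; threads
    # are used here for simplicity, though a CPU-bound EDA kernel would
    # use processes or native threads.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(check_gate_timing, gates))

netlist = [{"name": f"g{i}", "delay_ns": i * 0.3} for i in range(8)]
print(run_checks(netlist))
```

The point is simply that checks on different gates do not depend on each other, so the work scales across cores; retrofitting that independence into tools written as sequential passes is what forces the rewrites Smith describes.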

Engineers are compelled to keep modifying Google's search engine because
the company is determined to reduce instances in which users are unable to
find the subject of their searches quickly and accurately. Google
jealously guards the mechanisms of its ranking algorithm, which is used to
ascertain what Web pages best represent the targets of user queries.
Google engineer Amit Singhal leads a "search-quality" team that tweaks the
mathematical formulas that drive the ranking algorithm about six times a
week. Among the challenges the search engine constantly faces are the sheer
immensity of its scope, which ranges from services offered in over 100
languages to indexing tens of billions of Web pages and managing hundreds
of millions of queries every day; the need to filter out a growing
percentage of fraudulent pages; and rising user expectations that the
engine will yield precise results with a minimal amount of input. Many complaints about
broken Google queries are forwarded to Singhal and his team, who must
consider how to fix them while weighing the positive and negative
effects such changes could have. One recurring
complaint Singhal has spent a lot of time focusing on is a lack of
"freshness," or the inclusion of new or recently changed pages in a search
result. Singhal concluded that simply displaying more new pages often
reduces search quality, so his group concocted a mathematical model that
tries to detect when users do and do not want new information, based on the
level of current online enthusiasm for the topic. More and more, Google is
tapping users' search histories for clues about their interests that the
search engine can take into account to refine results. Federated Media CEO
John Battelle estimates that search engines are responsible for bringing in
25 percent to 50 percent of visitors and a majority of new customers to
online stores, while media sites are realizing that many people are
skipping their home pages and jumping to the specific pages they desire via
Google.
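The freshness trade-off described above can be illustrated with a toy scoring function: recency boosts a page more when a topic is "hot" (spiking query volume) and barely matters when it is quiet. The formula and parameters below are invented for illustration and bear no relation to Google's actual ranking model.

```python
import math

def freshness_boost(page_age_days, topic_burst):
    # topic_burst: invented measure of current query enthusiasm (0 = quiet).
    # For hot topics old pages decay fast; for quiet topics age is ignored.
    return math.exp(-topic_burst * page_age_days / 30.0)

def rank(pages, topic_burst):
    # pages: list of (url, base_relevance, age_days) tuples.
    return sorted(pages,
                  key=lambda p: p[1] * freshness_boost(p[2], topic_burst),
                  reverse=True)

pages = [("old-authority", 1.0, 60), ("breaking-news", 0.8, 1)]
print(rank(pages, 0.0)[0][0])  # quiet topic: the authoritative page wins
print(rank(pages, 3.0)[0][0])  # hot topic: the fresh page wins
```

The design choice mirrors Singhal's conclusion: blindly favoring new pages degrades quality, so recency is weighted by evidence that users actually want new information on that topic.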

Women are highly valued by the gaming industry for the fresh insight they
can bring, and this is creating opportunities for female tech professionals
looking for job options outside of the usual corporate IT departments. "If
we want to have [game] titles that reach a diverse audience, our workforce
has to reflect that diversity," argues Sirenia Consulting game designer and
developer Sheri Graner Ray, who is also chairwoman of Women in Games
International's steering committee. Peter Gollan of Iceland's CCP Games
believes adding more female game designers could result in the production
of content that draws more female gamers, while University of Southern
California School of Cinematic Arts professor Tracy Fullerton suggests that
more women would become game designers if there were more games on the
market that appeal to them. According to the International Game Developers
Association, only 11.5 percent of the gaming industry workforce was female
as of 2005. Graner Ray points out that most game designer tutorials follow
a distinctly male learning paradigm, that of jumping right in and playing
with the game environment, while women are more inclined to first
understand games before they experiment with them. Also discouraging to
female game designers are negative portrayals of women and a strong
anti-female bias in popular games, notes ECD Systems CEO Jack Hart.
Meanwhile, JupiterResearch analyst Michael Gartenberg observes that women
and girls have a greater affinity for games that involve strategy and
puzzles than for violent first-person shooter scenarios.

Researchers from Purdue University and Northwestern University have
developed a flexible, clear transistor that could be used to make
see-through organic light-emitting diode (OLED) devices such as maps on
visors and windshields, television screens in eyeglasses, and roll-up,
see-through computer screens. Although it was already possible to make
OLEDs transparent, a see-through display had been impossible due to the
lack of a clear transistor. The new transistor uses zinc-oxide and
indium-oxide nanowires that are not only see-through, but perform better
than their silicon counterparts and are easier to fabricate on flexible
plastic. Purdue University professor of computer and electrical
engineering David Janes says the transparent transistors could lead to
brighter see-through OLED displays. Other attempts to make see-through
transistors either had inferior performance or were not completely
see-through due to tiny metal contacts between nanotubes and electrodes.
However, the new transistor provides excellent mobility, flexibility, and
transparency, according to John Wagner, an electrical engineering and
computer science professor specializing in transparent electronics at
Oregon State University. The biggest question surrounding the transparent
transistors is whether they can be manufactured on a large scale. The current
construction technique deposits several thousand nanowires without any
method to control where the nanowires settle or how they line up, so the
researchers must search among them for appropriately aligned wires. Janes
says there needs to be a way of putting the desired number of nanowires in
the correct locations.

A University of Alberta research team led by Abdulhakem Elezzabi has
applied plasmonics principles to spintronics technology to create a novel
way of controlling the quantum state of an electron's spin, a new
nanotechnology called "spinplasmonics" that the researchers believe will
lead to revolutionary advances in computer electronics and many other
areas. Spinplasmonics may result in the creation of very efficient,
electron-spin-based photonic devices, which could be used to build
computers with extraordinary memory capabilities. "We've only just begun
to scratch the surface of this field, but we believe we have the physics
sorted out and one day this technology will be used to develop very fast,
very small electronics that have a very low power consumption," Elezzabi
says. Using gold and cobalt samples, Elezzabi and his team were able to
demonstrate a plasmonically-activated spintronic device that turns a light
on and off by controlling the spin of electrons. The researchers believe
that with a slight alteration to the sample structure the effect would
become non-volatile, so any result could be indefinitely maintained without
a power source. Elezzabi believes this technology will move computer
electronics away from silicon-based semiconductors to a new era of
metal-based electronics with light driven circuits. "To me, this is almost
a natural evolution of the two fields," Elezzabi says. "This opens up a
lot of possibilities; this is just the beginning."

University of Illinois at Chicago (UIC) researchers, working with
colleagues from the University of Central Florida in Orlando, have received
a three-year, $500,000 grant from the National Science Foundation to
explore and monitor the development of virtual, 3D online chat. Director
of UIC's Electronic Visualization Laboratory Jason Leigh notes that
graphics technology has already advanced enough to create realistic-looking
human avatars in three dimensions, speech recognition is more than 90
percent accurate, and computer image-processing speeds are close to real
time. UIC has also developed technology that can create 3D images without
the use of special glasses. UIC professor of communication Steve Jones
says an important factor in the project is to create body language for the
avatar as it responds to comments and questions. Subtle movements,
gestures, or speech patterns such as a slight pause before speaking, can
have great meaning and have been missing from software programs so far,
Jones says. The project will use video cameras to record a person's
mannerisms, feeding that and other information into the avatar program.
Leigh says that such technology may be applied to preserve virtual avatars
of people with unique knowledge, such as knowledge vital to running a
business or other organization. "The goal is to combine artificial
intelligence with the latest advanced computer graphics and video game
technology to enable us to create historical archives of people beyond what
can be achieved using traditional text, audio and video footage," Leigh
says.

The National Science Foundation has awarded a five-year, $400,000 Faculty
Early Career Development (CAREER) grant to University of Texas at San
Antonio computer science professor Carola Wenk. The CAREER award will
enable Wenk to continue to research the theory and practice of geometric
shape handling. Wenk could pursue the development of computational tools
for analyzing two-dimensional electrophoresis gels and protein samples,
which could benefit the medical industry and improve the process of
developing pharmaceutical products. Wenk could also use the award to apply
theoretical algorithm research to global positioning systems or
car-navigation systems. A year ago, Wenk created real-time traffic
estimation and prediction systems using data from GPS receivers in school
buses and taxis. "I would like to apply similar technology here in San
Antonio and maintain a database for each road segment to determine current
travel situations using GPS receivers in cars traveling all over this
area," says Wenk.
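A minimal sketch of the per-segment database Wenk describes: map-matched GPS probe reports feed running average speeds keyed by road segment. The class and averaging scheme are assumptions for illustration; a real system must also handle map matching, time windows, and outlier reports.

```python
from collections import defaultdict

class SegmentSpeedDB:
    # Hypothetical per-road-segment store fed by GPS probe vehicles.
    def __init__(self):
        self._total = defaultdict(float)
        self._count = defaultdict(int)

    def report(self, segment_id, speed_kmh):
        # Called when a probe vehicle's map-matched position yields a
        # speed observation on a segment.
        self._total[segment_id] += speed_kmh
        self._count[segment_id] += 1

    def current_speed(self, segment_id):
        n = self._count[segment_id]
        return self._total[segment_id] / n if n else None

db = SegmentSpeedDB()
db.report("I-10:mile42", 30.0)
db.report("I-10:mile42", 50.0)
print(db.current_speed("I-10:mile42"))  # 40.0
```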

Bullying, it seems, is a universal constant, even in the virtual arena,
according to University of Nottingham researchers. Abuse of digital
avatars, swearing, and nudity are just some examples of belligerent
behavior observed by researchers who are using the Second Life
computer-based environment as a testbed; Second Life citizens say newcomers
are frequent targets of bullies. Experts think the ramifications of
cyber-bullying can extend to everyone, and a team led by the University of
Nottingham's Dr. Thomas Chesney is conducting research to study
similarities and/or differences between virtual and real-world bullying
behavior. "In Second Life it appears that the power imbalance between a
griefer [bully] and a target is focused on knowledge and experience," notes
occupational psychologist Dr. Iain Coyne. "A new resident [newbie] may be
targeted because of their naivety and inability to stop the griefing."
Coyne points out that this power imbalance also plays a key role in the
bully-victim relationship at school and work. It is possible that the
anonymity offered by the Web is a critical factor behind the higher
incidence of virtual bullying compared to real-world bullying, claim
researchers. "If we are to be in a position to address the problem [of
cyber-bullying], we need to be able to understand the nature and extent to
which it occurs--that's why I think the research project is an important
one," says University of Nottingham Dean of Business, Law, and Social
Sciences Christine Ennew, the project's primary sponsor. The researchers
will present their findings at The European Conference on Information
Systems in early June.

Holographic memory is already available and is capable of storing more
information than memory technologies such as CDs and DVDs because
information can be encoded in three dimensions, but holographic data can
only be written to once. Xiaowei Sun and Liu Yanjun of the Nanyang
Technological University in Singapore have developed a technique that could
be used to create rewritable holographic memory devices. By using software
to calculate an interference pattern, the researchers were able to use a
single laser to record information on a cell containing an 8-micron-thick
layer of liquid crystal polymer. Normally holographic imaging requires two
lasers, so although the single-laser technique simplified the process, it also
reduced the resolution of the hologram. The real technological advancement,
however, was that by applying a voltage to the cell, the recording was
temporarily wiped clean because the liquid crystal molecules were forced to
realign. After the voltage was removed, the hologram returned. Sun said
the hologram functions like a transistor. "Instead of turning a current on
or off, it is switching a holographic image," Sun explained, adding that it
should be possible to integrate the hologram technology into regular
electronic devices. Neil Collings, a hologram expert at Cambridge
University, said the technology could lead to rewritable holographic
memory, "but it has a long way to go."
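Computing an interference pattern in software, as the researchers did, amounts to superposing coherent waves and taking the squared magnitude at each sample point. The two-point-source sketch below uses invented units and is only a conceptual illustration of that calculation, not the actual pattern used to drive the liquid-crystal cell.

```python
import math

WAVELENGTH = 0.5  # arbitrary units, chosen for illustration

def intensity(x, y, src_a, src_b):
    # Superpose two unit-amplitude waves from point sources and
    # return the squared magnitude of the sum.
    k = 2 * math.pi / WAVELENGTH
    ra = math.hypot(x - src_a[0], y - src_a[1])
    rb = math.hypot(x - src_b[0], y - src_b[1])
    re = math.cos(k * ra) + math.cos(k * rb)
    im = math.sin(k * ra) + math.sin(k * rb)
    return re * re + im * im

def pattern(width, height, step=0.1, src_a=(0.0, -1.0), src_b=(0.0, 1.0)):
    # Sample the interference intensity on a grid, as software would
    # before writing the pattern to a recording medium.
    return [[intensity(x * step, y * step, src_a, src_b)
             for x in range(width)] for y in range(height)]
```

On the perpendicular bisector of the two sources the path lengths are equal, so the waves add constructively and the intensity reaches its maximum value of 4.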

University of Southern California Viterbi School of Engineering graduate
student Nathan Schurr, working with artificial intelligence expert Milind
Tambe, created DEFACTO, a training program that helps firefighters
practice responding to simulated emergencies. The program, part of Schurr's
thesis project, can be used by fire departments as an alternative to the
traditional training system of assembling a veteran team to create,
describe, and monitor emergency scenarios that trainees would respond to in
another room. DEFACTO uses a committee of artificial intelligence "agents"
to create vivid disaster scenarios, complete with images and maps, and
allow small training groups, or even individuals, to make response
decisions and receive immediate feedback on their choices. DEFACTO
features an "omnipresent viewer" that allows the trainees to view the
disaster in full-color 3D images. The agents can also help the trainees
develop response strategies and allow them to gauge their success by
comparing their reactions to ones proposed by the program. Schurr says the
responses proposed by large committees of artificial agents were only
slightly behind those of humans, and that disagreements between
humans and the agents occasionally resulted in a compromise plan. "Even
wrong decisions can lead to better results," Schurr said. Los Angeles Fire
Department Fire Captain Ron Roemer praised DEFACTO. "It's a lot more
controlled," Roemer said. "You can see if you're heading toward a mistake
much more quickly."

The U.S. Department of Defense's yearly congressional report warns that
the People's Liberation Army (PLA) of China is gearing up for electronic
warfare by establishing information warfare units that are creating viruses
to lay siege to adversarial computers and networks, while simultaneously
implementing strategies to defend its own computer systems and networks and
those of its allies. Electronic and infrared decoys, false target
generators, and angle reflectors are some of the other electronic
countermeasures China is exploiting outside of malware. Internet Security
Advisors Group President Ira Winkler said China is second only to Russia as
the country most capable of cyber-espionage, and maintained that China has
vast resources to devote to acquiring "first strike" capability in a
cyber-warfare scenario. Breaches of U.S. computer networks have been
attributed to Chinese hackers, who Winkler said are successful because of
their ability to exploit both their highly methodical analysis of target
systems and their victims' inadequate security deployments. The DoD's
report was condemned by China foreign ministry representative Jiang Yu, who
claimed the study distorts his nation's military strength and expenses "out
of ulterior motives." "Each sovereign state has the right and obligation
to develop necessary national defense strength to safeguard its national
security and territorial integrity," he argued. "It is totally erroneous
and invalid for the U.S. report to play up the so-called 'China
Threat.'"

Researchers at the University of Hertfordshire are bringing KASPAR
(Kinesics and Synchronization in Personal Assistant Robotics) into local
schools in an effort to study the effectiveness of using robots to help
children with disabilities develop social skills. The Interactive Robotic
Social Mediators as Companions (IROMEC) project follows Hertfordshire's
success with AuRoRA (Autonomous mobile Robot as a Remedial tool for
Autistic children) in encouraging imitation and turn-taking behavior.
"This previous research led us to using KASPAR, a child-sized humanoid
robot, with minimum facial expressions, which can move its arms and legs
and allows the child to interact with it," says Dr. Ben Robins. The
three-year project, funded by the European Sixth Framework, will help
reveal the potential of robotic toys as mediators of human contact for
children with special needs. "We are seeing already that through
interacting with the robot, children who would not normally mix are
becoming interested in getting involved with other children and humans in
general and we believe that this work could pave the way for having robots
in the classroom and in homes to facilitate this interaction," adds Robins.
Other members of the Hertfordshire team include professor Kerstin
Dautenhahn and Dr. Ester Ferrari in the School of Computer Science.

Hakia CEO Dr. Riza C. Berkan writes that there is plenty of room for
improvement in search engine technologies, and discusses how semantic
search can effectively address the poor relevancy challenge. Berkan
explains that a semantic system is truly semantic if it encompasses
language knowledge, and what is required is a deterministic language
processing model based on algorithms that match the definition of concepts
and mimic "understanding." The author lists two fundamental semantic
search models: A Semantic Web model in which semantic resources are
incorporated into the Web pages, and a Semantic Search Engine model in
which semantic resources reside in search engines that implement algorithms
that use them. Berkan contends that the Semantic Web strategy "is based on
an unrealistic assumption that every Web author will abide by the complex
rules of semantics--not to mention the education it requires--and place
content in the correct buckets of mysteriously unified standards." The
Semantic search engine approach--which hakia, among other companies, is
focusing on--requires embedding language knowledge into a framework that
permits a swift and scalable search process, which entails a major
investment in time and money; also time-consuming is the next step of using
the system to analyze all Web pages to ready a retrieval platform. A
Semantic search engine can support on-the-fly analysis of long-tail queries
that yields contextually accurate results. Berkan anticipates that current
search engines will eventually be supplanted by semantic search because the
effectiveness of the approach on long-tail queries should encourage its
application to simple queries. The author notes that a semantic search
engine's recognition of the correct context for a given query term makes a
Web page's popularity irrelevant, so the page's credibility, which is
relatively easy to assess, becomes the prime factor.
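The contrast with literal keyword matching can be shown with a toy example: if language knowledge lives in the engine, a query can match a page that shares no surface words with it. The tiny hand-made lexicon below is an invented stand-in for the far richer linguistic resources a semantic engine like hakia's would embed.

```python
# Invented mini-lexicon mapping words to concepts; a stand-in for the
# "language knowledge" a semantic engine embeds, per the article.
LEXICON = {
    "car": "vehicle", "automobile": "vehicle", "sedan": "vehicle",
    "doctor": "physician", "physician": "physician",
}

def concepts(text):
    # Normalize each word to its concept (unknown words map to themselves).
    return {LEXICON.get(w, w) for w in text.lower().split()}

def semantic_match(query, documents):
    # Score documents by shared concepts rather than shared literal words.
    q = concepts(query)
    scored = sorted(documents, key=lambda d: len(q & concepts(d)),
                    reverse=True)
    return [d for d in scored if q & concepts(d)]

print(semantic_match("car repair",
                     ["automobile repair shop", "physician office"]))
# → ['automobile repair shop']
```

Note that the matching document contains neither "car" nor any popularity signal; the concept overlap alone retrieves it, which is the shift away from popularity-based ranking the author describes.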

The Free Software Foundation (FSF) announced on Thursday that the final
version of the GNU General Public License (GPL) 3.0 should be released
before the end of June, and also published the "final call" draft of GPL
version 3. "We've made a few very important improvements based on the
comments we've heard, most notably with license compatibility," says FSF's
Peter Brown. "Now that the license is almost finished, we can look forward
to distributing the GNU system under GPLv3, and making its additional
protections available to the whole community." The draft follows
accusations by Microsoft that free and open-source software infringes on
more than 200 of the company's patents. The draft bans deals similar to
Novell's arrangement with Microsoft to distribute software under GPLv3,
although that deal in particular is not prohibited because under GPLv3 the
patent protection Microsoft has extended to Novell's clients would be
broadened to include everyone who employs any software Novell distributed
under the license, according to the FSF. But the draft says some companies
would be allowed to distribute GPLv3 even if they have forged such a deal,
provided the arrangement was made prior to March 28. Included in the draft
is a measure declaring that distributors must supply installation
information only in situations where they are distributing the software on
a user product and in which the customers' bargaining power is likely to be
weaker.

Enterprise database management systems are evolving under pressure from
governance, mobility, stability, and scalability issues, says Gartner
analyst Donald Feinberg, who believes major DBMS vendors are making a noble
effort to address the challenges of scalability and the increasing volume
of rich data. Their strategy in the first instance is to align the
scalability of the databases created with their DBMSes with the customer's
needs, while vendors' answer to the second challenge is to embed within the
database "the capability to store many different types of data,
efficiently," says Feinberg. Hardware virtualization has a lot of ground
to cover before it yields true value to the DBMS community, in Feinberg's
opinion. In his view, numerous online transaction processing (OLTP)
databases will eventually adhere to a model where the majority of the data,
if not the whole of the data, is warehoused. "The architecture of
applications is going to be [such] that they're going to get some of their
operational data out of the OLTP or the transaction databases, but some of
it is going to come out of the data warehouse," Feinberg predicts. "And as
more and more of that happens, as you get BI and analytic code in OLTP
applications that access data warehouses, the warehouse becomes much more
critical." Storing data directly into the warehouse where it can be most
efficiently utilized will become even more important as the amount of data
being processed by companies explodes with the implementation of mobile
applications and radio-frequency identification, Feinberg says.

Smart environment technologies created from the integration of pervasive
computing, sensor networks, and artificial intelligence would be a major
boon to elderly people with mental or physical handicaps who wish to live
independently at home, writes Washington State University School of
Electrical Engineering and Computer Science professor Diane J. Cook.
Software that runs smart environments can employ data collected from
sensors to identify residents' actions and construct a model for daily
living, making deviations that could signify a health crisis easier to
recognize and rectify. An automated home and work environment affords a
degree of control for physically limited people that obviates the need for
frequent caregiver assistance, while cognitively impaired people can be
automatically reminded of tasks, routines, and directions to make their
day-to-day living easier. Smart environments are also designed to augment
quality of life. One example is the environments'
use of wireless sensors to observe the social interactions of older adults,
report those activities to caregivers, and make suggestions to enhance a
person's social life. Smart environments are also useful to hospitals, for
such things as making patients and doctors safer and monitoring the
progress of people following surgery. "Much continued research is needed
to make these technologies robust and ready for widespread adoption,"
explains Cook. "Investigating these issues is imperative if we want to
adequately care for our aging population and provide the best possible
quality of life for them and, ultimately, for ourselves."
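The deviation-detection idea Cook describes, learning a resident's routine from sensor data and flagging departures from it, can be sketched minimally. The activity, timings, and threshold below are invented; a real system would model many activities with far richer features than a single daily timestamp.

```python
import statistics

def routine_model(event_times_hrs):
    # Learn the typical time of a daily activity from sensor timestamps.
    return statistics.mean(event_times_hrs), statistics.stdev(event_times_hrs)

def is_deviation(observed_hr, mean, sd, n_sigmas=3.0, floor=0.25):
    # Flag events far outside the learned routine; the floor keeps a very
    # regular routine from making tiny variations look alarming.
    return abs(observed_hr - mean) > n_sigmas * max(sd, floor)

# Breakfast-preparation sensor events, in hours since midnight (invented).
history = [7.0, 7.2, 6.9, 7.1, 7.0]
mean, sd = routine_model(history)
print(is_deviation(7.3, mean, sd))   # ordinary morning: False
print(is_deviation(11.0, mean, sd))  # possible health crisis: True
```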