Princeton University computer science professor Bernard Chazelle plans to
aggressively combat the declining interest in computer science among
college students, and is challenging his colleagues to do the same. The
top 36 computer science programs in the U.S. witnessed a nearly 20 percent
decline in enrollment from 2000 to 2004, causing Chazelle to wonder, "Why
is there this decline when the field has never been more exciting?" In a
recent interview, Chazelle expressed his belief that computer science is
much more than a technical pursuit, and that it has the legitimate
potential to transform society and lead to the most significant realignment
of the scientific worldview since quantum mechanics. Chazelle believes
that although algorithms will underpin neurobiology, proteomics, and other
21st-century sciences, the discipline of computer science has yet to
recover from the perception, circulating since the dot-com collapse, that
technology is too uncertain a field to stake a career on. Chazelle also
laments the discipline's lack of
an industry luminary to inspire students as they make their academic and
career choices, as Stephen Hawking does for physics. Chazelle also argues
vigorously against the notion that the only career open to a computer
science graduate is in programming, noting that all sciences depend on
computer science, and that it has caught up to math in its ubiquity and
value as a language to solve scientific problems.

While alarmist speculation that the United States faces a critical
shortage of knowledge workers in science and engineering has reached a
fever pitch, the debate routinely ignores the steep increase in the number
of science and engineering degrees awarded by American colleges and
universities. Graduate enrollment in science and engineering programs has
also spiked, having risen 22 percent since 1998, with a 60 percent increase
in computer science. While these numbers are partially inflated by
foreign-born students attending American institutions, enrollment among
native-born students, after years of decline, has been rising as well.
While the figures of 600,000 and 350,000 engineering graduates produced
annually in China and India, respectively, have been widely cited as
evidence of emerging economic powers eclipsing the United States in
technological leadership, Duke researchers found significant flaws in those
numbers. Aside from including graduates from abbreviated two- and
three-year programs, they also obscure the fact that taken as a proportion
of total population, the United States still leads the world in graduating
engineering students. Still, as India, China, and other countries take a
more active role in the global economy, it is inevitable that they will
produce more engineers and scientists, which will naturally erode the
percentage supplied by the United States. Instead of relying on sheer
numbers, it makes sense to consider some other characteristics of the U.S.
technological climate, such as its acceptance of new ideas, the close
relationship between universities and business, and amply funded venture
capitalists. That said, the U.S. military still requires high-level
technology, and high-value research remains critical to the economy.
Better pay is the simplest and surest way to draw the best minds to careers
in science and technology, thereby ensuring that U.S. innovation remains
vigorous.

Drawing on research in technologies such as advanced lenses and related
areas, a team of IBM researchers has developed a technique that could lead
to semiconductors with feature sizes of 30 nm, potentially breathing new
life into Moore's Law, which has increasingly been threatened with
expiration as scientists struggle to continue the miniaturization of the
computer chip. A consensus had formed in the industry that today's
photo-etching process would not be able to produce wires smaller than 40
nm, necessitating a migration to X-ray light sources or other alternative
printing methods. The IBM researchers, partnering with a group from JSR
Micro, used deep ultraviolet lithography (the same laser technique used to
imprint circuits on chips) to create the thinnest line patterns that the
industry has seen. IBM's Robert Allen hopes that the research will guide
the industry toward the continued use of optical lithography, rather than
X-ray light sources, which would require a dramatic overhaul of the entire
semiconductor industry, in which optical lenses would have to be exchanged for
mirrors to focus light in the manufacturing process. The research also
demonstrates that argon fluoride excimer lasers, now used to create 65 nm
features, will be able to continue to scale through a fluid immersion
etching technique. By using a crystalline quartz lens infused with exotic
immersion liquids, the researchers improved the resolving ability of the
light source, though they acknowledge that their technique must still be
refined before it can be considered commercially viable.
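
The resolving-power claim can be made concrete with the standard Rayleigh
criterion for optical lithography, resolution ≈ k1 x λ / NA, where immersion
fluids raise the usable numerical aperture (NA) beyond the dry-lens limit of
1. The Python sketch below uses textbook values for k1 and NA, not IBM's
actual process parameters, to show why higher-index immersion fluids push the
printable feature size from the mid-60-nm range toward 30 nm:

```python
# Back-of-the-envelope resolution estimates via the Rayleigh criterion.
# The k1 factor and numerical apertures are typical illustrative values,
# not IBM's process parameters.

WAVELENGTH_NM = 193.0  # argon fluoride (ArF) excimer laser

def min_feature_nm(k1: float, na: float) -> float:
    """Smallest printable feature: k1 * wavelength / numerical aperture."""
    return k1 * WAVELENGTH_NM / na

# Dry lithography: the air gap caps NA below 1.0.
print(f"dry lens,   NA=0.93: {min_feature_nm(0.3, 0.93):.0f} nm")
# Water immersion (n ~ 1.44 at 193 nm) lifts the effective NA above 1.
print(f"water,      NA=1.35: {min_feature_nm(0.3, 1.35):.0f} nm")
# An exotic higher-index fluid raises NA, and the resolution, further.
print(f"hi-n fluid, NA=1.60: {min_feature_nm(0.3, 1.60):.0f} nm")
```

With a more aggressive k1 factor, the same numerical aperture reaches the
30 nm regime the researchers report.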

In a recent interview, Virginia Polytechnic Institute's Wu Feng, newly
hired to manage his own lab and contribute to the Center for High-End
Computing Systems, discussed his work at the Los Alamos National Laboratory
and his goals in his new job. Feng entered the field of high-performance
computing through his doctoral work on real-time networking, concentrating
on time-sensitive information delivery. When he first started at Los
Alamos, Feng was working on high-performance networking, looking for ways
to improve the systems software, particularly clearing the host interface
bottleneck. Feng developed a protocol to bypass the operating system when
transmitting information across a network, effectively cutting out the
middleman. While he began with networking, Feng's interests have become
more global, and he hopes to use his SyNeRGy Lab at Virginia Tech to
facilitate the use of computers in a host of fields, ranging from
engineering to music. While he admits that his work may be laying the
groundwork for human-computer interaction, Feng notes that it first must
solve more immediate problems such as enabling software to handle more
system failures and other functions automatically. Feng cites Google as a
successful model of this idea, as despite the hourly failures in its
processing farm, Google never shuts down. By the end of the decade, Feng
believes that system software could emerge that handles fault tolerance
automatically and keeps the entire system available despite
the unreliability of its components. Feng also laments that in the past
several years, he has seen many of his colleagues leave the country to
work, citing the declining portion of funding that goes to long-term
scientific research.

Fearful that Bell Labs would cut their funding if they did not produce an
invention soon, Willard Boyle and George Smith sat down for a one-hour
brainstorming session in 1969 where they developed the basic blueprint for
the charge-coupled device (CCD), a new memory chip that would revolutionize
the capture and storage of images. CCDs are the backbone of digital
cameras, and have been used in X-rays, space exploration, and surgical
procedures. Boyle and Smith will receive the $500,000 Charles Stark Draper
Prize from the National Academy of Engineering this week for their
breakthrough. A CCD contains a light-sensitive silicon chip capable of
storing charge packets inside its capacitors. Photons unfetter electrons
when they collide with the silicon, producing a charge commensurate with
the intensity of light, which is then stored by the capacitors that in turn
produce pixels. Voltage then passes through the device, impelling the
charges to move in a controlled fashion between pixels, depositing the
charge packet into a signal processor to be digitized and reformatted as
the original image. After sitting on their discovery for a couple of weeks,
Boyle and Smith tested the first CCD, containing all of six pixels, on a
metal oxide array. Successful trials heralded the end of film
photography, and provided the foundation for modern digital cameras that
contain millions of pixels. In 1991, Kodak unveiled the first commercial
digital camera, consisting of a 20 MB hard disk and a backpack to carry the
required electronics. By that time, the CCD had already been widely used
in astronomy, appearing in various observatories and the Hubble telescope,
ushering in the era of astronomy conducted from space.
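
The exposure-then-shift cycle described above lends itself to a short
simulation. In this Python sketch the quantum efficiency and photon counts
are invented for illustration (only the six-pixel size echoes Boyle and
Smith's prototype); it shows charge accumulating in proportion to light
intensity and then marching, bucket-brigade style, to a single output node:

```python
import random

QUANTUM_EFFICIENCY = 0.7  # illustrative: fraction of photons freeing an electron

def expose(photon_counts):
    """Accumulate a charge packet in each pixel's capacitor."""
    return [sum(random.random() < QUANTUM_EFFICIENCY for _ in range(n))
            for n in photon_counts]

def read_out(pixels):
    """Shift charge packets one pixel at a time toward the output node,
    where each is digitized in turn (the 'bucket brigade')."""
    digitized = []
    while pixels:
        digitized.append(pixels.pop(0))  # packet reaches the signal processor
    return digitized

scene = [0, 5, 20, 80, 20, 5]  # photons striking a six-pixel CCD
print("digitized image:", read_out(expose(scene)))
```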

Research in the field of virtual humans has developed practical and
potentially life-saving applications for a technology once thought of as
little more than fodder for big-budget animation studios. Scientists at
the University of Iowa at work on the Virtual Soldier Project have created
a virtual human through algorithms and motion-capture data collected from
digital scans of a human volunteer. Santos, the Iowa researchers' virtual
human, can be used to test products in lieu of a physical prototype.
Instead of requiring costly physical production for testing, the developers
can load a scaled-down digital prototype of the product into the system and
command Santos to interact with it, simulating how a human would relate to
it in real life. Having enlisted Santos to test the ergonomic quality and
serviceability of its heavy machinery, Caterpillar might dispatch the
virtual human to a task such as changing the oil on a dump truck, the whole
time monitoring his simulated body functions such as heart rate, muscle
exertion, and temperature. The U.S. Army also uses this technology when it
develops body armor and other protective equipment for combat, using a
model such as Santos to determine if the gear prohibitively impedes motion.
Ongoing research in the field promises new advances for digital human
modeling, such as the Visible Human project, in which researchers divide a
cadaver into 0.3 mm slices, creating cellular-level resolutions that could
have a major impact on fields such as accident reconstruction and
forensics.

Technology designers too often ignore the real world when developing the
user interfaces of computers, cell phones, and other devices powered by a
computer chip, according to Stanford assistant professor of computer
science Scott Klemmer. He believes that in their haste to overload devices
with features and computational power, product developers often neglect the
intuitions and physical control of the real world, often running counter to
the manner in which users learn and interact. Klemmer has developed a set
of design principles, including the belief that only a limited amount of
the device's testing should be virtual rather than physical, stressing that
designers must take the time to build actual prototypes, rather than
relying on simulations. By way of compromise, Klemmer and doctoral
candidate Bjoern Hartmann have developed d.Tools, a prototyping system that
enables simultaneous design of a device's hardware and software. Klemmer
also believes that product design must be guided by the understanding that
the human body is capable of engaging in complex and varied interactions
with the world, and that interfaces that entirely remove the physical
element from human/machine interaction may not be ideal. Machines have
done nothing to further the concept of visibility, where people can
physically observe what their colleagues are working on and glean other useful
information, and the paperless office remains a myth. Klemmer's design
principles are meant to work in tandem with prototyping, where developers
can gauge consumer response through mockups and dummies to further improve
the product's functionality. With the number of computers vastly outpacing
the growth of the world's population, Klemmer argues that it is more
important than ever for them to mesh with the natural world and humanity's
physical nature.

The advance of scientific discovery is critically dependent on
cyberinfrastructure to support the global exchange of information and
instant access to resources irrespective of their physical location.
Cyberinfrastructure comprises cyberenvironments, which enable access to
and integration of projects across disciplines and geographies;
cyberresources, which solve advanced scientific problems in a timely
manner; and cybereducation, which makes the benefits of
cyberinfrastructure available
to teachers and students around the world. The NSF has recently released
an updated version of its "Cyberinfrastructure Vision for the 21st
Century," calling for international partnerships between public and private
organizations to collaboratively develop the cyberinfrastructure required
to enable the next generation of scientific computing technologies. In his
keynote address at the recent NCSA 20th Anniversary Celebration, NSF
Director Arden Bement championed the international development of
cyberinfrastructure, arguing that collaborating with other nations will be
critical to the United States maintaining its own position at the forefront
of technology. "We should pursue more global involvement, not less. The
rapid spread of computers and information tools compels us to join hands
across borders and disciplines if we want to stay in the race." CTWatch
Quarterly has collected articles from scientists detailing the activities
of eight cyberinfrastructure programs around the world, including the
Australian Partnership for Advanced Computing, India's developing national
grid system GARUDA, and Japan's Cyber Science Infrastructure and National
Research Grid Initiative. Other entries came from Brazil, Korea, South
Africa, Taiwan, and the Pacific Rim Applications and Grid Middleware
Assembly.

The high-tech industry could have more females in positions of leadership
soon if current initiatives prove to be successful in encouraging more
women to pursue careers in technology and promoting those who are talented
IT professionals, writes Anita Borg Institute for Women & Technology
President Telle Whitney. Top universities are now reconsidering the way
math, science, and technology are taught and learned, and leading high-tech
companies are not only trying to recruit more women, but they are also
focusing on retaining them and promoting them. The efforts come at a time
when the numbers of women in the tech industry and pursuing
technology-related studies pale in comparison to females in the labor force
and on college campuses. A new report from Catalyst shows that the
proportion of female computer science graduates at top research institutions has
fallen from 37 percent in 1985 to 17 percent in 2003. The report also
indicates that women account for 11 percent of corporate officers at
technology companies compared with 15.7 percent at Fortune 500 companies,
and hold 9.3 percent of board seats at tech firms compared with 12.4
percent at Fortune 500 firms. Not only must women be expert technologists
to advance, but they must also contend with a corporate culture that has not
fully embraced and supported their advancement. Other factors include
balancing work with family responsibilities, a lack of role models and
networks, and the inability of companies to identify and develop their
skills.
To learn about ACM's Committee on Women in Computing, visit
http://www.acm.org/women

The University of Bath's Cityware project will convert the center of the
historic city of Bath, England, into a pervasive computing area where users
will be able to access wayfinding services, interactive games, and
information services on their laptops or other mobile devices. Project
investigator Danae Fraser, a professor of psychology at the university,
says the project will include 30 volunteers who reside in the city to track
how the technology is used over the next three years. Fraser expects their
feedback to inform the world's technology companies as they develop the
next generation of mobile devices. "Pervasive technology that is available
to everyone, everywhere, and at all times promises to be the next big leap
in mobile computing technology," said project leader Eamonn O'Neill, noting
that cities will see the greatest and most immediate demand for pervasive
computing. One service included in the Cityware project will enable users
to submit a photograph of a building and relay it to a central server,
which then compares it to a database and responds to the user with specific
location information. Throughout Bath, the Cityware project will use
Bluetooth, Near Field Communication, and wireless networks. Due to its
status as a UNESCO World Heritage Site, Bath annually draws millions of
visitors, which will provide the project coordinators with ample
opportunities to track how the system is used.

HP Labs celebrated its 40th anniversary this week with an open house in
Palo Alto, Calif., in which several of its consumer-oriented projects were
on display, including a coffee table that featured a touch-screen display
that could be used for sharing pictures, playing board games, or looking at
a map. The research hub of Hewlett-Packard, HP Labs has been behind
several major developments over the years, such as the thermal inkjet
printer, that have helped the company become profitable. Because HP Labs
works closely with product groups on its research projects, developments do
not go unnoticed and are scrutinized for their market potential. The ranks of
HP Labs consist of 600 employees, and the research arm splits its time
between practical projects that will boost profits and more scientific
aspirations that could be rewarding 10 years down the road. One focus of
HP Labs is finding innovative ways to automate and virtualize the data
center, backed by heavy investment in software. Hewlett-Packard spent about $3.5 billion
on research and development last year, and the budget of HP Labs is about 5
percent of that amount.

The NSF has awarded a $450,000 grant to Virginia Tech and Villanova
University to improve the availability of the NSF's online library, enabling
students and faculty to conduct searches directly through course Web sites.
"Our goal," said project sponsor Manuel Perez-Quinones, assistant
professor of computer science at Virginia Tech, "is to get content from the
National Science Digital Library (NSDL) closer to its intended audience,"
which consists of academics involved in any area of computing. The NSDL's
Web site touts the refined, targeted results that are produced by searching
its collections, noting that it only seeks materials from credible academic
sources suitable for educational environments. Perez notes that as its
collections have expanded, the NSDL is focusing more on user services and
greater functionality. Web sites are the centerpiece of the project, which
will convert individual course sites into gateways to the NSDL's
collections with a personalized interface, so that they could direct a
professor to a course page that might contain a list of textbooks relevant
to his search. Perez notes that this context-sensitive type of service
will better utilize the NSDL's resources: "Users are more likely to select
options right at the spot and context where they are doing their work,
instead of going to a different Web site, searching for textbooks, and
browsing through the list of textbooks to identify those that might be
appropriate for their needs." The project will also monitor the usage
habits of students and professors, seeking to better understand the effect
that the Web has on academic classes in the broader context of the two
universities' study of digital libraries, human-computer interaction, and
personalization.

A new lab for mobile communications research initiated by MIT and Nokia is
up and running in Cambridge, Mass. Called the Nokia Research Center
Cambridge, the new research and development partnership brings together 40
researchers from MIT's Computer Science and Artificial Intelligence
Laboratory (CSAIL) and the Nokia Research Center in Cambridge. Working
from the one-to-one model, the two organizations will focus on wireless
communications and handsets, and provide solutions for bringing enhanced
technology to market in the years to come. Nokia Research Center's James
Hicks will serve as director of the lab, and an MIT computer science and
engineering professor who goes by the single name of Arvind will be the
program director. Rodney Brooks, an MIT professor who directs CSAIL,
highlights the collaborative equality of the relationship. "Unlike most of
these relationships, the teams will be working in close proximity to each
other, which helps in getting things done," says Brooks. Arvind, founder
of the semiconductor company Sandburst in Andover, adds that the lab will
operate "in the open," which means students will have access to Nokia's
intellectual property and will not have any problems when it comes to
publishing academic work.

The role of the International Telecommunication Union (ITU) in assuring
the successful adoption of radio frequency identification (RFID) and sensor
technologies was the focus of a recent workshop in Geneva, Switzerland,
attended by industry and academic leaders. "RFID is moving from closed
systems of reader and tag to where we need a network capable of sharing the
data," says Pierre-Andre Probst, who headed a number of sessions at the ITU
workshop. "Billions of tags creating data to transmit over a network means
a significant change in traffic for the network to handle. That will
require new network capabilities, and there are specific new requirements
as we move toward an Internet of things." Among the issues broached at the
conference were network and service architecture, requirements for
machine-to-machine communications, security, interoperability, and spectrum
allocation. "Our main concern is to see the network requirements and
capabilities developed to support the move from simple RFID applications
toward more-complicated devices that include sensors," says Probst.
Spectrum allocation will be addressed at the ITU World Radiocommunication
Conference, scheduled for October 2007 in Geneva.

The debate over the legality of the Bush administration's warrantless
eavesdropping could become a moot point if more providers follow in the
footsteps of Skype, which encrypts its free Internet calls, making them
almost immune to eavesdropping. Though encryption techniques for Internet
communication have been around for years, most users have not felt
vulnerable enough to justify the hassle of security programs such as the
cumbersome email application Pretty Good Privacy. Counterpane Internet
Security CTO Bruce Schneier notes that Skype's ease of use made it popular,
rather than its security. Skype boasted 75 million registered users of its
freely distributed software at the end of last year. Talking over the PC
is free, but telephone-based communication carries a fee. Calls placed
through Skype traverse the Internet encrypted with 256-bit keys, twice the
length of the keys typically used to transmit credit card numbers. "It's a
pretty secure form of communication, which if you're talking to your
mistress you really appreciate, but if al-Qaeda is talking over Skype, you
have probably a different view," said Verso Technologies CEO Monty
Bannerman. Schneier says that Skype's encryption is of sufficient strength
to foil the eavesdropping efforts of the National Security Agency,
as even a poorly encrypted call would take hours to crack. He adds,
however, that the government could still track Skype's calls, even if it
could not listen in on the content. Skype CEO Kurt Sauer claims the system
has no back doors to get around the encryption, though he also reports that
Skype is in full cooperation "with all lawful requests from relevant
authorities," declining to elaborate further.Click Here to View Full Articleto the top

Motor vehicle agencies will now have to link their databases together and
possibly embed chips in driver's licenses in an effort to make way for a
national ID card, according to American Association of Motor Vehicle
Administrators CEO Linda Lewis-Pickett, who made the announcement during
the RSA Conference 2006. "The DMV is in differing aspects of readiness and
it would need to make a quantum leap to get to the point of issuing
national ID cards," said Lewis-Pickett. She also said several states need
to create a method of interoperability to share information that could be
used for a national ID system. The conference panel agreed that a national
ID system will fail to fight terrorism, one of the goals of the Real ID
Act, which was passed last year and is scheduled to go into effect in 2008. The
panelists said a national ID system may create security concerns beyond
an inability to fight terrorism, such as the potential
exploitation of the information in the database, as well as the commercial
harvesting of information every time a national ID card is used. "This is
a rules problem, not a technology problem," said James Lewis with the
Center for Strategic and International Studies. "We need rules on who has
access to the information." Lewis said 100 countries currently use a
national ID card, and that it has not stopped identity theft in those
countries.

Asynchronous JavaScript with XML (Ajax) is becoming more acceptable as
developers focus on designing fast and easy-to-use Web interfaces, and the
"client-free," rich Web applications Ajax can help create dovetails nicely
with many future forecasts about software development. However,
Christopher Lindquist recommends CIOs practice caution when considering the
use of Ajax: "Take a deep breath and learn what the technologies can and
can't do and what skills you need on staff to take best advantage of the
tools," he writes. For the present, Ajax is optimal for creating more
intuitive and useful user interfaces. However, deep proficiency in
JavaScript and familiarity with the back-end database is necessary to
ensure that Ajax-based applications will work; luckily, growing interest in
Ajax has spurred the creation of off-the-shelf tools and open-source kits
designed to streamline Ajax development. But there are other sticking
points with browser-based JavaScript support, especially for those who wish
to work across platforms such as Unix, Windows, and the Macintosh along
with browsers such as Firefox, Internet Explorer, and Safari. Sachin Shah
with the SimplyHired job-listing Web site says companies must guarantee
that their Ajax features can degrade gracefully if they intend to make
their sites publicly accessible, while security is another factor to heed.
Backcountry CTO Dave Jenkins notes that developers must never forget that
Ajax is not an all-or-nothing scheme.

Milton Feng and Nick Holonyak Jr. of the University of Illinois at
Urbana-Champaign lead a team that has developed a prototype laser
transistor whose switching speed exceeds that of all other transistors;
the device can switch on and off over 700 billion times per second. The
transistor simultaneously emits both electrical signals and a laser beam,
which can be adjusted to relay optical signals at 10 billion bits per
second. Feng and Holonyak predict that the transistor laser will
eventually be modified to send 100 billion bits per second at room
temperature. The researchers envision the use of transistor lasers as
optical interconnects, which would enable the instantaneous flow of data to
and from memory chips, graphics processors, and microprocessors. The
transistor laser is essentially a transistor with an extremely thin
additional layer known as a quantum well. Electrons are injected into the
base by a voltage at the emitter, while in the well, many more electrons
combine with holes than in the rest of the base, resulting in the emission
of light; this light bounces off mirrors within the well, and the
accumulated stimulation eventually produces a beam of laser light.
Electrons that fail to recombine with holes in the well are shuttled into
the collector, which exhibits a current gain. Feng and Holonyak believe
the transistor laser could dramatically enhance and improve the quality of
teleconferences, video cell phones, Internet searching, and supercomputer
number-crunching.
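
For a sense of scale, the quoted figures translate directly into time
budgets; the numbers in this short Python check come straight from the
article:

```python
switching_rate_hz = 700e9  # on/off transitions per second
optical_bps_now = 10e9     # optical signaling rate demonstrated
optical_bps_goal = 100e9   # projected room-temperature rate

print(f"switching period:        {1e12 / switching_rate_hz:.2f} ps")
print(f"per-bit window (now):    {1e12 / optical_bps_now:.0f} ps")
print(f"per-bit window (target): {1e12 / optical_bps_goal:.0f} ps")
```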

Elliptic-curve cryptography (ECC) can contribute significantly to the
performance of embedded systems. ECC, which is standards-based, comes with
all the benefits of public-key cryptography, employs smaller key lengths,
and offers more efficient implementation for both public and private
operations. Because private-key cryptographic schemes assume knowledge of
a shared secret key, systems that use them are hard to initialize or
recover when the keys are lost or compromised. Public-key cryptography
only sets up shared keys on an as-needed basis, which makes public-key
systems more secure but less efficient than private-key systems. As a
result, private-key and public-key schemes are often employed together to
establish the private keys for encryption or to sign and confirm signatures
on messages. The much smaller sizes of ECC keys mean security measures
such as smaller signatures and certificates are more efficiently
implemented. Additional methods can further increase the efficiency of
ECC operations, with notable advantages for embedded systems. On systems
that are flexible enough to add hardware,
substantial gains in speed and power usage can be extracted from the
addition of a hardware assist to carry out finite-field multiplications,
which are the foundation of ECC.
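
To make the finite-field point concrete, here is a minimal Python sketch
(requiring Python 3.8+ for modular inverses via pow) of the double-and-add
scalar multiplication at the heart of ECC, on a toy classroom curve over
GF(17) rather than a production-sized field; real implementations add
refinements, such as projective coordinates and constant-time code paths,
that are omitted here. The inner modular multiplications and inversions are
exactly the finite-field arithmetic a hardware assist would accelerate:

```python
# Toy curve y^2 = x^3 + 2x + 2 over GF(17); its points form a group of order 19.
P_MOD = 17  # field prime
A = 2       # curve coefficient a

def point_add(p, q):
    """Add two curve points; None represents the point at infinity."""
    if p is None:
        return q
    if q is None:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                                     # p + (-p) = infinity
    if p == q:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD)  # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD)         # chord slope
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, point):
    """Double-and-add: compute k * point in O(log k) group operations."""
    result = None
    while k:
        if k & 1:
            result = point_add(result, point)
        point = point_add(point, point)  # double for the next bit of k
        k >>= 1
    return result

G = (5, 1)  # a generator of the group
for k in range(1, 6):
    print(f"{k} * G = {scalar_mult(k, G)}")
```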