A team of European security researchers has shown that radio frequency
identification (RFID) tags contain a vulnerability that a hacker could
exploit to spread a software virus by infecting even the small amount of
memory on the chip. The researchers, associated with the computer science
department at Vrije Universiteit in Amsterdam, warn that in addition to the
host of privacy concerns raised by the widespread use of RFID tags, the
newly discovered vulnerability could enable terrorists or smugglers to fool
RFID-based luggage-scanning systems at airports. The researchers tested
software intended to replicate the commercial software in RFID tags, and
noted that while they did not have a specific flaw to report, they believe
that commercial RFID software contains the same potential vulnerabilities
that can be found in the rest of the computer industry. The group's
leader, American computer scientist Andrew Tanenbaum, warned specifically
of the dangers of buffer overflow, a common programming error throughout
the software industry where developers fail to verify all of their input
data. The low cost of RFID tags, the critical feature that enables their
widespread deployment in tracking cargo, merchandise, and even livestock
and pets, is also a security concern, according to SRI International's
Peter Neumann, co-author of a forthcoming article in the May issue of
Communications of the ACM. "It shouldn't surprise you that a system that
is designed to be manufactured as cheaply as possible is designed with no
security constraints whatsoever," Neumann said, citing the potential to
counterfeit or deactivate tags, insufficient user identification, and the
poor encryption of the U.S. passport-tracking system under development,
though he had not previously considered the possibility of viruses or
malware.
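
Tanenbaum's buffer-overflow warning is easy to make concrete. The C sketch
below is purely illustrative (it is not the researchers' code or any real
RFID middleware): the first parser copies tag data into a fixed-size buffer
without checking its length, while the second verifies the input first, as
Tanenbaum urges.

    #include <stdio.h>
    #include <string.h>

    /* Vulnerable: copies attacker-controlled tag data into a fixed-size
       buffer without verifying its length, the classic overflow
       Tanenbaum describes. */
    void parse_tag_unsafe(const char *tag_data) {
        char buf[16];
        strcpy(buf, tag_data);   /* overflows if tag_data exceeds 15 chars */
        printf("parsed: %s\n", buf);
    }

    /* Safer: verify the input length before copying. */
    void parse_tag_safe(const char *tag_data) {
        char buf[16];
        if (strlen(tag_data) >= sizeof(buf)) {
            fprintf(stderr, "tag data too long, rejecting\n");
            return;
        }
        strcpy(buf, tag_data);   /* length already verified */
        printf("parsed: %s\n", buf);
    }

    int main(void) {
        parse_tag_safe("ID-12345");                           /* accepted */
        parse_tag_safe("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA");   /* rejected */
        return 0;
    }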

A recent ACM study has found that the rise of offshoring is a sign that
the technology industry is getting stronger, rather than evidence of an
ailing economy marked by the wholesale exporting of U.S. jobs, as has been
widely reported in the media. Roli Varma, a professor at the University
of New Mexico's School of Public Administration, argues that the media has
only become concerned about offshoring now that it affects high-tech and
professional jobs, while manufacturing work has been exported for years.
"Nobody was paying attention when blue-collar jobs were being outsourced,
but when it began to happen with white-collar jobs, suddenly people started
to notice," said Varma, who served as one of 30 researchers on the task
force for ACM's study. The ACM report, "Globalization and Offshoring of
Software," argues that the migration of technology jobs to developing
countries such as India signifies the computing industry's growth
worldwide. The study also found consistent employment growth in the
U.S. tech sector in recent years, with the number of technology jobs
increasing 17 percent from 1999 to 2004, according to the Bureau of Labor
Statistics.
However, the report warns that the widely held perception of a declining
industry could choke off the industry's future prospects by discouraging
students from studying computer science in school. Varma
notes that while some jobs are being exported, IT remains a strong industry
due to its innovative character and the universal demand that it enjoys
across the business world. "Because of that, the outlook for IT is fairly
nice. So what the study is saying is that we need to invest in education,"
Varma concludes.
To view the complete report, "Globalization and Offshoring of Software: A
Report of the ACM Job Migration Task Force," please visit
http://www.acm.org/globalizationreport

U.S. District Judge James Ware yesterday ordered Google to turn over
thousands of Web search records to the Justice Department, marking a
turning point in a case where Google refused to comply with a federal
subpoena for such information on the grounds that compliance would expose
its trade secrets and jeopardize its protection of users' privacy. Ware
felt Google faced less of a burden since the government has narrowed the
scope of the original subpoena from a random sampling of 1 million Web
sites and a week's worth of search queries to only 50,000 sites and 5,000
queries, and is willing to reimburse Google engineers for the work such a
request entails. Google general counsel Nicole Wong stated that Ware's
comments "reflected our concerns about user privacy and the scope of the
government's subpoena request. At a minimum, we've come a long way from
the initial subpoena request." The Justice Department wants the
information it requested from Google and other online search services to
build a case that the Child Online Protection Act is constitutional, and
prove that filtering software cannot effectively limit minors' access to
Internet pornography. The government insisted that it is not looking for
personally identifiable data about Internet users, but privacy proponents
are concerned that the government might go too far in tracking online
activities. "It's really about the outsourcing of surveillance to these
private companies, and the question is: How legitimate is that?" notes
Seton Hall University School of Law professor Frank Pasquale. Still, a new
Ponemon Institute poll finds that Americans are more worried about
government surveillance of their phone conversations than email
surveillance, or video surveillance in public restrooms or department-store
dressing rooms.

Hints by officials at AT&T and Verizon Communications about a
tiered-broadband model that would charge content providers for use of their
networks have the telecom industry in an uproar and federal lawmakers
debating what role, if any, the government should play in ensuring Internet
neutrality. Verizon and AT&T, along with Cisco Systems, which supplies the
companies with networking equipment, argue that as the demand for
bandwidth-eating video and other data-intensive content increases, the
average user who pays more money for broadband should be given some sort of
guarantee that he or she will be able to access the content in real time,
which may require some sort of prioritization. But companies such as
Yahoo!, Google, and PacWest Telecom argue that such a system will give some
content providers preference over others and will keep smaller competitors
who are unable to pay the fees out of the market. "They shouldn't be able
to give preference to their own content over someone else's content," said
PacWest's John Sumpter. "The solution is a form of Net neutrality that
would not allow them to discriminate against other companies'
applications." But the telephone companies say they have no intention to
discriminate against content providers since customers would likely not
stand for it. "We have no intention of blocking or degrading other
services on our network," said Verizon's David Young. "We are giving
customers what they want, which is fast pipes at a low cost. Anyone who
tries to take that away from consumers will be punished by the market."
U.S. Rep. Joe Barton (R-Texas) has introduced a measure that would prevent
network operators from blocking or interfering with access to applications,
while another bill from Sen. Ron Wyden (D-Ore.) calls for "equal treatment"
of all online content. But this week, Senate Commerce Committee Chair Sen.
Ted Stevens (R-Alaska) said a proposal to revamp U.S. telecommunications
laws would not necessarily entail Net neutrality.

Intel's research division is developing more than 80 projects at its sites
around the world, many of which center on multicore and energy-efficient
technologies. As multicore technology enters a scaling phase, Intel is
now exploring how to place hundreds of cores onto a single
chip. "There is a lot of architecture work to do to release the potential,
and we will not bring these products to market until we have good solutions
to the programming problem," said Intel CTO Justin Rattner at a company
technology briefing last week. Intel is also researching high-bandwidth
memory, configurable caches, core I/O, and scalable fabrics, as well as
platform-level technologies such as 3D stacked memory, photonics, and
virtualization. Software-level projects include transactional memory,
workload analysis, compilers, parallel runtimes, and auto-threading. Intel
maintains that multicore chips will never realize their full potential
until the industry develops a simple method for writing parallel programs.
Intel
divides its research arm into two groups, with one focusing on short-term
projects bound for commercialization, while the other develops more
future-minded "off-roadmap" technologies. Rattner notes that exploratory
research became a company priority in the mid-1990s, after Intel realized
that it had been neglecting disruptive technologies. To support this
initiative, Intel partners with numerous universities in its Open
Collaborative Agreement, under which researchers at both Intel and the
universities can publish their research. Among the off-roadmap
technologies that Intel is developing are steerable antennas, virtual MIMO
(multiple input multiple output) antennas, and the WISP (wireless
identification and sensing platform) project, which is exploring
intelligent sensor networks with chips that draw power from radio waves.
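
The "programming problem" Rattner cites is, at bottom, the difficulty of
writing correct parallel code. The generic pthreads sketch below (an
illustration, not Intel code) shows why: a naive parallel counter silently
loses updates to a data race, while serializing the increment with a mutex
gives the right answer.

    #include <pthread.h>
    #include <stdio.h>

    #define THREADS 4
    #define ITERS   1000000

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* Racy: counter++ is a read-modify-write, so concurrent increments
       can be lost and the final total usually falls short. */
    static void *inc_racy(void *arg) {
        (void)arg;
        for (int i = 0; i < ITERS; i++)
            counter++;
        return NULL;
    }

    /* Correct: the mutex serializes each increment. */
    static void *inc_locked(void *arg) {
        (void)arg;
        for (int i = 0; i < ITERS; i++) {
            pthread_mutex_lock(&lock);
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    static long run(void *(*fn)(void *)) {
        pthread_t t[THREADS];
        counter = 0;
        for (int i = 0; i < THREADS; i++)
            pthread_create(&t[i], NULL, fn, NULL);
        for (int i = 0; i < THREADS; i++)
            pthread_join(t[i], NULL);
        return counter;
    }

    int main(void) {   /* build with: cc -pthread race.c */
        printf("racy total:   %ld of %d\n", run(inc_racy), THREADS * ITERS);
        printf("locked total: %ld of %d\n", run(inc_locked), THREADS * ITERS);
        return 0;
    }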

In a presentation at the SD West 2006 Conference, Construx Software
Builders' Steve McConnell argued that agile software development has not
yet lived up to its promise, having been focused more on processes and
tools than on people and interactions. "It seems to me that the promise of
agile development has fallen short at least so far," said McConnell. In
his presentation, McConnell offered his lists of best and worst ideas.
McConnell cited as one of his "worst" ideas the belief that developers can
anticipate every possible requirement before building an
architecture. Among McConnell's list
of best ideas are the imperative of incremental software development, the
principle that fixing defects early decreases costs, and the recognition
that software estimation abilities can be improved over time. McConnell
also lauded the notion that full reuse is
the most powerful form of reuse, and that intellectual flow guides software
projects. Also making McConnell's worst list are the ideas that the only
software models are fully iterative or completely non-iterative, that
defect-cost-increase dynamics do not affect agile development projects, and
that there is such a thing as a one-size-fits-all development approach.

Microsoft researchers have developed a technology that gives computers the
ability to formulate a rough idea of a user's cognitive state from the
impulses collected by brain sensors. The method can determine, for
instance, if a person is
relaxed, processing numbers, or in a state of imagination at a given
moment. In a practical sense, the technology could be used to choose the
appropriate medium for delivering an email alert, opting for an audible
notification if it senses that the screen is cluttered with applications.
Still in its early stages, Microsoft's brain computer interface project is
not designed for a specific product, but rather to "allow the user to
increase the number of things they can effectively do," said Desney Tan,
who is leading the project. The researchers tested their application on
video gamers playing "Halo," and found that it could determine with 95
percent accuracy whether the subject was watching the game, playing
casually, or engaged in a full-scale battle. Tan and Microsoft's Ed
Cutrell, a cognitive neuroscientist assigned to the project, presented the
technology at the recent Microsoft TechFest. A prototype has the sensors
contained in a white headband, though Tan says that they could be embedded
in headphones, headsets, or on the back of a chair. Microsoft's research
differs from many brain-computer interface projects in that it does not
seek to control the computer directly through brain waves, but rather to
create an economical technology that could eventually see widespread use in
mainstream settings instead of controlled lab environments. Tan notes
that the system could also be used to analyze computer systems to determine
which demand greater levels of thought from their users.
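
To make the decision-making concrete, here is a hedged C sketch of the kind
of policy the article describes; the three states mirror the examples given
(relaxed, processing numbers, imagining), but the rules and names are
invented for illustration and are not Microsoft's design.

    #include <stdio.h>

    /* Cognitive states named in the article; the policy below is a
       hypothetical illustration, not Microsoft's classifier. */
    typedef enum { RELAXED, PROCESSING_NUMBERS, IMAGINING } brain_state;
    typedef enum { POPUP, AUDIBLE, DEFERRED } alert_medium;

    alert_medium choose_alert(brain_state s, int screen_cluttered) {
        if (screen_cluttered)
            return AUDIBLE;              /* the article's example case */
        switch (s) {
        case RELAXED:            return POPUP;     /* safe to interrupt */
        case PROCESSING_NUMBERS: return AUDIBLE;   /* eyes are busy */
        case IMAGINING:          return DEFERRED;  /* don't break the flow */
        }
        return DEFERRED;
    }

    int main(void) {
        const char *names[] = { "popup", "audible", "deferred" };
        printf("relaxed, clear screen -> %s\n",
               names[choose_alert(RELAXED, 0)]);
        printf("numbers, cluttered    -> %s\n",
               names[choose_alert(PROCESSING_NUMBERS, 1)]);
        printf("imagining, clear      -> %s\n",
               names[choose_alert(IMAGINING, 0)]);
        return 0;
    }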

For six years, NASA's Adrian Hooke and Google's Vint Cerf have been at
work on the interplanetary Internet project to form a standard to
facilitate communication in environments where it is impossible to carry on
an uninterrupted dialogue. Noting that data relay from Mars to a NASA
scientist can take up to 40 minutes, Hooke believes that communication
patterned after the Internet model can speed the transmission. Hooke and
Cerf are working to apply the technique of delay-tolerant networking to
communications in remote areas, such as outer space and deep beneath the
ocean surface. The researchers have developed a delay-tolerant networking
framework centered around a bundling protocol to store large volumes of
data within a single unit, as opposed to the Internet's packet-switching
technique that breaks information into smaller pieces for transmission.
While the technology is in its early stages, Hooke said that a recent
communication relayed from the Mars rover Spirit to the European Space
Agency's Mars Express, which then sent the message on to Earth, offers a
glimpse into the interplanetary Internet. The Deep Impact comet probe
used another interplanetary Internet application, the CCSDS File Delivery
Protocol (CFDP) standard from the Consultative Committee for Space Data
Systems. Instruments using CFDP can record an observation to a file and
queue it for transmission regardless of whether transmission is physically
possible at that moment.
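
The store-and-forward behavior behind the bundling approach can be sketched
in a few lines. The C example below is a toy model of delay-tolerant
networking in general (not NASA's or the CCSDS implementation): a node
records whole-observation bundles whether or not a link exists, then
forwards the backlog when a contact opens.

    #include <stdio.h>
    #include <string.h>

    #define MAX_BUNDLES 8
    #define MAX_DATA    64

    /* A bundle carries a whole observation, unlike a small IP packet. */
    struct bundle { char data[MAX_DATA]; };

    static struct bundle queue[MAX_BUNDLES];
    static int queued = 0;

    /* Record the observation regardless of link availability. */
    void record_observation(const char *obs) {
        if (queued < MAX_BUNDLES) {
            strncpy(queue[queued].data, obs, MAX_DATA - 1);
            queue[queued].data[MAX_DATA - 1] = '\0';
            queued++;
        }
    }

    /* When a contact opens (e.g., an orbiter pass), forward the backlog. */
    void contact_opened(void) {
        for (int i = 0; i < queued; i++)
            printf("forwarding bundle: %s\n", queue[i].data);
        queued = 0;
    }

    int main(void) {
        record_observation("soil spectrum, site A");  /* no link: stored */
        record_observation("panorama image, part 1"); /* still stored */
        contact_opened();                             /* pass: both sent */
        return 0;
    }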

As Russia's 25 percent to 30 percent annual increase in demand for skilled
IT workers outpaces the state education system's ability to produce
qualified graduates, software companies are partnering with universities to
fill the void. RUSSOFT President Valentin Makarov notes that the only two
worthwhile forms of university education are programs that partner closely
with software developers and programs that provide continuing education or
retraining. "In
both cases qualified programmers are trained by teachers who have
experience in commercial programming and scientific research," said
Makarov. Instruction in software engineering suffers from a lack of
unified standards, and
typically fails to prepare students adequately for risk assessment, project
management, and other practical matters, according to Andrei Terekhov, the
head of system programming at the department of mathematics and mechanics
at St. Petersburg State University, who adds that young specialists
typically require six months of further training after being hired. In
developing the collaborative education program, software companies and
universities are teaching students about new technologies and management
practices, bringing an industry-oriented focus to an environment that
Terekhov describes as overly academic. The program is also training
students to present projects and develop budgets. While the partnership
between industry and academia has a focus on practical skills, it is also
important to develop general problem-solving skills. Yet the demand for
specialists is growing, says StarSoft Labs Director Nikolai Puntikov. "We
have more and more projects that while far from 'rocket science,' still
demand important professional skills," Puntikov said. "It is time to
educate specialists in narrow fields within the framework of secondary
specialized education."Click Here to View Full Articleto the top

The IST-funded CONTEXT program is creating the technological
infrastructure that could give rise to a host of context-aware,
self-updating programs that would be commercially viable, capable of
reconfiguring emergency networks during a spike in calling, locating a
restaurant, or providing secure access to a remote location through a
mobile device, among other features. The CONTEXT project developed a
flexible platform to create, deliver, and manage on-the-fly context-aware
services. Another notable accomplishment of the project was the
integration of existing routers into the system through incremental network
equipment deployment. With its generic infrastructure, the CONTEXT
platform can build new service definitions from existing ones, and is broad
enough to apply to any service that relies on a network, notes project
manager Arto Juhola.
The CONTEXT system uses Active Context Middleware to manage and distribute
its services, drawing power from programmable networks, and individual
operators can adapt the system for their own purposes. "This is very
important because network operators have traditionally been nervous about
applying new systems out of fear that they will affect network
reliability," said Juhola. Contextual information could be used to improve
business processes or automate routine tasks, drawing on data collected
from across the network. In the Supermother experiment, the CONTEXT
project enabled a mother to send an important report to her office while at
the hospital with her child through the Context-aware Wireless Data
Service, which allowed her to switch from a low-bandwidth WLAN connection
to a more secure, high-bandwidth link.

It was the fear that proprietary software could one day cause a major
disruption in people's lives that motivated Richard Stallman to establish
the Free Software Foundation (FSF) in 1985. Stallman is generally credited
as the ideological father of open source. Stallman's deeply held belief
that source code should be universally available for viewing, modification,
and distribution to protect the rights of all users, a concept known as
copyleft, formed the philosophical basis for the GNU General Public
License. Free software can exist without copyleft, under licenses such as
MIT or BSD, but Stallman remains committed to his crusading ideal
of safeguarding the liberty of computer users. The notion of free software
has become politically charged, and many popular software movements have
taken exception to the methods of the FSF and the stipulations of the GPL.
Foremost among these is Linux creator Linus Torvalds, who embodies the
pragmatic spirit of the open-source community through his willingness to
make concessions to proprietary applications, such as BitKeeper, insofar as
they help the kernel. In addition to the thorny issue of software patents,
DRM poses another challenge to the open-source movement. Apple has sold
more than 1 billion songs through its iTunes store, but each is wrapped in
its copy-restriction technology. DRM technology is inherently incompatible
with open source, meaning that Linux systems generally cannot play DVDs or
any other item in a growing body of protected online content.

Many managers focus on adopting the latest set of best practices from a
successful competitor, but they do not place as much emphasis on
eliminating the misguided philosophies that are hurting their
organizations, according to Phillip Laplante and Colin Neill, associate
professors of software engineering at Penn State Great Valley graduate
school. Books about bad practices that have created stifling work
environments are not as likely to become best-sellers as books about best
practices, according to
Laplante and Neill, authors of the recently published book, "Antipatterns:
Identification, Refactoring and Management." Anti-patterns are ways of
working, communicating, or managing that produce more problems than
solutions. In their research, they have identified 48 different
anti-patterns, which can be divided into two categories: management and
environmental. For example, a manager who imposes his own values and win
conditions on others is exhibiting the "All You Have Is a Hammer"
anti-pattern. Other anti-patterns include subtle changes that eventually
lead to rebellions, resignations, or deaths, which is the "Boiling Frog
Syndrome," and a culture that allows workers to step on each other to get
to the top, which is "Mediocracy." Organizations must acknowledge that
they have anti-patterns and move to counteract them. Employees can learn
strategies for coping with their work environment, do more by serving as
agents of change, or even leave if the environment becomes unbearable.

Researchers at Vanderbilt University will spend the next three years
designing better control systems for space vehicles, airplanes, and
unmanned air vehicles for the U.S. military. Research teams from the
University of California at Berkeley, Carnegie Mellon University in
Pittsburgh, and Stanford University will aid Janos Sztipanovits and Gabor
Karsai in their efforts to develop software that shows that the control
systems are reliable. Sztipanovits, the E. Bronson Ingram Distinguished
Professor of electrical engineering and computer science and the director
of the Institute for Software Integrated Systems, and Karsai, an associate
professor of electrical engineering, will head the project, "Frameworks and
Tools for High-Confidence Design of Adaptive, Distributed, Embedded Control
Systems." Sztipanovits and Karsai have received a $3 million grant from
the U.S. Department of Defense, and satisfactory progress on the project
could extend it another two years and push the total amount of funding to
$5 million. The grant is part of $151 million that will be distributed
over the next five years for research in basic areas of science and
engineering.

Developers of the open-source GnuPG encryption software say the program
has a security flaw that may enable an attacker to sneak malicious code
into a signed email message. GnuPG, also known as Gnu Privacy Guard, is an
open-source version of the PGP encryption program used for encrypting data
and creating digital signatures. The GnuPG team discovered the flaw when
they were testing the patch for a previous vulnerability reported last
month. "Someone who's able to intercept the message as it's transmitted
could inject some data, and then the person who verifies the signature
would be told it's a valid, unaltered message," says Secunia CTO Thomas
Kristensen. "That's one of the main purposes of the program, so it's quite
significant." Secunia ranked the flaw as "moderately critical." It
affects all versions of GnuPG prior to 1.4.2.2, and users are being warned
to upgrade their systems immediately to that release.

Researchers at Microsoft Research and the University of Michigan have
partnered to develop prototypes for virtual machine-based rootkits that
significantly push the envelope for concealing malware and that can
maintain control of a target operating system. The proof-of-concept
rootkit, called SubVirt, exploits known security flaws and drops a virtual
machine monitor (VMM) below a Windows or Linux installation. The rootkit
is virtually impossible to detect once the target operating system has been
moved into the virtual machine, because it cannot be seen by security
software running in that system. The
prototype will be presented at the IEEE Symposium on Security and Privacy
later this year. It was created by Microsoft's Cybersecurity and Systems
Management Research Group, the Redmond, Wash., unit responsible for the
Strider GhostBuster anti-rootkit scanner and the Strider HoneyMonkey
exploit detection patrol. "We used our proof-of-concept [rootkits] to
subvert Windows XP and Linux target systems and implemented four example
malicious services," the researchers stated in a paper describing the
attack scenario. "[We] assume the perspective of the attacker, who is
trying to run malicious software and avoid detection. By assuming this
perspective, we hope to help defenders understand and defend against the
threat posed by a new class of rootkits," says the paper. The SubVirt
project implemented VM-based rootkits on two platforms and was able to
run malicious services without being noticed, according to the group.

Despite the ready availability of inexpensive, feature-rich cell phones, a
movement is growing among developers to create the first open-source cell
phone in an effort to spur innovation in both the software and the design
of mobile devices. Hoping ultimately to produce a stable of free software
that leads to a bevy of new cell-phone applications, the movement is
capitalizing on the growing demand for wireless products that power
machine-to-machine communication. Converting such a basic wireless
module into a functioning cell phone requires a microprocessor, usually
running Linux, plus a battery, keypad, screen, speaker, and microphone.
One cell-phone hobbyist notes that even today building a phone from
scratch is not easy, particularly the circuit and software design, and
homemade devices are clunky with short battery lives. In spite of these
limitations, developers are eager to incorporate new features into their
devices, such as RFID tags and GPS units. Developing devices from home is
the only viable option for many individuals looking to customize their
phones, as most manufacturers exact steep licensing fees and royalties from
anyone looking to tweak the hardware or design their own applications. The
designers hope that a community will form around cell phone software to
create a host of new applications and features, just as developers have
flocked to support the open-source Web browser Firefox. Telecommunications
engineer Surj Patel has developed an application that links to Amazon's Web
site to provide users with ratings and pricing information through a
computerized voice. Others look to mobile phones as an
affordable vehicle to bring computing to the developing world, and insist
that a broad community of developers is the only practical way to achieve
that end.

Since the inception of the Internet in its most primitive form in 1969,
computer scientists and engineers have tweaked and modified the network in
a piecemeal fashion, enhancing its operating capacity to its current state,
where it handles the traffic of nearly 1 billion users. Fearful that the
minor alterations and upgrades are no longer capable of dealing with rising
security threats and accommodating new devices, such as mobile phones and
wireless sensors, researchers are looking into building a new network from
the ground up. The NSF has launched the Global Environment for
Networking Innovations (GENI) and the Future Internet Design (FIND)
projects. GENI is developing novel protocols and applications, while FIND
is exploring the best equipment to support the network in the future. "To
conceive a vision for what a global communications network will look like
in 10 or 15 years," said MIT's David Clark, one of the Internet's original
designers, "you have to free yourself from what the world looks like now."
While Clark warns that the Internet's sheer ubiquity could cloud engineers'
visions of the future, the researchers are certain that the number of
devices on the network will grow exponentially. With virtually any
household object potentially containing a sensor or a chip, the number of
devices will proliferate to the hundreds of billions, and with it,
machine-to-machine transmissions could far outnumber the volume of
human-generated Internet traffic. One proposal to emerge is
trust-modulated transparency, where the traffic-routing structure estimates
how trustworthy packets of data are as they pass by, setting aside suspect
packets for further screening. Internet indirection infrastructure would
add an addressing system over existing IP numbers to support new devices.
For his part, Clark calls for an intelligent diagnostic system to identify
network failures as they occur.
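
A hedged sketch of how trust-modulated transparency might look in code
(purely illustrative; the proposal is still research, and the scoring here
is invented): a router assigns each packet a trust estimate and diverts
low-scoring packets to a screening queue rather than forwarding them
directly.

    #include <stdio.h>

    struct packet {
        const char *src;
        double trust;   /* 0.0 = fully suspect, 1.0 = fully trusted */
    };

    #define TRUST_THRESHOLD 0.5

    /* Trusted traffic passes transparently; suspect traffic is set
       aside for deeper inspection, as the proposal suggests. */
    void route(struct packet p) {
        if (p.trust >= TRUST_THRESHOLD)
            printf("forward %-12s (trust %.2f)\n", p.src, p.trust);
        else
            printf("screen  %-12s (trust %.2f)\n", p.src, p.trust);
    }

    int main(void) {
        route((struct packet){ "known-peer",   0.9 });  /* forwarded */
        route((struct packet){ "unknown-host", 0.2 });  /* screened  */
        return 0;
    }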

New wireless technologies could escape transmission congestion through
their ability to sense and instantly switch to nearby vacant frequencies.
Cognitive radios will let wireless signals jump automatically from a band
of the spectrum blocked by interference to an available open frequency,
allowing more reliable transmissions that could translate into lower
communications costs. Emergency communication, where
transmission reliability is critical, stands to benefit especially from
cognitive radio. Future wireless devices will be able to reconfigure their
functions to fulfill the requirements of users or communications networks
on an as-needed basis through the use of adaptive software. These
modifications will be based on the ability to detect and recall such
diverse elements as the radio-frequency spectrum, user behavior, or network
state of distinct transmission environments at any given time and place,
significantly increasing the dependability and convenience of wireless
communications. A cognitive radio will be capable of autonomously
assessing how its RF environment fluctuates by time and place according to
the power emitted by itself and neighboring transmitters; combined with
adaptive software, this data will allow a cognitive radio to find and use
surrounding networks optimally while avoiding interference from other
radios. Cognitive radios' ability to search for
available spectrum will be optimized through information sharing enabled
by Semantic Web technology, and it is probable that, despite resistance
from the cell phone and telecom industries, progress toward cognitive radio
will continue, since the relative disorder of unregulated spectrum and the
rigidity of licensed spectrum are both avoidable. Cognitive radio could
enhance the flexibility of wireless communications to the point where
consumers may ultimately be able to make calls through cheaper wireless
network paths, a potentially revolutionary step for the communications
industry.
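
The sense-and-hop behavior at the heart of cognitive radio reduces to a
simple loop. The C sketch below is a toy model with invented power
readings, not a real radio stack: it measures interference on each
candidate channel and hops to the quietest one.

    #include <stdio.h>

    #define CHANNELS 5

    /* Toy spectrum sensing: choose the channel with the least measured
       interference power; a real radio would read RF energy levels from
       its front end instead of a fixed array. */
    int pick_quietest(const double power[CHANNELS]) {
        int best = 0;
        for (int i = 1; i < CHANNELS; i++)
            if (power[i] < power[best])
                best = i;
        return best;
    }

    int main(void) {
        /* Hypothetical sensed power per channel, in dBm. */
        double sensed[CHANNELS] = { -60.0, -92.5, -71.3, -95.1, -80.0 };
        int ch = pick_quietest(sensed);
        printf("hopping to channel %d (%.1f dBm)\n", ch, sensed[ch]);
        return 0;
    }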