Diebold Election Systems' recent allegations that college students and other grass-roots proponents are violating copyright statutes by posting on the Internet company documents related to the security of its electronic voting machines--and advocates' counter-argument that they are merely exercising their free speech rights--demonstrate that the scope of the copyright conflict extends far beyond intellectual property holders' aims to preserve their licensing structure, according to legal scholars. "We're so focused on the micro-view--whether EMI is going to make a buck next year--but there is so much more at stake in our battle to control the flows of information," explains New York University's Siva Vaidhyanathan. Critics such as Swarthmore student Nelson Pavlosky contend that Diebold's cease-and-desist letters to people who posted the e-voting machine memos online clearly illustrate how corporations are abusing copyright law to muffle freedom of speech. Electronic Frontier Foundation (EFF) lawyer Wendy Seltzer says that Diebold has invoked the Digital Millennium Copyright Act "because they don't want the facts [about their e-voting machines' security problems] out there." The EFF is offering students targeted by Diebold informal advice and helping them secure legal assistance. Meanwhile, hackers posted insecure software from Sequoia Voting Systems last week, prompting the company to issue a statement that the software is an older version that has since been changed. Such claims have not assuaged the concerns of Rebecca Mercuri of Harvard University, who notes that the fact that Diebold's and Sequoia's e-voting code was left unprotected raises security questions. Johns Hopkins researcher Aviel D. Rubin believes voting machine companies should stop selling insecure products and halt their campaign against students, who, he argues, are merely trying to uphold democratic principles.

Web technology companies and the World Wide Web Consortium (W3C) are rushing to the aid of Microsoft as the software giant fights a critical patent claim that could have ramifications for a wide variety of Internet-related software. The '906 patent held by Web developer Eolas Technologies describes application code that runs inside the Web browser, and affects products from the likes of Sun Microsystems and Macromedia. Microsoft has already outlined ways in which it could change its market-leading Web browser to avoid the patent, but the changes would require the revision of countless interactive Web sites. The W3C has filed an unusual appeal with the U.S. Patent and Trademark Office asking for a re-examination of the '906 patent in light of HTML+, a technology created by W3C staff member Dave Raggett for embedding data from other applications in the browser window; Eolas founder Mike Doyle dismisses HTML+ as prior art, since he says it does not allow the interactivity his patent does. Doyle says the W3C is getting involved for political reasons, since the group has come out strongly against patented technologies on the Web over the last year, and he claims his fight is to secure the innovative rights of small players against large firms such as Microsoft, which he says commonly steal the technology of small innovators. Microsoft lawyers are requesting a new trial because the district court judge refused to allow important evidence of prior art, especially the Viola browser created by University of California student Perry Pei-Yuan Wei. Microsoft believes the Viola browser provides an obvious basis for the '906 patent. Many individual Web developers have been searching for prior art to refute the '906 patent since an August call-to-arms by Lotus Notes co-creator Ray Ozzie.

Computer scientists argue that costly electronic voting machines being installed in U.S. states--ostensibly to avoid hanging chads and other problems that plagued previous elections--will only make the election process more problematic, because of their susceptibility to vote miscounts and tampering that could go undetected. Such security flaws have "terrifying" implications for democracy, states Stanford University professor David Dill, who points out that e-voting machines' biggest disadvantage is the lack of a voter-verifiable ballot. E-voting machine manufacturers and political advocates counter that security flaws have always been part and parcel of the electoral system, and though computerized voting is not perfect, it remains the best existing option. When the source code of Diebold e-voting machines was sent to Dill by a concerned activist, Dill passed the code on to a research team led by Johns Hopkins University's Avi Rubin, which uncovered and publicly disclosed security problems so blatant that they could be exploited by a computer-savvy adolescent. Rubin says the machines suffer from two overriding problems: They do not furnish a paper ballot and thus make recounts impossible; and their software features only bare-bones encryption, which would allow people with access to a machine to hack the system and fix it in such a way that votes for one candidate, for example, would be registered for another. Diebold disputes Rubin's report, claiming that its machines are subjected to rigorous testing and oversight that would deter tampering. The report caused enough concern that Maryland officials suspended a $55.6 million deal with Diebold to install e-voting systems across the state until an independent research firm could analyze the systems' security. U.S. states are eager to modernize their voting machines in order to comply with the Help America Vote Act of 2002, which has set a 2006 deadline for such efforts; the flexibility of electronic voting machines is very attractive to states, notes private election technology consultant Roy Saltman.

A controversial Internet copyright law based on the EU Copyright Directive went into effect in the United Kingdom on Oct. 31, making Britain the sixth EU member state to ratify the legislation, after Austria, Germany, Denmark, Greece, and Italy. The directive was designed as a measure to stamp out digital piracy, but civil liberty proponents have called for the adoption of new legislation that upholds consumers' "fair use" rights. Masons IT and e-commerce lawyer Struan Robertson has stated online that the U.K. digital copyright law is excessively broad, in that individual file-sharers could be imprisoned under the legislation. Jeremy Philpott of the U.K. Patent Office insisted that the law is not intended to fine or jail individual file-sharers, but rather to target organized crime. He did note, however, that civil penalties could be levied against individual downloaders in the form of injunctions and demands for payment to cover damages. EU member states failed to reach a consensus on a unified set of "fair use" exemptions to the copyright directive, resulting in a hodgepodge of statutes on how consumers can record and play content on their various digital devices. Each EU member country has the authority to decide for itself how to treat new anti-copying technologies; the copyright law adopted by Britain provides protections for such measures.

The economy is picking up along with corporate IT spending, but the number of IT jobs remains roughly 20 percent below its 2000 level, writes former Labor Secretary Robert B. Reich. Ostensibly, the culprit is not only the recent economic sluggishness, but also a growing amount of technology work being farmed out overseas. Congress has reacted by letting the H-1B visa cap fall back to 65,000 per year, while state legislatures are considering bills that restrict offshoring by their governments. U.S. IT workers are even organizing to protect their jobs from overseas competitors, demonstrating outside "strategic outsourcing" conferences. The reasons for the rise in offshore outsourcing are fundamental business pressures to lower costs, the cheapness of overseas labor, and increased telecommunications capacity and capabilities. Dartmouth College associate professor Matthew Slaughter points out that IT work is shifted much more easily than manufacturing, which requires physical movement and dealing with tariffs. Though offshoring is a threat to the U.S. IT market, the problem is not as bad as it seems, contends Reich: There is no finite limit to the IT industry, since its innovations and benefits are limited only by human imagination; the U.S. IT industry can continue to grow and prosper even as lower-tier IT jobs are moved offshore. Reich foresees a time when American technology workers are not just in back-room operations or in research, but active on the business side, understanding business needs and finding IT-enabled solutions to those problems. These workers will also formulate company policies regarding offshore outsourcing, deciding which components are non-critical and which are sensitive and core to the company's mission. Still, Reich warns against complacency and argues for more federal and state support for universities, increased federal investment in research and development, and retraining for high-tech workers.

The Netherlands will be the stage for the first European Symposium on Ambient Intelligence (EUSAI), where some 160 European researchers are expected to discuss three major ambient intelligence areas--ubiquitous computing, context awareness, and intelligence and natural interaction. The two-day conference begins Nov. 3 with presentations that will focus on the latest theories, designs, and applications in the field of ambient intelligence, which aims to non-intrusively saturate the environment with enabling technologies. Philips Research scientist and EUSAI organizing committee member Boris de Ruyter describes ambient intelligence as "a vision for the digital environment of the future" that will emerge gradually, and adds that the research and development needed to bring that vision to life requires extensive support from the European Union as well as a broad spectrum of industry and research organizations. Philips Research is leading the Ambience Project, a component of the Information Technology for European Advancement effort, and EUSAI will highlight prototypes of ambient intelligence applications for home, office, and mobile use derived from the Ambience Project. Some EUSAI attendees believe the ultimate goal of ambient intelligence is a "disappearing computer." "We work on people-friendly environments in which the 'computer-as-we-know-it' has no role and is replaced by information technology diffused into everyday objects and settings, leading to completely new ways of supporting and enhancing people's lives," comments Norbert Streitz of Germany's Fraunhofer Institute. Ambient intelligence will be the area of concentration for a four-year, approximately 3.6-billion-euro project authorized by the EU.

University of Arkansas computer engineer Kazem Sohraby demonstrated at the recent National Fiber Optics Engineers Conference in Orlando, Fla., that mesh networks can boast the same level of reliability as ring-shaped Synchronous Optical Networks (SONET) while being more cost-effective--provided they are designed appropriately. The SONET configuration consists of a double network: In the event of a disruption in one ring, network traffic can be quickly re-directed to the other ring, and vice versa; the tradeoff is that neither network can operate at full capacity. These limitations, along with redundant equipment, make ring networks very costly. In addition, many current networks require upgrading, a process complicated by expanding service areas and rising demand. An optical mesh network features numerous entwined paths with nodes located at various points in the mesh, and the network follows a routing scheme to re-route traffic in the event of an interruption between two nodes; the mesh network's lack of redundancy translates into lower initial costs and less wasted capacity. Sohraby, along with Kamala Murti and Ramesh Nagarajan of Lucent Technologies' Bell Labs, devised a simulation model based on seven real-time optical mesh networks to ascertain whether such networks' restoration times could equal those of a ring network. The results indicated that mesh network performance and restoration are most affected by the distance between nodes, the number of alternate routes needed to bypass the problem area, and the volume of network traffic. Sohraby concluded, "Nodes must be spaced appropriately and be intelligent enough to know there is a problem and what to do so they can effect real-time repairs."
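The re-routing behavior described above can be sketched with a toy model: a breadth-first search that finds the shortest surviving path between two nodes once a link fails. The six-node mesh and the reroute helper below are illustrative inventions, not part of the Bell Labs simulation.

```python
from collections import deque

# A toy mesh: adjacency sets for six nodes with multiple entwined paths.
mesh = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "E"},
    "D": {"B", "F"},
    "E": {"C", "F"},
    "F": {"D", "E"},
}

def reroute(graph, src, dst, failed_link):
    """Breadth-first search for a path from src to dst avoiding one failed link."""
    bad = frozenset(failed_link)
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in graph[node]:
            if frozenset((node, nxt)) == bad or nxt in seen:
                continue
            seen.add(nxt)
            queue.append(path + [nxt])
    return None  # no surviving route

# If the B-D link fails, traffic from A to F is steered around it:
print(reroute(mesh, "A", "F", ("B", "D")))  # ['A', 'C', 'E', 'F']
```

The point of the sketch matches the simulation's findings: restoration cost depends on how many hops the alternate route needs and how far apart the nodes are.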

By re-capturing the energy used to conduct computer calculations, says University of Florida researcher Michael Frank, computer chips can be made to run much cooler, and hence faster and more tightly packed as well. "Reversible computing" works similarly to the way hybrid cars recover energy from braking and re-use it in the electric motor. In traditional designs, integrated circuits erase completed operations by grounding one end of the charged circuit, and the dissipated energy turns into heat. Frank's designs use MEMS-based adapting resonators, which act as spring-like oscillators, to store computing energy and release it for other computing cycles instead of discarding it. The concept requires re-engineering integrated circuits so they can compute in reverse. Reversible computing would use less energy, and thus produce less heat and allow chips to run faster and be more densely packaged. As an MIT doctoral student, Frank worked on several simple reversible computing prototype chips. Now in charge of the University of Florida's Reversible & Quantum Computing Research Group, Frank says heat is the fundamental limiting factor in today's integrated circuit performance, and will prove to be a major challenge in extending that performance in the decades to come. He is trying to get major chipmakers such as IBM to devote more research and development resources to reversible computing.
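The idea of computing in reverse can be illustrated with the Toffoli gate, a classic reversible-logic primitive; this is a conceptual sketch only, not Frank's MEMS-based circuitry. Because the gate is its own inverse, applying it twice restores the original bits, so no information is destroyed along the way.

```python
def toffoli(a, b, c):
    """Toffoli (controlled-controlled-NOT) gate: flip bit c iff both
    control bits a and b are 1. Inputs and outputs are 0/1 bits."""
    return a, b, c ^ (a & b)

state = (1, 1, 0)
forward = toffoli(*state)
print(forward)            # (1, 1, 1)
print(toffoli(*forward))  # (1, 1, 0) -- running the gate again undoes it
```

An irreversible gate like AND maps two inputs to one output and throws a bit away; a reversible gate keeps every bit, which is why, in principle, its energy need not be dissipated as heat.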

Rich Site Summary (RSS) technology is being lifted out of obscurity by online aficionados and Web operators to keep Web surfers apprised of new content added to specific Web sites. At the core of RSS is freely available software that enables Web site operators to automatically build a "news feed" on a Web page that condenses added material; a tiny "XML" button containing the news feed's Web address is then inserted onto the site by Web publishers. Surfers, meanwhile, must avail themselves of "feed reader" software to which the news feed's address is added when visitors hit the XML button. Whenever a publisher adds new material to a site, the feed reader will automatically access the information and display it on the user's reader software; furthermore, RSS users can display all the data from their favorite sites on a single screen because the reader software can simultaneously pick up multiple news feeds from numerous sites. Many Web sites build visitor loyalty by sending regular email announcements, but the spam glut has reduced users' preference for email. Online content consultant Amy Gahran reports that RSS subscribers are "getting stuff directly from the publisher, and the only thing that comes into the feed reader is stuff [they] asked for." Most online RSS feeds serve the technology enthusiast community, but more mainstream Web publishers as well as Web portals have begun to offer RSS feeds. Gahran notes that "sports teams can use [RSS] to publish statistics, music groups can use it to publish tour dates, [and] government agencies can use it to publish regulatory updates." RSS' popularity could increase even more once a universal RSS standard is developed.
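A minimal sketch of what a feed reader does with a news feed: parse the XML and pull out each item's headline and link. The feed below is a made-up example in the common RSS 2.0 shape, and a real reader would fetch it over HTTP from the address behind the "XML" button.

```python
import xml.etree.ElementTree as ET

# A hypothetical news feed of the kind a site's "XML" button points to.
FEED = """<rss version="2.0">
  <channel>
    <title>Example Tech News</title>
    <link>http://example.com/</link>
    <item>
      <title>New article posted</title>
      <link>http://example.com/story1</link>
    </item>
    <item>
      <title>Second update</title>
      <link>http://example.com/story2</link>
    </item>
  </channel>
</rss>"""

def headlines(feed_xml):
    """Return (title, link) pairs for every item in the feed."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in headlines(FEED):
    print(title, "->", link)
```

Polling several such feeds and merging the results onto one screen is all the "single screen for all my favorite sites" feature amounts to.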

The CAN-SPAM bill passed by the U.S. Senate on Oct. 23 could be more effective at discouraging spammers if certain provisions are added, writes Jane Black. In its current form, CAN-SPAM only permits federal agencies, state attorneys general, and ISPs to sue spammers, while an amendment introduced by Sen. John McCain (R-Ariz.) only authorizes the FTC to enforce liability for third parties that "knowingly" allow their products to be promoted in spam sent by someone else. The author contends that giving individual consumers the right to sue would seriously discourage bulk emailers, especially if a $100 or $200 penalty is levied against them for every piece of spam they send. CAN-SPAM currently only allows consumers to opt out of receiving spam, while Black notes that most privacy proponents would prefer an opt-in approach that has already become the standard in Japan and Europe. Criminalizing spam is just part of the solution, the author writes--Congress must also grant law enforcement the resources and powers needed to effectively go after spammers. Investing in more thorough antispam enforcement has been discouraged by the widespread contention that most criminal spammers are based overseas; however, the nonprofit Spamhaus estimates that roughly 90 percent of spam originates from 150 criminal bulk emailers, 40 of whom live in the continental United States. "The real issue is to motivate law-enforcement agencies to work the way they do with drugs and terrorism to get rid of this insidious invasion of our privacy," argues British Parliament member Andrew Miller. McCain doubts that legislation alone will be enough to curb spam, but points out that no one should use that fact as an excuse to do nothing. The House is expected to vote on a bill very similar to the CAN-SPAM Act.

The United States, Canada, and Western Europe are in jeopardy of losing their monopoly on developing innovative technologies, according to Cutter Consortium Fellow Ed Yourdon. Yourdon is calling on the federal government to find a way to boost the creation of IT jobs, at a time when technology companies have followed the lead of other manufacturers in moving jobs overseas. Yourdon says the United States will need a situation similar to the one that arose when corporate demand for client/server technology started to overtake the desire for mainframe computers running third-generation programming languages. "Let's assume the economy recovers, some brand-new high-tech 'killer app' will excite the business community in much the same way that client/server technology did a decade ago," says Yourdon. "If that happens, I predict that there will be a lot of 40- to 45-year-old client/server programmers and even some 30- to 35-year-old Java Web programmers who will find themselves unable to make the transition quickly enough to keep their jobs from being taken by the brand-new generation of college graduates." In fact, Yourdon believes Asian high-tech centers are poised to take the lead in IT innovation. Some U.S. venture capital firms are starting to expand their investments from Silicon Valley to include places such as India.

University of Maryland at College Park computer scientist Yiannis Aloimonos and his colleague Cornelia Fermuller are set to introduce new technology that will give robots "omni-directional" vision. The new device, which will be presented at a robotics conference in Las Vegas this week, promises to improve the navigation skills of robots considerably, giving them the ability to sense whether they are moving in a straight line or spinning on the spot. The researchers have named their device the "Argus eye," after the many-eyed giant of Greek mythology. The Argus eye is based on mathematical research the two used in 1998 to prove that seeing in all directions facilitates precise motion sensing. The researchers have incorporated their findings into software that is designed to process images from cameras arranged on the surface of a sphere. Using a frame about the size of a beach ball, the Argus eye makes use of nine off-the-shelf digital cameras, and the software is able to identify the direction of motion in 3D when it is moved and deliver images to a computer for processing. The researchers equate the use of a single camera for an eye with having a robot see the world through a cardboard tube.

Computer Associates, EDS, Opsware, and more than 20 other technology vendors have set up a consortium to support the development of Data Center Markup Language (DCML), an XML-based markup language that establishes communications between a data center's various components, a capability critical to utility computing. DCML essentially itemizes a system's hardware and software and reports this inventory to other devices; the language can additionally notify other devices and their managers of device status. The ultimate product of this sophisticated system management is a more refined IT infrastructure. "In this new age of utility computing, you need information to do things like rapidly repurpose servers and reallocate resources," explains Eric Vishria of Opsware. "That's where DCML comes in." The dramatic upgrades promised by a DCML standard give the language the potential to extend its reach beyond the data center and play a vital role in grid computing, clustering, and other types of inter-corporate computer resource sharing. Certain enterprises believe that hardware vendors are promising utility computing before their engineers are truly ready to deliver it, but DCML's non-proprietary, standards-based approach offers the best chance so far of bringing the engineering vision much closer to reality. The DCML consortium plans to issue a public specification for comment by year's end and send a proposal to a standards organization early next year. Products attuned to pre-standard versions of DCML may hit the market in 2004.
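Since the DCML specification was not yet public, the element and attribute names below are invented for illustration, but they sketch the core idea: one device emits an XML inventory of its hardware, software, and status, and another device's manager parses it back.

```python
import xml.etree.ElementTree as ET

# Build a hypothetical DCML-style inventory for one server.
# All element/attribute names here are made up, not from the real spec.
server = ET.Element("server", id="web-01", status="active")
ET.SubElement(server, "hardware", cpus="2", memoryMB="4096")
ET.SubElement(server, "software", name="Apache", version="2.0")

doc = ET.tostring(server, encoding="unicode")
print(doc)

# Another device's manager could parse the same document and decide,
# for example, whether this server can be repurposed.
parsed = ET.fromstring(doc)
print(parsed.get("status"))                  # active
print(parsed.find("software").get("name"))   # Apache
```

The value of standardizing this exchange is that management tools from different vendors can read the same inventory, which is what makes rapid repurposing across a heterogeneous data center plausible.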

Indian Linux (IndLinux) and several other major groups in India are working to bring Indian languages to the free software world. Nagarjuna of the Tata Institute of Fundamental Research in Mumbai, Jitendra Shah and colleagues in Mumbai, Tamil computing groups, Government of India computing institutions, and branches of the Indian Institute of Technology are all active in Indian language computing. G Karunakar, an IndLinux team member and one of the more outspoken advocates for non-English solutions, believes that Indian language computing could make a splash next year, now that tools for nine major languages are already functional. Karunakar heads Gnome Hindi, the effort to localize the major Gnome desktop environment into Hindi, as IndLinux has focused on localizing the GNU/Linux operating system and its applications for Indian languages. IndLinux has worked on locale development, fonts, and other details for operating computers in local languages. Karunakar says the new GNU/Linux-based Milan solution allows users to work in nine languages, but there are no free fonts for Oriya and Punjabi. "But in other languages, the fonts are becoming a non-issue, since there is at least one free font for every language," he says. Karunakar adds that email solutions are available and that the team is addressing instant messaging in Indian languages.

As Internet Protocol version 6 (IPv6) attracts more and more attention and some continue to publicize worries that IPv4 is rapidly running out of space, Richard Jimmerson of the American Registry for Internet Numbers (ARIN) contends, "There is quite a large block of IPv4 address space remaining." Recent information from the Regional Internet Registries (RIRs) indicates that less than 18 percent of available IPv4 address space was allocated between January 1999 and June 2003, and ARIN, an RIR, finds that 19.59 /8 equivalents were allocated, with 91 /8 equivalents still available, within that time frame. ICANN has reserved a number of the 256 possible /8s, which are measures of IP address space comprising roughly 16.7 million individual addresses that share the first eight bits. Jimmerson emphasizes that ARIN will not try to forecast the lifespan of IPv4 because trends in address space demand can change, though the RIRs' report indicates that the rate of growth for new allocations to ISPs and local registries declined between 2001 and last year but will probably begin expanding again in 2003. Allocations in Europe and Asia will likely account for much of that growth. If the second half of 2003 mirrors the first half, a little more than five /8 equivalents will be allocated this year, compared with fewer than 4.5 in 2002 and roughly five in 2001. At this rate, IPv4 address space should not dry up for more than a decade. However, the growing popularity of mobile IP devices and other trends make predictions unreliable.
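The article's figures are easy to check: a /8 covers every address sharing the same first eight bits, which leaves 24 free bits, or 2^24 addresses. The arithmetic below reuses the quoted numbers.

```python
# Back-of-the-envelope arithmetic from the figures quoted above.
ADDRS_PER_SLASH8 = 2 ** (32 - 8)   # 16,777,216 addresses per /8
TOTAL_SLASH8S = 256                # 256 possible /8s in the 32-bit space

allocated_99_03 = 19.59            # /8 equivalents ARIN allocated, Jan 1999 - Jun 2003
available = 91                     # /8 equivalents ARIN reports still free

print(ADDRS_PER_SLASH8)                            # 16777216, i.e. ~16.7 million
print(allocated_99_03 * ADDRS_PER_SLASH8 / 1e6)    # roughly 328.7 million addresses
print(available / TOTAL_SLASH8S)                   # 0.35546875 -- about 35.5% of all /8s
```

At a burn rate of around five /8 equivalents per year, 91 available /8s does indeed stretch well past a decade, which is the article's point.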

Kendall Grant Clark writes that the vision of a sturdy, public Semantic Web can only come to pass if the industrial, academic, and hacker communities are engaged in informal, loosely coupled collaboration, but notes that hackerdom is, for the most part, uninterested in the Semantic Web (hackerdom being defined as people developing open source software for fun and profit). Clark points out that generic rule systems should be a key convergence point for both hackers and academics, but observes that the enthusiasm the latter group has for rule systems is not shared by the former, though there is no indication that hackers are unwilling or unable to use such systems. The author outlines several rule systems that could play a role in the Semantic Web's development: Rule Markup Language (RuleML); the Rule Language Server (RLS), which seeks to integrate various formats and rule languages and supply a tool for Semantic Web developers focused on rule processing; reactive rules that establish relationships between responses and events; rule systems used in security contexts, such as the XML Access Control List model; and OWL Rules, a proposed methodology for integrating ontology formalisms with rule systems so that events and relation antecedents, as well as multimodal semantic processing, can be elegantly represented. Clark suggests that conferences such as ISWC may be good environments to coordinate industrial, academic, and hacker collaboration on Semantic Web development, once cultural barriers are overcome. Hackers, for instance, prefer loosely structured conferences, in contrast to the formality academics favor. The author adds that professors should urge students to use open source software at every opportunity, participate in open source projects and communities, avail themselves of SourceForge and other open source resources, and take note of insights drawn from the use of n3 by hackers.
Clark concludes that hackers should modify their behavior as well, and overcome their "hack first, research later" tendency.
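To give a flavor of the "generic rule system" idea, here is a toy forward-chaining step in the IF-antecedents-THEN-consequent style; it is not RuleML or OWL Rules, and the facts and rule are invented for illustration.

```python
# Facts as (predicate, subject, object) triples -- a crude stand-in for
# the triples a Semantic Web rule engine would chain over.
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def rule_grandparent(facts):
    """IF parent(x, y) AND parent(y, z) THEN grandparent(x, z)."""
    parents = {f for f in facts if f[0] == "parent"}
    derived = set()
    for (_, x, y1) in parents:
        for (_, y2, z) in parents:
            if y1 == y2:
                derived.add(("grandparent", x, z))
    return derived

# One forward-chaining pass adds the derived facts to the knowledge base.
facts |= rule_grandparent(facts)
print(("grandparent", "alice", "carol") in facts)  # True
```

Real rule languages add variables, negation, and priorities on top of this loop, but the chaining mechanism that academics and hackers would share is essentially this one.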

Nippon Telegraph and Telephone (NTT) continues to invest heavily in basic research, the fruit of which includes a computer chip that can translate brain signals into machine-readable instructions. NTT chief researcher Keiichi Torimitsu is growing rat brain cells on a glass substrate constructed using an advanced semiconductor fabrication process. The resulting "biological circuits" of cells--grown in 10-micron-deep etched grooves forming hexagon shapes--will allow NTT to understand how brain signals are transmitted and processed. A similar NTT chip project aims to create a "biodevice" that would safely operate within the human body, using a protein polymer linking metal electrodes. Torimitsu points out NTT's unique position among Japanese technology companies, many of which have pulled back from basic research because of financial pressure. NTT remains steadfast, employing 3,000 researchers to develop things such as plastic chip materials and quantum computing. Torimitsu says, "Projects like these can be conducted nowhere else but at NTT." NTT is also working on quantum encryption technology, and two of the company's encryption systems were recently chosen by the European Union to secure online government networks. Torimitsu says Britain's University of Liverpool as well as other research groups both in Japan and internationally have asked to work with his team.

The Internet Engineering Task Force is working on a new IP network management standard that would allow more complete traffic-flow data to be gathered at switches and routers, then exported to a management console for analysis. The more detailed information could support usage-based billing schemes, help pinpoint security holes, and uncover better routing and linking strategies. More detailed traffic-flow data would also show how applications interact with the network and which users make the most demands. IP Flow Information Export (IPFIX) is expected to reach final-draft stage early next year and would be included in equipment from Cisco, Nortel, and other vendors. IPFIX is based on Cisco's NetFlow Version 9 data-export protocol. IPFIX working group co-chair Dave Plonka says some enterprise network managers have already patched together SNMP and Realtime Traffic Flow Measurement (RTFM) protocols to the same effect as IPFIX, but the new standard will obviate that effort. Data packets are labeled with seven fields in IPFIX: source IP address, destination IP address, source port, destination port, type of Layer 3 protocol, type-of-service byte, and input logical interface. The standard will also come with templates that can be configured to define how data should be exported to collection devices. Plonka calls IPFIX a foundation technology and says, "Flow-based measurements are a sweet spot between mere aggregate counters and complete packet traces."
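The seven fields can be modeled as a flow key under which an exporter aggregates per-flow packet counts; the Python names below are illustrative, not taken from the draft standard itself.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowKey:
    """The seven IPFIX key fields named in the article (illustrative names)."""
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: int          # protocol number, e.g. 6 for TCP
    tos: int               # type-of-service byte
    input_interface: int   # input logical interface index

# Aggregate packet counts per flow, the way an exporter might before
# shipping records to a collector.
counts = {}

def observe(key: FlowKey):
    counts[key] = counts.get(key, 0) + 1

k = FlowKey("10.0.0.1", "10.0.0.2", 40000, 80, 6, 0, 1)
observe(k)
observe(k)
print(counts[k])  # 2
```

This is exactly the "sweet spot" Plonka describes: one counter per seven-field flow is far richer than an aggregate interface counter, yet far cheaper than storing every packet.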

Grid computing promises to supply computing power on demand like any other utility, but needs better security and reliability in order to succeed, writes United Devices' Sri Mandyam. Computational grids are thought to be at the same stage of technological sophistication as electrical grids were a century ago. The core element of the grid is a highly scalable network composed of local area networks linked to each other through wide area networks, but this overarching network cannot guarantee data rates; however, network providers can boost quality of service by embedding redundancy and intelligent traffic management software into the network, and by sharing peak demand with other providers. Grids are arranged in a hierarchical architecture of compute resources, with desktops and workstations at the network "edge." The abundance of desktops and workstations makes them well-suited to run fault-tolerant applications, while high-end resources that run mission-critical programs are far less fault-tolerant; this vulnerability can be offset with expensive data centers, but the on-demand computing model, in which data center operations are outsourced to service providers, offers a less costly solution. Many aspects of autonomic computing, whereby a controller organizes the best plan of action based on real-time input from "intelligent agents," can be added to the Grid Manager to increase reliability. Security can be implemented throughout the grid by incorporating five key capabilities into the Grid Manager and intelligent agents: authentication, in the form of passwords or digital certificates; confidentiality via access controls and data encryption; data integrity through checksum and digital certificate confirmation schemes; non-repudiation supported by the monitoring and database logging of all grid events and transactions; and sandboxing, which erects a barrier between the job execution environment and the compute node.
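The data-integrity capability above can be sketched as a checksum handshake: the Grid Manager records a digest when it dispatches a job's input, and the agent on the compute node verifies the digest on arrival. The function names and job payload are illustrative, not from United Devices' software.

```python
import hashlib

def checksum(data: bytes) -> str:
    """Digest the payload; any corruption in transit changes the digest."""
    return hashlib.sha1(data).hexdigest()

def verify(data: bytes, expected: str) -> bool:
    """Agent-side check that the received payload matches the recorded digest."""
    return checksum(data) == expected

payload = b"job-1138 input data"
tag = checksum(payload)               # recorded by the Grid Manager at dispatch

print(verify(payload, tag))           # True  -- intact delivery
print(verify(payload + b"X", tag))    # False -- tampering or corruption detected
```

A checksum alone only detects accidental corruption; pairing it with a digital certificate, as the article notes, is what lets the agent also confirm who produced the data.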