Intel processors and clustered systems made huge headway on the most recent Top 500 list, announced at ACM's Supercomputing 2003 Conference in Phoenix this week: Intel is inside 189 of the listed machines, up from 119 spots on the June list and 59 spots last year. Lawrence Berkeley National Laboratory computer scientist Erich Strohmaier says Intel's success marks a larger trend toward clustered systems, of which there were 208 in the most recent count, including seven of the top 10 machines. The overall performance of the Top 500 list has increased at a steady rate, clocking a total of 528 Tflops compared to 375 Tflops in June; the lowest machine on the list now runs at 403.4 Gflops, and the No. 100 machine now operates at 1.142 Tflops. Linux shows up in one form or another on most of the clustered systems, which use IBM, Intel, or AMD processors. The commodity InfiniBand interconnect technology is also making headway, and is used in the new No. 3 Apple G5 cluster from Virginia Polytechnic Institute. China also made a significant move with the No. 14 Chinese Academy of Sciences Itanium 2-based computer, built by Legend Computer. IBM claims the largest share of aggregate processing power on the list at 35.4 percent, followed by Hewlett-Packard with 22.7 percent; HP claims the largest number of systems on the list. IBM's new BlueGene/L prototype generated significant interest because of its innovative scaling technologies and small form factor, about the size of a dishwasher. The BlueGene/L test machine, ranked No. 73 on the new list, can scale up to 64,000 processors and yields eight floating-point operations per chip clock cycle.

Hackers follow an ethical code that varies according to each hacker's motivations, and what they consider courteous behavior can seem eccentric--even threatening--to others: One Middle East hacker alerted an Oracle security officer to a software flaw he thought he had uncovered and urged her to contact him, implying dire consequences if she did not--even though his intentions were harmless. Hackers' motivations run the gamut from benevolent (uncovering vulnerabilities to help people shore up systems) to financial (offering to explain bugs to software makers for a fee, which companies often interpret as extortion) to fame-seeking (acquiring a reputation by penetrating complex networks) to an appetite for anarchy and destruction. Whatever hackers' personal goals may be, attacks and the damage they cause have risen significantly. Both benign and malign hackers use the same tools and methods and often harbor anti-authoritarian attitudes, which adds to the difficulty of distinguishing between the two, security experts note. Many hackers who penetrate vulnerable software and post the security flaws consider this a public service and refuse to profit from their hacks--indeed, some companies gratefully acknowledge that these alerts have contributed to better security. Some hackers have exposed such holes publicly without warning the software maker beforehand, arguing that wide-ranging exposure of exploitable weaknesses has a better chance of spurring developers to take remedial action. Many hackers now consider such tactics unethical, however, and are working with software makers to patch the holes they find before disclosing them.
A consortium of major software providers has joined forces with security firms to create a formal system of hacker etiquette to give companies enough time to fix vulnerabilities before public disclosure, but hackers are unlikely to comply with such a system, since they consider the threat of exposure the biggest incentive to bolster security.

James H. Morris, dean of Carnegie Mellon University's School of Computer Science, argues that prospective computer-science students should know that such a course of study prepares them for a lifetime career by arming them with basic skills beyond mere programming--such as how to judge what one knows, how to learn essential knowledge, and how to communicate. Vital skills successful students must acquire include the ability to engineer and refine technology in harsh environments, to distinguish mythological from actual phenomena, and to use technology to measure people's performance. Morris notes that a good computer-science program covers multiple disciplines, including the liberal arts, experimental science, and mathematics. He comments that dedicated computer scientists view their field as a source of intellectual challenges, such as determining what can be computed--a critical question that must be answered to advance practical cryptography applications, for instance. Other questions computer scientists are pursuing include what constitutes intelligence, a challenge being met with the collaborative assistance of biologists and psychologists, and one that is yielding sensory devices that operate in the real world. Morris additionally notes that scientists are seeking other ways to build computer systems through the exploration of microbiology, evolution, and quantum effects, while computing's role in the world is also a major focus of inquiry. The dean contends that even people who are antagonistic toward computing should study computer science. "Willy-nilly, all of us are becoming computer users and it will be more fun and less worrisome if we know what it's all about--that is one of the great intellectual adventures of our era," Morris writes.

Carnegie Institute of Technology (CIT) researchers have created a prototype context-aware cell phone called SenSay, which combines a global positioning system, sensors, and a personal digital assistant to gather information about the user's location and current activity so that calls can be forwarded or connected appropriately. For instance, if the user is having lunch, SenSay infers this from the location system and a "to-do" list indicating a lunch appointment; the phone then disables its ringer and automatically sets itself to vibrate. By checking the location, the to-do list, and the user's schedule, the device can decide to send all calls to voice mail--but if the caller stresses urgency, the system will text the caller to call back in a few minutes, then vibrate and display a text message to the user about the expected call. Callers can also use SenSay to access the user's calendar and determine whether the user is running behind. SenSay's sensory array includes an accelerometer to read motion, a temperature and heat flux sensor, a microphone, a light sensor, and galvanic skin response sensors. "The time it takes to hand off or receive vital information is greatly reduced [with SenSay]," boasts Asim Smailagic of Carnegie Mellon University's Institute for Complex Engineered Systems. Issues that need to be addressed before SenSay is ready for commercialization include integration and storage problems, ease of use, and the product's incompatibility with conventional fixed-line phones. CIT expects to commercialize SenSay in two years in partnership with Intel, while technology analysts think the device will be a niche product favored by CEOs, travelers, and military officers.
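The call-handling behavior described above amounts to a small decision procedure over sensed context. The sketch below is a hypothetical reconstruction in Python; the state fields, rules, and return labels are illustrative assumptions, not SenSay's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Context:
    """Context a SenSay-style phone might infer from sensors and its PDA."""
    busy: bool                  # e.g., location plus to-do list indicate a lunch meeting
    caller_marked_urgent: bool  # the caller flagged the call as urgent

def handle_call(ctx: Context) -> str:
    """Decide how to announce an incoming call (hypothetical rules)."""
    if not ctx.busy:
        return "ring"
    if ctx.caller_marked_urgent:
        # Text the caller to retry in a few minutes, then quietly vibrate
        # and show the user a note about the expected call-back.
        return "text-caller-then-vibrate"
    # Busy and not urgent: route straight to voice mail.
    return "voicemail"
```

For example, `handle_call(Context(busy=True, caller_marked_urgent=False))` returns `"voicemail"`, matching the lunch-appointment scenario above.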

There is evidence suggesting that the technology leadership of the United States is headed for long-term decline with the growth of offshore outsourcing and the rise of "poles" of excellence in overseas regions. The increasing outsourcing of higher-level operations gradually builds up such poles and creates opportunities for new competitors within them. Taiwan's semiconductor manufacturing business and Bangalore's software industry are both examples of poles where openings for new competitors exist in abundance, but where U.S. companies have few opportunities for competitive advantage. The biggest edge overseas tech rivals may have over the United States is in top management, which is grossly overcompensated in America: For instance, Cisco CEO John Chambers, a professional manager hired by the company and not a founder, recently cashed in $38 million in stock options but still holds $363 million in uncashed options; Cisco's stock option plan has issued 321 million shares valued at $7 billion, which exceeds the company's total earnings since it was created. In contrast, India-based Infosys' two executive directors earned a total of $1.5 million plus about $700,000 in stock options in 2003, frugal by the standards of top U.S. managers' compensation packages. U.S. investors do not seem to take stock option grants into account when valuing companies, but this is likely to change regardless of whether the Financial Accounting Standards Board enforces appropriate valuation of such grants--by which time the U.S. tech industry will be ailing. Even if outsourcing overcomes labor-cost differentials, the migration of top management overseas to save money would dramatically shrink the American element of the tech business.

Victoria Shannon writes that the European Union's recently enacted anti-spam directive will ultimately do little to stem the tide of junk email, for a variety of reasons: Only two EU member countries--England and Italy--have adopted laws that support the directive; none of the law enforcement agencies in the 15 EU member nations have the resources or skills to effectively corral spammers; most European spam originates offshore, or follows a circuitous route that makes tracking down those responsible intensely difficult; spam that violates the EU directive often violates some other statute as well; and finally, it is sometimes hard to determine what kinds of spam are objectionable, given the subjective nature of such messages. Furthermore, Shannon sees two potentially major flaws in the directive: a provision that allows people who have previous relationships with senders to receive unsolicited commercial email from those senders, and the authorization of opt-in for consumer email addresses only, which is a problem because business addresses also serve as personal email addresses for many people. The author lauds the EU's effort to make opt-in a requirement through the directive, but argues that all EU member nations, the United States, and Asian countries must throw their hats into the opt-in ring if the law is to truly curb spam. Shannon contends that Interpol has the support and the skills needed to head the effort to enforce the directive. She recommends that individuals also follow defensive strategies such as using spam filters, changing email addresses regularly, and pressuring lawmakers via email and the Internet to embrace opt-in policies.

Key to the challenge of effecting smooth human/computer communications on a par with the interplay regularly depicted in science fiction films is teaching computers to distinguish when someone is speaking from when someone is singing, according to University of Regina computer science professor David Gerhard. He is part of a small group of researchers attempting to overcome the limitations of current computerized sound-recognition systems in handling song and music. Gerhard believes a system that can identify and analyze a person's singing voice could find practical use in speech therapy, music and lyrics transcription, and extracting specific songs from vast online music archives, among other things. To pinpoint precisely which cues people use to distinguish spoken from sung voice, Gerhard sampled hundreds of individual speech-song pairings that were subsequently analyzed by numerous listeners. Major distinguishing sound-wave qualities outlined by this study include vibrato and "voicing," and Gerhard used these findings to write algorithms that enable computers to extract data about such critical characteristics. Columbia University's Daniel Ellis thinks such research will be especially useful given the current boom in legal online music and the emergence of huge music databases. He suggests that people will one day be able to load examples of their favorite songs and singers into their computers, while software collates them into a "My Music" model that can be tapped to mine online MP3 files for similar material. In addition, Gerhard thinks such systems could enhance music appreciation.

Large credit card companies, big IT vendors, and technology startups foresee an imminent change in the way people use bank and credit card accounts. Currently, three technologies are vying to replace the credit cards bulging in people's wallets: Radio frequency identification (RFID), biometrics, and wireless gadgets such as mobile phones. Examples of how these systems might work are already here in Mobil and Exxon gas stations where customers wave their RFID-enabled Speedpass card in front of a reader to buy gas, and in Istanbul where commuters in the mass transit system use an RFID iButton to debit a prepaid account. A number of other companies are already running or testing their own solutions aimed at speeding transactions, making accounts more secure, and adding convenience and flexibility: Visa's Sue Gordon-Lathrop says mobile phones in the future could store credit card information and approve purchases when the correct PIN is entered on the keypad; the phones would have to be equipped with Bluetooth, RFID, infrared, or some other short-range wireless technology since customers would not want to dial in to transact. Biometrics, specifically fingerprints, are generating a lot of interest as a way to add security and convenience, as well as a greater marketing platform for companies. If consumers used only their fingerprint to access all their accounts, then retailers would have less trouble signing up customers for store-branded credit accounts. Indeed, retailers and financial firms have a lot to gain from new platforms, including reduced fraud and the ability to add value-added functions. A mother and daughter could access the same account using their fingerprints, but the mother could set spending limits for the daughter, for example.

Light emitting diodes (LEDs), which offer less heat output and greater electrical efficiency than traditional lighting technology, have made significant headway in the consumer market in recent years--but LEDs' true advancement lies not in the technology itself, but in the unique applications of the technology. "As LEDs and the electronics become more efficient and smaller, you will start seeing LEDs being packaged in many different ways," predicts ANCSACO co-president Jonathan Labbee. "So the core technology will remain somewhat constant, but the packaging will be drastically different, offering more choices of applications." Fran Douros of Lumileds Lighting reports that LEDs currently turn out 25 to 30 lumens per watt, whereas common incandescent bulbs run at a paltry 12.5 lumens per watt; the industry is generally shooting for a lighting technology that can generate more than 40 lumens per watt in 2004. Technical Insights' Hrishikesh Bidwe says that liquid crystal display (LCD) technology is a better choice for computer or personal digital assistant displays than LEDs, although smaller, low-power organic light emitting diodes (OLEDs) could be used in a similar capacity. He adds, however, that current OLEDs suffer from high costs and shorter life spans in comparison to LCDs. Another challenge the LED industry needs to overcome is a lack of unified standards, according to Labbee; a glut of competing standards could lead prospective consumers to make incorrect assumptions about product quality. Meanwhile, some analysts believe that flexible screens could one day become a reality thanks to LED technology.

Researchers at the Georgia Institute of Technology in Atlanta have developed a computer program designed to select the two best college football teams to play in the national championship game. Peter Mucha and his colleagues view the program as a more transparent alternative to the confusing formula used to determine the Bowl Championship Series (BCS) poll. The BCS system relies on polls of coaches and sportswriters, plus seven different statistical algorithms based on wins and losses, margin of victory, strength of opposition, and whether a game occurs early or late in the season, to produce its rankings. The Georgia Tech program uses a large number of "monkeys," or virtual voters, to pick teams based on their most recent performance. "We wondered what happens if you mimic the idea of 'my team is better because my team beat your team,'" says Mucha, who believes such a simple approach could replace the statistical algorithms used by the BCS. The virtual monkeys are fickle, arbitrarily switching allegiances when their team loses to another. University of Wisconsin-Madison programmer David Wilson praises the system's straightforwardness, but argues that any system must reliably pick the best teams. Mucha does not claim to have the best system, and acknowledges that "from a statistical point of view, it's incredibly naive."
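The "monkey" voters can be sketched as a random-walker ranking over the season's win-loss graph: each virtual voter backs a team, and when that team has lost head-to-head, the voter may fickly defect to one of the teams that beat it. The Python below is a plausible reading of the idea described above, not Mucha's actual code; the voter count, step count, and 50 percent defection probability are illustrative assumptions.

```python
import random

def monkey_rank(games, teams, n_monkeys=2000, steps=100, seed=0):
    """Rank teams by simulating fickle virtual voters ('monkeys').

    games: list of (winner, loser) results for the season.
    Returns the teams sorted by final vote share, best first.
    """
    rng = random.Random(seed)
    # For each team, record the teams that beat it.
    beaten_by = {t: [] for t in teams}
    for winner, loser in games:
        beaten_by[loser].append(winner)
    # Every monkey starts out backing a random team.
    votes = [rng.choice(teams) for _ in range(n_monkeys)]
    for _ in range(steps):
        for i, team in enumerate(votes):
            conquerors = beaten_by[team]
            if conquerors and rng.random() < 0.5:
                # "My team is better because my team beat your team":
                # defect to a team that beat the current pick.
                votes[i] = rng.choice(conquerors)
    tally = {t: 0 for t in teams}
    for t in votes:
        tally[t] += 1
    return sorted(teams, key=lambda t: -tally[t])
```

On a toy season where A beat B, B beat C, and A beat C, the voters pile up on undefeated A, so it tops the ranking.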

Rapid advancements in hard disk drive technology will pave the way for commercial products that consumers can use to store a lifetime's worth of personal data, such as TV programs they have watched, articles and books they have read, and phone calls they have made. Drive manufacturer Cornice estimates that hard disk drive capacity will increase by half every 12 months, while Portal Player CEO Gary Johnson claims that "small hard drives provide virtually limitless amounts of storage." However, he adds that the real difficulty lies in making it easy for users to retrieve specific information from the drive. Johnson says the key is metadata, in which each stored item is uniquely labeled so it can be extracted in numerous ways. Portal Player is believed to own 85 percent of the hard drive-based MP3 player market, while SigmaTel rules the market for solid state- or flash memory-based MP3 players; flash memory players are less costly, smaller, and more durable than hard drive players, but they currently can only store roughly 48 MP3 songs, while hard drive players such as iPod can store thousands. CIBC analyst Rick Schafer forecasts that more than 30 million MP3 units will be sold in 2006, accompanied by "excellent growth" for digital still cameras through 2006. Johnson thinks the next critical phase in the evolution of data storage is to enable content to be directly loaded into the player rather than through the PC. The ability to store and recover a lifetime's worth of personal material is a particularly enticing proposition to biographers, notes U.K. Biographers' Club secretary Andrew Lownie.
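Metadata-driven retrieval of the kind Johnson describes--each stored item labeled so it can be pulled out along many different axes--can be illustrated in a few lines of Python. The item fields and query helper here are purely hypothetical, not Portal Player's actual scheme.

```python
# A tiny in-memory "lifetime archive": every item carries metadata labels.
items = [
    {"id": 1, "kind": "song",    "artist": "X",       "year": 2003},
    {"id": 2, "kind": "article", "topic": "storage",  "year": 2003},
    {"id": 3, "kind": "song",    "artist": "Y",       "year": 2002},
]

def find(items, **criteria):
    """Return items whose metadata matches every given key/value pair."""
    return [it for it in items
            if all(it.get(key) == value for key, value in criteria.items())]
```

The same item can then be retrieved by kind, by year, by artist, or by any combination--e.g., `find(items, kind="song", year=2003)` pulls out only item 1.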

Fermi National Accelerator Laboratory physicist Richard Carrigan, Jr., believes it is within the realm of possibility that signals from extraterrestrial beings could be similar to malware, and he calls for Search for Extraterrestrial Intelligence (SETI) project researchers to step up their vigilance and implement a decontamination protocol for potential SETI transmissions. "Put simply, the receiver needs virus protection," he argues, adding that SETI's current signal handling protocol is only useful from a public relations standpoint in the event a signal is announced prematurely. In a paper he presented at the 54th International Astronautical Congress, Carrigan noted that early radio transmissions from Earth have already reached approximately 400 stars; assuming there are civilizations that far out, some may have already responded with Earth-bound signals that could be either benevolent or malevolent. Carrigan also points out that the SETI Institute's Allen Telescope Array will greatly amplify the possibility of detecting signals of extraterrestrial origin, and he suggests that a workshop concerning "messages and message impact" be conducted before the facility is fully operational. The Fermi researcher notes the likelihood that actual extraterrestrial transmissions will be compressed, and explains that such messages would need to include an "advertisement" or lure so that a host is drawn to them. Some SETI researchers hope to come across a massive interstellar archive, a sort of galactic equivalent of the Library of Congress; but Carrigan warns that such a huge amount of information would overwhelm the receiving system.
SETI Institute senior astronomer Seth Shostak discounts most of Carrigan's concerns, commenting that signal averaging screens out messages with high information content, and further arguing that the likely technological gap between humans and extraterrestrials would preclude any malicious intent on the aliens' part.

A team of MIT researchers and engineers is using data mining to set up an online public database that could become an invaluable resource for designers trying to develop novel materials for practically any purpose. The method could be used, for example, to search for specific information in a protein database to help scientists investigate the structure, properties, and functions of other proteins, notes Dane Morgan of MIT's Department of Materials Science and Engineering. The MIT technique has so far been tested on an in-house database, but the public database will be established with funding from the National Science Foundation. The creation of new materials has typically been an arduous and painstaking process of experimentation and analysis, but ab initio methods enable computers to virtually screen potential substances via quantum mechanics equations. Morgan says the ab initio process is still very expensive and time-consuming because of the impracticality of having the computer explore all potential structures for any given material. He says the MIT method "establishes patterns among the many thousands of different possible structures" inherent in any given blend of materials, and these patterns can be employed to lower the number of structures the computer has to investigate. The team is particularly excited that the materials database, which will be composed of data donated by the entire computational materials community, will allow information from previous ab initio computer calculations and lab experiments to be recycled. This will eliminate unnecessary and redundant research, according to MIT scientist Stefano Curtarolo, who co-authored a paper detailing the data-mining technique and its application to materials research in a recent edition of Physical Review Letters.

The Internet Engineering Task Force (IETF) recognizes its plight: Administrators are struggling under a growing load of proposals, and an informal system originally set up for a research project must now handle time-sensitive production issues. IETF Chair Harald Alvestrand says the goal is still to produce quality standards--the process just needs to move faster. Currently, vendor proposals often take more than five years to wind their way through the IETF stages of review. The group has ushered in many core Internet technologies since its inception in 1986, including email infrastructure, routing techniques, and the domain name system. With the resignation of several of the group's area directors due to overload, there is a consensus among the 2,000 or so IETF participants that the group must be reorganized, standards tracks revamped, and administrative structures flattened; the IETF's most recent meeting in Minneapolis failed to resolve any of these issues, but discussion continues on email lists and the topics will be addressed at the February IETF meeting in Seoul, South Korea. Former Internet Architecture Board (IAB) Chair John Klensin says another problem is the in-fighting and bias present in discussions, with many people arguing vehemently against papers they have not yet read. Reforms will likely involve giving more control to working group heads and shifting some document review tasks to the IAB, a sister organization of the IETF under the Internet Society. Another issue is funding: The IETF budget has been in the red since 2002, when meeting attendance started to decline; about 1,200 engineers attended the last meeting, compared to as many as 2,900 attendees at the height of the dot-com boom. Alvestrand warns the group will run out of funds in 2004 unless something changes.

The growth of cheap overseas competition and packaged applications is reducing job opportunities for corporate programmers in the United States, giving rise to a new worker model that stresses productivity and business acumen over pure technical skills. In fact, Owens & Minor CIO David Guzman reports a significant decline in the number of IT professionals who regard themselves as programmers, given the negative connotations the term has recently acquired. Archipelago programmer Kevin Mueller believes savvy companies will adopt a strategy in which small teams of programmers familiar with business goals collaborate with business managers to create problem-solving software, but he notes that many companies will choose to attack problems through brute programming force, which is most likely to be outsourced in the near term. Programmers' dimming prospects are being driven not only by burgeoning overseas outsourcing, but also by less need for programmers overall. The move to object-oriented programming and packaged apps has shifted a great portion of business-process knowledge to business analysts, while Archipelago CTO Steven Rubinow says programmers with the best career prospects are those who receive the best training and are the most productive. He predicts that "the lower echelons of the skill levels are going to be washed away." Booz Allen Hamilton CIO George Tillman says future CIOs are more likely to hail from a company's business units, while Mueller says the insular nature of programming puts programmers at a disadvantage by cutting them off from critical contacts and customers. Intentional Software owner Charles Simonyi thinks that 20 percent or more of the least productive American programmers could be outsourced in the near future, but that the outsourced jobs will eventually be mechanized by tapping the expertise of senior U.S. programmers.

Service-oriented architecture (SOA) has emerged over the past year as perhaps the next distributed computing model, although its definition and its distinction from earlier distributed computing implementations remain somewhat vague. ZapThink senior analyst Jason Bloomberg describes an SOA as "an approach to distributed computing that represents software functionality as discoverable services on the network"--an idea that is 10 to 15 years old, though only recently have SOAs been constructed with standardized Web services. IBM's Dan Sturman explains that at the root of an SOA is the separation of integrated enterprise functions into more manageable components. Meanwhile, Dave Cotter of BEA Systems says SOA's progress is being driven by Web services that are built atop familiar protocols and that uphold all key SOA traits. Analysts and vendors agree that the most compelling reason to adopt an SOA is its ability to boost business agility by tapping into existing infrastructure; however, they disagree about the costs involved. "A lot of when SOAs will be seen in larger numbers depends on the economy and IT budgets in general," notes Sturman, contrary to Cotter's view. A key provision of an SOA is identity management, because it supplies an effective user directory and policy management framework that coordinates user identification and confirmation, governs user application and system activity, and allows single sign-on. Bloomberg attributes the sluggish maturation of SOAs to four factors that must be juggled--security, management, business process, and the restructuring needed to migrate to an SOA; furthermore, products and standards are still immature. Vendors and analysts agree that widespread adoption of SOA will not take place overnight, although their estimates for this development range from two to five years.

The National Conference of Commissioners on Uniform State Laws has abandoned its effort to have states pass the Uniform Computer Information Transactions Act (UCITA), but opponents believe the group's effort could still have an impact on information technology licensing. "A lot of people say it's dead, but we'd rather say it's dormant," says Carol Ashworth, coordinator for Americans for Fair Electronic Commerce Transactions (AFFECT), an umbrella group of UCITA opponents. The proposed law was an attempt to create a national standard for the commercial licensing of software and other information products, and to allow vendors to turn off products remotely when a license is breached. However, UCITA went about this by prohibiting reverse engineering undertaken to address bugs, security breaches, and communication between programs; preventing the press from reviewing products without the software publisher's consent; and allowing vendors to change licensing terms at any time via an email message or a new notice on a Web site. What is more, UCITA would have given software providers broad latitude to sidestep copyright law. Maryland and Virginia are the only states to have adopted UCITA. Opponents of the proposed law believe software vendors might try to use parts of UCITA as a model for new licensing agreements.

Chandler, an intuitive software interface being developed by software designer Mitch Kapor's Open Source Applications Foundation, seeks to spare users the headache of laboriously sifting through programs to find related material by placing all connected spreadsheets, documents, and other items in the same area at the same time. The interface's success will depend on a dramatic change in how both users and programmers perceive the presentation and organization of computer data, as well as on the efforts of the volunteer programmers working on the open-source code. Kapor conceived Chandler as an alternative to most of the "productivity" programs used by consumers and workers--programs characterized by complexity and inflexibility. Chandler groups calendars, file folders, and other entities--termed "contexts" by foundation programmers--around email; if a user is working on several projects, Chandler can reorganize the screen for each specific project. Programming heavyweights donating their expertise to make Chandler ready for its projected December 2004 debut include Macintosh operating system developer Andy Hertzfeld and Netscape founding engineer Lou Montulli. Hertzfeld is working on software agents, which will be a key driver of Chandler's flexibility: He thinks it is reasonable, for example, to expect an agent that automatically books hotel rooms when it finds a preferred price, or that updates an address when a friend's contact information is outdated. Kapor, who describes himself as "the benevolent dictator" of the project, explains that one of the guiding principles of Chandler's development is complete honesty among the programming team about buggy code and scheduling setbacks. He also calls skeptical attitudes toward Chandler's prospects premature, as the project concluded its "blue sky" phase only four months ago.