"When he walked in the newsroom, it was like Thomas Jefferson walking into a history class at a university." – Sean McManus

"I must report that I recently paid another visit to Stanley Kubrick’s ‘2001’ while under the influence of a smoked substance that I was assured by my contact was somewhat stronger and more authentic than oregano. (For myself, I must confess that I soar infinitely higher on vermouth cassis, but enough of this generation gap.) Anyway, I prepared to watch ‘2001’ under what I have always been assured were optimum conditions, and surprisingly (for me) I find myself reversing my original opinion. ‘2001’ is indeed a major work by a major artist." – Andrew Sarris

"Japanese scientists grew up watching robot cartoons, so they all want to make two-legged companions." – Kenji Hara

"Oh my God, it was putrid. One of her sneakers is still down there." – Kim Longueira

"I’m frustrated. I can’t lend people books and I can’t sell books that I’ve already read, and now it turns out that I can’t even count on still having my books tomorrow. It illustrates how few rights you have when you buy an e-book from Amazon." – Bruce Schneier

"I never imagined that Amazon actually had the right, the authority or even the ability to delete something that I had already purchased." – Charles Slater

June 27th, 2009

French "3 Strikes" Law Returns, Now with Judicial Oversight!

The French Senate has once again approved a reworked version of the country's controversial "three strikes" bill designed to appease the Constitutional Council. Instead of a state-appointed agency cutting off those accused of being repeat offenders, judges will have the final say over punishment.
Jacqui Cheng

The French Senate has approved an updated version of the "three strikes" online copyright infringement bill aimed at taking repeat offenders offline. The approval comes exactly one month after the country's Constitutional Council ripped apart the previous version of the Création et Internet law. The nouveau version of the bill attempts to get around the constitutional limitations by moving the final decision to cut off users' Internet accounts to the courts.

Originally, Création et Internet set up a High Authority in France that would oversee a graduated response program designed to curb online piracy. Rightsholders would investigate, submit complaints to the High Authority (called HADOPI, after its French acronym), and the Authority would take action. Warnings would be passed to ISPs, who would forward them to customers; after two such warnings, the subscriber could be disconnected and placed on a nationwide "no Internet" blacklist.

The first version of the bill was foiled by a handful of Socialists who voted it down, but the law managed to pass on its second attempt in May of this year. The law still had to be scrutinized by the Constitutional Council, however, and this was where it ran into trouble. The graduated response program was nonjudicial, setting up a separate "administrative" authority, but it performed an essentially judicial function. The sanction proceedings carried a presumption of guilt—the burden of proof was on the Internet user to show that he or she had not been pirating.

The Council focused primarily on this aspect of the law, emphasizing that everyone is presumed innocent until proven guilty under the Declaration of 1789, and that this principle applies "to any sanction in the nature of punishment, even if the legislature has left the decision to an authority that is nonjudicial in nature." The Council's censure indicated that disconnections must be treated like court cases and not just administrative proceedings. As a result, the law was shot down.
Bring in the judges

Not content to let the idea die, President Nicolas Sarkozy's administration reworked the law in hopes of making it amenable to the Council—instead of HADOPI deciding on its own to cut off users on the third strike, it will now report offenders to the courts. A judge can then choose to ban the user from the Internet, fine him or her €300,000 (according to the AFP), or hand down a two-year prison sentence.

Those who are merely providing an Internet connection to dirty pirates can be fined €1,500 and/or receive a month-long temp ban from the online world. (A group of French hackers has already begun to work on software that cracks the passwords on locked WiFi networks so that there's an element of plausible deniability when law enforcement tries to go after home network owners.)

The Senate approved this version of the bill with a vote of 189-142 this week, sending it to the National Assembly for final passage.

While certainly an improvement, even the new version of the bill cannot escape criticism from open Internet groups, who still believe that the system makes it too easy for non-judicial entities to enforce punishments. In a post to its website, La Quadrature du Net wrote that those who approved the bill are trying to reduce the courts to nothing more than a rubber stamp and that the bill mocks the values of the constitution. The group called on other legislators to denounce the bill when it comes to a vote later this month.

http://arstechnica.com/tech-policy/n...rom-senate.ars

Australia’s Conroy Vows to Tackle Illegal File Sharing
Ben Grubb

Government promises to "facilitate" a solution.

Federal communications minister Stephen Conroy has vowed to fight illegal file sharing head on in a report on the Digital Economy.

In a report unveiled at the Powerhouse Museum in Sydney last night, Senator Stephen Conroy, Minister for Broadband, Communications and the Digital Economy, said the Government, among other promises, will "facilitate development of an appropriate solution to the issue of unauthorised file sharing".

"The Government recognises a public policy interest in the resolution of this issue," the report said. "A number of submissions received during the consultation phase for the development of this paper argued that a role for Government exists in addressing the apparent popularity of peer-to-peer file sharing of music and movies, without the necessary permissions of the relevant copyright owners".

The report goes on to outline submissions made to the department by various stakeholders.

"One solution proposed by copyright owners is a "three strikes" or "graduated response" proposal under which copyright owners would work together with ISPs to identify the ISP's customers who are suspected of unauthorised file sharing and the ISP would then send a notice on behalf of the copyright owner to that customer advising of this allegation".

This was, however, unpopular amongst many concerned about consumer rights.

"Several submissions were received which opposed this proposal for reasons including the lack of judicial oversight of administering sanctions based on private allegations, the lack of public transparency about the process and concern over consumer rights," the report said.

"The Government is currently working with representatives of both copyright owners and the Internet industry in an effort to reach an industry-led consensus on an effective solution to this issue."

The Government's efforts to find a solution to illegal file sharing come as the Australian Federation Against Copyright Theft (AFACT) and Internet Service Provider (ISP) iiNet battle in a landmark court case over whether ISPs are responsible for preventing illegal file sharing. Senator Conroy has previously noted he is watching this case "with interest."

Tackling unauthorised file sharing was among many areas of focus for Conroy's department listed in the Future Directions paper.

The paper also covers off the Government's focus on improving digital literacy, access to Government information online, the enhancing of user trust in digital technologies, the building of the National Broadband Network, allocation of mobile spectrum and switchover to digital television.

The paper was developed in collaboration with industry and other stakeholders through a three-stage consultation process that began September last year.

It is a broad summary of the Government's aims in terms of the digital economy portfolio, without revealing any new initiatives to meet these aims.

A proposal document for the review of section 92A of the Copyright Act 1994 and how to deal with repeat Internet copyright infringement has been released for public feedback by Commerce Minister Simon Power.

The document was the result of several months' work by a working group comprising intellectual property and Internet law experts assisted by officials from the Ministry of Economic Development. The proposed process has three steps:

1. Where there has been suspected infringement, rights-holders could complain to the internet service provider (ISP) which would notify the subscriber. If there was further infringement, a cease-and-desist order would be sent.
2. If there was further infringement, the rights-holder could apply to the Copyright Tribunal for an order to obtain the subscriber's name and contact details.
3. The rights-holder could then serve an infringement notice. The subscriber could elect mediation. If that failed or there was no response, the tribunal would convene, and could impose penalties ranging from fines to termination of a user's internet account.

A targeted group of internet users, internet service providers, and copyright owners has been invited to comment on the proposal, but Mr Power hopes the public will also provide feedback.

"We need to provide a fair and efficient process to address repeat copyright offending, and I look forward to hearing what New Zealanders think about the proposed procedure.

"Unlawful file-sharing is very costly to New Zealand's creative industries and I am determined to deal with it.

"I am confident that at the end of this process we will have a law that is clear, sensible, and fair to everyone."

Hostilities are expected to resume this week between internet advocates and the music and movie industries after the Government releases details of controversial changes to copyright law.

A working group of copyright experts convened by the Economic Development Ministry is expected to release its recommendations today or tomorrow on how to replace Section 92a of the Copyright Act, which was scrapped in March after a wave of protests.

Section 92a, which never came into force, would have obliged internet providers to terminate the accounts of repeat copyright infringers "in reasonable circumstances".

The main target of the law change was the growing number of tech-savvy internet users who download music and videos, often pirated, through peer-to-peer file-sharing services, such as BitTorrent.

Despite speculation of a division in Cabinet on the issue, the working group is expected to back a reworked Section 92a that would be more specific about how the extra-judicial termination policy should be applied.

The recommendations were due to be released last week, but were delayed as Commerce Minister Simon Power was on leave.

Sources say an independent arbiter will be empowered to rule on disputes. That may be the Copyright Tribunal, which would require new powers and legal protections.

Section 92a was scuppered after the Creative Freedom Foundation co-ordinated a "blackout" campaign that saw protesters, including British actor Stephen Fry, replace their photos on networking sites with blacked-out rectangles.

Foundation director Bronwyn Holloway-Smith is unsure if the same campaign would be repeated, but says the foundation will be vocal in its opposition if the Government persists with a plan to disconnect the internet accounts of copyright infringers. That might involve finding a fresh approach to drawing attention to the issue.

Disconnection would be a "disproportionate" penalty that would affect people sharing internet accounts and would lead to people who relied on internet telephony being cut off from using the phone, Ms Holloway-Smith says.

A survey in Britain concluded most people viewed the internet as an essential utility.

Ms Holloway-Smith agrees with Labour communications spokeswoman Clare Curran that fines would be a better sanction.

InternetNZ spokesman Campbell Gardiner says the society also believes termination should be off the table.

Tony Eaton, director of the Federation Against Copyright Theft (NZFact), which is a branch of the United States' Motion Picture Association, hopes further protests can be avoided.

Mr Eaton says that, prior to the scrapping of Section 92a, NZFact agreed with the Telecommunications Users Association, an industry body, that infringers would be able to sign up with another internet provider the day after their accounts were terminated. NZFact had not formed a view on whether they should be allowed to sign back up with their original provider.

"It is not like the French law; they can go back the very next day and re-sign."

Disconnection would nevertheless be preferable to fines as it would be an inconvenience, he says.

AROUND THE WORLD

FRANCE

The French Government will next week debate a tough "three strikes" law that would let judges terminate internet accounts or impose stiff fines or jail terms on internet users who repeatedly infringed copyright. Trials would be simplified, with one judge presiding rather than the usual three. The Constitutional Council shot down an earlier proposal that would not have required a judicial process, ruling that internet access was a "human right".

BRITAIN

Britain's Culture Secretary, Ben Bradshaw, ordered a clampdown on illegal file sharing last month, but his proposal for written warnings was berated as toothless by copyright owners. Regulator Ofcom could order internet providers to block users from certain sites or decrease the speed of their connection, but only a year after a written warning and only if the warning regime failed to stem overall piracy by 70 per cent.

SWEDEN

Internet traffic plummeted 40 per cent after Sweden ordered internet providers to hand over the internet addresses of suspected copyright violators in April. The four founders of the peer-to-peer file-sharing site The Pirate Bay were later convicted and sentenced to jail terms. A backlash saw the establishment of the Pirate Party by internet rights advocates. It won 7.1 per cent of the vote in Sweden and a seat in the European Parliament at elections last month.

AUSTRALIA

In a widely watched test case, Australian internet provider iiNet has been prosecuted by the Australian Federation Against Copyright Theft, which alleges iiNet ignored requests from its movie company members to discipline customers for breaking copyright laws. The case is due to go to trial at the Federal Court in October and has all Australian internet providers on the hop.

http://www.stuff.co.nz/technology/di...opyright-fight

Because You Should Know

I believe the Censor's Office in New Zealand does a very good job. While our laws on obscenity are just as vague and subjective as anyone else's, here they seem to be interpreted very liberally. In Australia, we're sometimes held up as an example of depravity for what our censor allows to go un-banned.

Given the law as it is written, it seems perfectly understandable that the Department of Internal Affairs should at least attempt to regulate internet content. There's no reason objectionable material should get a pass because it happens to be on a website instead of in a book or on a DVD. It's complicated, of course, because the material can be produced in one country, hosted in another, and viewed in a third, and the DIA only has jurisdiction over New Zealand, but they're surely legally obligated to try.

So in 2007 and 2008, the DIA ran a trial filtering program in conjunction with a selection of New Zealand ISPs. If you were with TelstraClear, Watchdog, Maxnet or Ihug in that period, you participated in that trial. But you know that, right? Your ISP told you they were filtering your internet connection, right? The Department of Internal Affairs' budget indicates that filtering will be introduced some time during the 2009/10 financial year. And I'm sure they were planning to tell you before they did it.

We do have details of the scheme, so there's probably nothing to worry about over the free and open flow of information. That's what Thomas Beagle's experience tells you. A few quick Official Information Act requests and the DIA is perfectly happy to tell you just what it's up to. Sort of.

Thomas's hard work hasn't just provided the information. He's also written it up into two FAQs, general and technical. They provide a clear, easy summary of the information and I highly recommend them for anyone interested in what's about to happen to your internet connection.

It is, I have to say, a very good scheme as these schemes go. It blocks only child pornography sites, and it blocks at the level of individual pages and images rather than whole sites. Signing up to the scheme is entirely voluntary for ISPs. Sites are viewed by actual people, and for a site to be added to the list, three people have to agree that it fits the criteria. Each site is reviewed monthly to see if it should still be on the list. If you attempt to access a blocked site, you will be told that the site is blocked – unlike the similar system in place in Britain. Because traffic is only referred to the DIA system if the requested site is on the list, the speed of most traffic should be entirely unaffected.
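The two-stage routing described here (pass-through for unlisted sites, page-level checks only for listed ones) can be sketched in a few lines. This is a hypothetical illustration of the logic, not the DIA's actual system; the host name and list format are invented for the example.

```python
# Sketch of a two-stage filter: requests for hosts not on the blacklist
# are never diverted to the filtering server, which is why ordinary
# traffic pays no speed penalty. Only listed hosts get the page-level check.

BLACKLIST = {
    # Hypothetical entry: a listed host and the specific pages blocked on it.
    "blocked.example.org": {"/images/1234.jpg"},
}

def route(host: str, path: str) -> str:
    """Decide how to handle a request under the scheme described above."""
    pages = BLACKLIST.get(host)
    if pages is None:
        return "direct"            # unlisted host: never inspected
    if path in pages:
        return "blocked-notice"    # user is told the page is blocked
    return "direct"                # listed host, but this page is allowed
```

Note the page-level granularity: a listed host can still serve its unlisted pages, unlike whole-site blocking.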

That sounds so much better than the Australian scheme, right? No wonder there's been so much less fuss.

It is easy to stop people fussing about things if you don't tell them.

So, given everything I've said, why am I still not happy about this? Partly, of course, because I'm a whiny bitch. But also because I'm a consistent whiny bitch.

I'm not the only one. Thomas has concerns. Mauricio Freitas has concerns. The staff at pretty much every café in a 2km radius of our house have concerns about the way my partner and I keep going for coffee and discussing child pornography.

My first concern is that nobody told anybody about this. This letter from TelstraClear certainly wasn't widely distributed. It also contains a rather interesting statement:

Quote:

TelstraClear will not be keeping records of any users who attempt to access these sites. This is not an intelligence gathering or covert measure. It is a simple filtering process to make the internet safer for all.

This is absolutely true. The ISP doesn't collect the information. The DIA does. They log your IP number if you try to access a blocked site, for whatever reason.

No laws have needed to be passed. There has been no public debate. The decision to implement filtering came from within the DIA. As far as I am aware, the ISPs which took part in the trial chose not to tell their users that they were doing so.

My second concern is the nature of 'voluntary'. In Britain, the filtering scheme is enforced in this exact way: it's voluntary for any ISP to sign up or not. And every British ISP signed up. Here, the ISPs that took part in the trial, and the ones that have indicated interest in picking up the filtering scheme (Telecom and Vodafone), represent 94% of the New Zealand market. Once this is up and running, how many people will be able to access unfiltered internet if they want to?

Then there's 'drift'. I know this is a dubious 'slippery slope' argument, but if you look at the legislation that governs censorship in New Zealand, nowhere is child pornography separated from other objectionable material. Customs already acts in cases of possession of bestiality material – why not filter for that as well? It's clearly objectionable. And if you're doing that, then why not the worst, most violent pornography as well, the kind covered by Britain's extreme pornography law? In fact, why not everything already defined as objectionable?

Quote:

All 'objectionable' material is banned. In deciding whether a publication is 'objectionable', or should instead be given an 'unrestricted' or 'restricted' classification, consideration is given to the extent, degree and manner in which the publication describes, depicts, or deals with:
• acts of torture, the infliction of serious physical harm or acts of significant cruelty
• sexual violence or sexual coercion, or violence or coercion in association with sexual conduct
• sexual or physical conduct of a degrading or dehumanising or demeaning nature
• sexual conduct with or by children, or young persons, or both
• physical conduct in which sexual satisfaction is derived from inflicting or suffering cruelty or pain
• exploits the nudity of children, young persons, or both
• degrades or dehumanises or demeans any person
• promotes or encourages criminal acts or acts of terrorism
• represents that members of any particular class of the public are inherently inferior to other members of the public by reason of any characteristic of members of that class being a characteristic that is a prohibited ground of discrimination specified in the Human Rights Act 1993.

It's their job, just as much as banning child pornography is. Many other lists of banned URLs, initially set up to combat child pornography, now contain much other material, some of it not sexual at all, some of it not violent. Drift happens. And if nobody told you they were filtering your connection in the first place, how likely are you to be told if extra material is added to the list?

The secret list. Thomas has, very ballsily, attempted to OIA the blacklist. The DIA says the blacklist is around 7000 URLs: for context, the Australian list is around 1400. This could easily be explained by the New Zealand list operating at a greater level of detail, so one site could contribute dozens of URLs to the list, where the Australian list would simply block the whole site. But we don't know that, and we're not allowed to know. The DIA refused to release the list. Thomas has complained to the Ombudsman.

It should also be noted that the DIA is required to make public its list of decisions in regard to other media. If it bans a book or a DVD, you're allowed to know about it. If it bans a website, you're not.

There's a good reason for this, of course. If only some ISPs are using the filter, then if you publish the list, people on other ISPs would be able to use the list to find child pornography. Assuming that people interested in looking at that kind of material don't already know where to find it, or aren't trading it between individuals over peer to peer networks.

The child pornography the DIA is blocking also includes material such as drawings or fiction, where no children were harmed in the production.

Most people won't care, frankly, if their internet connections are filtered. And a lot of people will heartily approve – after all, it's child pornography. The filter is 'making the internet safer for all'. But you should at least know.

Most Canadians support the idea of Internet traffic management as long as all users are treated fairly, a new poll suggests.

The Canadian Press Harris-Decima poll found only about one in five of those surveyed had heard of Internet traffic management or "traffic shaping," a contentious issue now before the federal regulator.

Internet service providers employ the practice, which slows down service to some users, to manage and prioritize online traffic during high-volume periods.

Telecom companies are appearing before the Canadian Radio-television and Telecommunications Commission this week over the question of guidelines for Internet traffic management.

Critics of the practice are pressing for so-called "net neutrality" so that the big service providers are prevented from treating some customers differently.

Sixty per cent of survey respondents said they found the practice reasonable as long as customers are treated fairly, while 22 per cent said Internet management is unreasonable regardless.

"Canadians like high-speed Internet access, and the speed of service provided by their Internet service providers is seen as satisfying their needs," said the survey.

Eighty per cent of households have Internet access at home, 73 per cent of them high-speed, the survey suggests. Eighty-five per cent of survey respondents said the speed of their home service is adequate.

Most - 54 per cent - said they did not know whether traffic management affects them personally. Just 15 per cent said they are affected by the practice.

"As long as all customers are treated fairly in the way they are affected, most believe that traffic shaping is a reasonable approach for ISPs (Internet service providers) to take," said the survey.

Telecom companies identify peer-to-peer file-sharing - such as uploading and downloading of movies - as the main problem they're trying to solve through traffic management.

Rogers, for one, uses complex technology to analyze what kinds of communications users are engaged in - sharing a Hollywood movie versus sending email, for example - and then "throttles" or slows down certain activities so the rest of its network moves faster.

The company compares peer-to-peer file-sharing to a car that parks in one lane of a busy highway at all times of the day or night, clogging the roadways for everyone unless someone takes action.

It sounds like something an overbearing customs officer would do with your airport bag. But deep packet inspection, or DPI - the practice of examining Internet transmissions to figure out what kind of content is being sent - is a hot-button issue in the online world.

Activists for a more open Internet say DPI limits freedom and innovation and threatens privacy. Big Internet service providers (ISPs) call it a reasonable way to keep costs and congestion down on their networks. Representatives from BCE Inc.'s Bell Canada unit and Rogers Communications Inc. will testify about the practice today at hearings of the Canadian Radio-television and Telecommunications Commission.

The controversy is about how and why Internet traffic in Canada is managed and controlled. If the CRTC chooses to get involved, it may swing that control away from the providers, which could signal that the regulator wants to have greater influence in the direction of the Internet than it has taken so far.

The key issue is peer-to-peer file sharing, a method of transmitting large files over the Internet. The format has attracted the ire of music, television, and movie industry associations, who allege that peer-to-peer makes it easy to share illegally obtained content. But ISPs worry too: because these transmissions can run all day and night, they say that a small number of users hog a disproportionate share of the bandwidth.

So while the ISPs continue to build network capacity, creating more room for transmission, they also use DPI to deal with Internet-clogging file-sharing.

Companies like Bell, Rogers and Shaw Communications Inc. employ DPI through the use of computer technology that makes an educated guess whether the information being sent is an e-mail, a picture, or a large video or software file being sent via a peer-to-peer application. They can then slow down the latter to avoid too much strain on their networks, or accord them a lower priority.
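The "educated guess" these companies describe can be sketched as signature matching on the first bytes of a flow. This is a toy illustration, not any ISP's actual system: real DPI appliances use far richer heuristics, and the priority scheme here is an invented simplification (the BitTorrent handshake prefix, however, is the real one).

```python
# Toy signature-based traffic classifier in the spirit of DPI as described
# above: inspect the start of a flow's payload, guess the application,
# and deprioritize peer-to-peer transfers during congestion.

SIGNATURES = {
    b"\x13BitTorrent protocol": "peer-to-peer",  # real BitTorrent handshake prefix
    b"GET ": "web",                              # start of an HTTP request
    b"EHLO": "email",                            # start of an SMTP session
}

def classify(payload: bytes) -> str:
    """Guess the application from a flow's opening bytes."""
    for signature, label in SIGNATURES.items():
        if payload.startswith(signature):
            return label
    return "unknown"  # e.g. encrypted traffic defeats simple signatures

def priority(payload: bytes) -> int:
    """Lower number = served later when the network is congested."""
    return 0 if classify(payload) == "peer-to-peer" else 1
```

The `"unknown"` branch also illustrates the limitation raised later in the article: once file-sharers encrypt their transmissions, the opening bytes no longer match any signature.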

"We can't spend our way out of the peer-to-peer problem," said Ken Englehart, vice-president of regulatory affairs for Rogers Communications. "It can tie up a big part of the highway."

But activists say that the problems with peer-to-peer are exaggerated, and they call DPI a sledgehammer that isn't necessary to deal with a very limited problem.

Worrying about how much content is being transmitted overall is the wrong focus, activists say. Rather, "bursts of Internet traffic at specific times are what matter," said David Reed, a fellow at Hewlett-Packard Laboratories who testified this week on behalf of the Campaign for Democratic Media.

Those bursts don't necessarily involve peer-to-peer traffic. Last week's Michael Jackson memorial service was streamed by millions online, but most of that streaming used other applications, not peer-to-peer.

By using DPI, some say, ISPs threaten innovation. Technologies thrive on a sufficient number of users, but if DPI slows BitTorrent (a popular file-sharing application) and similar applications, developers may be less inclined to build software that relies on them.

DPI is intrusive because it is like the postal service "looking inside the envelope" that someone's mailed, Prof. Reed said. It can also make errors, wrongly labelling a transmission as peer-to-peer when it's really something else.

But the ISPs say it's reliable. "We find it highly accurate, and continuing to get better," said Matt Stein, vice-president of network services at Primus, which uses DPI to slow down file-sharing whenever there's too much traffic on the network. And they point to their policy to discard any information about the transmission after it's been analyzed.

Peer-to-peer technology is not just used to share pirated materials - the phone service Skype functions on similar principles - and none of the Canadian ISPs say they use DPI to determine whether content is legal or copyrighted.

But one point which is less debatable is that DPI can't catch everything. More users of file-sharing are starting to encrypt their transmissions, making it harder for ISPs to determine whether the transmissions are the Internet-clogging peer-to-peer ones.

Much of the fear on both sides comes from a high-profile U.S. case. In August, 2008, the Federal Communications Commission ordered Internet provider Comcast to end a "discriminatory" practice of actively blocking or slowing transmissions done through BitTorrent.

In Canada, the CRTC could use its power under the Telecommunications Act to regulate, through principles or very precise instructions. The Act says ISPs cannot "control the content or influence the meaning or purpose of their transmissions" or "unjustly discriminate toward any person." But the ISPs say the broader threats to which the activists allude - suppressing innovation, threatening privacy - are alarmist and exaggerated. And they are fighting the prospect of further regulation.

"Our customers would experience more delays and more inconsistent service, and possibly higher prices," if the CRTC were to regulate, said Mr. Englehart of Rogers.

Who peeks, and when

Deep Packet Inspection practices of some major Internet providers:

Bell, EastLink: Implement DPI during peak periods and slow down peer-to-peer applications during those periods (for Bell, this is between 4:30 p.m. and 2 a.m.)

Rogers, Shaw, Cogeco: Implement DPI at all times for 'upstream' communications only (uploads from computer to the Internet), and slow down peer-to-peer applications.

Telus, MTS Allstream, Vidéotron, SaskTel: Don't use DPI.

Primus: Implements DPI at all times, giving lowest priority to peer-to-peer traffic whenever there is congestion on the network.

These P2P networks let you share files of any kind with your trusted friends only, eliminating concerns over virus exposure that have discouraged many people from using public file-sharing networks like Kazaa.
Jackson West

Stephane Herry says that he founded his private file-sharing network GigaTribe out of frustration at not being able to share files with his friends on Kazaa. Every time he searched for a file that he knew a friend had uploaded, he saw only similar files uploaded by strangers.

Why not, Herry thought, create a peer-to-peer (P2P) application that permitted only trusted sources to share files? Such a network would be far more secure, because you’d be sharing files exclusively with people you know and trust--not with complete strangers, some of whom may wittingly or unwittingly be spreading viruses.

Herry’s idea is proving to be popular. Some of the biggest names in public peer-to-peer file sharing now offer private alternatives. In its latest release, venerable file-sharing client LimeWire now allows users to share files privately with contacts that it pulls from Google or LiveJournal contact lists. Azureus Vuze, a popular BitTorrent client, added a FriendBoost feature to speed torrent downloads by sharing them within a group of trusted users.

In the past few years, private file sharing has evolved, steadily improving in speed, security, and functionality. Depending on what you're looking for, you can probably find a software product or Web app that’s perfectly suited to help you and your friends (or coworkers) share anything from spreadsheets to home movies legally, safely, and privately.

We took a look at four applications that promise secure, efficient file sharing among private groups: QNext, GigaTribe, 2Peer, and LogMeIn's Hamachi.

QNext

The Explorer view in QNext lets you set up groups of shared files and folders, as well as permissions for file access.

File sharing is just one of the features offered by QNext. It's primarily designed to serve as an integrated communications suite, with IM, voice, and video-chat components. But it also allows you to share files securely--with no size restrictions--and it has special photo and music capabilities as well. Finally, QNext even lets you gain remote access to your computer through a standard Web browser.

Installation and setup are painless. You simply download the software (QNext is available from PC World's Downloads library), install it, and create an account--and you can begin adding IM accounts and creating folders of files that you want to share. Network configuration and input-device detection--for hardware such as microphones and cameras--are automatic. To add friends, you enter your log-in data for popular instant messaging systems like AIM and Google Talk, and then ask your friends to download, install, and register for QNext.

Once you have one or more friends enrolled in your list of QNext contacts, you can set up shared folders through “zones.” Click File, Share Content to open the QNext explorer. Then click Share Folders and Files and drag and drop the data you want to share. You can set up secure sharing by adding only QNext contacts, or you can make the files publicly available to anyone with a Web browser by selecting 'Broadcast to Web browsers'.

The interface of the application opens with a vertical list of contacts from the IM accounts that you added during initial setup. You gain access to more features, options, and settings by clicking the blue monitor icon for the Explorer. In the Explorer you set up groups of shared files and folders, as well as permissions for access--one folder could be public, another could be for one specific user. The Explorer is also where you manage other settings, including chat, video, and audio. From there, you can set up shared files and folders, and browse and search data that others have shared with you.

One particularly nice aspect of QNext is that other users needn't have the application installed in order to receive messages, shared files, or photos, or even to listen to music streamed from your shared library. QNext's servers make much of your content available publicly via browsers, if you wish, so you can simply send a URL over IM or e-mail. If you want the transfers to be private and secure, however, both parties must have QNext installed.

You'll also need to have QNext turned on and running if you or your contacts need to access the data or use the machine via remote access. This is great if you have a machine at home or at the office that is online around the clock anyway. If you use a laptop, turning off your machine, letting it lapse into sleep or standby mode, or losing your Internet connection will cut off anyone who is connected to a download or stream from one of your music playlists.

Another potential bottleneck is bandwidth. Contacts can access files and streams only as fast as your machine can upload--and since most home users have limited upstream bandwidth, simultaneously downloading or streaming more than a few files from your machine will quickly push it to the limit.

QNext is a free download available for Windows, Mac, and Linux operating systems. Versions for the iPhone, the iPod Touch, and Google Android-powered smartphones are currently in beta.

GigaTribe

In GigaTribe, once you've set up some files to share, you can chat with other users directly through the program.

With a familiar and friendly interface, GigaTribe targets casual computer users who want to share media collections with friends. The download, installation, and account creation process is straightforward, with no router or firewall configuration necessary. You can invite friends to download, install, and register for GigaTribe through e-mail or via social networks such as Facebook, LinkedIn, and Flickr.

You can download GigaTribe from PC World's Downloads library. To share files with it, simply start the program, click the Share button, and select a folder on your computer. GigaTribe affords you plenty of control over which of your friends can access your files. All files are encrypted, and the program lets you set access to specific groups, permit contacts to upload or download files, and even password-protect shared folders.

Once you've set up some files to share, you can chat with other users directly through the program. If a user logs off while you're downloading a file, the program will check for another copy of the file among users still online, or it will pause the download and then resume it when the original user comes back online.

The free download includes GigaTribe’s EasyConnect feature, which uses GigaTribe’s servers as an intermediary to establish your connection, thus eliminating the need for a technical configuration on your side. That feature, however, is free for only the first 30 days; after that, file transfers may slow down unless you spend the time to configure your network manually. The full version, GigaTribe Ultimate, which includes EasyConnect, costs $5 a month or $30 a year; it offers improved download speeds (by sourcing downloads from multiple copies of the same file hosted by different users) and e-mail support.

GigaTribe is available only for Windows PCs, and the latest version is still in beta. Once the Windows version is finalized, the developers have promised to add a version for Mac users.

2Peer

2Peer shows you a Windows-like folder tree, and allows you to pick which folders you want to share with friends.

What makes 2Peer unique is that its interface works entirely within your browser--though additional software runs in the background, so an installation is required. Once that's completed and you've created a user account, however, starting up 2Peer will launch your default browser, from which you'll be able to manage your shared files and folders or connect with other users.

Like QNext, 2Peer lets you share files with users who don't have the program installed--in 2Peer's case, you can rely instead on e-mailed links or on 2PeerWeb, a fully browser-based version that supports downloads (but not uploads or shared files and folders). And as with QNext and GigaTribe, you'll have to have 2Peer up and running for others to access your data, and vice versa.

You can invite friends to participate by entering a list of e-mail addresses or by allowing 2Peer to scan your contacts in Yahoo Mail, Gmail, Windows Live Mail, AOL Mail, or Lycos Mail. 2Peer will send an e-mail invitation to those addresses, with instructions on how to download, install, and register.

It's easy to fine-tune the privacy controls for shared folders or individual files, with access levels ranging from public availability (anyone and everyone) to a specific 2Peer user. All data transferred between users or to 2Peer's servers is sent in encrypted form.

The service is completely free, and it works on Windows PCs, Macs, and iPhones (meaning that if you have an iPhone, you can download files from friends on the fly).

LogMeIn Hamachi

Hamachi lets you trade data by linking network drives through the operating system, as you would between machines on a local network.

LogMeIn Hamachi is not specifically designed for file sharing; however, it provides a quick, inexpensive, and relatively easy way to set up a virtual private network (VPN). This means that the connection between computers over the public Internet mimics that of a private network, such as a local area network.

Every user you want to connect must download and install Hamachi. Officially the program works with Mac and Linux systems as well as with Windows PCs, but only the PC version has a familiar graphical user interface; Mac and Linux users must install and configure the software through a command-line interface. All versions will tunnel through your operating system or router firewall automatically, so little or no configuration is required.

As befits its bare-bones nature, Hamachi doesn’t invite your friends to download and install the software or to register an account, so you’ll have to do that yourself (in person, via e-mail, or by other means). Once two machines are connected, you can trade data by linking network drives through the operating system, as you would between machines on a local network. You can also stream video or audio, use remote access software to control another system on the network, or play multiuser games as if you were at a LAN party (Hamachi is popular among gamers).

In mimicking a LAN, Hamachi lets you use familiar Windows network drive sharing and file and folder permissions.

Speed over the network is limited by the bandwidth available between parties. If your friend is on a modem, you'll only be able to connect at modem speeds. A central server operated by LogMeIn manages authentication; this can make creating and connecting to the VPN during peak usage periods slow or otherwise problematic.

The service is free for personal use and costs $5 per month per license for business use.
Which One Is Right for You?

Private networks have a number of benefits. Security is easier to manage, and you also get the peace of mind of sharing a song or a video with a friend rather than with the whole World Wide Web. While many of these applications could be used to do business by connecting far-flung teams so that they can collaborate, the apps represent a move toward creating private, secure sharing for personal pursuits.

For instance, QNext appears to be a good match for IM junkies looking for a communications platform that offers a more reliable and secure way to share files than existing IM tools can manage, without the size limits and with faster transfers.

GigaTribe and 2Peer are ideal for heavier file sharers who may already have networks of friends with whom they trade media libraries. GigaTribe has the slicker interface, but 2Peer sets itself apart by offering iPhone access. Hamachi is a general-purpose VPN that supports all sorts of private, secure connectivity, including (but not limited to) file sharing; it is suited to more advanced users.
Know What You're Getting Into

People continue to create and collect more and more digital media. Meanwhile, everything from lawsuits against individual file sharers to embarrassing incidents in which the public stumbles across a private moment shared online is increasing users' awareness that publishing data to everyone, everywhere is not always a great idea. These tools make it much easier to share the content you love with people you know.

Security note: Though none of these products came with any malware that I could detect, many of them do circumvent firewall protections in order to speed up connections or ease installation, and this poses a risk to your system or network. In fact, you incur a certain amount of risk (of viruses, malware, and the like) every time you share access to your computer online. Take care to protect important data by backing it up and encrypting it locally; only connect to users whom you know and trust; and never download and install applications through peer-to-peer networks. http://www.pcworld.com/article/16801..._sharing.htmlv

Hid.im Converts Torrents into PNG Images
Ernesto

Hid.im is a new web-based service that allows users to hide .torrent files inside PNG images. This means that users can easily upload hidden torrent files to their favorite image hosting service and forums, or use it as an avatar on social networking sites without being censored.

Are you one of those people who has always wanted to hide a torrent inside an image? Wait no longer: with Hid.im it takes just one click to convert a torrent into an image file, with the option to decode it later on.

We have to admit that the usefulness of the service escaped us when we first discovered the project, so we contacted Michael Nutt, one of the people running the project, to find out what it's all about.

“It is an attempt to make torrents more resilient,” Michael told TorrentFreak. “The difference is that you no longer need an indexing site to host your torrent file. Many forums will allow uploading images but not other types of files.”

Hiding a torrent file inside an image is easy enough. Just select a torrent file stored on your local hard drive and Hid.im will take care of the rest. The only limit to the service is that the torrent file cannot exceed 250KB.

Once the torrent is converted you can easily share it via image hosting services or social networking sites that don’t allow the uploading of .torrent files.

People on the receiving end can decode the images and get the original .torrent file through a Firefox extension or bookmarklet. The code is entirely open source and Michael Nutt told us that they are hoping for people to contribute to it by creating additional decoders supported by other browsers.

The idea of converting torrents into images is not entirely new. Stegtorrent is an application that has been around for a few years already and does something similar. However, unlike Stegtorrent Hid.im is web-based and doesn’t require users to install any software.
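To make the idea concrete: a sketch of the general technique (our own toy scheme, not Hid.im's actual open-source format) can be written with just the Python standard library. It packs the torrent's bytes into the pixels of a valid 8-bit grayscale PNG, with a length prefix so the padding added to fill the last row can be stripped on decode. The chunk names (IHDR, IDAT, IEND) come from the PNG specification; everything else here is illustrative.

```python
import struct
import zlib

def _chunk(tag: bytes, data: bytes) -> bytes:
    """Build one PNG chunk: length, tag, data, CRC over tag+data."""
    return (struct.pack(">I", len(data)) + tag + data
            + struct.pack(">I", zlib.crc32(tag + data) & 0xFFFFFFFF))

def encode_png(payload: bytes) -> bytes:
    """Pack arbitrary bytes into a grayscale PNG, one byte per pixel."""
    data = struct.pack(">I", len(payload)) + payload  # length prefix
    width = 64
    height = -(-len(data) // width)           # ceiling division
    data = data.ljust(width * height, b"\0")  # pad the final row
    # Each scanline starts with filter byte 0 (no filtering).
    raw = b"".join(b"\0" + data[y * width:(y + 1) * width]
                   for y in range(height))
    # Width, height, bit depth 8, color type 0 (grayscale), then
    # compression/filter/interlace methods, all zero.
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 0, 0, 0, 0)
    return (b"\x89PNG\r\n\x1a\n" + _chunk(b"IHDR", ihdr)
            + _chunk(b"IDAT", zlib.compress(raw)) + _chunk(b"IEND", b""))

def decode_png(png: bytes) -> bytes:
    """Recover the payload from a PNG produced by encode_png."""
    pos, idat, width = 8, b"", 0
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        tag, body = png[pos + 4:pos + 8], png[pos + 8:pos + 8 + length]
        if tag == b"IHDR":
            width = struct.unpack(">I", body[:4])[0]
        elif tag == b"IDAT":
            idat += body
        pos += length + 12  # length + tag + data + CRC
    raw = zlib.decompress(idat)
    # Drop the leading filter byte of each (width+1)-byte scanline.
    data = b"".join(raw[i + 1:i + 1 + width]
                    for i in range(0, len(raw), width + 1))
    n = struct.unpack(">I", data[:4])[0]
    return data[4:4 + n]
```

A round trip such as `decode_png(encode_png(torrent_bytes))` returns the original bytes, and the output opens in any image viewer as an ordinary (if noisy-looking) grayscale image.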

Teenage web surfers are turning their backs on old-fashioned methods of online piracy, including file-sharing and P2P sites, in favour of live streaming, according to new research.

A study by industry analysts Music Ally found that overall levels of file-sharing were falling, particularly amongst UK teenagers - down from 42 per cent of 14 to 18 year olds filesharing once a month in December 2007, to just 26 per cent in January 2009.
Overall, regular filesharing has gone down by about a quarter as well, from 22 per cent of those surveyed two years ago to just 14 per cent now.

This is despite the fact that the percentage of music fans who have ever fileshared has increased, rising from 28 per cent in December 2007 to 31 per cent in January 2009.

The move to streaming - including YouTube, MySpace and Spotify - is clear from the research, which shows that many teens (65 per cent) are streaming music regularly, i.e. more than once a month.

If that wasn't music enough to record labels' ears, the study also found that UK music fans are now more likely to purchase legitimate singles (19 per cent) through sites such as iTunes than to fileshare singles (17 per cent).

However, the percentage of fans sharing albums regularly (13 per cent) remains higher than those purchasing digital albums (10 per cent).

The research also shows that the comparative volume of pirated tracks to legally purchased tracks has halved since Music Ally's previous survey just over 12 months ago.

In December 2007 the ratio of tracks obtained from file-sharing compared to tracks obtained as legal purchases on an ongoing basis was 4:1. In January 2009 the ratio had narrowed to just 2:1.
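Note that a halved ratio is not a halved share: as a quick back-of-the-envelope check (our own arithmetic, not the survey's) shows, moving from 4:1 to 2:1 means file-shared tracks fell from 80 per cent to about 67 per cent of all tracks obtained.

```python
def pirated_share(pirated: int, purchased: int) -> float:
    """Fraction of all tracks obtained that came from file-sharing,
    given a pirated-to-purchased ratio."""
    return pirated / (pirated + purchased)

print(round(pirated_share(4, 1), 2))  # Dec 2007, ratio 4:1 -> 0.8
print(round(pirated_share(2, 1), 2))  # Jan 2009, ratio 2:1 -> 0.67
```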

Tim Walker, chief executive of The Leading Question, which co-sponsored the survey, said: "Ultimately we believe that the best way to beat piracy is to create great new licensed services that are easier and more fun to use, whether that's an unlimited streaming service like Spotify or a service like the one recently announced by Virgin which aims to offer unlimited MP3 downloads as well as unlimited streams." http://www.revolutionmagazine.com/ne...P-filesharing/

U.K. Teen Piracy Down as Streaming Soars

The number of 14- to 18-year-olds in Britain who regularly file-share has dropped by a quarter as more listen to streaming music online
David Meyer

Illegal file-sharing in the UK has fallen dramatically, according to media and technology researchers at Music Ally.

The analyst firm published a study on Monday that showed the numbers of those who regularly file-shared had dropped by a quarter between December 2007 and January 2009. The trend was particularly pronounced among 14-18-year-olds—at the earlier date, 42 per cent were file-sharing at least once per month but at the latter date only 26 per cent were doing so.

At the same time, streaming music services appear to be taking off.

The researchers wrote: "The move to streaming—e.g. YouTube (GOOG), MySpace (NWS) and Spotify—is clear, with the research showing that many teens (65 per cent) are streaming music regularly (i.e. each month).

"Nearly twice as many 14-18-year-olds (31 per cent) listen to streamed music on their computer every day compared to music fans overall (18 per cent). More fans are regularly sharing burned CDs and Bluetoothing tracks to each other than file-sharing tracks."

Spotify is ad-funded, and is rapidly expanding its catalogue. The service is even name-checked in the Digital Britain report, along with Last FM (CBS), as showing "that where the system is failing to serve the needs of users, innovative business models will develop to fill the gap". Music Ally's figures appear to suggest that these new models are at least partially responsible for fighting piracy.

However, a move to streaming could have implications for the functioning of the internet. Larry Roberts, one of the inventors of packet-switching and the ARPANet, wrote in this month's IEEE Spectrum that the internet is broken ("I should know: I designed it") because traditional packet-based routing is not built for streaming services.

"Unlike email and static web pages, which can handle network hiccups, voice and video deteriorate under transmission delays as short as a few milliseconds," Roberts wrote. "And therein lies the problem with traditional IP packet routers: They can't guarantee that a YouTube clip will stream smoothly to a user's computer. They treat the video packets as loose data entities when they ought to treat them as flows."

Roberts argued that, while past overprovision by operators meant today's users were not yet seeing serious problems with streaming services, "things are already dire for many internet service providers and network operators".

"Keeping up with bandwidth demand has required huge outlays of cash to build an infrastructure that remains underutilised," he wrote. "To put it another way, we've thrown bandwidth at a problem that really requires a computing solution."

The answer, according to Roberts, is something called flow management, which he is developing at his start-up, Anagran. The company's "flow manager", the FR-1000, Roberts says, "can replace routers and DPI systems or may simply be added to existing networks".
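As a toy caricature only (not Anagran's design), the difference Roberts describes can be sketched in a few lines: instead of one FIFO treating every packet as a loose data entity, a flow-aware scheduler groups packets by flow and drains latency-sensitive flows, such as video, before bulk ones.

```python
from collections import defaultdict, deque

class ToyFlowManager:
    """Greatly simplified flow-aware scheduler: packets are grouped by
    flow id, and flows flagged latency-sensitive are served first."""

    def __init__(self):
        self.queues = defaultdict(deque)  # flow_id -> queued packets
        self.realtime = set()             # latency-sensitive flow ids

    def enqueue(self, flow_id, packet, realtime=False):
        if realtime:
            self.realtime.add(flow_id)
        self.queues[flow_id].append(packet)

    def dequeue(self):
        # Serve real-time flows first, then everything else.
        for group in (self.realtime, self.queues.keys() - self.realtime):
            for fid in list(group):
                if self.queues[fid]:
                    return self.queues[fid].popleft()
        return None  # nothing queued
```

With this scheme a video frame queued after a bulk P2P packet still goes out first, which is the behavior a per-packet FIFO cannot provide.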

Brand Asset Digital, the distributed-technologies company (P2P search marketing, P2P live streaming and P2P business intelligence), today announced that search demand across all major P2P protocols for Michael Jackson and related keyword searches has far surpassed 250 million queries worldwide, with an estimated 4+ billion impressions on those searches since the singer's passing on June 25th, 2009.

P2Panalytics' data suggests that in the two weeks since Michael's untimely passing, file-sharing demand for the late artist has been more than 100 times the reported 2-3+ million downloads of tracks and albums on iTunes and digital download stores, based on data from Nielsen SoundScan. There are over 500 million computers worldwide with P2P applications, and that number is growing.

"The explosion of demand and engagement across P2P for Michael Jackson is unprecedented. We've have been tracking the related demand of search queries and impressions as well as behavioral intelligence across all the major P2P protocols with P2Panalytics?", said Joey P. Brand Asset Digital Co-Founder, "Since June 25th the data has crushed any comparable major music or major film release we have ever seen across P2P. It is demand like this which continues to demonstrate why the P2P space is the next frontier of eyeballs for Search Marketing advertisers with technologies like P2Pwords' (SEO and SEM both) to capture and leverage searches, impressions and clicks the same way Google did with Adwords' on web search."

"Missy Hogan Brand Asset Digital's Head of Operations adds, "To see the daily demand go from a few hundred thousand queries a day to well over 20mm a day is more demand than any music, film, video-game, software file or search term we have ever encountered. We believe based on this growing trend that by the end of July Michael Jackson demand may break 1 Billion Search queries alone within a one month time frame making the estimated impression number across all P2P Search close to or well over 20 Billion Impressions."

"It points to the ubiquitous future of Distributed Computing," states former Chairman and CEO of Island DefJam and Brand Asset Digital Advisor Jim Caparro. "The number seems large at first but in reality with an artist as global as Michael was, if every person with a P2P application did simply one search that would be 500 million searches, if they did two searches, then the 1 Billion mark and scale makes sense especially over a 14 day period," Caparro worked closely with Michael early in his career at Sony. "He was an icon since the beginning and the demand across P2P Search today shows just how much his music and life has touched, still touches and greatly influences the global population more than any other single artist, there is no place or person on earth he has not touched with his music or life." http://www.mi2n.com/press.php3?press_nb=121309

MiniNova vs Them Verdict Put Back
P2PNet

The summer holidays, it seems, apply to everyone - including the wise (we assume) auld (we presume) heads designated to reach a decision in the case of Dutch indexing site MiniNova versus the KKK - Korporate Kopyright Kartels.

“Here’s a short update on the ongoing court trial,” says the MiniNova blog, going on:

“The judge has postponed the date of the verdict (originally planned July 15) to August 26, due to the summer holiday period.”

Not that the guys over at MiniNova — under attack from BREIN, Holland’s version of the RIAA and MPAA combined — are taking things lying down, threatening to sue a parliamentary working party examining Holland’s copyright laws.

And, “Besides these factual errors, the report also implies that all torrents on Mininova refer to copyrighted material, that Mininova hosts the content itself, and that Mininova profits from distributing these torrents,” said MiniNova.

The RIAA's lawsuits are ineffectual and have become a PR disaster for the music industry.
Robert L. Mitchell

It is hard to understand what possible good can come from the Recording Industry Association of America's continuing lawsuit against Jammie Thomas-Rasset. Earlier this year, the single mother was hit with a $1.92 million judgment for sharing 1,072 copyrighted songs on a home computer running the Kazaa file-sharing software. She continues to appeal.

With its announcement last December that the RIAA was abandoning mass lawsuits against individuals who may have illegally shared copyrighted songs online, the recording industry seemed to acknowledge that the campaign has severely damaged its public image while having little if any deterrent effect on music piracy - the original goal of the program.

Despite all that, the RIAA's lawyers just can't seem to swallow their pride and let go of this troublesome case, which has dragged on since 2007. The RIAA is unlikely ever to collect such a large sum from Thomas-Rasset. Suing a single mom just worsens what has become an ongoing PR disaster for the industry. And the effort is unlikely to have any deterrent effect, especially since the RIAA has already announced that it is no longer pursuing mass lawsuits. (The RIAA did say that it would still go after major offenders or those who ignore repeated warnings to stop, and that it would continue to prosecute pending cases.)

Stephen Fry has launched a ferocious attack on the music industry's approach to file sharing.

Speaking at the iTunes Live event last night, Fry claimed that in seeking to disconnect file sharers from the internet "the film business, the television business, the music business - is doing the wrong thing".

And Fry was just warming up. He claimed that attacking individual file sharers such as Jammie Thomas, the woman hit with a record $1.92 million fine for downloading 24 songs, "is the stupidest thing the recording industry can do."

He claimed the real targets were those downloading on "an industrial scale" and looking to make a profit on their activity. He then ripped into "those preposterous" advertisements on DVDs warning that illegally downloading a movie is the same as stealing a handbag.

"[Is the music industry] so blind... as to think that someone who bit-torrents an episode of 24 is the same as someone who steals somebody's handbag," he asked.

He finished off by admitting he'd illegally downloaded episodes of House, the series featuring his former comedy partner Hugh Laurie, because he hadn't been able to buy it legally. However, he claimed he had subsequently paid for the series.

Intriguingly, after the event Fry admitted on Twitter that he hadn't planned what he was going to say and even seemed surprised by his comments.

After Global Gaming Factory (GGF) announced its intention to buy The Pirate Bay, the public was left wondering what the site’s future would look like. Today it was confirmed that sharing on the new site will come with a cost, as the new owners plan to charge the users of the site a monthly fee.

Thus far the plans revealed by GGF concerning the future of the site and tracker have been rather vague and uncertain. However, today the freshly appointed Wayne Rosso - who has previous experience with failing P2P services - came out with a few crucial additional details on the site’s future business model.

For years The Pirate Bay’s users have been able to share files without censorship or charges, but this is all about to change. Rosso said that under the new management, the 3.7 million Pirate Bay users (or whatever userbase remains) will have to pay a monthly fee to access the site.

The money collected from user subscriptions and advertising revenue will then be used to pay off the copyright holders. The exact monthly fee is yet to be decided, but Rosso did confirm that the more files people share, the lower it will be.

“The more of your computer resources you contribute to the network, the less you pay down to zero,” Rosso told Cnet. “The user is in control.”

In addition, GGF hopes to cut deals with ISPs. “We hope to introduce a new BitTorrent technology that will optimize ISP traffic,” Rosso said. “We can save ISPs up to 80 percent of their resources. Half of the Internet traffic is file sharing and half of that traffic is Pirate Bay.”

Rosso conveniently fails to mention that a Pirate Bay where users have to pay for access will not be generating much traffic at all, so this part of GGF’s business model has to be rethought. BitTorrent does not depend on The Pirate Bay, and new trackers have already lined up to take over its job.

Details about the actual acquisition of The Pirate Bay are still scarce. Pirate Bay’s Peter Sunde told TorrentFreak that GGF will get the domain names for thepiratebay (under all the tlds they exist) and a copy of the code and the database. If all goes well the transfer of ownership will take place at the end of July.

The music industry will attempt to seize money paid to acquire the Pirate Bay, according to a high-level music industry source and a spokesman for the International Federation of the Phonographic Industry (IFPI), the trade group representing the music industry worldwide.

Global Gaming Factory, a Swedish software company, made big news two weeks ago by announcing that it would acquire the Pirate Bay, the popular outlaw file-sharing site, for $7.8 million. Since then the company has been touting a new business model and even hiring executives, such as Wayne Rosso, the former Grokster president, to legally obtain content from film and music industries.

What remains to be seen is how that sale might be affected by attempts by the music industry to collect the $3.6 million damages that a court in Sweden awarded it in April. The court found the four operators of the Pirate Bay--Fredrik Neij, Gottfrid Svartholm Warg, Peter Sunde Kolmisoppi, and Carl Lundström--guilty of copyright violations and sentenced each to a year in jail. The court also ordered them to pay 30 million Swedish kronor ($3.6 million).

Alex Jacob, a spokesman for the IFPI, said that the group has always intended to collect the damages award, but now, should the sale go through, music execs know that the original Pirate Bay operators have access to the money.

Whether these attempts to seize part of the proceeds could hold up a sale remains unclear. The first thing to remember is that the sale isn't yet done.

According to a press release, Global Gaming's offer is to pay half of the $7.8 million in cash and the other half in company stock. To finance the deal, Global Gaming must issue new shares, and to do that it needs the blessing of investors and its board of directors. The acquisition isn't expected to be finalized before August, the company said.

On the other side, the Pirate Bay's founders have said that they haven't owned the company for years.

"We never had any interest in earning money from the Pirate Bay," Peter Sunde told Dagens Nyheter, a Swedish newspaper. "We haven't owned TPB since the search and seizure in 2006... Those who will get the money, friends in a foreign company, have agreed as a condition to put the money in a foundation for future internet projects."

The legal adviser for Global Gaming has said that the Pirate Bay is owned by a company in the Seychelles called Reservella.

Jacob, from the IFPI, says it makes no difference who owns the Pirate Bay. He said: "The judge found the four operators guilty and ordered them to pay the damages."

But Isabel Toledo, creator of Michelle Obama's inauguration outfit, fears the DPPA could widen the rift between the fashion and apparel industries - and leave consumers with fewer options.

Limited Copyright Protection

The DPPA, which is pending in the U.S. Senate, aims to protect independent designers from companies that copy their work. It would require U.S. designers to register their designs for a fee in exchange for limited copyright protection.

In Europe, registration is not mandatory, and copyright protection can last up to 25 years.

The two most prominent U.S. fashion associations have come down on opposite sides of the bill, creating a split in the industry.

The invitation-only Council of Fashion Designers of America strongly supports the DPPA. It says the worldwide fashion counterfeit market may exceed $200 billion. Many of its members are designers whose work is vulnerable to copyists.

But the larger American Apparel and Footwear Association, which represents some of the retailers the law would impact, opposes it.

Cornejo said the law would encourage collaboration between the two sides of the clothing market. Under the DPPA, mass-market retailers would have to hire designers to consult, instead of copying, she said.

But Toledo disagrees.

"They said that manufacturers would be forced to hire us, the designers. Many of the interns I've had happen to work now for JC Penney, or the Gap -- they are designers!" she said. "What are you saying, it's a hierarchy? We're better?"

Toledo worries that the DPPA would give high fashion a monopoly on trends, making good design more expensive and reducing consumer choice.

"You're now saying that the top (designers) can own the top and the bottom levels of the market," she said.

Toledo also fears the law could hurt the independent designers it was written to protect, by making them risk expensive copyright lawsuits.

"Half these young designers can hardly pay their sewers. So you're going to take that money and go to court?" she asked.

Cornejo argues that without the law, copying will continue, which will hurt designers' businesses.

Not so, said Ruben Toledo, president of Isabel Toledo's label.

"The American fashion system is all levels of value," he said. "A woman knows when she's buying champagne and when she's buying soda-pop. It's two different markets. But why shouldn't a woman have the right to drink Coca-Cola when she feels like it and champagne when she wants to? That's the American way."

Pandora now pushing radio to pay for music, too

US radio stations don't pay performers and producers for the music they play, but the recording industry hopes to change that with a new performance rights bill in Congress. Webcaster Pandora has jumped into the fray on the side of the artists and labels, asking why radio gets a free ride when Pandora does not.
Nate Anderson

The campaign to get radio stations to pay up for the music they play marches on. With revenues from recorded music sales declining, rightsholders have turned their eyes in recent years to commercial US radio, which currently pays songwriters (but not performers or record labels) for the tunes that power their business.

The record labels now have Pandora on their side. The influential webcaster just wrapped up its own music licensing negotiations with rightsholders last week as both sides at last agreed to a deal that each could live with. With its own future secure for the next few years, Pandora is now turning its attention to the public performance debate here in the US, saying that the issue is a simple matter of fairness: why should webcasters have to pay more for music than traditional radio does?

In an e-mail to Pandora supporters last week, founder Tim Westergren called the current system "fundamentally unfair both to Internet radio services like Pandora, which pay higher royalties than other forms of radio, and to musical artists, who receive no compensation at all when their music is played on AM/FM radio." He went on to ask readers to call House Speaker Nancy Pelosi's office to request her support on the Performance Rights Act that would force radio to start paying a performance royalty to rightsholders.

Radio, of course, continues to claim that it is, in some special way, a promoter of music and that it drives tremendous interest in artists. That interest, in turn, is supposed to translate into increased album sales, ticket sales, and publicity opportunities.

Why this effect would suddenly cease to apply once one starts streaming the music over the Internet or via satellite (even if the scale might currently be more limited) remains unclear; certainly, a powerful case for harmonization can be made, though the "fairness" argument could clearly go either way. Radio might start paying a performance right; on the other hand, perhaps webcasters and satellite radio companies should simply stop paying one, relying on the old argument about promotion.

The US is certainly an anomaly when it comes to radio stations—something the recording industry never tires of pointing out. As global music trade group IFPI noted in its most recent annual report, "Extraordinarily, it is in the US, the world's largest music market and a country that has traditionally championed intellectual property rights, that performers and producers have no rights to be paid when their music is broadcast over the radio. Other countries without broadcast rights are Rwanda, China, Iran, and North Korea."

Broadcasters, in turn, point out that most of the money paid by radio would go to "foreign-owned record labels"—a reference to EMI (UK), Universal (part of Vivendi, which is French), and Sony (Japanese). This isn't really a logical attack on the fairness issue as raised by Pandora and others, but it probably has some pull in Congress anyway.

A Los Angeles blogger who leaked new Guns N' Roses songs on the Internet before their official release on the band's first new album in 17 years was sentenced to two months of home confinement on Tuesday.

Kevin Cogill also received one year's probation and must appear in an anti-piracy commercial under the terms of his plea deal with federal prosecutors.

He pleaded guilty last December to a single misdemeanor count of violating federal copyright laws, and agreed to help authorities identify the original source of the leak.

Cogill posted nine tracks from the Guns N' Roses album "Chinese Democracy" onto the Web site antiquiet.com five months before the CD came out last November. The tracks were widely circulated, diminishing some of the anticipation surrounding the long-awaited album, which was a disappointing seller.

Cogill's public-service announcement for the Recording Industry Association of America, the trade group for the major U.S. music labels, is expected to air during the music industry's Grammy Awards on January 31.

Cogill had faced a maximum of one year in federal prison, a $100,000 fine and five years' probation. But U.S. Magistrate Judge Paul L. Abrams noted that there was no profit motive, that the tracks were posted on the blog for only a short period, and that Cogill's cooperation had proved useful.

A U.S. Dept. of Justice spokesman said the government was still investigating the original source of the tracks.

The founder of a street gang that administered beatings and made threats in its drive to control the punk rock music scene has been charged with extorting a Chicago performer, authorities said on Tuesday.

Elgin Nathan James, a self-proclaimed founding member of Boston-based FSU -- which stands for "Friends Stand United" -- was arrested on Monday by FBI agents at his Los Angeles home. The attempted extortion charge was then unsealed by the prosecutor's office in Chicago.

FSU boasted in videos dating to 2004 about beatings it administered to punk music fans and performers. The aim was to establish control at clubs and concert venues and drive "Nazi skinheads" out, according to prosecutors.

The victim in this case was a "popular recording artist from the Chicago area" who was not named. The victim and his friends were beaten and repeatedly threatened by FSU members while on tour in late 2005 and early 2006, prosecutors said.

Cooperating with the FBI, the victim tape-recorded James seeking to extort money from him in a telephone call and agents observed James accepting a $5,000 payoff at a club to stop the harassment. If convicted, James could face 20 years in prison.

Several paintings by actor Robert De Niro's late father were sold without the actor's permission as part of an art scam by a New York gallery, the Manhattan District Attorney's office said on Tuesday.

Art dealer Lawrence Salander, 59, was indicted on additional charges for stealing $5 million from several estates on Tuesday after he was arrested in March for orchestrating a sophisticated $88-million art investment scam that also duped former tennis champion John McEnroe and Bank of America.

Salander and other dealers at his New York gallery sold the works by Robert De Niro Sr., an abstract Expressionist painter who died of cancer in 1993 aged 71, and did not pay out the majority of the sales to his estate, according to the charges.

As a result of the scam, De Niro Sr.'s estate lost more than $1 million, the DA's office said.

Other victims named in the additional charges include the Lachaise Foundation, which consigned the works of French-American sculptor Gaston Lachaise, as well as the estate of Elie Nadelman, an American sculptor who died in 1946.

Robert De Niro has organized exhibitions of his father's works around the world and has said he keeps many of the paintings at home.

Tougher federal laws seem to be having blockbuster effects on shutting off the cameras of movie pirates in Canadian theatres.

Only a few years ago, Montreal was known as the illegal camcording and movie piracy capital of North America and fingers were also pointed at Calgary. Now reports from those cities show little, if any, activity on the movie theft front.

Two arrests in Montreal and one in Calgary are the results of a crackdown facilitated by the new laws, which were urged by Hollywood and even by the governor of California, former action-movie star Arnold Schwarzenegger, during a visit to Canada.

But skeptics question whether the problem was ever more than just Hollywood hype.

"Is it likely that a couple of arrests in Montreal and one in Calgary have had this huge change?" said Michael Geist, a University of Ottawa Internet law professor.

"I don't think so."

Industry numbers blamed Canada for between 20 and 70 per cent of global camcords and huffed that Montreal alone was responsible for up to a quarter of the world figure.

"The industry data was always so widely inconsistent to simply not be credible," Geist said.

The Canadian Motion Picture Distributors Association described the movie thieves as belligerent, brash and intimidating towards theatre staff, knowing that there were no laws to stop them.

Enter Bill C-59, a Criminal Code amendment introduced in June 2007 that made recording a movie without permission a crime punishable by two years in jail. Taping a film for future sale or rental carries a maximum five-year jail term.

Prior to legislation coming into effect, the distributors' association reported that at least 116 camcords were sourced back to Montreal theatres in 2007 alone. But since October 2007, not a single incident has been traced to Montreal.

"There have been no camcords sourced back to Calgary since the time of (the) arrest," said Steve Covey, the association's director and a former RCMP executive.

"We have seen a sharp decline of sourced camcords back to the Montreal area," Covey said.

Vince Guzzo, head of Quebec-based Guzzo Cinemas, said his chain had long been a target for illegal dubbers. He says the new law and increased security at the theatres have paid off, with his staff nabbing people bringing in camcorders or trying to tape movies.

"We've been good at weeding out the bad ones," Guzzo said.

"Guys who aren't necessarily criminals but thought it would be cool to do it are less likely to do it now and those who were crooks and were doing it have been caught."

Louis-Rene Hache pleaded guilty in February to illegally filming the romantic comedy "Dan in Real Life" at a Montreal theatre and was sentenced to 24 months probation. He must also complete 120 hours of community service.

In the Calgary case, Richard Lissaman was fined $1,500 in November 2008 for illegally recording the film "Sweeney Todd" the previous December. He was also banned from theatres for one year.

Another Montreal man, Geremi Adam, is due in court in the fall.

Adam, who operated under the Internet alias maVen, uploaded some of the highest quality pirated films. His handiwork drew the attention of both the FBI and RCMP.

RCMP Sgt. Noel St-Hilaire acknowledges that police feel more empowered with the new legislation.

"The number of complaints have dropped drastically," St-Hilaire said, adding that while there have been sporadic incidents, few have been prosecuted.

St-Hilaire mused that technology allowing a film to be transmitted all over the world in minutes may have fuelled the notion that there were dozens of people at work in Montreal.

"It can create an illusion that Montreal was the hot spot where it was really only a few individuals who were involved," St-Hilaire said.

"They might have been just a few individuals but for the industry they were causing quite a bit of damage."

Geist said U.S. interests likely pressured Canada to act on the copyright violations so that they could use it as an example for other jurisdictions.

"Many of the claims were exaggerated and we had this sort of almost hysteria that developed where the studios threatened holding back on some movies."

It's clear that films remain readily available on the web - but mainly through inside jobs at the studios themselves. This year, an unfinished copy of "X-Men Origins: Wolverine" appeared on the Internet a month before its release.

Geist said camcorded versions still occasionally pop up online for trade but "no one is keen on those anyways."

While he acknowledged some works were coming from Canada, Geist dismissed the idea that Canada could have been solely to blame as a source of the problem.

The original recordings of the first humans landing on the moon 40 years ago were erased and re-used, but newly restored copies of the original broadcast look even better, NASA officials said on Thursday.

NASA released the first glimpses of a complete digital make-over of the original landing footage that clarifies the blurry and grainy images of Neil Armstrong and Buzz Aldrin walking on the surface of the moon.

The full set of recordings, being cleaned up by Burbank, California-based Lowry Digital, will be released in September. The preview is available at www.nasa.gov.

NASA admitted in 2006 that no one could find the original video recordings of the July 20, 1969, landing.

Since then, Richard Nafzger, an engineer at NASA's Goddard Space Flight Center in Maryland, who oversaw television processing at the ground-tracking sites during the Apollo 11 mission, has been looking for them.

The good news is he found where they went. The bad news is they were part of a batch of 200,000 tapes that were degaussed -- magnetically erased -- and re-used to save money.

"The goal was live TV," Nafzger told a news conference.

"We should have had a historian running around saying 'I don't care if you are ever going to use them -- we are going to keep them'," he said.

They found good copies in the archives of CBS news and some recordings called kinescopes found in film vaults at Johnson Space Center.

Lowry, best known for restoring old Hollywood films, has been digitizing these along with some other bits and pieces to make a new rendering of the original landing.

Nafzger does not worry that using a Hollywood-based company might fuel the fire of conspiracy theorists who believe the entire lunar program that landed people on the moon six times between 1969 and 1972 was staged on a movie set or secret military base.

"This company is restoring historic video. It mattered not to me where the company was from," Nafzger said.

"The conspiracy theorists are going to believe what they are going to believe," added Lowry Digital Chief Operating Officer Mike Inchalik.

And there may be some unofficial copies of the original broadcast out there somewhere that were taken from a NASA video switching center in Sydney, Australia, the space agency said. Nafzger said someone else in Sydney made recordings too.

"These tapes are not in the system," Nafzger said. "We are certainly open to finding them."

And you thought the HBO hit TV series "Entourage" would never be streamed over the Internet - at least legally.

Comcast Corp. said Monday it will be streaming HBO and Cinemax shows, movies and other content online to 5,000 subscriber households in a national trial set to start in coming weeks. It is the first time the two premium movie channels will be offering their programs over the Internet to computers. Downloads to mobile devices may come in the future.

HBO and Cinemax will join TNT, TBS and Starz in Comcast's online video trial. If the technical test is successful, Comcast will roll out access coast-to-coast to its subscribers at no additional cost.

The trial is part of a joint effort with Time Warner Inc. to offer cable programming on the Internet as viewership increasingly moves outside of the living room. But programmers and pay-TV operators will provide access only behind a walled garden of subscribers.

Unveiled last month, the venture dubbed "TV Everywhere" by Time Warner and "On Demand Online" by Comcast began with TNT and TBS.

The cable channels will be available through Comcast.net and Fancast.com, Comcast's advertising-supported video aggregator site. About 750 hours of programming a month will initially be available, and that amount is expected to increase over time.

Critical to the trial is authentication of the viewer as a paying subscriber. Users will be asked to log on with a user name and password. If they are paying for the five cable networks, the system will authenticate them.

The trial includes only Comcast customers of both video and Internet services. In the future, deals will be made with other ISPs, even Comcast rivals such as phone companies.

The HBO and Cinemax trial will include full-length episodes of "Entourage" and "Curb Your Enthusiasm" as well as movies including "Kung Fu Panda" and "The Dark Knight."

Subscribers can watch the programs online right after they are shown on television. They also can access a library of older programming.

Blockbuster Inc on Tuesday announced an agreement that allows consumers to instantly view movies and video from its OnDemand service on Samsung's televisions and electronics devices.

The deal expands the reach of the company, best known for its brick-and-mortar movie rental stores, further into the market for digital distribution of video.

The service, due to launch in September or October in the United States, is similar to Blockbuster's existing pacts with TV maker Vizio and digital video recorder maker TiVo Inc, which was announced in March.

Under the pact with South Korea's Samsung Electronics Co Ltd, the world's top maker of memory chips and flat screen TVs, Blockbuster's OnDemand service will be integrated into new Samsung HD TVs, Home Theater Systems and Blu-ray players.

Financial terms of the deal were not disclosed, nor would Blockbuster give any indication of expected revenues from the agreement.

Kevin Lewis, senior vice president of digital entertainment at Blockbuster, said the deal has the potential to put some of the newest video titles -- available for around $2 to $4 each -- on millions of Samsung devices.

"We believe that just as we are the leading offline rental company, we should be the leading online rental company," Lewis said in an interview.

While Blockbuster is planting its brand in more and more video downloading locations, it still faces tough competition from cable and satellite TV operators, which also offer robust lineups of movies and shows via set-top boxes already in place in millions of homes.

In addition, many others enjoy watching films downloaded from Apple Inc's iTunes or from Napster-like file-sharing sites such as The Pirate Bay.

Consumers who already own certain 2009 Samsung Blu-ray players, home theater systems, LCD and Plasma HDTVs can access the service though a software upgrade. Samsung's agreement with Blockbuster also calls for some Samsung devices, such as Blu-ray players, to be sold in Blockbuster stores.

Everyone knows the best thing to film at home in 3D is porn. There's no point arguing -- it's a simple, unavoidable fact. But now it's more affordable than ever thanks to a £50 3D webcam called Minoru.

We've been playing with the cute little Minoru for a while, tolerating people from all over the CBSi offices ditching their work to 'ooh' and 'aah' at stereoscopic images of their hands.

How does it work?

The Minoru produces something called anaglyph images -- images which contain two superimposed pictures, one slightly offset atop the other to give the impression of depth and distance between objects being photographed. This results in a stereoscopic image which, when viewed through 3D glasses, gives the effect of being three-dimensional. And it records video in the same way, too.
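The Minoru's bundled software handles this compositing, but the red-cyan channel mixing described above is simple enough to sketch in a few lines of Python with NumPy. This is our own illustrative sketch (the function name and demo pixel values are invented for the example), not the camera's actual code:

```python
import numpy as np

def make_anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Combine left- and right-eye RGB frames into a red-cyan anaglyph.

    The red channel is taken from the left-eye image and the green and
    blue (cyan) channels from the right-eye image, so red/cyan glasses
    deliver each view to the correct eye.
    """
    if left.shape != right.shape:
        raise ValueError("left and right frames must have the same shape")
    anaglyph = np.empty_like(left)
    anaglyph[..., 0] = left[..., 0]   # red   <- left eye
    anaglyph[..., 1] = right[..., 1]  # green <- right eye
    anaglyph[..., 2] = right[..., 2]  # blue  <- right eye
    return anaglyph

# Tiny demo with synthetic 2x2 frames (8-bit RGB values).
left = np.full((2, 2, 3), [200, 10, 10], dtype=np.uint8)
right = np.full((2, 2, 3), [10, 150, 250], dtype=np.uint8)
out = make_anaglyph(left, right)
```

In practice the two frames would come from the webcam's two slightly offset lenses; the horizontal offset between them is what produces the sense of depth once each eye sees only its own view.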

But what makes this camera so fun isn't just its low cost, but its ease of use: just plug it in via USB, install the software, then hit one button to take a 3D photo, or a different one to record a 3D video. It can produce images up to a resolution of 800x600 pixels, and can function as a normal webcam if needed, with a built-in microphone for recording sound.

It's a really cute little camera, well-built and suited to being fixed to your computer's display. The image quality is decent, too, though don't expect rich colours -- you're viewing through red and blue lenses remember, so everything's got a heavily surreal colouration effect on top of it that's rather like watching a film about a drug trip, while on a drug trip. But for £50 we're not complaining -- this is more about fun than anything else.

Of course, anyone you're video-conferencing with in 3D will need to wear a pair of those blue-and-red glasses. If you've got a pair with you right now, you can enjoy the world's first stereoscopic tour of the CBSi offices here in London over the next few pages.

And don't miss the first 3D video we've ever published, which is embedded below. You can buy the Minoru webcam from Firebox now.

The aging duelist sits in his Upper East Side apartment and contemplates all that is past, the polemics and late-night arguments and denunciations in one magazine or another.

His life in the 1960s was a blur of darkened screening rooms, celluloid epiphanies and running back to his desk to type with an eye on his competitors. What will Pauline Kael say? Or that snapping crab of a stylist, John Simon?

And always there were the films and directors that stir his passions to this day. Michelangelo Antonioni’s “Avventura” is a “modern ‘Odyssey’ for an alert audience.” Stanley Kubrick? “His faults have been rationalized as virtues.”

“We were so gloriously contentious, everyone bitching at everyone,” said Andrew Sarris, 81, nattily attired in gray slacks and a blue sport jacket, his hair slicked back. “We all said some stupid things, but film seemed to matter so much.”

He peered up through his owlish eyes. “Urgency seemed unavoidable,” he said.

Mr. Sarris, who in June experienced a sort of slow-motion layoff at The New York Observer for which he had written reviews since 1989, is one of the last refugees of the heroic age of film criticism. From the 1950s to the early 1970s the movies of François Truffaut, Ingmar Bergman, Akira Kurosawa and Jean-Luc Godard broke like ocean swells upon the United States, followed in time by no less astonishing American films. A handful of critics — Mr. Sarris, Ms. Kael, Mr. Simon, Stanley Kauffmann and Manny Farber — argued that this was art worthy of sustained thought and argument.

They defined a cultural moment. (“Don’t Go to the Movies to Escape: The Movies Are Now High Art,” The New York Times proclaimed in a headline in 1969.) As moviegoers lined up at art houses like the New Yorker and the Thalia in Manhattan, arguments turned on the merits of the films — “Shoot the Piano Player,” “Dr. Strangelove,” “Psycho,” “Bonnie and Clyde” — and of the critics.

“This was a period when film truly spoke to the modern experience, and this wonderful handful of critics transmitted that to the broader culture,” said Morris Dickstein, who teaches English at the City University of New York Graduate Center.

Mr. Sarris contributed to this ferment twice over. He introduced to Americans and argued for the French auteur theory, which holds that a great director speaks through his films no less than a novelist speaks through his books. A brilliant actor might transcend a mediocre film, but only a director can offer the sustained coherence and sensibility that yields great art.

And he argued that Hollywood had produced auteurs, championing the distinctive voices of Orson Welles, John Ford and Sam Fuller, not to mention younger Turks.

Mr. Sarris once shared a tiny office with Martin Scorsese on 42nd Street; the critic typed while the director cast films for which he had not yet raised money.

“What Andrew did, especially for young people, was to make you aware that the American cinema, which you had been told was just a movie factory, had real artistic merit,” Mr. Scorsese said. “He led us on a treasure hunt.”

The quarrelsome critics mostly lived in genteel poverty, but their dialectical rumbles were delicious and widely advertised. One Sunday in 1971 The New York Times devoted acreage in the Arts & Leisure section to a mano-a-mano between Mr. Simon and Mr. Sarris. Mr. Simon’s pen came acid dipped, and his disdain for auteurism, which he believed devalued narrative, was fairly overwhelming. “Perversity is certainly the most saving grace of Sarris’s criticism,” he wrote, “the humor being mostly unintentional.”

To which Mr. Sarris later rejoined, “Simon is the greatest film critic of the 19th century.”

Mr. Sarris and Ms. Kael, who died in 2001, defined a more primal rivalry, to the extent that their followers came to be known as the Sarristes and the Paulettes, the Sharks and Jets of the sun-starved cinephile crowd.

Mr. Sarris, who is also a film professor at Columbia University and author of “The American Cinema,” was steeped in film culture, and his reviews read as if he had turned a movie over in his hands. He was fascinated by integrity of vision and mise-en-scène, the gap between what the screen shows and the audience feels. (He hosted a radio show on film for WBAI-FM, and could talk unscripted for an hour with nary a semicolon misplaced.) Ms. Kael was scarcely less learned, but being a film intellectual struck her as a drag. Writing in The New Yorker she sought a visceral engagement with film. Her book titles summed up her view: “I Lost It at the Movies” and “Kiss Kiss Bang Bang.”

She wielded style like a stiletto. “The auteur theory is an attempt by adult males to justify staying inside the small range of experience of their boyhood and adolescence, that period when masculinity looked so great and important,” she wrote of Mr. Sarris.

Ms. Kael’s devotees note that she seldom attacked him after that, even as he fumed. But that is like knocking a fellow flat then puzzling at his foul mood.

But these gunslingers had one another’s backs when shooting at establishment critics. And they often championed the same directors. “Pauline turned out to be a most dedicated auteurist,” noted J. Hoberman, now the senior film critic for The Village Voice. “She loved everything by De Palma and Scorsese.”

Mr. Sarris cut a curious figure at the congenitally contentious Village Voice of the early 1960s. He had passed a year in Paris, he said, drinking coffee with New Wave directors and later would edit an English-language edition of Cahiers du Cinéma. But back in New York he lived with his Greek monarchist mother in Queens and went to “Gone With the Wind” four dozen times, as besotted with Vivien Leigh on the 48th viewing as the first.

In his first review for The Voice in 1960, of “Psycho,” he threw down the gauntlet in service of a commercial director, Alfred Hitchcock. Mr. Sarris was characteristically assertive. “Hitchcock is the most daring avant-garde filmmaker in America today,” he wrote. “Besides making previous horror films look like variations of ‘Pollyanna,’ ‘Psycho’ is overlaid with a richly symbolic commentary on the modern world as a public swamp in which human feelings and passions are flushed down the drain.”

The Voice was an inverted universe, in which the mainstream was regarded with deep suspicion. Angry mail piled up: Who was this philistine? But Voice editors embraced Mr. Sarris as another controversialist.

Mr. Sarris guarded his reviewing territory with the glower of a medieval duke guarding his fief. When The Voice put Mr. Hoberman’s essay about the director Chantal Akerman on its cover, Mr. Sarris facetiously grumbled in print that he had taken “mainstream, white-bread assignments” while “Hoberman was freaking out on art-house acid.”

“He was pretty full of himself, although I’m not clear to what extent that was a real reflection of his character or just his manner,” said Robert Christgau, the longtime rock critic for The Voice, who left the paper in 2006.

Still, Mr. Sarris was more enthusiast than ideologue. He had no love of rock ’n’ roll but described “A Hard Day’s Night” as “the ‘Citizen Kane’ of jukebox musicals.” His willingness to revisit old judgments on “2001: A Space Odyssey,” Kent Jones wrote in Film Comment in 2005, resulted in “one of the most charming passages” in film criticism:

“I must report that I recently paid another visit to Stanley Kubrick’s ‘2001’ while under the influence of a smoked substance that I was assured by my contact was somewhat stronger and more authentic than oregano. (For myself, I must confess that I soar infinitely higher on vermouth cassis, but enough of this generation gap.) Anyway, I prepared to watch ‘2001’ under what I have always been assured were optimum conditions, and surprisingly (for me) I find myself reversing my original opinion. ‘2001’ is indeed a major work by a major artist.”

In 1989 Mr. Sarris left The Voice for The Observer, where he wrote reviews until June. The critical wars are long past but he is not in mourning.

“I was a solipsist and a narcissist and much too arrogant,” he said. “I have a lot more compassion now, but it took a long time.”

Are there favorites he thinks less of now? “I prefer to think of people I missed the boat on,” he said. “Truffaut talked me into rethinking Billy Wilder, and I finally apologized to Billy.”

When The Observer pleaded financial difficulties and took Mr. Sarris off staff last month, editors suggested he write periodic reviews. But, Mr. Sarris said, that relationship has now ended. For now he will write essays for Film Comment, although he noted he’s not as fluid as in the past.

He peered up, his brown eyes intent.

“There’s a part of me that looks beyond everything now,” he said. “I don’t approve of Woody Allen’s view of death. I acknowledge it, but I hope there’s more time, as there’s a lot of movies I’d like to see and think about.”

A Hollywood Blogger Feared by Executives
David Carr

Right now, there’s a good chance a Hollywood executive is leaning into a colleague’s office and quietly asking, “Did you see what Nikki just wrote?”

That would be Nikki Finke, a well-traveled newspaper reporter who has found her moment as a digital-age Walter Winchell.

In the three years since she started Deadline Hollywood Daily, a daily blog about the entertainment business, her combination of old-school skills — she is a relentless reporter — and new-media immediacy has made her a must-click look into the ragingly insecure id of Hollywood.

Among movie executives, the stories of Ms. Finke’s aggressiveness are legion, but they remain mostly unspoken because people fear being the target of one of her withering takedowns.

“I’d prefer not to ever deal with her,” said a senior communications executive at a studio who declined to be identified. Many others declined comment saying, variously, “she gave me a nervous breakdown,” “she terrifies me,” and “there’s no percentage in me saying anything to you about Nikki no matter what it is.”

But they all read her. In a town where people often secretly hope for the worst, Ms. Finke delivers wish fulfillment. During the recent merger of the William Morris and Endeavor agencies, she ridiculed William Morris executives to the point of distraction. She has published network schedules before many people at the network knew what was on them.

During the recent parting of the ways between Paramount Film Group and John Lesher, the former president, she ran a picture of Mr. Lesher with a big red X over his face and a series of breathless news breaks until he was out, finishing him off by quoting him (accurately) saying, “Nikki Finke knew about it before I did.”

In a telephone interview, Ms. Finke said the stories about her thuggish ways were just that. “I don’t bully and terrify people,” she said, cheerfully adding, “I’m not mean, I just write mean.”

She writes mean about business: celebrities don’t interest her. She isn’t always right and, as her critics have pointed out, she’s not above using the new-media prerogative of going into her archives and changing the bad call to a good one. But Ms. Finke’s relatively small audience includes most everyone who matters in Hollywood.

“I’d like to think that people read me because they find out things that are true and that they didn’t know,” she said. “If I was just some kind of car wreck who got all sorts of things wrong, I don’t think they’d be reading.”

Last month, it was announced (on Deadline Hollywood Daily, of course) that Mail.com Media, an Internet company controlled by Jay Penske, son of Roger Penske, the American automotive magnate, was buying her site. The site had been hosted by LA Weekly, but owned by Ms. Finke.

“She has one of the quickest minds I have ever seen and is one of the funniest people I have ever met,” Mr. Penske said.

Most people in Hollywood have never met Ms. Finke, 55. Like another influential blogger, Matt Drudge, she is private to the point of hermitic, spending most of her time in front of a computer at her home in Westwood. Her site has gone dark several times when she has worked herself to exhaustion.

In a place built on appearances, she is never seen at the right premiere, the right lunch spot, the right address. Her presence in Hollywood is spectral — she has a single photo taken in 2006 that runs everywhere.

“I just don’t go out to industry events, in part because it puts my sources in an awkward situation,” she said, adding that “the other thing about going out with these people is that when it comes time to cover something involving them, they say, ‘But, Nikki, we’re friends.’ I don’t want those kind of friends.”

A onetime debutante — an experience she wrote about for The New York Times in 2005 — Ms. Finke had a long career in journalism, including serving as a correspondent in Moscow for The Associated Press and covering Washington for Newsweek.

“I really don’t see covering Hollywood as all that different from covering the Kremlin or the federal government,” she said. “I’m always fascinated by closed societies that don’t want prying eyes.”

As a traditional print reporter, she had a problem with deadlines, trying the patience of many editors. She began her blog in March 2006 partly out of necessity: she had run low on money and options. A never-published book on talent agencies gave her sources, and LA Weekly gave her blog a home.

Her liabilities in the world of print — a penchant for innuendo and unnamed sources — became assets online. To admirers and detractors, she is the perfect expression of the Web’s original premise, which suggested that a lone obsessive could own the conversation, which she punctuates with the phrase TOLDJA in capital letters. “It is not a great mystery how all of this happened,” said Joe Donnelly, who edited her column at LA Weekly. “It happened because Nikki willed it to happen through a lot of hard work. She is not afraid of new technology and new ideas. She saw this coming.”

Her big close-up arrived during the writers’ strike in 2007 and 2008 when she provided up-to-the-minute, highly partisan coverage — in favor of the writers. Bill Condon, the director of “Dreamgirls,” said, “Yes, she was a partisan, but she is one of the few people to stand up for the writers, to stand up against powerful interests in a town that is full of them.”

She does have her favorites, however. Sony Pictures almost always seems to get a pass, Ron Meyer at Universal is frequently portrayed in heroic terms, and Brad Grey at Paramount manages to remain curiously above the fray. On the other side of the ledger, she has shown a serial contempt for NBC’s chief, Jeff Zucker, and has treated Ben Silverman, the co-chairman of NBC Entertainment, like a piñata. (“He’s the gift that keeps on giving,” Ms. Finke says of Mr. Silverman.)

“She,” said Patrick Goldstein, a longtime entertainment reporter at The Los Angeles Times, “has attacked me in a very personal way, but I give her total props because she has shown what works on the Web.”

He hastens to add that Ms. Finke has gone into her own archive to correct errors. Bill Wyman, who blogs at Hitsville.com, documented an instance in which she altered a previous post about a director getting a job, then took credit for a scoop when it turned out to be somebody else.

Ms. Finke said both men were wrong on the specifics and each had a personal vendetta against her, a frequent theme whenever criticism of her work came up. She does say that she considers Web articles to be living things, reflecting “the latest information I have received.”

Her aggression is not limited to journalism. Ms. Finke is a frequent and enthusiastic litigant. She sued The New York Post, the News Corporation and the Walt Disney Company for wrongful dismissal after she wrote an unflattering article about Disney. According to numerous media accounts, she received a settlement. Ms. Finke would say only, “The matter was resolved.”

In 2006, she filed a suit against E*Trade, accusing it of recording her phone calls while she was a client; a class action was certified, resulting in a proposed $7.5 million settlement that will get a final hearing this fall. (Ms. Finke would receive $40,000 as the class representative under the terms of the settlement. She refused to comment on the publicly filed case, saying it was private.) She has sued and settled with a company that worked on her house, saying it had caused her to injure her foot, and sued a car dealer, saying it had failed to live up to the terms of an extended warranty.

She acknowledges having had financial struggles, but those seem firmly in the past. With the sale to Mail.com, Ms. Finke stands to make more than $5 million in the next eight years, and her deal could go as high as $10 million, according to one of the people involved in the deal, who declined to be quoted, citing the private nature of the negotiations.

If the deal works out, Ms. Finke’s probing phone calls will continue to panic the suits in Hollywood for some time to come. Without saying who it was, she gave a recent example of someone who ended up as a pelt on her wall.

“I implored him to talk to me, and he did a little, but not enough,” she explained. “He should have protected himself.”

An erotic audio site is marketing itself to blind and visually-impaired people. But have disabled people been excluded from the world of "adult" entertainment?

Lud Romano - who runs an internet communications business - was on holiday in South Africa with his partner when they discovered erotic audiobooks on iTunes.

They found the idea of a single voice reading aloud to be a little "empty".

"If you're going to get an erotic charge from that, you have to do a lot of work yourself," he says.

He decided there and then to commission a series of short radio dramas which would be made available from a website.

The original target audience for Clickforeplay was sexually confident, upwardly-mobile young women - the sort of people who felt comfortable about buying erotic fiction from a High Street bookshop or browsing the more female-friendly "adult" shops.

After a failed attempt at the "soft porn" market - "people wanted it a lot harder than we could ever achieve in the audio domain" - he looked at what was available for blind and partially-sighted people.

Erotic audiobooks

He was surprised to discover how under-served the market was in terms of adult material.

This is not to suggest that there is nothing "out there" for people who do not have access to standard print or video.

For instance, the Royal National Institute for Blind People (RNIB) has erotic fiction in its audiobook library, as any mainstream library might have.

And general audiobook companies have sections dedicated to erotica which can run to 200 or so titles.

But one of the few dedicated erotic offerings for blind people that Mr Romano could find was a website containing an archive of audio recordings of American volunteers describing what they could see while watching hardcore pornography clips.

A brief listen to a couple of the audio files at the site is probably enough to convince most people to entertain themselves with something a little more improving. Deadpan, monotone descriptions of mainstream porn might even seem to the casual surfer like some sort of prank.

"It's just so bad, it's ridiculous," Mr Romano says.

His approach has been to get beyond what he describes as the "bored housewife meets young pool cleaner" type plot and to aim for something that will appeal to more sophisticated tastes.

He has a group of three writers who are simply told to "write naughty stories".

The plays are then recorded in a suite of rooms in north London "as live".

"It's not actors gathered around a microphone - they really act this, dynamically."

For those who worry about the exploitative nature of pornography, it might be reassuring to know that Mr Romano's actors do, of course, keep their clothes on.

Each drama has a setting that is "ripe for erotic development", according to Mr Romano.

One concerns the interaction between an artist, his female assistant and a nude female model.

Another is set in a laboratory in which two male scientists accidentally discover a powerful aphrodisiac which their female boss insists upon trying. Unfortunately, she uses all of it before they can analyse it and produce another batch.

Each drama costs around £2,500 to produce.

Mr Romano's firm has signed a deal with a company which gives text-to-speech output from webpages and magnifies the text as well.

Society's reluctance

And while some people may disapprove of the enterprise as just another example of the internet being used to disseminate sexual content, it will be welcomed by those disability rights activists who believe the exclusion of disabled people from the sexual arena mirrors their marginalisation in other areas of life.

Writer and performer Mat Fraser says that making adult material available to disabled people is an intrinsic part of inclusion.

"It is the erotic that helps us to feel alive, real, included, and disabled people have so much to offer the world of the erotic and the adult," he said.

Society's reluctance to accept disabled people's sexuality is perhaps based on a deep-rooted but unspoken belief that they should not reproduce.

This is a prejudice that is being challenged by activists, artists and writers, like Penny Pepper - a writer of erotic fiction that includes disabled characters.

"We are tired of being nannied and denied the rights to sexual expression that non-disabled people take for granted - so on that level, at least, we should fight for equal access to view and enjoy such material," she says.

Certainly, the RNIB makes sure that a wide range of tastes is catered for when choosing material for its Talking Book library.

The library's manager, Pat Beach, says the main problem is access to printed material per se - less than 5% of books published in the UK ever appear in large print, audio or Braille.

"We do not act as a censor - erotic fiction can be found on our shelves just as it is in a public library or a bookshop," he says.

Others believe that - because disabled people can experience difficulty in forming intimate relationships - accessing erotica and adult entertainment can provide an alternative outlet.

"As part of the wider campaign for barrier removal, it is really important also to remove barriers to erotica and sexual expression for disabled people," says disabled academic Tom Shakespeare.

In a stunningly misguided program implemented by the British government, all children's book authors who visit schools must register, for a fee, with a national database intended to protect children from pedophiles. Beginning October 12, 2009, the Vetting and Barring Scheme (VBS) will require all adults who work with children, including authors such as J.K. Rowling and Philip Pullman if they make special visits to schools, to register with the database for a fee of £64 ($105).

The Independent reports that as a result, several well-known authors will boycott schools in protest of the requirement. Philip Pullman, Anne Fine, Anthony Horowitz, Michael Morpurgo, and Quentin Blake have all publicly stated that they object to having their names listed in the database. Pullman, author of the popular fantasy trilogy His Dark Materials, called the policy "corrosive and poisonous to every kind of healthy social interaction." He eloquently adds, "This reinforces the culture of suspicion, fear and mistrust that underlies a great deal of present-day society. It teaches children that they should regard every adult as a potential murderer or rapist." Anne Fine, the former Children's Laureate for the U.K. and author of over 50 children's books, labelled the requirement "government idiocy." "When it [the VBS] becomes essential, I shall continue to work only in foreign schools, where sanity prevails," she said. "The whole idea of vetting an adult who visits many schools, but each only for a day, and then always in the presence of other adults, is deeply offensive. Our children will become further impoverished by this tiresome and ill-considered scheme, and yet another gulf will be created between young people and the rest of society."

The VBS was set up following the 2002 murders of Jessica Chapman and Holly Wells by the school caretaker Ian Huntley. A government spokesperson defended the new rigorous regulation, saying, "The new scheme means every individual working in a field that requires more than a tiny amount of contact with children and/or vulnerable adults will have to be vetted. If they are passed, they will be placed on a register that says they are allowed to work in a regulated field. If they are barred, they will go on a separate register and it will be a criminal offence for them to try and obtain work in a regulated field, carrying a penalty of up to five years in prison. It will also be illegal for anyone to employ them."

Indeed, while such reasoning seems to make sense, the ramifications are far from sensible and grossly unfair to children and adults alike. This policy borders on hysteria and panders to the public's basest fears by assuming the worst of everybody. While none of these authors wants to see any child harmed, they point to the damage such a policy does to society as a whole. In an editorial in the Independent, Anthony Horowitz, author of the Alex Rider and Power of Five book series, perhaps put it best: "This is a law made by people with a bleak and twisted view of society. And such people, quite simply, should not be making laws." http://carnalnation.com/content/1235...tial-pedophile

Harry Potter and the Vengeful Malware
Brad Stone

Some Internet bad guys are exploiting heightened interest in the new Harry Potter movie to further their devious plot to take over the Web.

Their target is people who can be enticed to illegally watch a copy of the new film on the Web. (So perhaps there is some kind of perverse vigilante justice at work here.)

PC Tools, a security company bought by Symantec last year, described the attack Thursday on their ThreatFire blog.

The company says the bad guys are populating sites like Digg.com and Blogspot and sending spam e-mail with enticements to “Watch ‘Harry Potter and the Half-Blood Prince’ online free” and links to a bogus film site. They are also flooding the comment sections of these sites with various Harry Potter-related keywords to try to trick search engines into displaying their site in search results.

People who fall for it and visit the bogus film site are then treated to a barely credible Harry Potter page on Google’s Blogspot service (although PC Tools says the location of the site changes) with images from the movie.

And finally, for the people who are both gullible and simply cannot wait to see the movie in the theater, clicking further on the page prompts a download of a file called “streamviewer,” and well, you can imagine what happens from there. PC Tools reports that the victim’s computer is visited by a rotating selection of choice malware, including the Koobface worm.

“This is headline malware,” said Mike Greene, vice president of product strategy at PC Tools. “When Michael Jackson passed away we saw a surge around that. Whenever you see a headline, you can be pretty confident you will find some hot malware.” http://bits.blogs.nytimes.com/2009/0...l-malware/?hpw

Emma Watson and Rupert Grint may have momentarily distracted the world by flashing their underwear and contracting swine flu (respectively), but at Thursday's New York premiere for Harry Potter and the Half-Blood Prince, Daniel Radcliffe was on an impressive charm offensive that made it clear which of the three is still going to be making bank when he's fifty. We watched as he moved down the carpet, joking about his height, laughing at the sexual tension and drug-use allusions in the film, and gamely imagining who would win a battle between wizards and vampires ("Wizards! Vampires have to get near a wizard to kill him and we’d be able to keep our distance"). And he certainly got on our good side by telling us he wants to be back on Broadway in two or three years and is even taking tap lessons to get ready ("So far it's just a bit of shuffling... If I do a musical I want to be proper! Tap dance is so cool!"). But it was his completely adorable interaction with a terrified 11-year-old reporter from Scholastic News that won us over for good. Imagine half a dozen pushy TV reporters shoving their mikes in Radcliffe's face as he zeroes in on a little girl holding her microphone with both hands, voice shaking so much she can hardly form sentences.

Radcliffe (to other reporters): "One moment, one moment. I will come back to you. [Locks eyes with Scholastic News girl] Hello!"

Girl: "Hey. I’m Danielle from Scholastic News."

Radcliffe: "Hello, Danielle! Pleasure to meet you."

Danielle: "This is an HONOR to interview you."

Radcliffe: "Oh, thank you. You’re very sweet. Thank you."

Danielle: "I’ve seen the first and second movies and read the first and second book and they are SO good. Especially the movies. I loved them, the movies!"

Radcliffe: "Thank you very much. You’re very, very kind. They get even better than that, though, so when you get time or when your parents think you’re old enough, you must watch the rest. They’re very cool."

Danielle: "Um, and I have two questions. How are you most like Harry?"

Radcliffe: "I think in the way that we value our friends. Friendship is very important in both of our lives. I think I have Harry’s natural curiosity as well. He’s interested in a lot of things in this world, as am I."

Other reporter: "Daniel, how…"

Radcliffe: "One moment, one moment."

Other reporter: "They told us to group together."

Radcliffe: "It will happen. One second, sorry." [re-focuses on Danielle]

Danielle: "How did Harry change from the first movie to the second? I mean, the sixth?"

Radcliffe: "To the sixth? Well, he grew marginally taller. The films have gotten a lot darker since that first film, so I think he has had to get a lot tougher since then. Thank you very much."

If ever there was a film release almost certain to turn a tidy profit, it would be any Harry Potter movie, and Warner Bros. executives can rest assured that Wednesday's debut of the franchise's sixth installment will pile the grosses high through Sunday. But to understand just how fervently studio insiders will be hoping for a muscular box-office bow by "Harry Potter and the Half-Blood Prince," consider that this time last year Warners launched a little film called "The Dark Knight" to rather good effect.

No pressure there.

"The reviews are great," Warners domestic distribution president Dan Fellman said with a what-me-worry nonchalance. "I think it's the best Harry Potter picture so far. Certainly, as the cast matures, they keep getting better."

"Potter" movies have carried PG or PG-13 ratings, with "Prince" toting the less-restrictive former designation. As the cast and their book-based characters age, Warners hopes to attract new, younger patrons while continuing to draw older fans of the series.

"Half-Blood Prince" is set for 4,275 U.S. and Canadian locations Wednesday and 50 more beginning Friday, and its screen count runs north of 8,000. A consensus estimate for its first five days in domestic release has it pulling in $140 million or more, with about $100 million of that sum likely to be rung up during the Friday-Sunday span.

Previous Potter pics have posted cumulative domestic grosses ranging upward from the $249.5 million fetched by 2004's "Harry Potter and the Prisoner of Azkaban," with 2001 franchise launcher "Harry Potter and the Sorcerer's Stone" enjoying the series' best domestic take to date: $317.6 million.

'Prince' Over 'Phoenix'

The most recent release, 2007's "Harry Potter and the Order of the Phoenix," registered $292 million domestically and another $646.2 million internationally. "Phoenix" fetched $44 million on its first day and $139.7 million during its first five days. There is broad consensus that "Prince" can best those numbers.

It already has surpassed the franchise-best tally of $12 million in midnight box office posted by "Phoenix." By late afternoon Tuesday, exhibition sources made it clear that advance sales of "Prince" tickets for 12:01 a.m. Wednesday performances were outpacing the witching-hour numbers for its immediate predecessor.

In a sign of just how hot ticket sales have been for "Prince," industryites are whispering that the "Potter" pic has an outside shot at besting the record $18 million midnight box office registered by "The Dark Knight" last July 18. All signs are certainly auspicious, with Fandango and MovieTickets reporting that thousands of performances have already sold out.

"I think they're beatable," Fellman said of the "Phoenix" grosses. "Ticket prices have gone up, and the last time we had the first 'Transformers' opening just five days before us."

That picture's sequel, "Transformers: Revenge of the Fallen," enters its fourth frame this weekend. No other film opens wide domestically this session, and the most prominent second-weekend holdover -- Universal's R-rated comedy "Bruno" -- couldn't have a more distinct target audience from that of "Prince."

Still, there will be no getting away from those batty comparisons: "Dark Knight" fetched $158 million during its first weekend and $533 million overall domestically. The chances of "Prince" matching that are slim to none.

"Prince" also debuts this week in 85 foreign territories, including Wednesday's openings in the U.K. and Japan, territories that are key in any Potter bow.

Domestically, "Prince" will play in just three Imax venues -- one each in New York, Los Angeles and Chicago -- until more of the specialty screens are freed up in two weeks. But 57 international Imax venues are set to open the picture this week.

The latest "Harry Potter" movie cast a $104 million spell over worldwide box offices during its first day in theaters, setting a new record for the boy wizard, distributor Warner Bros Pictures said on Thursday.

"Harry Potter and the Half-Blood Prince," the sixth in the film series based on the popular books by J.K. Rowling, grossed $58.18 million in North America and $45.85 million overseas on Wednesday, the Time Warner Inc-owned studio said.

The U.S.-Canadian tally, which includes a record $22.2 million from midnight showings, marks the second-biggest Wednesday opening domestically.

Only last month's "Transformers: Revenge of the Fallen" scored a bigger midweek first-day gross, with $62 million in domestic ticket sales on Wednesday June 24, according to Paul Dergarabedian, box office analyst for Hollywood.com.

"Quite simply, we owe this record-breaking opening to the remarkable fans who have stood by us and who stood in line to be among the first to see 'Harry Potter and the Half-Blood Prince,'" Warner Bros President and Chief Operating Officer Alan Horn said in a statement.

Said Dergarabedian: "This is a tremendous opening. It's in the box office stratosphere."

The previous "Harry Potter" movie, "Harry Potter and the Order of the Phoenix," also opened on a Wednesday, in July 2007, with first-day domestic receipts of $44.2 million. That film went on to gross $937 million worldwide.

The first five films in the franchise, one of the most lucrative in Hollywood history, have so far taken in about $4.5 billion collectively at the global box office.

And the Oscar for the best animated short film goes to ... an Internet community?

That teaser, posted last fall on Facebook by the upstart company Mass Animation, kicked off a project many people in Hollywood thought was laughable: making a five-minute animated film using the Wikipedia model, with animators from around the world contributing shots, and Facebook users voting on their favorites.

But it worked. The completed short, “Live Music,” has been deemed of high enough quality by Sony Pictures Entertainment to warrant a theatrical run. Sony will bring the tale of star-crossed love involving an electric guitar and a violin to the multiplex masses on Nov. 20 as an opener for its “Planet 51” animated feature. “Social networks can operate like automated talent scouts, helping the cream rise more quickly to the top, and that’s what happened with ‘Live Music,’ ” said Michael Lynton, chairman and chief executive of Sony’s entertainment division. “While creativity has been pretty evenly distributed in society, it hasn’t always been easy to tap into.”

Few actually expect “Live Music” to win an Academy Award or even be nominated for one. But even Pixar started somewhere, and Sony’s enthusiasm for the short underscores the potential power of social networking in creating high-quality content.

The marketplace — advertising, gaming and, of course, Hollywood — is hungry for content, animated in particular, that is done in a faster, cheaper way. “Live Music” was made for about $1 million and took about six months to complete. Intel, hoping to peddle its new Core i7 processor to animation geeks, was the principal backer. The finished film is made up of scenes submitted by 51 people, who received $500 per scene and a film credit for their efforts.

Yair Landau, the founder of Mass Animation (and a former digital and animation executive at Sony, where he oversaw production of the movie “Surf’s Up”), said he hoped “Live Music” was just the tip of this iceberg. His goal is to produce a feature-length film in the same manner, essentially pushing the heavy lifting off on a crowd.

“I certainly see this as a step in the democratization of creative storytelling in Hollywood,” Mr. Landau said. “But my aim was primarily to prove that you could bring a group of people together on the Internet and create good work.”

Mass Animation is just one of several entertainment companies working the “crowdsourcing” angle. Perhaps the largest is Aniboom, which was founded in 2006 by the Israeli media mogul Uri Shinar and bills itself as a Web-based animation studio. Aniboom has built a global team of nearly 8,000 animators who have uploaded more than 13,000 clips into Aniboom’s library.

Mr. Shinar plans to pitch the best submissions to television networks as potential series and movie studios as possible features. (The creators retain some ownership rights.) Aniboom is showing big potential, signing a flurry of deals in recent months. Aniboom and the Fox television network, for instance, recently announced a project similar to “Live Music” that involves animators competing to create a new holiday special.

“We’re proving in small steps that without any physical studio you can create professional content,” Mr. Shinar said. Aniboom ran the “Live Music” competition for Mass Animation and provided technology infrastructure.

Mr. Landau, a former investment banker whom Sony credits with securing the film rights to “Spider-Man,” has always been an early adopter. At Sony he was an advocate for multiplayer online games before they broke into the public consciousness, and developed a downloadable movie service before online piracy became entrenched.

He left Sony in spring 2008, partly because of a desire to move on after two decades at the studio but also because of an internal shakeup. To create the “Live Music” project, he first approached Matt Jacobson, a friend at Facebook who works on market development, who in turn introduced him to executives at Intel. (“We seized the opportunity to be involved with a project on the cutting edge of social media,” said John Cooney, Intel’s online programs manager.)

Using Facebook, Mass Animation invited animation enthusiasts — from total amateurs to professionals working in their spare time — to compete to create individual shots for the short. Mass Animation provided downloadable Maya software, the story and a soundtrack. The company also rendered the first scene to set the style and look. Mr. Landau then worked with the winners to smooth their creations into a cohesive whole.

In the end 57,000 people from 101 countries became “fans” of the Mass Animation page on Facebook and about 17,000 downloaded the software application, Mr. Landau said. The 51 winning animators hail from 17 countries, including Kazakhstan and Colombia. Eleven are women — the Hollywood animation mines are staffed almost entirely by men — and the group ranges in age from 14 to 48.

Mr. Jacobson called aspects of the project “fiendishly complicated” but said Facebook now wanted to work with Mass Animation on more ambitious ideas, perhaps even the dream of a full-blown feature film. “We didn’t know if we could do it, but we decided to take the chance and couldn’t be more pleased with how it turned out,” he said. http://www.nytimes.com/2009/07/16/movies/16mass.html

A Fun-Loving Sponge Who Keeps Things Clean
Alessandra Stanley

It was ridiculous when some conservative religious leaders complained of a hidden homosexual agenda lurking behind the jellyfish and floating plankton of “SpongeBob SquarePants.”

Ridiculous, but not totally absurd. Adults have been trying to detect some sort of subtext to that cheerful, cheeky and almost inexplicably popular Nickelodeon cartoon series since it first bubbled to the surface a decade ago.

There have been books, dissertations and seminars dedicated to the study of the fun-loving yellow kitchen sponge who lives in a pineapple under the sea. There was a theatrical-release movie version. President Obama said during the campaign that SpongeBob was his favorite television character, and that he rarely misses the show because he can’t; it is always on in the Obama household. David Bowie and Johnny Depp are among the many stars who boast or blog about having been guest stars.

To fete the show’s 10th anniversary, Nickelodeon plans to wring “SpongeBob” of every drop with a 50-episode weekend marathon on Friday that will include 10 new episodes, while its sister network, VH1, plans on Tuesday to show a documentary, “Square Roots: The Story of ‘SpongeBob SquarePants,’ ” that interviews its creator, Stephen Hillenburg, an illustrator and marine biologist; many of its writers and animators; media scholars; and celebrities like the comedian Ricky Gervais, who explains with a smirk, “I like the fact that he’s yellow, I like the fact that he’s porous, and I like the fact that he wears pants.”

And while some of the I’m-so-cool-I-watch-“SpongeBob” cult status has worn thin of late, the series celebrates its first decade as popular as ever and without having disclosed any higher meaning to Bikini Bottom. The mystery lives on.

SpongeBob’s zany charm is obvious and infectious, but his lasting popularity with children and grown-ups of all kinds — demonstrated by the ratings and even the global sales of SpongeBob T-shirts, video games and bed sheets that rival the earnings of Bart Simpson and even Mickey Mouse — is daunting. SpongeBob merchandising has become something of an industry joke, but there really is a Kraft macaroni & cheese named after SpongeBob, as well as a digital camera and even an amusement park roller coaster.

Part of the show’s mystique is precisely that it has so little edge or subversive double-entendres. The writers send up all sorts of American quirks and conventions while placing them underwater, but gently and benignly. The tone of “SpongeBob” is so boldly silly at times that it is tempting to put a metaphoric spin on it or even a psychedelic one — all those ocean floor antics hark back to the Beatles in their “Yellow Submarine” period — especially since so many college kids claim to get high and watch “SpongeBob.”

The show is a fun, pleasantly mindless viewing experience, but it’s not just a stoner’s Rosetta Stone.

It’s a cartoon. And an old-fashioned one at that. “SpongeBob” remains distinctive, if only for its retro look: Mr. Hillenburg and his colleagues say they were inspired by Bugs Bunny and other old-school cartoons, and their animation is hand-drawn in the same way as a Bugs Bunny or Road Runner cartoon, with each episode requiring more than 20,000 drawings.

Mostly it’s the sensibility that is a throwback to a loonier Looney Tunes era. “SpongeBob” became a huge hit in the early ’00s when some of the most popular — and talked about — cartoons, like “Beavis and Butt-head,” “Ren and Stimpy” and “South Park” had a cynical, perverse edge that appealed to both teenagers and adults.

Even “The Simpsons,” which is now in its 20th season, is an adults-mostly series, wickedly funny and filled with fast-moving, tongue-in-cheek cultural references that fly high above even older heads. So many cartoons these days put adult words and neuroses into the mouth of children, from the toddlers of “Rugrats” to Stewie, the catty and supercilious infant son on “Family Guy.”

SpongeBob is an optimist, a naïf and a child, and the unifying joke is that he is impervious to danger or dislike — as were Bugs Bunny, Road Runner, Rocky and Bullwinkle and even Charlie Chaplin. Mostly he is happy, though when he is upset, tears gush out of his eyes like an open hydrant; in one episode SpongeBob cries so hard at having to leave his best friend, Patrick Star, to go to summer camp that he misses the “Sun & Fun” boat, and boards a convict ship bound for “Inferno Island” instead. He thinks the prison is a really enjoyable summer camp, and not even solitary confinement, breaking up rocks or prison slop can dissuade him.

SpongeBob loves his friends and doesn’t realize that some, notably his neighbor Squidward and even Mr. Krabs, his miserly boss at the Krusty Krab food shack, do not exactly reciprocate. Mostly SpongeBob lives in his own watery universe, helping his friends and pursuing his interests, which include the lifestyles of Vikings or the research projects of his friend Sandy, an air-breathing squirrel scientist who lives under a bubble in Bikini Bottom to explore the ocean floor.

At times, however, the writers seem to poke fun at some of the sick humor so prevalent on “South Park” and other more sophisticated animated series.

In one episode SpongeBob inadvertently drives a school-crossing guard to abandon her post and flee; a line of tiny schoolchildren crosses the street alone and right into oncoming traffic. They aren’t crushed and smeared across the sidewalk, as the setup suggests and as some “South Park” viewers have come to expect. Instead the approaching vehicles turn out to be a slow-moving, colorful parade, to the delight of SpongeBob and the children.

So far 2009 is the first year in the last seven in which “SpongeBob” has not been Nickelodeon’s top-rated series. At the moment “iCarly,” a tween-aimed live-action series, is the channel’s No. 1 show, followed by “Penguins of Madagascar,” which is made with computer-generated animation and based on a hit DreamWorks movie, and has that tell-tale 3-D sheen. “SpongeBob” is third, but neither “iCarly” nor “Penguins” has the same sweetness or inspired asides of animated lunacy, like a squid whose long, drooping arms are milked like a cow’s udder.

The penguins have deep voices and names like Kowalski and Rico and fancy themselves an elite strike force based in the Central Park Zoo but reaching into the city and even the world beyond. They have silly misadventures, usually at the hands of their larcenous, parasitic neighbor, Julien, King of the Lemurs; but the heroes are adults in penguin form.

Sometimes Carly, the heroine of “iCarly,” played by Miranda Cosgrove, who is 16 and looks 26, also seems a lot older than her years, possibly because she is the star of her own hit Web show, which she produces with her two best friends. There are rivals but no parental supervision: Carly, who lives with a guardian, her irresponsible 20-something brother, is a Lizzie McGuire refigured for a new generation obsessed with celebrity and Web networking.

It’s been 10 years now, and “SpongeBob” still seems refreshing and innocent compared with so much other precocious children’s programming. Edward Gorey, the master illustrator of the macabre, once said that there is no such thing as “happy nonsense.” “SpongeBob” could be the exception. http://www.nytimes.com/2009/07/12/ar...on/12stan.html

The $99 iPhone as an Inexpensive Tracking Device
David Molnar

I recently helped my girlfriend move her stuff from Chicago to Oakland. The movers were scheduled to arrive at 8AM on the 5th of July, and we were stressing the day before about all the things that could go wrong with a move. We realized that if we knew where her stuff was, it'd make us feel better. This is a post about using the $99 iPhone to help us out...and about a somewhat surprising potential use of Find My iPhone.

We started by looking at a couple of dedicated GPS tracking devices, with the thought that we could put one in a box and check its location remotely. Unfortunately, none of these were easily available on the day after the 4th of July. Most of the ones we saw that could be checked remotely cost in the $250-$400 range, too, plus monthly recurring charges in the $45-$50 range. Best Buy, on the other hand, was open at 10am on the 5th. Best Buy sells iPhones.

"Why not use Find My iPhone to see where our stuff is?"

That is, we could buy an iPhone, pack the iPhone with the boxes, and then use Find My iPhone to see where the boxes went. I've held out against the iPhone craze until now, so that meant I had no AT&T contract. I was therefore eligible for the $99 iPhone. What's more, AT&T has a clause in their contract where you can opt out within 30 days without paying the early termination fee. So, an iPhone would be cheaper than a dedicated GPS device (most of which don't play music), and I could return it when we were done.

While the movers packed and loaded boxes the next morning, I went to the nearest Best Buy. In about an hour, I had a $99 iPhone 3G, an extra battery pack ($79), and a year's subscription to MobileMe ($99). Another hour or so, and I'd updated the iPhone to the 3.0 firmware, charged the extra battery, and checked that Find My iPhone successfully located the iPhone. I dropped the phone and extra battery into one of the boxes.

Finally, the last piece of furniture made its way onto the truck and the movers motored off. We waved goodbye and raced off to the airport to catch a flight to see her family.

Find My Moving iPhone

Right before heading to our gate, I opened up MobileMe, logged in, and clicked on "Find My iPhone." It worked! We saw the Google Maps view come up, with a red circle on the freeway leading south out of Chicago. The movers were on their way to Oakland! The next day, my girlfriend pointed out that I could take screen shots of the Find My iPhone display. Here are a few from the next day, showing the boxes making progress through New Mexico:

Unfortunately, the next morning we saw an error screen when trying to find the iPhone:

A check of the MobileMe status page revealed that Find My iPhone was down for some users. That would include me, unfortunately.

MobileMe got worse before it got better.

By afternoon, however, MobileMe and Find My iPhone had recovered. We could see that the boxes had crossed into California.

As you can see from the screen shots, the movers were making quick time. In fact, it was too quick: we'd originally planned on having the boxes arrive in Oakland on the 14th of July. Despite the fact that the moving broker had said this would be "OK," the movers themselves had other ideas. By the morning of 8 July they were already near Fresno and wanted to deliver the boxes to us in Oakland as soon as possible that day!

After a lot of negotiation, we agreed to take delivery of the boxes at 11pm on Wednesday 8 July. We flew back to Oakland, arrived at 10PM, and then raced to my girlfriend's place. Naturally, Murphy's Law kicked in and Find My iPhone chose this time to go down:

We called the movers, confirmed they were close, and set about making things ready for their arrival. To cut a long story short, the movers arrived around 11:30PM and unpacked without incident. I opened the box with my new iPhone and found it working fine, just like new. It'd gone through the extra battery, but a quarter of the internal battery was left. Probably could have gone another day or so before running out of juice.

Tracking Your Friends' iPhones

I noticed something else while doing all this. When Apple first announced the Find My iPhone feature, my girlfriend and half the people I follow on Twitter asked "does this mean if I know your MobileMe password, I can track your iPhone?" Yes, you can do that if you know someone's password and if that person has enabled "Find my iPhone" on their phone.

What's more, you can do something that I haven't seen discussed: if you have a friend's un-screen-locked iPhone for a few minutes, you can set up that iPhone to report its location to you. It's simple: go to their Settings app, then add your MobileMe account to their list of accounts, set their data update to "Push," and finally enable Find My iPhone. Now return the iPhone to your friend. When you next go to Find My iPhone in MobileMe, you will see the location of their iPhone.

The key here is that when you click "Update Location" for an iPhone, the phone itself shows no indication that it is reporting its location to MobileMe. No sound plays, no message pops up, nothing like that. At least, nothing happens on my iPhone 3G with 3.0. This means your friend won't realize the iPhone is reporting its location to you. Now, of course, if you use MobileMe's "pop up a message" feature, or heaven forbid, Remote Wipe, your friend will notice! The basic location update, however, appears to be silent.

This means your friend won't realize anything has happened unless he or she checks the list of accounts on the phone, sees the MobileMe account and then notes that the account has "Find My iPhone" turned on. The way the iPhone Mail client is set up, I don't see this as all that likely for someone who has a single e-mail account, since I'd expect that person to rarely or never check the list of accounts.

Once your friend does notice, though, he or she will have your MobileMe account name. Plus the ability to check your me.com e-mail. Suggest only doing this with really, really good friends whom you are confident won't come after you when they figure it out! (Edit: arvindn points out that you can use a throwaway MobileMe account for this. I think you might be able to get some mileage out of using something like "firstname.lastname@me.com", so even if someone notices, he or she might think it's a free preinstalled MobileMe account. That's speculation, though. I haven't done that and don't plan to do it.)

Conclusion

Find My iPhone turns the $99 iPhone into an inexpensive GPS tracking device. I used it to successfully track my girlfriend's boxes across the country from Chicago to Oakland. On the upside, despite downtimes, the service worked well at showing us exactly where her stuff was. This was a huge relief. In addition, you can program a MobileMe account into multiple iPhones and track them all from the same page...even iPhones that aren't necessarily yours.

On the downside, while the phone is $99, the extra battery, line activation, and MobileMe subscription eat into the cost advantage over a dedicated GPS device. Overall, the cost wasn't much cheaper than the low end of a dedicated device, and both have monthly fees. In retrospect, I could have used the 60-day free MobileMe trial and saved $99. The MobileMe downtimes (twice in two days?) were also not impressive.

The Find My iPhone service is also quite limited, in that it only lets me see an instantaneous location, and it actually does only cell tower localization instead of real GPS. If I had more time (and I weren't planning to return the device), I might explore Cylay, which works with jailbroken iPhones and offers a wider range of functionality for $19.99/year. The xGPS application also claims to offer GPS location logging, but I don't know if it works with the 3.0 software.

Climbing into his Volvo, outfitted with a Matrics antenna and a Motorola reader he'd bought on eBay for $190, Chris Paget cruised the streets of San Francisco with this objective: To read the identity cards of strangers, wirelessly, without ever leaving his car.

It took him 20 minutes to strike hacker's gold.

Zipping past Fisherman's Wharf, his scanner detected, then downloaded to his laptop, the unique serial numbers of two pedestrians' electronic U.S. passport cards embedded with radio frequency identification, or RFID, tags. Within an hour, he'd "skimmed" the identifiers of four more of the new, microchipped PASS cards from a distance of 20 feet.

Embedding identity documents — passports, driver's licenses, and the like — with RFID chips is a no-brainer to government officials. Increasingly, they are promoting it as a 21st century application of technology that will help speed border crossings, safeguard credentials against counterfeiters, and keep terrorists from sneaking into the country.

But Paget's February experiment demonstrated something privacy advocates had feared for years: That RFID, coupled with other technologies, could make people trackable without their knowledge or consent.

He filmed his drive-by heist, and soon his video went viral on the Web, intensifying a debate over a push by government, federal and state, to put tracking technologies in identity documents and over their potential to erode privacy.

Putting a traceable RFID in every pocket has the potential to make everybody a blip on someone's radar screen, critics say, and to redefine Orwellian government snooping for the digital age.

"Little Brother," some are already calling it — even though elements of the global surveillance web they warn against exist only on drawing boards, neither available nor approved for use.

But with advances in tracking technologies coming at an ever-faster rate, critics say, it won't be long before governments could be able to identify and track anyone in real time, 24-7, from a cafe in Paris to the shores of California.

The key to getting such a system to work, these opponents say, is making sure everyone carries an RFID tag linked to a biometric data file.

On June 1, it became mandatory for Americans entering the United States by land or sea from Canada, Mexico, Bermuda and the Caribbean to present identity documents embedded with RFID tags, though conventional passports remain valid until they expire.

Among new options are the chipped "e-passport," and the new, electronic PASS card — credit-card sized, with the bearer's digital photograph and a chip that can be scanned through a pocket, backpack or purse from 30 feet.

Alternatively, travelers can use "enhanced" driver's licenses embedded with RFID tags now being issued in some border states: Washington, Vermont, Michigan and New York. Texas and Arizona have entered into agreements with the federal government to offer chipped licenses, and the U.S. Department of Homeland Security has recommended expansion to non-border states. Kansas and Florida officials have received DHS briefings on the licenses, agency records show.

The purpose of using RFID is not to identify people, says Mary Ellen Callahan, the chief privacy officer at Homeland Security, but rather "to verify that the identification document holds valid information about you."

Likewise, U.S. border agents are "pinging" databases only to confirm that licenses aren't counterfeited. "They're not pulling up your speeding tickets," she says, or looking at personal information beyond what is on a passport.

The change is largely about speed and convenience, she says. An RFID document that doubles as a U.S. travel credential "only makes it easier to pull the right record fast enough, to make sure that the border flows, and is operational" — even though a 2005 Government Accountability Office report found that government RFID readers often failed to detect travelers' tags.

Such assurances don't persuade those who liken RFID-embedded documents to barcodes with antennas and contend they create risks to privacy that far outweigh the technology's heralded benefits. They warn it will actually enable identity thieves, stalkers and other criminals to commit "contactless" crimes against victims who won't immediately know they've been violated.

Neville Pattinson, vice president for government affairs at Gemalto, Inc., a major supplier of microchipped cards, is no RFID basher. He's a board member of the Smart Card Alliance, an RFID industry group, and is serving on the Department of Homeland Security's Data Privacy and Integrity Advisory Committee.

Still, Pattinson has sharply criticized the RFIDs in U.S. driver's licenses and passport cards. In a 2007 article for the Privacy Advisor, a newsletter for privacy professionals, he called them vulnerable "to attacks from hackers, identity thieves and possibly even terrorists."

RFID, he wrote, has a fundamental flaw: Each chip is built to faithfully transmit its unique identifier "in the clear, exposing the tag number to interception during the wireless communication."

Once a tag number is intercepted, "it is relatively easy to directly associate it with an individual," he says. "If this is done, then it is possible to make an entire set of movements posing as somebody else without that person's knowledge."
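Pattinson's worry — that a cleartext tag number, once tied to a person, yields a record of that person's movements — is easy to illustrate. The sketch below is hypothetical: the tag IDs, times, and locations are invented, and it only shows the bookkeeping a roadside reader operator would need, which is nothing more than grouping sightings by tag number.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical log a roadside RFID reader might build: (tag_id, timestamp, location).
sightings = [
    ("TAG-4411", datetime(2009, 2, 2, 9, 15), "Fisherman's Wharf"),
    ("TAG-9031", datetime(2009, 2, 2, 9, 40), "Embarcadero"),
    ("TAG-4411", datetime(2009, 2, 2, 12, 5), "Union Square"),
    ("TAG-4411", datetime(2009, 2, 2, 18, 30), "Mission District"),
]

def movements_by_tag(log):
    """Group sightings by tag number, in time order: a movement profile per tag."""
    profiles = defaultdict(list)
    for tag, when, where in sorted(log, key=lambda s: s[1]):
        profiles[tag].append((when, where))
    return dict(profiles)

profiles = movements_by_tag(sightings)
# One intercepted identifier now maps to an ordered trail of locations.
print(profiles["TAG-4411"])
```

If the tag number is ever linked to a name — from a single credit-card purchase near a reader, say — the whole trail is retroactively linked too.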

Echoing these concerns were the AeA — the lobbying association for technology firms — the Smart Card Alliance, the Institute of Electrical and Electronics Engineers, the Business Travel Coalition, and the Association of Corporate Travel Executives.

Meanwhile, Homeland Security has been promoting broad use of RFID even though its own advisory committee on data integrity and privacy warned that radio-tagged IDs have the potential to allow "widespread surveillance of individuals" without their knowledge or consent.

In its 2006 draft report, the committee concluded that RFID "increases risks to personal privacy and security, with no commensurate benefit for performance or national security," and recommended that "RFID be disfavored for identifying and tracking human beings."

For now, chipped PASS cards and enhanced driver's licenses are optional and not yet widely deployed in the United States. To date, roughly 192,000 EDLs have been issued in Washington, Vermont, Michigan and New York.

But as more Americans carry them "you can bet that long-range tracking of people on a large scale will rise exponentially," says Paget, a self-described "ethical hacker" who works as an Internet security consultant.

Could RFID numbers eventually become de facto identifiers of Americans, like the Social Security number?

Such a day is not far off, warns Katherine Albrecht, a privacy advocate and co-author of "Spychips," a book that is sharply critical of the use of RFID in consumer items and official ID documents.

"There's a reason you don't wear your Social Security number across your T-shirt," Albrecht says, "and beaming out your new, national RFID number in a 30-foot radius would be far worse."

There are no federal laws against the surreptitious skimming of Americans' RFID numbers, so it won't be long before people seek to profit from this, says Bruce Schneier, an author and chief security officer at BT, the British telecommunications operator.

Data brokers that compile computer dossiers on millions of individuals from public records, credit applications and other sources "will certainly maintain databases of RFID numbers and associated people," he says. "They'd do a disservice to their stockholders if they didn't."

But Gigi Zenk, a spokeswoman for the Washington state Department of Licensing, says Americans "aren't that concerned about the RFID, particularly in this day and age when there are a lot of other ways to access personal information on people."

Tracking an individual is much easier through a cell phone, or a satellite tag embedded in a car, she says. "An RFID that contains no private information, just a randomly assigned number, is probably one of the least things to be concerned about, frankly."

Mark Roberti, editor of RFID Journal, an industry newsletter, recently acknowledged that as the use of RFID in official documents grows, the potential for abuse increases.

“A government could do this, for instance, to track opponents,” he wrote in an opinion piece discussing Paget's cloning experiment. “To date, this type of abuse has not occurred, but it could if governments fail to take privacy issues seriously.” http://www.foxnews.com/story/0,2933,531720,00.html

If attackers intent on data theft can tap into an electrical socket near a computer or if they can draw a bead on the machine with a laser, they can steal whatever is being typed into it.

How to execute these attacks will be demonstrated at the Black Hat USA 2009 security conference in Las Vegas later this month by Andrea Barisani and Daniele Bianco, a pair of researchers for network security consultancy Inverse Path.

“The only thing you need for successful attacks are either the electrical grid or a distant line of sight, no expensive piece of equipment is required,” Barisani and Bianco say in a paper describing the hacks.

The equipment to carry out the power-line attack could cost as little as $500, and the laser attack gear costs about $100 if the attacker already owns a laptop with a sound card, says Barisani. Carrying out the attacks took about a week, he says.

“We think it is important to raise the awareness about these unconventional attacks and we hope to see more work on this topic in the future,” Barisani and Bianco say in their paper. Others with more time and money could doubtless create better spying tools using the same concepts, they say.

In the power-line exploit, the attacker grabs the keyboard signals that are generated by hitting keys. Because the data wire within the keyboard cable is unshielded, the signals leak into the ground wire in the cable, and from there into the ground wire of the electrical system feeding the computer. Bit streams generated by the keyboards that indicate what keys have been struck create voltage fluctuations in the grounds, they say.

Attackers extend the ground of a nearby power socket and attach to it two probes separated by a resistor. The voltage difference and the fluctuations in that difference – the keyboard signals – are captured from both ends of the resistor and converted to letters.

To pull the signal out of the ground noise, a reference ground is needed, they say. “A ‘reference’ ground is any piece of metal with a direct physical connection to the Earth; a sink or toilet pipe is perfect for this purpose (albeit not very classy) and easily reachable (especially if you are performing the attack from [a] hotel room),” they say in their paper.

Since keyboard and mouse signals are in the 1 to 20 kHz range, a filter can isolate that range for listening, they say.
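The filtering step the researchers describe can be sketched with a standard second-order band-pass filter. Everything below is an illustrative assumption, not the researchers' actual tooling: the sample rate, center frequency, and Q are made up, and a real attack would sweep the 1–20 kHz band rather than pick one tone.

```python
import math

def bandpass_biquad(samples, fs, f0, q=1.0):
    """Second-order (RBJ cookbook) band-pass filter centered on f0 Hz."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha
    # 0 dB peak-gain band-pass coefficients, normalised by a0.
    b = [alpha / a0, 0.0, -alpha / a0]
    a = [1.0, -2 * math.cos(w0) / a0, (1 - alpha) / a0]
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x1, x2, y1, y2 = x, x1, y, y1
        out.append(y)
    return out

# Illustrative numbers: 96 kHz sampling, a 10 kHz in-band tone vs. 50 Hz mains hum.
fs = 96_000
tone = [math.sin(2 * math.pi * 10_000 * n / fs) for n in range(9600)]
hum = [math.sin(2 * math.pi * 50 * n / fs) for n in range(9600)]

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

# Skip the first half of each output so the filter's transient has settled:
# the in-band tone passes almost unchanged; the mains hum is strongly attenuated.
kept = rms(bandpass_biquad(tone, fs, 10_000)[4800:])
rejected = rms(bandpass_biquad(hum, fs, 10_000)[4800:])
```

The same idea, with a narrower Q, is what lets an attacker separate one keyboard's slightly different signaling frequency from others sharing the ground, as described below.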

Variations in individual keyboards and mice result in each keyboard signaling in a slightly different frequency range. With careful filtering, that makes it possible to zero in on a particular keyboard in an environment where many keyboards are in use, the researchers say.

The attack proved successful when tapping electric sockets located up to 15 meters from where the target computer was plugged in, the researchers say.

This method would not work if the computer were unplugged from the wall, such as a laptop running on its battery. The second attack can prove effective in this case, Bianco’s and Barisani’s paper says.

Attackers point a cheap laser, slightly better than what is used in laser pointers, at a shiny part of a laptop or even an object on the table with the laptop. A receiver is aligned to capture the reflected light beam and the modulations that are caused by the vibrations resulting from striking the keys.

This modulation is converted to an electrical signal that is fed into a computer soundcard. “The vibration patterns received by the device clearly show the separate keystrokes,” the researchers’ paper says. Each key has a unique vibration pattern that distinguishes it from the rest. The spacebar creates a significantly different set of vibrations, so the breaks between words are readily apparent.

Analyzing the sequences of individual keys that are struck and the spacing between words, the attacker can figure out what message has been typed. Knowing what language is being typed is a big help, they say.
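The final recovery step — using spacebar-detected word boundaries and language knowledge to narrow the guesses — can be sketched in a few lines. This is a toy illustration with an invented mini-wordlist, not the researchers' code: it only shows how much the observed word lengths alone constrain the candidate words.

```python
# A toy stand-in for a real language dictionary.
WORDLIST = ["the", "cat", "dog", "password", "is", "on", "secret", "a", "mat"]

def candidates_for_lengths(lengths, wordlist=WORDLIST):
    """For each observed word length, list the dictionary words that fit."""
    by_len = {}
    for w in wordlist:
        by_len.setdefault(len(w), []).append(w)
    return [by_len.get(n, []) for n in lengths]

# Suppose the vibration trace shows three words of length 3, 8 and 2:
out = candidates_for_lengths([3, 8, 2])
print(out)
# Each slot is now a short candidate list; per-key vibration signatures
# (as the paper describes) would then pick the one word in each slot.
```

With a real dictionary the lists are longer, but combined with per-key signatures and simple language statistics the search space collapses quickly.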

Laptop lids, especially shiny logos and areas close to the hinges, provide the most easily read vibrations.

Anyone worried about this type of attack can make sure there is no line of sight to the laptop, move position frequently while typing, and pollute the signal by striking random keys and later deleting them with the backspace key.

While they admit their hacking tools are rudimentary, they believe they could be improved upon with a little time, effort and backing.

“If our small research was able to accomplish acceptable results in a brief development time (approximately a week of work) and with cheap hardware,” they say, “consider what a dedicated team or government agency can accomplish with more expensive equipment and effort.” http://www.networkworld.com/news/200...ata-theft.html

You might think your password protects the confidential information stored on Web sites. But as Twitter executives discovered, that is a dangerous assumption.

The Web was abuzz Wednesday after it was revealed that a hacker had exposed corporate information about Twitter after breaking into an employee’s e-mail account. The breach raised red flags for individuals as well as businesses about the passwords used to secure information they store on the Web.

On Web sites containing personal information like e-mail, financial data or documents, there is usually just a user name and password for protection. More individuals are storing information on Web servers, where it is accessible from any online computer through services offered by Google, Amazon, Microsoft, social networks like Facebook or back-up services like Mozy.

But password-protected sites are growing more vulnerable because to keep up with the growing number of passwords, people use the same simple ones on numerous sites across the Web. In a study last year, Sophos, a security firm, found that 40 percent of Internet users use the same password for every Web site they access.

The attack on Twitter highlights the problem. For its internal documents, the company uses the business version of Google Apps, a service that Google offers to individuals free. Google Apps provides e-mail, word processing, spreadsheets and calendars over the Web.

The content is stored on Google’s servers, which can save time and money and enable employees to work together on documents at the same time. But it also means that the security is only as good as the password. A hacker who breaks into one person’s account can access information shared by friends, family members or colleagues, which is what happened at Twitter.

The Twitter breach occurred about a month ago, Twitter said. A hacker calling himself Hacker Croll broke into an administrative employee’s e-mail account and gained access to the employee’s Google Apps account, where Twitter shares spreadsheets and documents with business ideas and financial details, said Biz Stone, a Twitter co-founder.

The hacker then sent documents about company plans and finances, confidential contracts, and job applicants to two tech news blogs, TechCrunch, in Silicon Valley, and Korben, in France. There was also personal information about Twitter employees including credit card numbers.

The hacker also broke into the e-mail account of the wife of Evan Williams, Twitter’s chief executive, and from there accessed several of Mr. Williams’ personal Internet accounts, including those at Amazon and PayPal, Mr. Stone said.

TechCrunch revealed documents showing that Twitter, a private company that so far has no revenue, projected that it will reach a billion users and $1.54 billion in revenue by 2013. Michael Arrington, TechCrunch’s founder, said in an interview that the hacker had also sent him detailed strategy documents about potential business models, the competitive threat from Facebook and when the company might be acquired.

Some analysts say the breach highlights how dangerous it can be for people and companies to store confidential documents on Web servers, or “in the cloud.”

But Mr. Stone said that the attack “isn’t about any flaw in Web apps,” but rather about a bigger issue that affects individuals and businesses alike. “It speaks to the importance of following good personal security guidelines such as choosing strong passwords,” he said.

Instead of circumventing security measures, it appears that the Twitter hacker managed to correctly answer the personal questions that Gmail asks of users to reset the password.

“A lot of the Twitter users are pretty much living their lives in public,” said Chris King, director of product marketing at Palo Alto Networks, which creates firewalls. “If you broadcast all your details about what your dog’s name is and what your hometown is, it’s not that hard to figure out a password.”

Security experts advise people to use unique, complex passwords for each Web service they use and include a mix of numbers and letters. Free password management programs like KeePass and 1Password can help people juggle passwords for numerous sites.
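The experts' advice — a unique, high-entropy password per site — is exactly what a cryptographically secure random generator gives you. A minimal sketch using Python's standard `secrets` module (the character set and length here are illustrative choices, not a standard):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def make_password(length=16):
    """Generate a high-entropy password using a CSPRNG, not random.random()."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# A distinct password per site, as recommended above -- then let a
# password manager such as KeePass or 1Password do the remembering.
for site in ("mail", "bank", "social"):
    print(site, make_password())
```

At 16 characters over a 70-symbol alphabet this is roughly 98 bits of entropy, far beyond what the reused dictionary passwords in the Sophos study provide.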

Andrew Storms, director of security operations for nCircle, a network security company, suggested choosing false answers to the security questions like “What was your first phone number?” or making up obscure questions instead of using the default questions that sites provide. (Of course, that presents a new problem of remembering the false information.)

For businesses, Google allows company administrators to set up rules for password strength and add additional authentication tools like unique codes.

With all the chatter about the current security issues surrounding Twitter, its workforce and the cloud-based Google apps they use, a new security issue has popped up that makes it trivially easy for anyone to access the Twitter servers directly. The problem? The password to the servers was, literally, “password.”

Twitter co-founder Biz Stone, responding to our email, said “this bug allowed access to the search product interface only. No personally identifiable user information is accessible on that site.” Although no user accounts were compromised or accessible, the vulnerability speaks to a greater culture of lax security at the startup, and may be indicative of how earlier breaches possibly occurred.

With that in mind, we have some friendly advice for Twitter. For instance, it would be wise for Twitter insiders not to use the password “password” for the back ends of its systems, or one of its co-founders’ names (Jack) as a username.

Why do we think this advice could prove helpful? Well, without this type of precaution, before you know it malicious hackers, or just plain mean people who have it in for you, could do some serious damage or embarrass you in front of all your friends and followers by invading your personal digital territory.

Again, for the record, this has absolutely nothing to do with the other security breach we’re publishing ongoing reports about and which Twitter has already publicly responded to. We notified Twitter about this breach as well, and waited until they took action to close it off before posting.

Mozilla was investigating the bug when the attack code went public
Gregg Keizer

Mozilla Corp. yesterday confirmed the first security vulnerability in Firefox 3.5 and said that the bug could be used to hijack a machine running the company's newest browser.

A noted Firefox contributor called the situation "self-inflicted" and said it was likely that the hacker who posted public exploit code Monday became aware of the flaw by rooting through Bugzilla, Mozilla's bug- and change-tracking database.

The vulnerability is in the TraceMonkey JavaScript engine that debuted with Firefox 3.5, said Mozilla. "[It] can be exploited by an attacker who tricks a victim into viewing a malicious Web page containing the exploit code," the company's security blog reported Tuesday.

Secunia, a Danish security company, rated the bug "highly critical," the second-highest threat ranking in its five-step system, and added that the vulnerability is in TraceMonkey's processing of JavaScript code handling "font" HTML tags.

Older versions of Firefox, including Firefox 3.0, are not vulnerable, according to a message posted by Asa Dotzler, Mozilla's director of community development, in a comment to the company's blog.

"Mozilla developers are working on a fix for this issue, and a Firefox security update will be sent out as soon as the fix is completed and tested," said that same blog.

In lieu of a patch, users can protect themselves by disabling the "just-in-time" component of the TraceMonkey engine. To do that, users should enter "about:config" in Firefox's address bar, type "jit" in the filter box, then double-click the "javascript.options.jit.content" entry to set the value to "false." The popular NoScript add-on will also ward off attacks.
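
For users who prefer editing a file to clicking through about:config, the same preference can, as we understand Mozilla's preference system, also be set in a user.js file in the Firefox profile folder. This is a sketch of the equivalent entry, not official Mozilla guidance:

```js
// user.js in the Firefox profile folder: disables TraceMonkey's
// just-in-time compilation of page JavaScript until a patch ships.
user_pref("javascript.options.jit.content", false);
```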

The hacker who published exploit code on the Milw0rm.com malware site Monday was not the first to uncover the vulnerability: Mozilla developers first noted the flaw last Thursday and were in the middle of working on it when the attack code appeared.

"Looking at the exploit code and our test cases, I think this is self-inflicted and we should have hidden the bug earlier," argued Andreas Gal on Bugzilla. Gal is a project scientist at the University of California, Irvine, where the technique called "trace trees" was developed. Firefox 3.5's TraceMonkey engine is based on that technique, and builds on code and ideas shared with the open-source Tamarin Tracing project.

Another contributor agreed. "It would seem that the Milw0rm exploit code is based on the test cases for this bug," said someone identified only as "WD" in the same Bugzilla thread. "When you look at the crash details in a debugger, it's pretty clear that it's exploitable with a heap spray to the access violation address in question."

The fix has been slated for Firefox 3.5.1, a fast-track update that was originally set to be released in the last two weeks of this month.

That update will be accelerated to plug the just-gone-public hole, said Daniel Veditz, a security lead at Mozilla. "[The bug] was checked in yesterday, a few hours before we learned of the Milw0rm posting," Veditz said Tuesday night in a comment on the Mozilla security blog. "This fix was going to be in the 3.5.x update we had scheduled for the end of July, but obviously now we have moved up the schedule for release."

As part of the Mozilla Corporation’s ongoing security and stability process, Firefox 3.5.1 is now available for Windows, Mac, and Linux users as a free download from www.firefox.com.

We strongly recommend that all Firefox 3.5 users upgrade to this latest release. If you already have Firefox 3.5, you will receive an automated update notification within 24 to 48 hours. This update can also be applied manually by selecting “Check for Updates…” from the Help menu.

For a list of changes and more information, please see the Firefox 3.5.1 release notes.

North Korea's military is behind a series of cyber attacks against South Korean and U.S. websites that slowed or disabled access by saturating them with traffic this week, a South Korean news report said on Saturday.

The attacks on dozens of U.S. and South Korean government and business sites appeared to have all but ceased as of Friday but have crippled hundreds of personal computers that had been turned into "zombies" when they were enlisted for the attacks.

North Korea has been seen as a prime suspect for launching the attacks, although the isolated state was not named on a list by the South's Communications Commission (KCC) of sites from five countries where the attacks may have originated.

"The No. 110 lab of the North's Ministry of People's Armed Forces, which is a team of hackers, was ordered to destroy the South Korean networks," the South's National Intelligence Service was quoted as telling a closed-door parliamentary briefing.

The secret unit has been adding computer specialists who work with the North's security apparatus in and outside the country, including in China, to wage systematic cyber warfare, the spy agency was quoted as telling the briefing by the JoongAng daily.

If the North was responsible, it would mark an escalation in tensions already high from Pyongyang's nuclear test in May, a barrage of ballistic missiles in July and repeated taunts of long-time foes Seoul and Washington in its official media.

Internet access is denied to almost everyone in impoverished North Korea, a country that cannot produce enough electricity to light its cities at night. Intelligence sources say leader Kim Jong-il launched a cyber warfare unit several years ago.

But some analysts have questioned the North's involvement, saying it may be the work of industrial spies or pranksters.

The South's Communications Commission said there had been a sharp drop in traffic against target sites by Friday night which appeared to signal an end to the wave of attacks that first hit on a large scale on Tuesday.

The agency said 438 personal computers had been reported destroyed by malicious software used in the attacks.

It is an axiom that “on the Internet nobody knows that you are a dog.”

By the same token, it is all but impossible to know whether you are from North Korea or South Korea.

That puzzle is plaguing law enforcement investigators in several nations who are now hunting for the authors of a small but highly publicized Internet denial-of-service attack that briefly knocked offline the Web sites of some United States and South Korean government agencies and companies.

The attack, which began over the Fourth of July weekend and continued into the next week, led to South Korean accusations that the attack had been conducted by North Korean military or intelligence agents, possibly in retaliation for new United Nations sanctions. American officials quickly cautioned that despite sensational news media coverage, the attacks were no different from similar challenges government agencies face on a daily basis.

Cyberwarfare specialists cautioned this week that the Internet was effectively a “wilderness of mirrors,” and that attributing the source of cyberattacks and other kinds of exploitation is difficult at best and sometimes impossible. Despite the initial assertions and rumors that North Korea was behind the attacks and slight evidence that the programmer had some familiarity with South Korean software, the consensus of most computer security specialists is that the attackers could be located anywhere in the world.

“It would be incredibly difficult to prove that North Korea was involved in this,” said Amrit Williams, chief technology officer for Bigfix, a computer security management firm. “There are no geographic borders for the Internet. I can reach out and touch people everywhere.”

But researchers said that law enforcement investigators were likely to be aided in their pursuit by a second computer security truism — that the only ones who get caught are dumb, unsophisticated or both.

For starters, the attacking system, which cannibalized more than 50,000 computers and which is known as a botnet, was actually small, computer researchers said, compared with similar computer malware programs that are now routinely used by members of the computer underground.

Moreover, independent researchers who have examined the programmer’s instructions used to lash together the tens of thousands of computers said that the program, which mounts what is known as a D.D.O.S., or distributed denial-of-service attack, revealed a high degree of amateurism.

That fact suggested that the authors, who hid themselves by masking their actions behind an international trail of Internet-connected computers, might have left telltale fingerprints that will ultimately be their undoing.

Last week, investigators quickly located computers that were involved with the control of the botnet in Britain and several other countries. However, the Internet service provider whose systems were implicated in the attack quickly issued a news release stating that the attack was actually coming from Miami. The company said that it was cooperating with the Serious Organized Crime Agency, a law enforcement agency that is part of the British government.

But independent investigators who have tracked the botnet cautioned against placing reliance on the locations for the command-and-control computers that have been publicly identified.

“We’re still looking for the initial infection vector,” said Jose Nazario, a network security researcher at Arbor Networks, a computer security provider for large network systems.

Several researchers recalled a similar incident in 2000, when a series of high-profile denial-of-service attacks was conducted against companies including Yahoo!, Amazon.com, Dell, ETrade, eBay and CNN. The culprit proved to be a 15-year-old Canadian high school student who was identified as a suspect only after publicly bragging about the attacks in an online forum.

Finding attackers who have no desire to reveal their locations — even amateurs — may be far more vexing.

“The truth is, we may never know the true origin of the attack unless the attacker made some colossal blunder,” said Joe Stewart, a director in the Counter Threat Unit at SecureWorks, a computer security consulting organization.

Some experts pointed to an entirely different origin for the attacks, or at least the attention paid to them. Cyberwarfare has become a hot topic in Washington this year, with the Obama administration undertaking a detailed review of the nation’s computer security preparedness.

“There is a U.S. political debate going on right now with high stakes and big payoffs,” said Ronald J. Deibert, director of the Citizen Lab at the Munk Center for International Studies at the University of Toronto. “With the administration cyberreview there are many government agencies orbiting around the policy debate that have an interest in pointing to this incident as evidence with obvious implications.” http://www.nytimes.com/2009/07/17/te...y/17cyber.html

Friendly visits

Queensland Police Plans Wardriving Mission
Brett Winterford

Crack down on unsecured wireless networks.

The Queensland Police plans to conduct a 'wardriving' mission around select Queensland towns in an effort to educate citizens about securing their wireless networks.

'Wardriving' refers to the technique of searching for unsecured wireless networks by driving the streets armed simply with a laptop or smartphone seeking network connections.

Detective Superintendent Brian Hay of the Queensland Police, who today was honoured by security vendor McAfee with an "International Cybercrime Fighter Award," told the audience at McAfee's Strategic Summit in Sydney that his unit is "about to undertake a wardriving program, in which we drive through areas of Queensland trying to identify unsecured networks".

When unsecured networks are found, the Queensland Police will pay a friendly visit to the household or small business, informing them of the risks they are exposing themselves to.

"It is a simple campaign, much like past police campaigns in which officers walk around railway stations checking that cars have been locked. If you leave your car unlocked, you come back and find a note from the Police warning you of the dangers involved with leaving your car unsecured," Hay told iTnews.

"We know unsecured networks are a problem," Hay said. "We know the crooks are out there driving around trying to identify these networks. We can't just sit back and not address the issue.

"What we need to do is put it on the agenda. We are pretty sure this is a big problem, so let's test the waters - let's scan the environment. And let's tell people, 'Excuse me, this could happen to you and your family and this is how you can rectify it'."

Hay said the Queensland Police won't require many resources to make the exercise a success.

"We pick out small geographic locations, scan the environment and promote it through the media - highlighting the significance of problem and how to take corrective steps," he said.

Hay said the Police would ideally hope to return to surveyed areas within a month to "see if they've fixed the problem."

He said Queensland Police had discussed the potential to conduct and promote the exercise in conjunction with unnamed corporate partners, but "with or without them, I can assure you the Queensland Police is going to do this. I'll make sure it gets off the ground."

EU ministers are gathering in Stockholm this week to advance their work on the Stockholm Programme, a five-year plan they claim is designed to make it easier to catch criminals and keep Europe’s citizens safe.

But despite soothing words from politicians about the programme’s virtues, it’s critical for EU citizens to stand up now and protest against the threat it presents to privacy and individual rights.

On the surface, the Stockholm Programme’s professed set of goals may appear somewhat benign – perhaps even sensible – with its calls for increased cooperation to fight terrorism and organized cross-border crime.

But we’ve already got a pretty good idea that the kinds of measures under consideration for meeting the Stockholm Programme’s goals are anything but benign.

In short, we’re talking about increased surveillance which tramples on the privacy rights of individuals and about higher walls being constructed around Europe’s borders.

Last summer, a number of details about the concrete steps associated with the Stockholm Programme were leaked from the EU’s so-called Future Group in connection with a meeting of EU justice ministers in Nice.

While the drafters of the Stockholm Programme profess it is a tool that will aid the “free movement of people” within the EU, there is very little about one’s movements that will remain “free” if EU ‘securocrats’ are allowed to implement the sorts of measures hinted at in the Future Group document.

Among other things, the leaked Future Group document envisages “new and more flexible expulsion and surveillance measures” which would make it easier for states across Europe to gather increasingly detailed information about citizens and their movements, as well as block the entry of others.

Moreover, the authors also discuss the need for “increased synergies between police and security intelligence services” across Europe, meaning that information gathered by local law enforcement in Piteå could eventually end up in the hands of counter-terrorism agents in Palermo.

Are we really “free” if our movements are tracked by the state and that information can end up being read by any intelligence or law enforcement agency in Europe?

Will we be “free” if the state has access to information about our banking habits, internet use, and can pinpoint our location using mobile phone data?

Whatever happened to the notion that the citizens of Europe could go about their business without having Big Brother continually tapping them on the shoulder and watching them with a suspicious eye?

While the indications we’ve seen so far about the plans for fulfilling the Stockholm Programme are frightening, it’s still early enough in the process for the citizens of Europe to make their voices heard.

While demonstrators plan on taking to the streets in Stockholm, we here at the European Parliament in Brussels are getting ready to fight the next round from within the system.

It’s going to be a long, difficult autumn for us privacy advocates and bloggers as we do battle to make sure some of the more intrusive proposals don’t end up making it into the final document, which is expected to be presented for signature by heads of state and government at the EU Summit in Stockholm in December.

And even in the years after the programme is adopted, those of us who support privacy rights will have to be vigilant regarding additional measures which will likely be debated in reference to the Stockholm Programme.

But what’s important now is that we, at an early stage, show how we feel and are clear about what concerns us.

If the politicians don't meet with some resistance, they’ll never put the brakes on the Stockholm Programme before it ends up in a train wreck of invasive measures which all but wipe out any notion of personal privacy and integrity among the citizens of Europe.

It’s exciting to see how many activists have been mobilized so far by these important issues of privacy and individual rights. And it’s important to protest. If we don’t, politicians won’t realize that they’ve stepped over the line.

So get out and demonstrate! Blog, write, and shout to show everyone in the capitals of Europe, as well as the European capital, that privacy is an important right for every individual in the 27 member states of the European Union.

If we don’t speak loudly now, we may find our views barely able to utter a whisper without the Big Brother of Europe holding his hand across our mouths. http://www.thelocal.se/20680/20090715/

This week in Rome, bloggers and activists wore gags to protest a proposed law that could impose heavy fines on bloggers who don’t correct “offensive” comments within 48 hours.

About 200 bloggers gathered at sunset in the picturesque Piazza Navona July 15, while hundreds of others joined the protest online by freezing blog posts for a day.

“A blogger is not a professional reporter,” yelled 35-year-old Guido Scorza from atop a marble bench as he held a heavy megaphone. “A blogger doesn’t have a legal office to defend him from lawsuits,” he said.

The controversial Alfano proposal — named after its author, Italy’s Minister of Justice Angelino Alfano — has already been approved by Parliament and awaits Senate approval.

If passed, the law would force bloggers to edit any post denounced to the government as defamatory. If the blogger refused, the denouncing citizen could sue for as much as $18,000.

Few bloggers can afford such a high price for freedom of expression. Strike organizers said that the provision especially aims to discourage bloggers from commenting on politicians and other public figures.

“They are trying to reduce the number of bloggers in Italy,” said Scorza, a lawyer and expert in digital civil rights. He said the internet has given Italians the tools to question their elected representatives.

One such Italian, the comedian-turned-blogger Beppe Grillo, has used the web to expose the Italian Parliament’s inability to act on crucial issues such as conflict of interest, corruption and the environment.

Every year, Grillo organizes a popular event called “V-Day,” which promotes active citizenship, and the use of the web as a news source. Recently, Grillo’s popularity won him an online election as the next secretary of the Italian Democratic Party (PD). The party, however, refused his candidacy.

In a country where the prime minister owns the three largest commercial TV channels, the biggest publishing house, a leading advertising agency, and — as head of state — oversees Italian public television, RAI, bloggers represent a fresh breeze of critical voices.

A blog reader like Damiano Zito, a 22-year-old engineering student from southern Italy, put it bluntly: “If bloggers start shutting down, I won’t have any alternative source of information.”

But Antonio Palmieri, a Parliament member from Berlusconi’s People of Freedom Party (PdL), said the Alfano law aims at stopping bloggers from abusing the freedom of the internet.

“How would you feel if you were anonymously insulted on the internet every day?” he said.

Palmieri defended the “Alfano” proposal but also said it was written as an emotional reaction. He is working to improve the language of the proposal by clarifying what kinds of blogs and web sites should be liable. Palmieri thinks bigger blogs and online newspapers that affect public opinion should be regulated.

The Alfano proposal updates a 1948 law that was passed to regulate newspapers created after World War II. The law required newspapers to either correct published information that citizens denounced as defamatory or be subject to a fine. The Alfano proposal extends the rule to so-called “Information Sites.”

Bloggers demonstrating in Piazza Navona called it a “geriatric” reaction by Italian Parliament members who don’t understand the nature of the web. But recent Italian events shed a different light on this latest attempt to regulate the Italian web — maybe more strategic than geriatric.

In 2007, a YouTube video of a recorded phone call between then-opposition leader Silvio Berlusconi and a former RAI director exposed Berlusconi to public scrutiny. In the call, Berlusconi asked the RAI director to hire two women as a favor for a senator of the majority. Berlusconi said explicitly that he expected the senator to return the favor by helping him regain a Parliamentary majority.

Recently, now-Prime Minister Berlusconi has been under heavy public scrutiny for an ambiguous relationship with an 18-year-old aspiring TV star. Within days, the internet was saturated with satirical renditions of the alleged relationship in print, audio, photo and video.

“This government is formed by people who for 30 years got used to having a tight grip on the media,” said Scorza. He said his goal is for the web to be regulated with civil lawsuits.

“The principle of accountability is sacred and I think that anyone posting information of public interest should be accurate,” said Scorza, “but the way they [legislators] want to apply the principle is twisted.” http://www.globalpost.com/dispatch/i...loggers-strike

Update in the ZGeek Legal Battle

Good news and bad news everyone. Last night I got an email from Greg Smith of Myrmidon enterprises regarding a hearing for today (Surprise!). The hearing was an effort to have ZGeek closed by a court order. Our lawyers attended and we were victorious. The request to have ZGeek closed was denied and the summons for the 17th of August was struck out.

So basically, we are in the clear... For a little bit as I don’t think this is the last we’ve heard from these guys and our lawyer says it is obvious that they are not done with us.

Unfortunately today’s proceedings cleared out the last of our legal funds. I'm sorry guys, but I won't be sending any money back, it's going to our lawyers. Luckily they are pretty damn awesome.

If the trouble returns, then I will put out the call to arms, as we will need funds to beat this. In the meantime, if you want to donate, you can at any time. Use this link or the donate link in the top navigation bar in the forums. I promise that none of your money will be spent on me. I have a full-time job for that. All donations will be used to protect the site from future problems like this.

But now, some even worse news. ZGeek is abandoning Australia. ZGeek as a company has been shut down, and any future of the site conducting business in Australia is just not going to happen until the laws change, as they offer no protection for internet content hosts based in Australia. Basically, if you allow comments on your website and you live here, you are open to the same troubles I am having. Even if your site is hosted overseas. Got your own blog? Be very worried. Even after we complied with their lawyers’ demands they are still coming after me, and the Broadcasting Act allows them.

That's pretty fucked up.

Although we are not finished, I’d just like to say a special thanks to Sagacious, That_bloke, Scythe and Donk. These guys have been above and beyond with their help. Plus to all those who have donated and sent their well wishes. This time has been pretty stressful and all of your support has meant the world to buffy and me. It shows this site IS worth fighting for and our community kicks ass. We have a fighting force of extraordinary magnitude! So you have my gratitude.

A US citizen who uploaded thousands of images of National Portrait Gallery paintings to Wikipedia has been threatened with legal action, if a letter from Lincoln's Inn lawyers Farrer & Co., reproduced on the Wikimedia Commons website, is genuine.

All of the paintings are thought to be from the Victorian era or earlier, and are therefore in the public domain. The rather gristly bone of contention, however, is whether the high resolution images of those paintings are protected by their own copyright.

The user, 'DCoetzee', is said to have circumvented technical barriers put in place by NPG and downloaded 3,300 high resolution images, which he or she then uploaded to Wikipedia. Here's one, for example.

Assuming the letter is genuine, this raises all kinds of questions, both about the specific case and its implications to web publishing. Can a flat photograph of an image constitute a creative work in its own right? The US courts seem to think not, as decided in the case of Bridgeman v. Corel. But this case would be brought under UK jurisprudence, where the matter has not been tested. According to the letter from Farrer & Co, 'practicing lawyers and legal academics alike generally agree that under a UK law analysis the judgment in Bridgeman v. Corel is wrong and that copyright can subsist in a photograph of a painting'. The waters are further muddied because both the threatened user and the Wikipedia servers are based in the USA. Some think this matters, some think it does not. It's a tangled web, as the comments on Slashdot testify.

Finally, there's a question mark over the NPG's behaviour in threatening such an action. These paintings belong to the nation and some were acquired with public money. As we are prohibited from taking our own photographs of the collection, shouldn't we be allowed free and unfettered use of the official imagery? Or does the gallery have a responsibility to protect the definitive digital versions of its holdings? We'll watch this one with interest.http://londonist.com/2009/07/nationa..._to_sue_wi.php

Some E-Books Are More Equal Than Others
David Pogue

This morning, hundreds of Amazon Kindle owners awoke to discover that books by a certain famous author had mysteriously disappeared from their e-book readers. These were books that they had bought and paid for—thought they owned.

But no, apparently the publisher changed its mind about offering an electronic edition, and apparently Amazon, whose business lives and dies by publisher happiness, caved. It electronically deleted all books by this author from people’s Kindles and credited their accounts for the price.

This is ugly for all kinds of reasons. Amazon says that this sort of thing is “rare,” but that it can happen at all is unsettling; we’ve been taught to believe that e-books are, you know, just like books, only better. Already, we’ve learned that they’re not really like books, in that once we’re finished reading them, we can’t resell or even donate them. But now we learn that all sales may not even be final.

As one of my readers noted, it’s like Barnes & Noble sneaking into our homes in the middle of the night, taking some books that we’ve been reading off our nightstands, and leaving us a check on the coffee table.

In George Orwell’s “1984,” government censors erase all traces of news articles embarrassing to Big Brother by sending them down an incineration chute called the “memory hole.”

On Friday, it was “1984” and another Orwell book, “Animal Farm,” that were dropped down the memory hole — this time by Amazon.com.

In a move that angered customers and generated waves of online pique, Amazon remotely deleted some digital editions of the books from the Kindle devices of readers who had bought them.

An Amazon spokesman, Drew Herdener, said in an e-mail message that the books were added to the Kindle store by a company that did not have rights to them, using a self-service function. “When we were notified of this by the rights holder, we removed the illegal copies from our systems and from customers’ devices, and refunded customers,” he said.

Amazon effectively acknowledged that the deletions were a bad idea. “We are changing our systems so that in the future we will not remove books from customers’ devices in these circumstances,” Mr. Herdener said.

Customers whose books were deleted indicated that MobileReference, a digital publisher, had sold them. An e-mail message to SoundTells, the company that owns MobileReference, was not immediately returned.

Digital books purchased for the Kindle are sent to it over a wireless network. Amazon can also use that network to synchronize electronic books between devices — and apparently to make them vanish.

An authorized digital edition of “1984” from its American publisher, Houghton Mifflin Harcourt, was still available on the Kindle store Friday night, but there was no such version of “Animal Farm.”

People who purchased the rescinded editions of the books reacted with indignation, while acknowledging the literary ironies involved.

“Of all the books to recall,” said Charles Slater, an executive with a sheet-music retailer in Philadelphia, who bought the digital edition of “1984” for 99 cents last month. “I never imagined that Amazon actually had the right, the authority or even the ability to delete something that I had already purchased.”

Antoine Bruguier, an engineer in Silicon Valley, said he had noticed that his digital copy of “1984” appeared to be a scan of a paper edition of the book. “If this Kindle breaks, I won’t buy a new one, that’s for sure,” he said.

Amazon appears to have deleted other purchased e-books from Kindles recently. Customers commenting on Web forums reported having digital editions of the Harry Potter books and the novels of Ayn Rand disappear over similar issues.

Amazon’s published terms of service agreement for the Kindle does not appear to give the company the right to delete purchases after they have been made. It says Amazon grants customers the right to keep a “permanent copy of the applicable digital content.”

Retailers of physical goods cannot, of course, force their way into a customer’s home to take back a purchase, no matter how bootlegged it turns out to be. Yet Amazon appears to maintain a unique tether to the digital content it sells for the Kindle.

“It illustrates how few rights you have when you buy an e-book from Amazon,” said Bruce Schneier, chief security technology officer for British Telecom and an expert on computer security and commerce. “As a Kindle owner, I’m frustrated. I can’t lend people books and I can’t sell books that I’ve already read, and now it turns out that I can’t even count on still having my books tomorrow.”

Justin Gawronski, a 17-year-old from the Detroit area, was reading “1984” on his Kindle for a summer reading assignment and lost all his notes and annotations when the file vanished. “They didn’t just take a book back, they stole my work,” he said.

On the Internet, of course, there is no such thing as a memory hole. While the copyright on “1984” will not expire until 2044 in the United States, it has already expired in other countries, including Canada, Australia and Russia. Web sites in those countries offer digital copies of the book free to all comers.

http://www.nytimes.com/2009/07/18/te.../18amazon.html

Death by Cliff Plunge, With a Push From Twitter
Monica Corcoran

VIRUSES may spread quickly on the Internet, but hoaxes can be pretty contagious, too. In the same week that Ed McMahon, Farrah Fawcett and Michael Jackson died, the Web became a hotbed of made-up death reports about various celebrities.

Jeff Goldblum was the first to go. A headline on Google News read, “Jeff Goldblum Has Died, Falls to Death on Set!” Details were murky, but just specific enough to sound plausible. The story went that Mr. Goldblum, 56, had plummeted off the 60-foot Kauri Cliffs in New Zealand while filming a movie.

What started out as a prank soon took on a life of its own. Twitter users retweeted the item, and the community became an echo chamber. Facebook members chimed in.

By the week’s end, the celebrity death toll had turned into a conga line. Harrison Ford had gone down in a capsized yacht in St-Tropez; George Clooney’s private plane had nose-dived somewhere in Colorado. Miley Cyrus? Car accident. Natalie Portman? That tricky cliff in New Zealand. Ellen DeGeneres, Britney Spears and the comedian Louie Anderson were allegedly R.I.P., too.

“We got a phone call from a friend who read it on Facebook, that’s how we found out,” said Mr. Clooney’s publicist, Stan Rosenfield, who also received calls from news outlets seeking confirmation. Instead of issuing a news release, Mr. Rosenfield contacted TMZ, a celebrity news and gossip site, which posted a story that dispelled the rumor and shook a finger at the mongers.

As for Mr. Clooney himself, “George quoted Mark Twain and said his death had been ‘greatly exaggerated,’ ” Mr. Rosenfield said.

Twitter may have been the messenger, but most of the rumors did not originate there. The hoax trifecta of Mr. Goldblum, Mr. Ford and Mr. Clooney started at a prank Web site called Fakeawish.com, which offers visitors a template to generate outlandish stories about the actor or actress of their choice. Think of it as macabre Mad Libs for the crowdsourcing era.

It works like this: a user enters a celebrity’s name and is given a list of fake news stories to choose from — the celebrity can die by plane, yacht or cliff, or be hospitalized after a traffic altercation. The user must choose whether the victim is male or female.

From there, the prankster is directed to a site called Global Associated News, where a vaguely plausible story appears, ready to be e-mailed, linked to and instant-messaged. A disclaimer at the bottom of the page reveals that the content is “100% fabricated.”
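The workflow described above — pick a celebrity, pick a death scenario, generate a plausible-looking story with a buried disclaimer — is ordinary string templating. The sketch below is a hypothetical illustration of that mechanism; the scenario wording, function names, and places are invented for this example and are not Fakeawish.com's actual templates.

```python
import random

# Illustrative templates echoing the scenarios the article mentions
# (cliff, yacht, plane, traffic altercation); wording is invented.
TEMPLATES = [
    "{name} Has Died, Falls to Death on Set!",
    "{name} Dies as Yacht Capsizes Off {place}",
    "{name}'s Private Plane Nose-Dives Over {place}",
    "{name} Hospitalized After Traffic Altercation",
]

PLACES = ["New Zealand", "St-Tropez", "Colorado"]

# The fine print that reveals the hoax, per the article.
DISCLAIMER = "This story is 100% fabricated."


def fake_headline(name, seed=None):
    """Fill one randomly chosen template with the celebrity's name."""
    rng = random.Random(seed)
    template = rng.choice(TEMPLATES)
    return template.format(name=name, place=rng.choice(PLACES))


def fake_story(name, seed=None):
    """A headline plus the obligatory bottom-of-page disclaimer."""
    return fake_headline(name, seed) + "\n\n" + DISCLAIMER
```

The point of the sketch is how little machinery a viral hoax needs: a handful of templates and a name field, with the disclaimer kept well away from the headline that gets retweeted.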

The Borat of this particular Web site is Rich Hoover, a 37-year-old Atlanta resident who parlayed his information-technology expertise into a modest empire of 20 Web sites, including Global Associated News and a YouTube-style pornography site. He is proud to say that he makes money off of his sites (through advertising) and generates all the death hoax stories himself.

“I’d be lying if I said there wasn’t some twisted sense of satisfaction or accomplishment,” said Mr. Hoover, who designed the site in 1998 to amuse his co-workers and refined it in 2002 to concentrate on celebrities. The recent popularity is a result of all the traffic driven to his site by Twitter feeds. “In a small way, you have to pinch yourself and think, ‘Wow! I caused all this,’ ” he said.

Mr. Hoover was also behind the 2006 hoax that had Tom Hanks careering off the Kauri Cliffs. Ditto for Tom Cruise, whom Mr. Hoover had plunging to his death in 2008.

Why New Zealand? “I’m an avid golfer, and I saw a segment on New Zealand when I was watching the PGA Tour — it looked beautiful,” Mr. Hoover said, adding that he mostly uses international locations because they take longer to disprove. He said the only cease-and-desist letter he has received since 2002 was sent by a lawyer for Michael Vick, the football star (whose problems with dog-fighting may have pushed this concern to the back burner).

Clearly, pumping up fake stories about famous people has been a popular and even lucrative pastime for ages. (Supermarket tabloids, anyone?) These days, the same naughty human instincts are still there — you know, the ones that have prompted generations of teenagers to make prank phone calls — and technology has moved things forward.

“Within Internet memes, it’s natural for people to build on top of what is already happening,” said Tim Hwang, a research associate at the Berkman Center for Internet and Society at Harvard.

Twitter, for instance, already has credibility as a news site, thanks to its users’ real-time coverage of the recent violence in Iran, the shooting rampage in Mumbai last year and the US Airways plane that landed in the Hudson River. That type of citizen journalism helped legitimize Twitter as a place people turn for the freshest developments.

TMZ is also known as being nimble. After it beat the print and broadcast news outlets in reporting Michael Jackson’s death, a flurry of Twitter posts followed. Harvey Levin, TMZ’s editor in chief, said he receives celebrity death tips all the time. “But we fact-check everything,” he said. “We have legal and research departments. It’s rigorous.”

This is not the case on Twitter, which is just as useful for disseminating bad tips as good ones. The blogger Emily Miller at Politics Daily has coined the term TwitterDead to refer to victims of the latest hoaxes.

Biz Stone, a founder of Twitter, said by e-mail, “We don’t typically identify rumors as abuse, nor do we actively monitor user content or censor user content.” Among those rumored dead on that site was Rick Astley, the “Never Gonna Give You Up” singer whose name is synonymous with a prank called Rickrolling (blasting a clip of his signature song at an inappropriate place or time). And among the people who fell for the Goldblum hoax was Demi Moore, who used Twitter to express her grief (perhaps she should have been mindful of her Twitter handle, Mrs. Kutcher, and her husband’s former TV show, “Punk’d”).

In Mr. Goldblum’s case, the rumor spread so quickly and widely on Twitter and Facebook that his publicist issued a statement to assure people that he was “fine in Los Angeles.” Four days later, Mr. Goldblum appeared on “The Colbert Report” to dispel the hearsay and playfully eulogize himself.

In a telephone interview, Mr. Goldblum said he saw the humor in the prank: “We have to surrender to our own death. That’s what monks do.”

He even derived some benefits. His name surged in Google searches, and “people came back into my life that I had been out of touch with,” he said. “They called to say ‘I was very upset and I’m glad you’re alive,’ and so it’s been a sort of reunion for me.”

Consider Ed McMahon, who had been rumored dead on the Internet well before he actually died. “Twitter can be a wonderful social tool, but at the same time it can be confused as a media outlet,” said Howard Bragman, a publicist who had represented Mr. McMahon. “Once something gets out there, I’m not naïve enough to think I can stop it.”

Nicholas DiFonzo, a professor at the Rochester Institute of Technology who studies the psychology of rumors, said that the mass confusion over Mr. Jackson’s sudden death probably left people craving a feeling of control. “People spread rumors when there is some uncertainty or anxiety that they are trying to calm,” he said.

Franklin D. Roosevelt’s death on April 12, 1945, caused similar morbid ripples, according to Alex Boese, who wrote a book about historical hoaxes and founded an online Museum of Hoaxes. Back then, rumors spread that Frank Sinatra, Babe Ruth, Al Jolson, Errol Flynn and other notables had suddenly expired as well. “This has been going on for hundreds of years,” Mr. Boese said. “It’s the people, not the Internet. You can’t blame Twitter.”

http://www.nytimes.com/2009/07/12/fashion/12hoax.html

15-Year-Old Analyst Sparks Storm After Trashing Twitter
Barry Collins

A report on teenagers' media habits written by a 15-year-old schoolboy at Morgan Stanley has become an overnight sensation.

Intern Matthew Robson was asked to write a report about his friends' use of technology during his work experience stint with the firm's media analysts.

Team leader Edward Hill-Wood said the report was "one of the clearest and most thought-provoking insights we have seen," according to a report in the Financial Times, and so decided to publish it.

The report generated "five or six times" more interest than the team's usual reports, according to Hill-Wood. "We've had dozens and dozens of fund managers, and several CEOs, e-mailing and calling all day," he told the newspaper.

The 15-year-old poured scorn on social-networking site of the moment, Twitter, claiming that teenagers don't use it because "they realise that no one is viewing their profile, so their tweets are pointless".

He also claimed that teens were deserting traditional media such as television and newspapers in favour of advert-free music on sites such as Last.fm and online news sources.

Robson also had bad news for the mobile phone operators, claiming that games consoles have become a more attractive medium for chatting to friends than their phones.

We all know that walking and texting is a tough combination -- but a Staten Island teen learned the hard way when she fell into an uncovered sewer manhole while trying to send a message.

Now, the family of Alexa Longueira, 15, intends to sue.

The girl suffered a fright and some scrapes on her arms and back after she dropped into the hole on Victory Boulevard.

"It was four or five feet, it was very painful. I kind of crawled out and the DEP guys came running and helped me," Longueira told the Staten Island Advance. "They were just, like, 'I'm sorry! I'm sorry!'"

For its part, the Department of Environmental Protection said its workers had turned away briefly to grab some cones when the incident occurred.

"We regret that this happened and wish the young woman a speedy recovery," DEP spokeswoman Mercedes Padilla said in a statement. She added that crews were flushing a high-pressure sewer line at the time.

The girl was checked out at Staten Island University Hospital and released.

Sewer line workers are supposed to cut off pedestrian access to work sites or at least mark them with warning signs.

The family said they will file a lawsuit -- for what, though, is not immediately clear. Her mother, Kim Longueira, said it doesn't matter that her daughter was walking and texting, and pointed to the 'gross' factor that can't be ignored.

You’re having dinner with your teenage kids, and they text throughout: you hate it; they’re fine with it. At the office, managers are uncertain about texting during business meetings: many younger workers accept it; some older workers resist. Those who defend texting regard such encounters as the clash of two legitimate cultures, a conflict of manners not morals. If a community — teenagers, young workers — consents to conduct that does no harm, does that make it O.K., ethically speaking?

The Argument:

Seek consent and do no harm is a useful moral precept, one by which some couples, that amorous community of two, wisely govern their erotic lives, but it does not validate ubiquitous text messaging. When it comes to texting, there is no authentic consent, and there is genuine harm.

Neither teenagers nor young workers authorized a culture of ongoing interruption. No debate was held, no vote was taken around the junior high cafeteria or the employee lounge on the proposition: Shall we stay in constant contact, texting unceasingly? Instead, like most people, both groups merely adapt to the culture they find themselves in, often without questioning or even being consciously aware of its norms. That’s acquiescence, not agreement.

Few residents of Williamsburg, Va., in, say, 1740 rallied against the law that restricted voting to property-owning white men. For decades, there was little active local opposition to the sexual segregation in various Persian Gulf states. A more benign example: few of us are French by choice, but most French people act much like other French people, for good and ill. Conformity does not imply consent. It simply attests to the influence of one’s neighbors.

So it is with incessant texting, a noxious practice that does not merely alter our in-person interactions but damages them. Even a routine conversation demands continuity and the focus of attention: it cannot, without detriment, be disrupted every few moments while someone deals with a text message. More intimate encounters suffer greater harm. In romantic comedy, when someone breaks a tender embrace to take a phone call, that’s a sure sign of love gone bad. After any interruption, it takes a while to regain concentration, one reason few of us want our surgeon to text while she’s performing a delicate neurological procedure upon us. Here’s a sentence you do not want to hear in the operating room or the bedroom: “Now, where was I?”

Various experiments have shown the deleterious effects of interruption; one study, unsurprisingly, demonstrates that an interrupted task takes longer to complete and seems more difficult, and that the person doing it feels increased annoyance and anxiety.

Mine is not a Luddite’s argument, not broadly anti-technology or even anti-texting. (I’m typing this by electric light on one of those computing machines. Newfangled is my favorite kind of fangled.) There are no doubt benefits and pleasures to texting, and your quietly texting while sitting on a park bench or home alone harms nobody. But what is benign in one setting can be toxic in another. (Chainsaws: useful in the forest, dubious at the dinner table. Or as Dr. Johnson put it in a pre-chainsaw age, “A cow is a very good animal in the field; but we turn her out of a garden.”)

Nor am I fretful that relentless texting hurts the texter herself. Critics have voiced a broad range of such concerns: too much texting damages a young person’s intelligence, emotional development and thumbs. That may be so, but it is not germane here. When you injure yourself, that is unfortunate; when you injure someone else, you are unethical. (I can thus enjoy reading about a texting teen who fell into a manhole. When a man is tired of cartoon mishaps, he is tired of life. And yes, that teen is fine now.)

Last week, a Massachusetts grand jury indicted a Boston motorman who crashed his trolley into another, injuring 62 people: he was texting on duty. Last month, Patti LuPone berated an audience member who pulled out an electronic device during her show in Las Vegas. (Theaters forbid the audience to text during a performance, a rule routinely flouted. Perhaps stage managers could be issued tranquilizer darts and encouraged to shoot audience members who open any device during a show. At intermission, ushers can drag out the unconscious and confiscate their phones. Or we might institute something I call Patti’s Law: Any two-time Tony winner would be empowered to carry a gun onstage and shoot similar offenders.)

These are the easy cases, of course: clearly it is unethical to text when doing so risks harming other people. And formal regulation can easily address them; a dozen states and the District of Columbia prohibit texting while driving, for example. But the problem of perpetual texting in more casual settings cannot be solved by legislation. No parent will call the cops if a son or daughter texts at table. Instead, we need new manners to be explicitly introduced at home and at work, one way social customs can evolve to restrain this emerging technology.

Lest casual texting seem a trivial concern, remember that some political observers trace the recent stalemate in the New York Senate to the wrath of power-broker Tom Golisano, who was offended that majority leader Malcolm Smith fiddled with his BlackBerry throughout a meeting between them. When the dust settled, the State Senate had been transformed from merely disheartening to genuinely grotesque. I wouldn’t want that on my conscience.

http://ethicist.blogs.nytimes.com/20...ting-is-wrong/

When a Blogger Voices Approval, a Sponsor May Be Lurking
Pradnya Joshi

Colleen Padilla, a 33-year-old mother of two who lives in suburban Philadelphia, has reviewed nearly 1,500 products, including baby clothes, microwave dinners and the Nintendo Wii, on her popular Web site Classymommy.com. Her site attracts 60,000 unique visitors every month, and Ms. Padilla attracts something else: free items from companies eager to promote their products to her readers.

Marketing companies are keen to get their products into the hands of so-called influencers who have loyal online followings because the opinions of such consumers help products stand out amid the clutter, particularly in social media.

“You can’t really write a review if you haven’t used it or done it,” Ms. Padilla said. “It really is a valuable thing for marketers. It’s a real mom with a real voice.”

Ms. Padilla typically acknowledges in each review which products were sent to her by companies and which items she bought herself. Other items on her site include her own videos for brands like Healthy Choice, which she labels as sponsored posts. But unlike postings in most journalism outlets or independent review sites, most companies can be assured that there will not be a negative review: if she does not like a product, she simply does not post anything about it.

The proliferation of paid sponsorships online has not been without controversy. Some in the online world deride the actions as kickbacks. Others also question the legitimacy of bloggers’ opinions, even when the commercial relationships are clearly outlined to readers.

And the Federal Trade Commission is taking a hard look at such practices and may soon require online media to comply with disclosure rules under its truth-in-advertising guidelines.

A draft of the new rules was posted for public comments this year and the staff is to make a formal recommendation to be presented to the commissioners for a vote, perhaps by early fall.

“Consumers have a right to know when they’re being pitched a product,” said Richard Cleland, an assistant director at the Federal Trade Commission.

Yet in many ways, the hypercommercialism of the Web is changing too quickly for consumers and regulators to keep up. Product placements are landing on so-called status updates on Facebook, companies are sponsoring messages on Twitter and bloggers are defining their own parameters of what constitutes independent work versus advertising.

TNT, for instance, is experimenting with a paid relationship with a popular blogger, Melanie Notkin, founder and chief executive of SavvyAuntie.com, a site that has carved out a demographic niche of professional aunts without children.

Ms. Notkin is sending out several messages to her more than 10,000 Twitter followers on Tuesday nights, when a new episode of “Saving Grace” is shown.

Ms. Notkin declined to disclose how much she is paid by TNT, only saying that she is “well compensated.” But she says she is upfront with her readers about the relationship with the network by labeling every commercial tweet with “[sp],” which stands for sponsored post.

“TNT never told me and will never tell me what to say,” Ms. Notkin stressed. “They want to associate with brands that people trust.”

For some bloggers, product sponsorships have become a lucrative side business. Drew Bennett of North Attleboro, Mass., began a photo-a-day blog more than four years ago and was one of the early participants in the site PayPerPost.com in 2006 and later, its sister network SocialSpark.

In three years, Mr. Bennett has written more than 600 posts for companies including Blockbuster and Xshot, a telescopic camera extender, typically making $5.35 to $10 a post. Through some arrangements, he says, he earns 11 cents to 68 cents every time a reader clicks from his site to a corporate site that sells the product.

“You can gain a whole different audience with social media,” said Mr. Bennett, who runs BenSpark.com and other sites. Mr. Bennett is also making money by recruiting and teaching other bloggers how to optimize their sites.

Izea, an online marketing company based in Orlando, Fla., which created PayPerPost, says it has 25,000 active advertisers ranging from Sea World to small online retailers. It feeds work to 265,000 bloggers in its network, and pays, on average, $34 a post.

“Our focus has always been sponsored conversations,” said Ted Murphy, founder and chief executive of Izea.

Within the next few weeks, Izea plans to introduce another way to reach consumers — a “Sponsored Tweets” platform for Twitter users to blast promotional messages to their followers.

A campaign that Izea conducted in December for Kmart generated 800 blog posts and 3,200 Twitter messages that reached 2.5 million people over 30 days, Mr. Murphy said. In the campaign, six popular bloggers known to be influencers were given $500 gift cards to shop at the discount chain and asked to write about their experiences.

Many of these marketing practices have created gray areas as to what constitutes advertising versus consumer outreach. For instance, an expensive gadget in short supply in the hands of influential bloggers could be worth a lot more intangibly than a cash payment of, say, $50 a post.

Mr. Cleland said that the F.T.C. would most likely not spell out the disclosure requirements but instead would rely on Internet users to judge what constitutes fair disclosure, adding that a lengthy description written in legalese would probably be counterproductive.

At the same time, the marketing industry has also been revising its guidelines. The Word of Mouth Marketing Association’s ethics code says that manufacturers should not pay cash to consumers to make recommendations or endorsements, but it is evaluating how companies should handle disclosures of product giveaways to bloggers.

“I’m actually thrilled that the F.T.C. is looking hard at the issue,” said Paul Rand, president-elect of the association, a trade group representing 400 companies.

For many bloggers, commercial messages are often integrated into their missions. Katja Presnal, who created the Skimbaco Lifestyle and Skimbaco Home blog, recently wrote about e.l.f. cosmetics after meeting its chief marketing officer at a conference. The company has since provided her with products for giveaway bags for events she has hosted and has asked her to provide testimonials for an online video, all of which she did free.

Mrs. Presnal, who has more than 14,000 followers on Twitter, says that while she is paid fees for a few blog posts, she does not always accept endorsement deals.

“There is this misconception that bloggers write product reviews to get free stuff,” said Mrs. Presnal, a mother of three who lives in upstate New York. “I don’t blog about a product if I don’t really like it.”

In March, Better Homes and Gardens gave her a $500 gift certificate to do a bedroom makeover in her house using its new line of home furnishings bearing the magazine’s name. She also received a fee for providing a review to BetterHomesandGardens.com.

When Ford Motor flew her to Detroit this year for a test drive of the Fusion hybrid, she says, she expressed her true assessment on her blog, saying that she thought the vehicle would work for a family with teenagers, but would not fit the needs of her three children and a dog.

“I still wrote my honest opinion,” said Mrs. Presnal, who did not receive a fee from Ford, but had her travel expenses paid. “If you don’t, in the long run it’s going to hurt your credibility.”

Still, the encroachment of commercialism into new-media formats worries some consumer advocates. Many forms of online word-of-mouth marketing depend on the perception of unsolicited or personal opinions, said Robert Weissman, managing director of the advocacy group Commercial Alert.

Web Traffic (or Lack of It) May Be a Reason for a Columnist’s Dismissal
Brian Stelter

The political columnist Dan Froomkin was hired by The Huffington Post last week, two short weeks after being fired by a more traditional Post, the venerable newspaper in Washington.

In his departure from The Washington Post, there may be a lesson for journalists: keep close tabs on Web traffic.

The Washington Post indicated that a slump in visitors to Mr. Froomkin’s well-known Web column, White House Watch, contributed to its decision not to renew his contract in June. The popularity of Mr. Froomkin’s column was tied in part to its consistent critiques of the Bush administration, and he acknowledges that his page views declined after President Obama took office.

Still, the rationale — even if it was masking other reasons for Mr. Froomkin’s departure — surprised some writers who are uncomfortable being judged by their Web traffic. The Washington City Paper, in an analysis of Mr. Froomkin’s departure, called it a historical marker for The Post, “the first time that a major personnel decision has hinged so squarely on Web hits.”

“It’s an unusual public rationale for serious newspaper people, that’s for sure,” said Jay Rosen, a journalism professor at New York University.

Mr. Froomkin, a contract employee who worked from home, wrote more than 1,000 columns for The Post’s Web site, beginning in 2004. He filtered the news media’s coverage of Mr. Bush through a critical lens, writing in his farewell column that when he thinks of the Bush years, “I think of the lies. There were so many.”

He regularly criticized the news media’s handling of the president, saying in a final column last month that “mainstream-media journalism missed the real Bush story for way too long.”

Mr. Froomkin’s dismissal raised a remarkable amount of ire among many liberal bloggers, some of whom wondered whether there were ideological motives for The Post’s decision. His fans sense that he will be more valued at Arianna Huffington’s Post, where he will write regular dispatches and manage four reporters in Washington.

Like Ms. Huffington, Mr. Froomkin speaks supportively of a “call it like you see it, let the chips fall where they may” style of journalism, a tacit rejection of the “triangulation” style that he says is too common in Washington.

In his discussions with The Post about his dismissal, “I was never entirely clear on what their reasoning was,” Mr. Froomkin said in an interview, adding that traffic was “one of the things they mentioned.” He does not think that traffic was “the major factor.”

Similarly, news media critics like Mark Glaser, the executive editor of the PBS blog MediaShift, have suggested that Mr. Froomkin’s dismissal was a consequence of the difficult merger of the print and Web sides of The Post. He noted that the company could be hiding other reasons for the decision, “but I’m not sure if we’ll ever know that for sure.”

Mr. Froomkin said that executives told him that they were reviewing all contracts for the Web site. The two sides had clashed in the past over the column, including over Mr. Froomkin’s tendency to criticize the news media. Mr. Rosen said he believed The Post cited traffic declines to feed its narrative that “the column had run its course.”

The paper’s ombudsman, Andrew Alexander, said in a blog post that “reduced traffic played a big role” in the decision. Fred Hiatt, the editorial page editor of The Post, told The City Paper that “his traffic had gone way down.”

Mr. Hiatt referred an interview request to the paper’s spokeswoman, who refused to comment. Detailed Web traffic data of newspapers is not normally shared with the public, and it is unclear how often traffic is a factor in personnel decisions.

More broadly, journalists are adjusting, sometimes awkwardly, to the assessments of popularity made possible by the Web. Gone are the days when reader letters and telephone surveys were the best gauges of a newspaper section’s popularity.

Making decisions based on user preferences makes sense, to a point, and the careers of television journalists are certainly measured in part by TV ratings. Mr. Rosen said he was sure The Post had “dropped features that were not very popular with the users of WashingtonPost.com.”

At online publications, “there are some things you do whether or not people click on them, and there are some things you do because you’re hoping to get clicks,” Mr. Froomkin said.

If those in the second category are not popular, it is perfectly reasonable to phase them out, he added, but he placed his column squarely in the first category. “It was a very appropriate thing for The Post to do even if not a lot of people were reading it,” he said.

Mr. Glaser said he was wary of some applications of Web data, like the blog network Gawker Media’s trials of pay bonuses based on traffic.

“Raw traffic numbers should not be the only gauge of a writer’s work,” Mr. Glaser said in an e-mail message. “I would also look at the quality of the work, whether they’ve broken important news, whether they provide a certain voice for the publication, whether they have a loyal audience that returns often and comments more, whether they are the start of other conversations on other blogs and forums.”

Page views and visitor counts can be misleading if viewed out of context. Mr. Froomkin said his traffic suffered when White House Watch switched to a blog format, which required fewer clicks for readers to reach it and thus generated fewer page views for the feature. The columnist had also pushed for more links from The Post’s home page, generally the most important promotional place on any Web site.

By no means had White House Watch lost all its influence. Mr. Rosen observed, “If The Post thought Froomkin was valuable, and they wanted to maintain that asset, they would have expected a drop in traffic after Bush left,” and would have helped to rebuild the franchise.

Instead, he will rebuild it at The Huffington Post beginning at the end of July. Ms. Huffington is happy to have the traffic.

It’s clear from the reaction to Mr. Froomkin’s dismissal, she said, that The Washington Post “underestimated how strong Dan’s following is.”

For the most part, the traditional news outlets lead and the blogs follow, typically by 2.5 hours, according to a new computer analysis of news articles and commentary on the Web during the last three months of the 2008 presidential campaign.

The finding was one of several in a study that Internet experts say is the first time the Web has been used to track — and try to measure — the news cycle, the process by which information becomes news, competes for attention and fades.

Researchers at Cornell, using powerful computers and clever algorithms, studied the news cycle by looking for repeated phrases and tracking their appearances on 1.6 million mainstream media sites and blogs. Some 90 million articles and blog posts, which appeared from August through October, were scrutinized with their phrase-finding software.

Frequently repeated short phrases, according to the researchers, are the equivalent of “genetic signatures” for ideas, or memes, and story lines. The biggest text-snippet surge in the study was generated by “lipstick on a pig.” That originated in Barack Obama’s colorful put-down of the claim by Senator John McCain and Gov. Sarah Palin that they were the genuine voices for change in the campaign. Associates of Mr. McCain suggested that the remark was meant as an insult to Ms. Palin.
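The phrase-tracking idea can be sketched in a few lines. This is not the Cornell team’s actual algorithm, just a toy illustration of the underlying mechanic: count short word n-grams across dated documents and watch a phrase surge over time. The sample documents and all thresholds here are invented for illustration.

```python
from collections import Counter, defaultdict

def ngrams(text, n=4):
    """Yield lowercase word n-grams from a document."""
    words = text.lower().split()
    for i in range(len(words) - n + 1):
        yield " ".join(words[i:i + n])

def track_phrases(dated_docs, n=4, min_count=2):
    """Count how often each n-gram appears on each date.

    dated_docs: iterable of (date, text) pairs.
    Returns {phrase: Counter({date: count})} for phrases seen
    at least min_count times across the whole corpus.
    """
    timeline = defaultdict(Counter)
    for date, text in dated_docs:
        # set() so a phrase counts once per document
        for phrase in set(ngrams(text, n)):
            timeline[phrase][date] += 1
    return {p: c for p, c in timeline.items()
            if sum(c.values()) >= min_count}

docs = [
    ("2008-09-09", "obama said you can put lipstick on a pig"),
    ("2008-09-10", "the lipstick on a pig remark spread quickly"),
    ("2008-09-10", "bloggers debated lipstick on a pig all day"),
]
surges = track_phrases(docs)
# Only "lipstick on a pig" recurs; its count peaks on the second day.
```

The real study additionally clustered near-identical phrase variants together and worked at the scale of 90 million documents, which is where the clever algorithms come in; the sketch above only captures the counting step.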

The researchers’ data points to an evolving model of news media. While most news flowed from the traditional media to the blogs, the study found that 3.5 percent of story lines originated in the blogs and later made their way to traditional media. For example, when Mr. Obama said that the question of when life begins after conception was “above my pay grade,” the remark was first reported extensively in blogs.

And though the blogosphere as a whole lags behind, a relative handful of blog sites are the quickest to pick up on things that later gain wide attention on the Web, led by Hot Air and Talking Points Memo.

The Cornell research, like so much of the data mining on the Web, does raise the issue of whether something is necessarily significant just because it can be measured by a computer — especially when mouse clicks are assumed to represent broad patterns of human behavior.

“You can see this kind of research as further elevating the role of sound bites,” said Jon Kleinberg, a professor of computer science at Cornell and a co-author of a paper on the research that was presented two weeks ago at a conference in Paris. “But what we’re doing is more using them as the approximation for ideas and story lines.”

“We don’t view quotes as the most important object, but algorithms can capture quotes,” Mr. Kleinberg said. “And we see this research as using a rich data set as a step toward understanding why certain points of view and story lines win out, and others don’t.”

The paper, “Meme-tracking and the Dynamics of the News Cycle,” was also written by Jure Leskovec, a postgraduate researcher at Cornell, who this summer will become an assistant professor at Stanford, and Lars Backstrom, a Ph.D. student at Cornell, who is going to work for Facebook. The team has set up interactive displays of their findings at memetracker.org.

Social scientists and media analysts have long examined news cycles, though focusing mainly on case studies instead of working with large Web data sets. And computer scientists have developed tools for clustering and tracking articles and blog posts, typically by subject or political leaning.

But the Cornell research, experts say, goes further in trying to track the phenomenon of news ideas rising and falling. “This is a landmark piece of work on the flow of news through the world,” said Eric Horvitz, a researcher at Microsoft and president of the Association for the Advancement of Artificial Intelligence. “And the study shows how Web-scale analytics can serve as powerful sociological laboratories.”

Sreenath Sreenivasan, a professor specializing in new media at the Columbia Journalism School, said the research was an ambitious effort to measure a social phenomenon that is not easily quantified. “To the extent this kind of approach could open the door to a new understanding of the news cycle, that is very interesting,” he said.

A challenge in this kind of research, Mr. Sreenivasan said, will be to account for and model how quickly online news sources and distribution networks are changing. Mr. Sreenivasan pointed to social media, especially the rapidly rising Twitter, as an informal but highly influential news recommendation and distribution network.

Walter Cronkite, who pioneered and then mastered the role of television news anchorman with such plain-spoken grace that he was called the most trusted man in America, died Friday, his family said. He was 92.

From 1962 to 1981, Mr. Cronkite was a nightly presence in American homes and always a reassuring one, guiding viewers through national triumphs and tragedies alike, from moonwalks to war, in an era when network news was central to many people’s lives.

He became something of a national institution, with an unflappable delivery, a distinctively avuncular voice and a daily benediction: “And that’s the way it is.” He was Uncle Walter to many: respected, liked and listened to. With his trimmed mustache and calm manner, he even bore a resemblance to another trusted American fixture, another Walter — Walt Disney.

Along with Chet Huntley and David Brinkley on NBC, Mr. Cronkite was among the first celebrity anchormen. In 1995, 14 years after he retired from the “CBS Evening News,” a TV Guide poll ranked him No. 1 in seven of eight categories for measuring television journalists. (He professed incomprehension that Maria Shriver beat him out in the eighth category, attractiveness.) He was so widely known that in Sweden anchormen were once called Cronkiters.

Yet he was a reluctant star. He was genuinely perplexed when people rushed to see him rather than the politicians he was covering, and even more astonished by the repeated suggestions that he run for office himself. He saw himself as an old-fashioned newsman — his title was managing editor of the “CBS Evening News” — and so did his audience.

“The viewers could more readily picture Walter Cronkite jumping into a car to cover a 10-alarm fire than they could visualize him doing cerebral commentary on a great summit meeting in Geneva,” David Halberstam wrote in “The Powers That Be,” his 1979 book about the news media.

On the day President John F. Kennedy was assassinated, Mr. Cronkite briefly lost his composure in announcing that the president had been pronounced dead at Parkland Memorial Hospital in Dallas. Taking off his black-framed glasses and wiping away a tear, he registered the emotions of millions.

It was an uncharacteristically personal note from a newsman who was uncomfortable expressing opinion.

“I am a news presenter, a news broadcaster, an anchorman, a managing editor — not a commentator or analyst,” he said in an interview with The Christian Science Monitor in 1973. “I feel no compulsion to be a pundit.”

But when he did pronounce judgment, the impact was large.

In 1968 he visited Vietnam and returned to do a rare special program on the war. He called the conflict a stalemate and advocated a negotiated peace. President Lyndon B. Johnson watched the broadcast, Mr. Cronkite wrote in his 1996 memoir, “A Reporter’s Life,” quoting a description of the scene by Bill Moyers, then a Johnson aide.

Mr. Cronkite sometimes pushed beyond the usual two-minute limit to news items. On Oct. 27, 1972, his 14-minute report on Watergate, followed by an eight-minute segment four days later, “put the Watergate story clearly and substantially before millions of Americans” for the first time, the broadcast historian Marvin Barrett wrote in “Moments of Truth?” (1975).

Mr. Cronkite began: “Watergate has escalated into charges of a high-level campaign of political sabotage and espionage apparently unparalleled in American history.”

In 1977, his separate interviews with President Anwar al-Sadat of Egypt and Prime Minister Menachem Begin of Israel were instrumental in Sadat’s visiting Jerusalem. The countries later signed a peace treaty.

“From his earliest days,” Mr. Halberstam wrote, “he was one of the hungriest reporters around, wildly competitive, no one was going to beat Walter Cronkite on a story, and as he grew older and more successful, the marvel of it was that he never changed, the wild fires still burned.”

Walter Leland Cronkite Jr. was born on Nov. 4, 1916, in St. Joseph, Mo., the son of Walter Leland Cronkite Sr., a dentist, and the former Helen Lena Fritsche. His ancestors had settled in New Amsterdam, the Dutch colony that became New York. As a boy, Walter peddled magazines door to door and hawked newspapers. As a teenager, after the family had moved to Houston, he got a job with The Houston Post as a copy boy and cub reporter, inspired to go into the news business by a high school journalism teacher. At the same time, he had a paper route delivering The Post to his neighbors.

“As far as I know, there were no other journalists delivering the morning paper with their own compositions inside,” he wrote in his autobiography.

When he was 16, Mr. Cronkite went with friends to Chicago for the 1933 World’s Fair. He volunteered to help demonstrate an experimental version of television.

“I could honestly say to all of my colleagues, ‘I was in television long before you were,’ ” he said in an interview with CBS News in 1996.

Mr. Cronkite attended the University of Texas for two years, studying political science, economics and journalism, working on the school newspaper and picking up journalism jobs with The Houston Press and other newspapers. He also auditioned to be an announcer at an Austin radio station but was turned down. He left college in 1935 without graduating to take a job as a reporter with The Press.

While visiting Kansas City, Mo., he was hired by the radio station KCMO to read news and broadcast football games under the name Walter Wilcox. (Radio stations at the time wanted to “own” announcers’ names so that popular ones could not be taken elsewhere.)

He was not at the games but received cryptic summaries of each play by telegraph. These provided fodder for vivid descriptions of the action. He added details of what local men in the stands were wearing, which he learned by calling their wives. He found out in advance what music the band would be playing so he could describe halftime festivities.

At KCMO, Mr. Cronkite met an advertising writer named Mary Elizabeth Maxwell. The two read a commercial together. One of Mr. Cronkite’s lines was, “You look like an angel.” They were married for 64 years until her death in 2005.

Mr. Cronkite is survived by his daughters, Nancy Elizabeth and Mary Kathleen; his son, Walter Leland III; and four grandsons.

After being fired from KCMO in a dispute over journalism practices he considered shabby, Mr. Cronkite in 1939 landed a job at the United Press news agency, now United Press International. He reported from Houston, Dallas, El Paso and Kansas City.

The stint ended when he briefly returned to radio and then took a job with Braniff International Airways in Kansas City, selling tickets and doing public relations.

Returning to United Press after a few months, he became one of the first reporters accredited to American forces with the outbreak of World War II. He gained fame as a war correspondent, crash-landing a glider in Belgium, accompanying the first Allied troops into North Africa, reporting on the Normandy invasion and covering major battles, including the Battle of the Bulge, in 1944.

In 1943, Edward R. Murrow asked Mr. Cronkite to join his wartime broadcast team in CBS’s Moscow bureau. In “The Murrow Boys: Pioneers on the Front Lines of Broadcast Journalism” (1996), Stanley Cloud and Lynne Olson wrote that Murrow was astounded when Mr. Cronkite rejected his $125-a-week job offer and decided to stay with United Press for $92 a week.

That year Mr. Cronkite was one of eight journalists selected for an Army Air Forces training program that took them on a bombing mission to Germany aboard B-17 Flying Fortresses. Mr. Cronkite manned a machine gun until he was “up to my hips in spent .50-caliber shells,” he wrote in his memoir.

After covering the Nuremberg war-crimes trials and then reporting from Moscow from 1946 to 1948, he again left print journalism to become the Washington correspondent for a dozen Midwestern radio stations. In 1950 Murrow successfully recruited him for CBS.

Though he wanted to cover the Korean War, Mr. Cronkite was assigned to develop the news department of a new CBS station in Washington. Within a year he was appearing on nationally broadcast public affairs programs like “Man of the Week,” “It’s News to Me” and “Pick the Winner.”

In February 1953 he narrated the first installment of his long-running series “You Are There,” which recreated historic events like the Battle of the Alamo or the Hindenburg disaster and reported them as if they were breaking news. Sidney Lumet, soon to become a well-known filmmaker, directed the series, which included top actors like E. G. Marshall and Paul Newman.

“What sort of day was it?” Mr. Cronkite said at the end of each episode. “A day like all days, filled with those events that alter and illuminate our times. And you were there.”

In 1954, when CBS challenged NBC’s popular morning program “Today” with the short-lived “Morning Show,” it tapped Mr. Cronkite to be the host. Early on he riled the sponsor, the R. J. Reynolds Tobacco Company, by grammatically correcting its well-known advertising slogan, declaring, “Winston tastes good as a cigarette should.”

When not interviewing guests in his role as host, he mulled over the news with a witty and erudite puppet lion, Charlemagne. Occasionally he ventured outside the studio — using a tugboat, for example, to meet luxury liners so he could get interviews with celebrities before they landed.

In 1952, the first presidential year in which television outshone radio, Mr. Cronkite was chosen to lead the coverage of the Democratic and Republican national conventions. By Mr. Cronkite’s account, it was then that the term “anchor” was first used — by Sig Mickelson, the first director of television news for CBS, who had likened the chief announcer’s job to an anchor that holds a boat in place. Paul Levitan, another CBS executive, and Don Hewitt, then a young producer, have also been credited with the phrase.

The 1952 conventions made Mr. Cronkite a star. Mr. Mickelson, he recalled, told him: “You’re famous now. And you’re going to want a lot more money. You’d better get an agent.”

Mr. Cronkite went on to anchor every national political convention and election night until 1980, with the exception of 1964. That year he was replaced at the Democratic convention in Atlantic City by Roger Mudd and Robert Trout in an effort to challenge the Huntley and Brinkley team on NBC, which had won the ratings battle at the Republican convention in San Francisco earlier that summer.

In 1961 Mr. Cronkite replaced Murrow as CBS’s senior correspondent, and on April 16, 1962, he began anchoring the evening news, succeeding Douglas Edwards, whose ratings had been low. As managing editor, Mr. Cronkite also helped shape the nightly report.

The evening broadcast had been a 15-minute program, but on Sept. 2, 1963, CBS doubled the length to a half-hour, over the objections of its affiliates. Mr. Cronkite interviewed President Kennedy on the first longer broadcast, renamed the “CBS Evening News With Walter Cronkite.” He also broadcast from a real newsroom and not, as Edwards had done, from a studio set.

At the time the broadcast was lengthened, Mr. Cronkite inaugurated his famous sign-off, “And that’s the way it is.” The original idea, he later wrote, had been to end each broadcast with a quirky news item, after which he would recite the line with humor, sadness or irony.

Richard S. Salant, the president of CBS News, hated the line from the beginning — it ate up a precious four seconds a night — and the offbeat items were never done.

“I began to think Dick was right, but I was too stubborn to drop it,” Mr. Cronkite wrote.

Starting with Herbert Hoover, Mr. Cronkite knew every president, not always pleasantly. A top aide to President Richard M. Nixon, Charles Colson, harangued the network’s chairman, William S. Paley, after Mr. Cronkite’s 14-minute Watergate broadcast. The next segment was shortened.

In 1960, during the Wisconsin primary, Mr. Cronkite asked Kennedy, then a senator, about his Roman Catholic religion. As Mr. Cronkite recalled in his memoir, Kennedy called Frank Stanton, CBS’s president, to complain that questions about the subject had earlier been ruled out of bounds. He then reminded Mr. Stanton that if he were elected he would be appointing members of the Federal Communications Commission. Mr. Stanton “courageously stood up to the threat,” Mr. Cronkite wrote.

By contrast, Mr. Cronkite’s relations with President Dwight D. Eisenhower were so cordial that President Kennedy incorrectly assumed Mr. Cronkite, a political independent, was a Republican. (Eisenhower let Mr. Cronkite stay at a castle in Scotland to which the president had been given lifetime use.)

Mr. Cronkite also enjoyed the company of President Ronald Reagan, with whom he exchanged often off-color jokes. And he whimsically competed with his friend Johnny Carson to see who could take the most vacation time without getting fired.

Mr. Cronkite raced sports cars but switched to sailing so he could spend more time with his family. He liked old-time pubs and friendly restaurants; there was even one in Midtown Manhattan where his regular chair was marked with his initials.

In an interview with The New York Times in 2002, Mr. Cronkite scrunched his eyes and lowered his voice into a theatrical sob when asked if he regretted missing out on the huge salaries subsequent anchors had received.

Mr. Cronkite retired in 1981 at 65. He had repeatedly promised to do so, but few had either believed him or chosen to hear. CBS was eager to replace him with Dan Rather, who was flirting with ABC, but both Mr. Cronkite and the network said he had not been pushed.

After his retirement he continued to be seen on CBS as the host of “Walter Cronkite’s Universe,” a half-hour science series that began in 1980 and ran until 1982. The network also named him a special correspondent; the position turned out to be largely honorary, though news reports said it paid $1 million a year. But after he spent 10 years on the board of CBS, where he chafed at the cuts that the network’s chairman, Laurence A. Tisch, had made in a once generous news budget, more and more of his broadcast work appeared on CNN, National Public Radio and elsewhere, not CBS.

By the time Mr. Rather was leaving the “CBS Evening News” in 2005 after 24 years at the anchor desk, Mr. Cronkite had abandoned mincing words. He criticized his successor as “playing the role of newsman” rather than being one. Mr. Rather should have been replaced years earlier, he said.

When Katie Couric took over the job in September 2006, Mr. Cronkite introduced her on the air and praised her in interviews.

His long “retirement” was not leisurely. When Senator John Glenn went back into space on the shuttle Discovery in 1998, 36 years after his astronaut days, Mr. Cronkite did an encore in covering the event for CNN. He made some 60 documentaries. And among many other things, he was the voice of Benjamin Franklin on the PBS cartoon series “Liberty’s Kids,” covered a British general election for a British network and for many years served as host of the annual Kennedy Center Honors.

He had already won Emmy Awards, a Peabody and the Presidential Medal of Freedom (in 1981), and he continued to pile up accolades. Arizona State University named its journalism school after him.

In July 2006, PBS broadcast a 90-minute “American Masters” special on Mr. Cronkite’s career, narrated by Ms. Couric. Mr. Lumet, the filmmaker, appeared and said, “He seemed to me incorruptible in a profession that was easily corrupted.”

On his 90th birthday, Mr. Cronkite told The Daily News, “I would like to think I’m still quite capable of covering a story.”

But he knew he had to stop sometime, he allowed in his autobiography. He promised at the time to continue to follow news developments “from a perch yet to be determined.”

China has banned electro-shock therapy as a treatment for Internet addiction, citing uncertainty in the safety and effectiveness of the practice after criticism in the local media.

The Ministry of Health announcement followed recent media reports about a controversial psychiatrist in Linyi, Shandong Province, who administered electric currents to nearly 3,000 teenagers in an attempt to rid them of their Internet habit.

The Chinese government has led a campaign for over a year against Internet addiction, saying young people's excessive time in Internet cafes, known as Web bars in Chinese, is hurting their studies and damaging family life.

"Electroshock therapy for Internet addiction...has no foundation in clinical research or evidence and therefore is not appropriate for clinical application," read the notice, posted on the ministry website (www.moh.gov.cn).

The world's most populous country also has the world's largest Internet population, with almost 300 million users at the end of last year, according to the China Internet Network Information Center.

Problems caused by Internet over-use are also on the rise, especially among young Chinese seeking an escape from the heavy burden of parental expectations. There are over 200 organizations offering treatment for Internet disorders in China.

The developer of the "electric impact therapy" is Doctor Yang Yongxin, also known as "Uncle Yang," who runs a boot camp called the Internet Addiction Treatment Center at Linyi Mental Hospital, the China Youth Daily said.

Patients are given psychotropic drugs as well as electro-shocks, at a cost of 5,500 yuan ($805) a month.

Strictly trained in military ways and accompanied by their parents, the young patients are prohibited from outside contact.

Most of them were sent to the hospital by force, the China Youth Daily added.

Neither Yang nor his six colleagues at the camp were qualified psychotherapists, it said.

They may be the most efficient workers in the world. But in the global downturn, they are having a tough time finding jobs.

Japan’s legions of robots, the world’s largest fleet of mechanized workers, are being idled as the country suffers its deepest recession in more than a generation as consumers worldwide cut spending on cars and gadgets.

At a large Yaskawa Electric factory on the southern Japanese island of Kyushu, where robots once churned out more robots, a lone robotic worker with steely arms twisted and turned, testing its motors for the day new orders return. Its immobile co-workers stood silent in rows, many with arms frozen in midair.

They could be out of work for a long time. Japanese industrial production has plummeted almost 40 percent and with it, the demand for robots.

At the same time, the future is looking less bright. Tighter finances are injecting a dose of reality into some of Japan’s more fantastic projects — like pet robots and cyborg receptionists — that could cramp innovation long after the economy recovers.

“We’ve taken a huge hammering,” said Koji Toshima, president of Yaskawa, Japan’s largest maker of industrial robots.

Profit at the company plunged by two-thirds, to 6.9 billion yen, about $72 million, in the year ended March 20, and it predicts a loss this year.

Across the industry, shipments of industrial robots fell 33 percent in the last quarter of 2008, and 59 percent in the first quarter of 2009, according to the Japan Robot Association.

Tetsuaki Ueda, an analyst at the research firm Fuji Keizai, expects the market to shrink by as much as 40 percent this year. Investment in robots, he said, “has been the first to go as companies protect their human workers.”

While robots can be cheaper than flesh-and-blood workers over the long term, the upfront investment costs are much higher.

In 2005, more than 370,000 robots worked at factories across Japan, about 40 percent of the global total, representing 32 robots for every 1,000 manufacturing employees, according to a report by Macquarie Bank. A 2007 government plan for technology policy called for one million industrial robots to be installed by 2025. That will almost certainly not happen.

“The recession has set the robot industry back years,” Mr. Ueda said.

That goes for industrial robots and the more cuddly toy robots.

In fact, several of the lovable sort have already become casualties of the recession.

The robot maker Systec Akazawa filed for bankruptcy in January, less than a year after it introduced its miniature PLEN walking robot at the Consumer Electronics Show in Las Vegas.

Roborior by Tmsuk — a watermelon-shape house sitter on wheels that rolls around a home and uses infrared sensors to detect suspicious movement and a video camera to transmit images to absent residents — has struggled to find new users. A rental program was scrapped in April because of lack of interest.

Though the company won’t release sales figures, analysts say it has sold less than a third of the 3,000-unit goal it set when Roborior hit the market in 2005. There are no plans to manufacture more.

That is a shame, says Mariko Ishikawa, a Tmsuk spokeswoman, because busy Japanese in the city could use the Roborior to keep an eye on aging parents in the countryside.

“Roborior is just the kind of robot Japanese society needs in the future,” Ms. Ishikawa said.

Japan’s aging population had given the development of home robots an added imperative. With nearly 25 percent of citizens 65 or older, the country was banking on robots to replenish the work force and to help nurse the elderly.

But sales of a Secom product, My Spoon, a robot with a swiveling, spoon-fitted arm that helps older or disabled people eat, have similarly stalled as caregivers balk at its $4,000 price.

Mitsubishi Heavy Industries failed to sell even one of its toddler-size home-helper robots, the Wakamaru, introduced in 2003.

Of course, less practical, novelty robots have fallen on even harder times in the downturn. And that goes for robot makers outside Japan, too.

Ugobe, based in Idaho, is the maker of the cute green Pleo dinosaur robot with a wiggly tail; it filed for bankruptcy protection in April.

Despite selling 100,000 Pleos and earning more than $20 million, the company racked up millions of dollars in debt and was unable to raise further financing.

Sony pulled the plug on its robot dog, Aibo, in 2006, seven years after its introduction. Though initially popular, Aibo, costing more than $2,000, never managed to break into the mass market.

The $300 i-Sobot from Takara Tomy, a small toy robot that can recognize spoken words, was meant to break the price barrier. The company, based in Tokyo, has sold 47,000 since the i-Sobot went on sale in late 2007, a spokeswoman, Chie Yamada, said, making it a blockbuster hit in the robot world.

But with sales faltering in the last year, the company has no plans to release further versions after it clears out its inventory of about 3,000.

Kenji Hara, an analyst at the research and marketing firm Seed Planning, says many of Japan’s robotics projects tend to be too far-fetched, concentrating on humanoids and other leaps of the imagination that cannot be readily brought to market.

“Japanese scientists grew up watching robot cartoons, so they all want to make two-legged companions,” Mr. Hara said. “But are they realistic? Do consumers really want home-helper robots?”

Robot Factory, once a mecca for robot fans in the western city of Osaka, closed in April after a plunge in sales. “In the end,” said Yoshitomo Mukai, whose store, Jungle, took over some of Robot Factory’s old stock, “robots are still expensive, and don’t really do much.”

Of course, that is not true for industrial robots — at least not when the economy is booming.

Fuji Heavy Industries argues its robots are practical and make economic sense. The company sells a giant automated cleaning robot that can use elevators to travel between floors on its own. The wheeled robot, which resembles a small street-cleaning car, already works at several skyscrapers in Tokyo.

Companies can recoup the 6 million yen investment in the cleaner robot in as little as three years, a Fuji spokesman, Kenta Matsumoto, said. The manufacturer has rented out about 50 so far.

“A robot will work every day and night without complaining,” Mr. Matsumoto said. “You can even save on lights and heating, because robots don’t need any of that.”

Need more capacity? Want more hard drive performance? Knowing that hard drive prices are about to drop below $80 for a 1 TB drive, we decided to create the ultimate RAID array, one that should be able to store all of your data for years to come while providing much faster performance than any individual drive could. Twelve Samsung 1 TB hard drives helped us to reach speed records and an impressive 10 TB net capacity.

Some of you may want to argue over this performance statement. After all, doesn’t everyone know that hard drives don’t stand a chance against solid state drives (SSDs)? It’s true: high-end SSDs can now exceed 200 MB/s read and 100 MB/s write throughput with virtually zero access time, and those numbers are fast becoming standard at the high end. However, lofty SSD costs remain an issue, which is where good old hard drives kick in.

While hard drives can’t match an SSD’s quick access times, higher throughput can be achieved by using more than one drive in a striping RAID mode—and throughput is still the top characteristic people care about on their desktop systems. In addition, hard drive capacities exceed SSD capacities by many times over and also beat SSDs in terms of cost per gigabyte. For example, $1,000 won’t buy you more than 1 TB in SSD capacity, and even to get close requires taking a step or two down in performance. Meanwhile, with hard drives, we had 12 x 1 TB at our disposal. The only reason we didn’t use larger hard drives was constrained availability in quantities of ten or more.

The Idea: Massive Hard Drive Storage Within a $1,000 Budget

The prospect of using up to 12 3.5” hard drives in RAID certainly isn’t very practical for desktop PCs. Twelve drives require a lot of space and a suitable SATA RAID controller, and they produce a noticeable amount of heat, noise, and vibration as well. Still…it’s cool, and we’ll soon see what a massive RAID array using conventional hard drives can actually do.

We used twelve of Samsung’s first-generation terabyte hard drives, the Spinpoint F1 HD103UJ. Although the product is more than a year old, it still holds its own against some of its newer competition, including the Hitachi Deskstar 7K1000.B, Seagate Barracuda 7200.12, and WD’s Caviar Black. The F1’s 115 MB/s maximum read throughput continues to impress, and Samsung’s data density is so high that it can cram a full terabyte onto only three platters. The drives spin at 7,200 RPM, use a SATA/300 interface, and come with 32 MB of buffer memory. Part of our decision to use the Samsung F1 drives was based on availability. Some of our units were spares from our Overdrive Overclocking Championship. Finding ten or more new drives from scratch would have been more difficult.

Samsung is about to release the high-performance Spinpoint F2. While F2 EcoGreen drives have been available at up to 1.5 TB for some months, the new F2 will spin at 7,200 RPM and reach up to 2 TB in the second half of the year. Hitachi and Seagate will likely follow as soon as it makes sense, as the top capacities aren’t sold in large quantities and hence represent only a small fraction of the market.

Other Drive Options?

The 1.0 TB capacity point isn’t particularly exciting anymore, but it is close to providing the highest capacity per dollar. In addition, high-performance 7,200 RPM drives still deliver higher throughput than the lower-power 1.5 TB hard drives from Samsung, Seagate, or WD. Using 2.0 TB hard drives would double the gross capacity of our array from 12 to an amazing 24 TB, but it would also more than double the cost of the drives: a 1.0 TB drive starts at approximately $85, while a 2.0 TB drive is still almost three times as expensive.

We wanted to build an array with at least a 10 TB capacity, and with 12 drives, we were able to reach a total gross capacity of 12 TB. We decided to run both RAID 0 for maximum performance and RAID 5 to balance performance with data protection. While a RAID 0 configuration distributes data evenly across all drives using so-called stripe sets, RAID 5 adds parity information equivalent to one drive’s capacity. That parity is rotated across all the drives to avoid any single drive becoming a parity bottleneck.
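The data protection RAID 5 provides rests on XOR parity: the parity block is the XOR of the data blocks in a stripe, so any one lost block can be recomputed from the survivors. A toy byte-level sketch (illustrative only, not how the controller implements it):

```python
def xor_parity(blocks):
    """Compute the XOR parity block for one stripe of data blocks."""
    parity = bytes(len(blocks[0]))
    for block in blocks:
        parity = bytes(a ^ b for a, b in zip(parity, block))
    return parity

def rebuild(surviving_blocks, parity):
    """Recover a missing block: XOR the parity with all survivors."""
    missing = parity
    for block in surviving_blocks:
        missing = bytes(a ^ b for a, b in zip(missing, block))
    return missing

# Three data blocks striped across three drives; parity on a fourth.
d = [b"AAAA", b"BBBB", b"CCCC"]
p = xor_parity(d)

# Drive 2 dies; its block is rebuilt from the other two plus parity.
assert rebuild([d[0], d[2]], p) == d[1]
```

Because XOR is its own inverse, the same operation serves for both writing parity and rebuilding a failed drive; losing two drives at once defeats it, which is why a second failure requires RAID 6.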

RAID 0

In RAID 0, the total capacity equals the capacity of one Spinpoint F1 drive times the number of drives. Each drive has a net capacity of 1,000 GB if one gigabyte equals one billion bytes, or 931.32 GB if capacities are counted in powers of two (one kilobyte equals 1,024 bytes), which is the way Windows reports storage capacity. Twelve times this capacity results in 11,175.87 GB.

RAID 5

RAID 5 requires at least three hard drives, and it provides the total capacity of all array member drives minus one drive. This type of array will maintain data integrity in case one drive fails. If you want an array to remain operational with two failed drives, then you need to run RAID 6. For our test array, the total RAID 5 capacity was 10,244.54 GB.
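The capacity figures above follow from simple arithmetic; this short Python sketch reproduces them (the constant names are ours, not from any vendor tool):

```python
# Drive makers count 1 TB = 10**12 bytes; Windows counts in powers of two
# (1 GB = 2**30 bytes), which is where the 931.32 GB per drive comes from.

DRIVES = 12
BYTES_PER_DRIVE = 1_000_000_000_000   # 1 TB as sold

per_drive_gb = BYTES_PER_DRIVE / 2**30        # ~931.32 GB as Windows sees it
raid0_gb = DRIVES * per_drive_gb              # RAID 0: all drives usable
raid5_gb = (DRIVES - 1) * per_drive_gb        # RAID 5: one drive's worth of parity

print(round(per_drive_gb, 2))   # 931.32
print(round(raid0_gb, 2))       # 11175.87
```

RAID 6 would subtract the equivalent of two drives instead of one, leaving ten drives' worth of net capacity.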

Controller: Areca ARC-1680iX-20

We chose a combined SAS/SATA controller from Areca, the 20-port ARC-1680iX-20. The full-sized card is based on an x8 PCI Express interface and includes an Intel IOP348 processor at 1,200 MHz, which provides a good basis for serious XOR acceleration and RAID 5 performance. The card comes with a DDR2 DIMM socket and can hold anywhere between 512 MB and 4 GB; we used the default 512 MB module. Be sure to purchase ECC memory if you decide to install a larger cache capacity. Areca is among the few controller vendors to support RAID 6.

This card comes with a network port, which serves exclusively as an enabler of out-of-band management. Hence it’s possible to configure the card via the built-in Web server, independently of the host PC’s operating system. The 20 SAS ports are available through multi-lane connectors (4 internal, 1 external), which is why we used 4-to-1 SAS fanout cables to attach the Samsung drives. As you might recall, SAS is fully SATA-compatible thanks to STP, the SATA Tunneling Protocol.

Both RAID arrays shorten the average access time by a significant amount. While an individual Samsung Spinpoint F1 1 TB drive averages a 13.8 ms access time, the RAID 0 and RAID 5 arrays drop access times to 10.1 and 10.4 ms.

Since none of the standard benchmark tools allow serious benchmarking on partitions larger than 2 TB, we had to move to HD Tach and IOmeter to get decent results. HD Tach could only be used as long as there was no GPT or MBR on the partition.

As mentioned above, 115 MB/s is the maximum read throughput for an individual F1 drive. With 12 drives tethered to the Areca ARC-1680iX card, our RAID 0 config returned almost 1 GB/s on average. Even in RAID 5, we saw 910 MB/s average throughput!

Write throughput is a bit slower, but we still observed 800 MB/s or more. Keep in mind that these are the average results. Peak numbers are higher, minimum transfer rates somewhat slower.

In RAID 0, the array with 12 drives mostly maintains read performance of almost 900 MB/s, but there are some negative peaks. Write throughput is more constant at 600 to 700 MB/s.

RAID 5 performance is slightly lower, as it drops below 600 MB/s once most of the 11 TB capacity is filled.

Since XOR calculation for data parity imposes a significant performance penalty, I/O performance is far superior in RAID 0, which has no parity. Still, any decent SSD, such as Intel’s X25-E (enterprise) or X25-M (consumer), can beat even our 12-drive RAID’s results on IOPS.

File server performance involves larger blocks, making the difference between the individual drive and the arrays less significant but still large.

The Web server profile is entirely based on read operations, and it only requests the kind of small files commonly found on HTML pages. Using 12 drives, we achieved greater than four times the performance of a single drive. However, a fast SSD still scores up to ten times faster.

Most people probably don’t want to install more than a few hard drives into their PC, as it requires a massive case with sufficient ventilation as well as a solid power supply. We don’t consider this project to be something enthusiasts should necessarily reproduce. Instead, we set out to analyze what level of storage performance you’d get if you were to spend the same money as on an enthusiast processor, such as a $1,000 Core i7-975 Extreme. For the same cost, you could assemble twelve 1 TB Samsung Spinpoint F1 hard drives. Of course, you still need a suitable multi-port controller, which is why we selected Areca’s ARC-1680iX-20.

These are our findings: The 12 hard drives…

* still cannot reach the I/O performance and access time of a single Intel X25-E flash SSD (thousands of I/O operations per second)
* require careful system configuration (staggered spin-up)
* require a powerful RAID controller with sufficient ports
* aren’t convenient for desktop users
* are still subject to issues when using 2+ TB partitions

* deliver 6 to 8 times more throughput than an individual drive: almost 1,000 MB/s
* deliver 3 to 7 times better I/O performance than an individual drive
* result in 11 TB net capacity in RAID 5 or 10 TB in RAID 6
* deliver excellent cost per gigabyte, especially with the 1 TB Samsung drives we used (2 TB drives are still too expensive)
* still beat a flash SSD array in terms of throughput even if you keep two or three hard disks as spares

You've spent years compiling your family history, scanning old photographs, copying ancestral journals and writing biographies of your parents. Completing each project, you store the information on a CD or DVD disk. Mission accomplished. The data will be there for generations to come.

Or will it?

Fast-forward five years. Aunt Emily calls to ask if you've still got that 1939 picture of her in Yosemite National Park with her first husband. She's lost the original print. You offer gentle reassurance that you can make a new one.

Confidently, you sit down at your computer, insert the right DVD and listen while it spins up. You click on the desired file, remembering the image of young Uncle Carroll with knickers and a walking stick, with Half Dome in the background.

The photo begins to display on the computer screen: There's the sky. There's some treetops. Then, suddenly, the screen fills with gibberish - nothing but horizontal, colored lines. No Uncle Carroll.

The digital photo is corrupt, victim of a storage technology that any professional archivist could have told you not to trust more than three to five years. Perhaps you didn't know that libraries, universities and other institutions make periodic copies of their digital collections to prevent the loss of important information due to data corruption.

Disks go bad for many reasons, even if they're not used. Sadly, now, a priceless bit of family history is lost to future generations, and Aunt Emily has cut you out of her will.

Now for a happy ending. On Sept. 1, Millenniata, a start-up company based in Springville, will release a new archive disk technology to preserve data at room temperature for 1,000 years. It's like writing onto gold plates or chiseling information into stone.

Dubbed the Millennial Disk, it looks virtually identical to a regular DVD, but it's special. Layers of hard, "persistent" materials (the exact composition is a trade secret) are laid down on a plastic carrier, and digital information is literally carved in with an enhanced laser using the company's Millennial Writer, a sort of beefed-up DVD burner. Once cut, the disk can be read by an ordinary DVD reader on your computer.

A number of companies hold intellectual property rights in DVD technology. One of those, Philips, manages the combined patents. Millenniata disks and disk writers will be manufactured under a license now in final negotiation.

Big potential

Millenniata, whose name merges terms for "1,000 years" and "data," plans to market initially to institutions with large digital collections, such as the LDS Church, libraries and government entities requiring long-term archiving. But it expects to be competitive in the retail market as well, with enhancements that will soon include Blu-ray format and eventually larger diameter disks and disk readers to dramatically increase data capacity for specialized applications. Current single-layer Blu-ray disks can hold about 25 gigabytes of data, more than five times the capacity of a standard DVD.

Given the choice of today's risky optical disks - with their organic dyes and layers of oxidizing metals that are prone to failure in a few years - or a disk where information is essentially carved in stone, who wouldn't pick the latter?

"In the beginning I never thought it could replace all recordable disks," said company co-founder Barry Lunt, BYU professor of information technology, who had the original idea for long-lasting data storage while on an Explorer Scout outing in Utah. But he now believes the immense consumer market will be within reach as the price of Millenniata's technology comes down "as it certainly will."

A thousand-year disk from Millenniata is expected to cost initially between $25 and $30, compared to less than $1 for a standard DVD. But the safety of important personal or institutional data will likely be worth a premium to many. Volume should drive the price down quickly, Lunt said.

Flash of genius

In 1996, Lunt went camping with an Explorer post from Provo in Nine Mile Canyon east of Price. The canyon is home to an extensive gallery of ancient rock art from the early Fremont culture and later Utes. The Fremonts, who practiced agriculture, occupied the canyon from approximately 950-1250 A.D.

"I always had the impression that petroglyphs were painted on the rock," Lunt said. "But I got up to them and could see that was clearly not the case. They had chipped away a dark layer, exposing a lighter layer, and I thought, That's permanent storage - optical contrast, light vs. dark. You could store data that way."

That observation resurfaced about five years ago when Lunt was trying to figure out a way to store digital pictures, along with music from his vinyl record collection. The lights went on. If you could cut data into "persistent" materials, like carving a petroglyph, you'd have a very valuable, long-lasting commodity.

"I needed storage, and I'm sure a few million other people had the same need," Lunt said. Contacts at BYU's Harold B. Lee Library helped him see how large that need really was. With his background in materials, he began to search for the right substances, hooking up with Matthew Linford, associate professor of chemistry at BYU. Their collaboration bore fruit and Linford joined Lunt as a co-founder of Millenniata.

Business possibilities really began to accelerate when Doug Hansen became intrigued and offered to leave his job at Orem's Moxtek (a company specializing in optics and X-ray technology) to help move things forward as chief technical officer. Other investors were added, and Henry O'Connell came on board as president and CEO to formalize the business operation.

While the exact construction and components of Millenniata's thousand-year disk are a secret, Lunt rattles off a short list of materials whose properties include sufficient longevity for 1,000 years of records storage. Mormons are familiar with one of those - gold - through the story of the Book of Mormon, which is said to have been inscribed on gold plates.

"There's a class of materials that are persistent," Lunt said. "- gold, rock, ceramics. They last forever. And we have lasers that can modify them."

Lunt and Linford found that a material similar to obsidian, a glass-like igneous rock, could be permanently bound to a reflective metal, as O'Connell explained last year to Silicon Slopes, an online tech review. This hard surface could then be etched away to record binary data.

BYU's Technology Transfer Office moved forward on initial patents and encouraged a commercial spinoff from the university.

Long data life

How does Millenniata know the disks will last for a thousand years? The assertion is based first on the nature of the basic materials.

"There are many examples of records that have lasted for thousands of years: cuneiform tablets, heiroglyphics in Egypt, the Rosetta Stone, gold plates," Lunt said, adding that such records universally are engraved. "That's exactly what we're doing on a small scale."

Further testing is currently being done to scientifically establish the longevity of the new disks more precisely. "We're testing in elevated temperature and high humidity; we soak them in salt water and conduct lots of other tests to stress the disk to establish its durability," said CTO Hansen.

Hansen noted that the 1,000 years is actually a limitation imposed by one component - the clear plastic plate, or substrate, on which the data material rides. "That plastic may limit us to a few centuries or a thousand years for now," he said.

Ironically, the same plastic carrier is the most permanent component of today's CDs and DVDs, which isn't saying much. The data-carrying material on a regular DVD is fragile and subject to easy damage, as anybody knows who plays a movie at home from a scratched disk with its skipping and stalling behavior.

"In conventional technology the plastic is the most durable component; but it's our least," Hansen said. "We've had to do that because we had to get a product out the door and get the business going."

Improvements are even now being envisioned, such as replacing the plastic with glass, which could extend the data life to many thousands of years.

Millenniata's company logo is strikingly appropriate. It's borrowed from an ancient rock art symbol found in petroglyphs across the Southwest: a simple spiral. The spiral is also known in the history of science, tracing its origins to ancient Greece. In that context it's known as an Archimedes spiral, after the 3rd-century B.C. mathematician.

But here's the twist: the data track on a modern DVD is also a spiral, working from the center outward. "The spiral is exactly how you make an optical disk," Lunt said.

Storage vs. archive

Perfect safety for important data is the holy grail of archiving, according to one of Millenniata's key investors, Finis Conner, who sits on the board. Conner knows what he's talking about. He was co-founder in 1979 of Seagate Technology, the giant maker of computer hard drives, and later of Conner Peripherals. No longer with either company, Conner is now looking for the next step. He views reliable, permanent archiving as an important piece of the overall data storage puzzle.

"Cost is not the issue," Conner said at a recent luncheon in Provo. "It's the need for absolute security."

He distinguishes between short-term "storage" and long-term "archive" applications, which require different approaches. Continually revolving storage (meaning files saved, then deleted, then overwritten by other files) is provided by a computer's built-in magnetic hard drive. By contrast, archiving means removable media, which is where Millenniata comes in.

An ideal archive would be permanent - whether for the Library of Congress, a Hollywood movie maker or a writer of personal and family histories. The goal is to protect anything of high value to the owner.

The pool of such information is growing exponentially in a world increasingly dominated by computers. After just two decades, the amount of digital data being archived is already vast. For example, the U.S. government's National Archives and Records Administration saves a staggering 10,000 terabytes - or 10 million gigabytes - to its archives every year according to a published report. A terabyte is a unit of computer memory or data storage equal to 1,024 gigabytes.

"Archive presents a different class of requirements from storage," Conner said. "It's information that can never be subject to failure because of electrical or mechanical factors. ... I was fortunate enough to be exposed to the Millenniata technology, and the more I see, the more I like. It clearly is, for me, a technology that is greatly needed."

Lost data

Conner would get no argument from BYU's Lee Library, whose digital collections have now reached "dozens of terabytes," according to Chris Erickson, digital preservation officer.

"I have tens of thousands of CDs and DVDs that I manage and test every year, or every other year," Erickson said. "Those go back 10 or maybe 12 years, and most of those are really good. But we have some collections where we have been losing 1 to 2 percent per year; we have some collections where we have lost 30 percent."

BYU's poster child for data loss is the school's collection of some 20,000 images from the ancient Roman seaside resort of Herculaneum, which was buried with Pompeii when Mount Vesuvius erupted in 79 A.D. The heat of that eruption killed the inhabitants and sucked the moisture out of anything organic.

Modern archaeologists discovered black sticks in one villa that were initially thought to be charcoal or firewood. Those sticks turned out to be carbonized scrolls of papyrus, part of a library treasure trove that includes important writings from a number of Greek philosophers.

Many of the fragile papyri have been picked apart and reassembled at the National Museum of Italy in Naples, but they could not be read until a team from BYU found a new application for NASA's multi-spectral imaging technology. Beginning in 1999, the team took infrared photographs of the papyri that made the written words stand out. Those digital images were then stored on various media, including CDs, a number of which have since failed.

"We've lost 30 to 40 disks from one date range," said Erickson. "That's very concerning to me because I don't want to lose any of that data. The difficulty is that you don't know which portion of a collection will fail."

It may not be possible to re-photograph the scrolls, he said, because "they deteriorate. Things that were legible then may not be legible now. We've seen that with the Dead Sea Scrolls."

Luckily, multiple copies have been made of the Herculaneum images, so BYU has been able to resurrect the missing pieces. But restoration of data from copies of a collection remains a workaround. There has been no real solution to date to the periodic disk failure problem that plagues archivists worldwide. Until a more permanent storage solution is adopted, copying will be standard operating procedure for digital collections.

"You have to have multiple copies, on multiple media, in multiple places," Erickson said. "So we have some things that are on a server, and on CDs and DVDs, and we also may have them on external disk drives that are not all in one place, and in the granite vaults in Salt Lake City." Copies of the Herculaneum images exist in Italy as well.

(See a National Geographic video about the Herculaneum scrolls project online at scrolls.notlong.com)

Millenniata is currently in talks with BYU, the LDS Church, government agencies and others with a view toward alleviating the dangers of data loss. The technology looks promising, Erickson said. "Their disk, if it works the way they say it works, makes data loss less of a concern in the short run."

"Let's say it lasted only 50 years," Erickson said. "That means I don't have to check the disks every year. I don't have the same concern that the thing is going to deteriorate before I can get back and look at it again. Then we could check it once every 50 years, or even 20 and not worry about losing important data."

Extend the interval to a thousand years of secure data and the benefits are clear.

Millenniata's technology "appears to provide a stable medium for a longer period than anything we currently have," Erickson said. "The longest thing we have now is digital tape, which people say will last between 25 and 50 years, but there are difficulties with all of it - tape, CDs, DVDs. I've found CDs and DVDs that have gone bad in less than a year."

Conner, the disk drive entrepreneur, agrees that failure must be assumed with relatively short-term media like a computer's magnetic hard drive. The drive provides temporary storage because it has a finite life, typically measured in hours-to-failure. With a laptop, the limited life of the internal disk is part of the price you pay for portability.

By contrast the main concern in archiving is permanence, Conner said. The risk of losing data is "hugely consequential" - so great that "copying and copying and copying must be done to skirt failure."

As the sheer size of digital collections continues to mount worldwide, the difficulty of periodic copying gets greater. The beauty of chiseling data into a Millenniata disk therefore lies both in security and in peace of mind.

Forward spin

High-tech materials for data storage are not the only things that are persistent. Another is a question: What happens when the DVD format is supplanted by some new format?

Virtually nobody expects that DVD will be the archiving format of choice a thousand years from now; you can see advancements coming even today. Holograms, for example, are on the horizon as a means to store data, Conner said.

Will Millenniata's disks be readable in the future, or will they go the way of the floppy disk and 8-track tape? The data still exist in those media, but just try to find a device that can access them. Can Millenniata migrate forward?

The short answer is yes.

"Optical disks are the most widely adopted storage medium in the history of the world - more widely adopted than vinyl LPs, than cassette tapes, or anything in history," Lunt said. "That means there are billions of readers out there, and hundreds of billions of disks. So it's likely that the ability to read those will persist."

Put another way, say there are 40 billion disks in the world containing optical data - and this is not just any data, mind you, but by definition essential data whose loss would be catastrophic - it follows that the incentive to access it would be both immense and ongoing. Back-compatibility with earlier formats would seem assured. Back-compatibility has become standard with software upgrades, for example.

Moreover, the process of recording onto persistent media can also migrate incrementally. The sudden plunge of archivists and their vast collections over an unseen cliff of data loss seems an unlikely scenario. After all, it's still possible to play one of Thomas Edison's original audio recording cylinders; and Elvis Presley's music has long since migrated forward to digital media. There's virtually no chance of losing it.

With data captured in a medium that for all practical purposes lasts forever - like the Rosetta Stone - archivists and ordinary consumers will be presented with a pleasant choice they don't have today, Millenniata says. Instead of endless copying of huge digital archives to prevent data loss because of deteriorating disks, people will be copying to upgrade to the latest new formats. That's a whole different ball game that suggests positive forward progress rather than a static, defensive posture of data protection.

And with a thousand-year window, there's no big rush.

"The interesting thing is to let people know this is possible," said Conner - especially people like archivists who are looking for an answer to the backbreaking task of copying.

One of the founders of the Internet says network routers are too slow, costly, and power hungry, and he knows how to fix them
Lawrence G. Roberts

The Internet is broken. I should know: I designed it. In 1967, I wrote the first plan for the ancestor of today’s Internet, the Advanced Research Projects Agency Network, or ARPANET, and then led the team that designed and built it. The main idea was to share the available network infrastructure by sending data as small, independent packets, which, though they might arrive at different times, would still generally make it to their destinations. The small computers that directed the data traffic—I called them Interface Message Processors, or IMPs—evolved into today’s routers, and for a long time they’ve kept up with the Net’s phenomenal growth. Until now.

Today Internet traffic is rapidly expanding and also becoming more varied and complex. In particular, we’re seeing an explosion in voice and video applications. Millions regularly use Skype to place calls and go to YouTube to share videos. Services like Hulu and Netflix, which let users watch TV shows and movies on their computers, are growing ever more popular. Corporations are embracing videoconferencing and telephony systems based on the Internet Protocol, or IP. What’s more, people are now streaming content not only to their PCs but also to iPhones and BlackBerrys, media receivers like the Apple TV, and gaming consoles like Microsoft’s Xbox and Sony’s PlayStation 3. Communication and entertainment are shifting to the Net.

But this shift is not without its problems. Unlike e-mail and static Web pages, which can handle network hiccups, voice and video deteriorate under transmission delays as short as a few milliseconds. And therein lies the problem with traditional IP packet routers: They can’t guarantee that a YouTube clip will stream smoothly to a user’s computer. They treat the video packets as loose data entities when they ought to treat them as flows.

Consider a conventional router receiving two packets that are part of the same video. The router looks at the first packet’s destination address and consults a routing table. It then holds the packet in a queue until it can be dispatched. When the router receives the second packet, it repeats those same steps, not “remembering” that it has just processed an earlier piece of the same video. These small tasks may not look like much individually, but they quickly add up, making networks more costly and less flexible.

At this point you might be asking yourself, “But what’s the problem, really, if I use things like Skype and YouTube without a hitch?” In fact, you enjoy those services only because the Internet has been grossly overprovisioned. Network operators have deployed mountains of optical communication systems that can handle traffic spikes, but on average these run much below their full capacity. Worse, peer-to-peer (P2P) services, used to download movies and other large files, are eating more and more bandwidth. P2P participants may constitute only 5 percent of the users in some networks, while consuming 75 percent of the bandwidth.

So although users may not perceive the extent of the problem, things are already dire for many Internet service providers and network operators. Keeping up with bandwidth demand has required huge outlays of cash to build an infrastructure that remains underutilized. To put it another way, we’ve thrown bandwidth at a problem that really requires a computing solution.

With these issues in mind, my colleagues and I at Anagran, a start-up I founded in Sunnyvale, Calif., set out to reinvent the router. We focused on a simple yet powerful idea: If a router can identify the first packet in a flow, it can just prescreen the remaining packets and bypass the routing and queuing stages. This approach would boost throughput, reduce packet loss and delays, allow new capabilities like fairness controls—and while we’re at it, save power, size, and cost. We call our approach flow management.

To understand how flow management works, it helps to describe the limitations of current packet routers. In these systems, incoming packets go first to a collection of custom microchips responsible for the routing work. The chips read each packet’s destination address and query a routing table. This table determines the packet’s next hop as it travels through the network. Then another collection of chips puts the packets into output queues where they await transmission. These two groups of chips—they include application-specific integrated circuits, or ASICs, as well as expensive high-speed memory such as ternary content-addressable memory (TCAM) and static random access memory (SRAM)—consume 80 percent of the power and space in a router.

During periods of peak traffic, a router may be swamped with more packets than it can handle. The router will then pile up more packets in its queue, establishing a buffer that it can discharge when traffic slows down. If the buffer fills up, though, the router will have to discard some packets. The lost packets trigger a control mechanism that tells the originator to slow down its transmission. This self-controlling behavior is a critical feature of the Transmission Control Protocol, or TCP, the primary protocol we rely on with the Internet. It’s kept the network stable over decades.

Indeed, during most of my career as a network engineer, I never guessed that the queuing and discarding of packets in routers would create serious problems. More recently, though, as my Anagran colleagues and I scrutinized routers during peak workloads, we spotted two serious problems. First, routers discard packets somewhat randomly, causing some transmissions to stall. Second, the packets that are queued because of momentary overloads experience substantial and nonuniform delays, significantly reducing throughput (TCP throughput is inversely proportional to delay). These two effects hinder traffic for all applications, and some transmissions can take 10 times as long as others to complete.

As I talk to network operators all over the world, I hear one story after another about how the problem is only getting worse. Data traffic has been doubling virtually every year since 1970. Thanks to the development of high-capacity optical systems like dense wave division multiplexing (DWDM), bandwidth cost has been halved every year, so operators don’t have to spend more than they did the year before to keep up with the doubling in traffic. On the other hand, routers, as pieces of computing equipment, have followed Moore’s Law, and the cost of routing 1 megabit per second has decreased at a slower pace, halving every 1.5 years. Without a major change in router design, this cost discrepancy means that every three years a network operator will have to double its spending on infrastructure expansion.
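The three-year doubling follows directly from the growth rates quoted above; a quick calculation makes the discrepancy concrete (idealized numbers, assuming traffic exactly doubles every year, bandwidth cost per bit halves every year, and router cost per Mb/s halves every 1.5 years):

```python
# Compare operator spending on bandwidth vs. routers over a 3-year window,
# normalized so that 1.0 means "same spending as at the start".

years = 3
traffic_growth = 2 ** years               # 8x more traffic after 3 years
bandwidth_unit_cost = 0.5 ** years        # cost per bit falls to 1/8
router_unit_cost = 0.5 ** (years / 1.5)   # cost per Mb/s falls only to 1/4

bandwidth_spend = traffic_growth * bandwidth_unit_cost   # 8 * 1/8 = 1.0 (flat)
router_spend = traffic_growth * router_unit_cost         # 8 * 1/4 = 2.0 (doubled)

print(bandwidth_spend, router_spend)
```

Bandwidth spending stays flat while router spending doubles every three years, which is the cost discrepancy the article describes.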

Flow management can solve this capacity crunch. The concept of data flow might be more easily understood in the case of a voice or video stream, but it applies to all traffic over the Internet. Key to our approach is the fact that each packet contains a full identification of the flow it belongs to. This identification, encapsulated by the packet’s header according to the Internet Protocol version 4, or IPv4, consists of five values: source address, source port, destination address, destination port, and protocol.

All packets that are part of the same flow carry the same five-value identification. So in flow management, you have to effectively process—or route—only the first packet. You’d then take the routing parameters that apply to that first packet and store them in a hash table, a data structure that allows for fast lookup. When a new packet comes in, you’d check if its identification is in the hash table, and if it is, that means the new packet is part of a flow you’ve already routed. You’d then quickly dispatch—the more accurate term is “switch”—the packet straight to an output port, thus saving time and power.
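The route-once, switch-thereafter idea can be sketched in a few lines of Python. All names here are illustrative assumptions, not Anagran's implementation; a real flow manager does this in hardware and would also expire idle flows from the table.

```python
# Hedged sketch of flow management: only the first packet of a flow pays for
# a routing-table lookup; later packets with the same IPv4 five-tuple are
# switched straight to the cached output port.

route_lookups = 0

def route(dest_addr):
    """Stand-in for the expensive routing-table lookup."""
    global route_lookups
    route_lookups += 1
    return hash(dest_addr) % 4   # pretend the router has 4 output ports

flow_table = {}   # five-tuple -> output port

def forward(packet):
    # The five values that identify a flow in an IPv4 header
    key = (packet["src"], packet["sport"],
           packet["dst"], packet["dport"], packet["proto"])
    if key not in flow_table:                 # first packet: route it
        flow_table[key] = route(packet["dst"])
    return flow_table[key]                    # later packets: fast switch

# Three packets of one video flow: only the first triggers a route lookup.
pkt = {"src": "10.0.0.1", "sport": 4000,
       "dst": "10.0.0.9", "dport": 80, "proto": 6}
ports = [forward(pkt) for _ in range(3)]
assert route_lookups == 1          # routed once
assert len(set(ports)) == 1        # every packet leaves on the same port
```

The dictionary plays the role of the hash table of routing parameters described above; its memory footprint, roughly one entry per active flow, is the cost that only recently became affordable.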

If traffic gets too heavy, you’ll still have to discard packets. The big advantage is that now you can do it intelligently. By monitoring the packets as they’re coming in, you can track in real time the duration, throughput, bytes transferred, average packet size, and other metrics of every flow. For example, if a flow has a steady throughput, which is the case with voice and video, you can avoid discarding such packets, protecting these stream-based transmissions. For other types of traffic, such as Web browsing, you can selectively discard just enough packets to achieve specific rates without stalling those transmissions.
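Here is a minimal sketch of such selective discarding, assuming a per-flow fair-share byte budget and a set of flows already classified as steady voice/video; the classification rule and constants are our assumptions, not the behavior of any particular product.

```python
# Hedged sketch: protect steady (voice/video-like) flows, and cap bulk flows
# at a fair-share budget per measurement interval. Discarded packets cause
# TCP senders to slow down on their own.

from collections import defaultdict

FAIR_SHARE = 10_000           # bytes a bulk flow may send per interval (assumed)
flow_bytes = defaultdict(int) # per-flow byte counters
STEADY = set()                # flows identified as steady-rate voice/video

def admit(flow_id, size):
    """Return True to forward the packet, False to discard it."""
    if flow_id in STEADY or flow_bytes[flow_id] + size <= FAIR_SHARE:
        flow_bytes[flow_id] += size
        return True
    return False

# A bulk download is throttled once it exceeds its share...
bulk = ("1.2.3.4", 5000, "5.6.7.8", 80, 6)
sent = sum(admit(bulk, 1500) for _ in range(10))
assert sent == 6              # 6 * 1500 = 9000 fits; the 7th packet would not

# ...while a voice flow of the same volume is never dropped.
voice = ("1.2.3.4", 7000, "5.6.7.8", 5060, 17)
STEADY.add(voice)
assert sum(admit(voice, 1500) for _ in range(10)) == 10
```

Because the decision uses only per-flow counters, no packet payload is ever inspected, which is the contrast with deep packet inspection drawn in the next paragraph.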

This capability is especially convenient for managing network overload due to P2P traffic. Conventionally, P2P is filtered out using a technique called deep packet inspection, or DPI, which looks at the data portion of all packets. With flow management, you can detect P2P because it relies on many long-duration flows per user. Then, without peeking into the packets’ data, you can limit their transmission to rates you deem fair.

Since the early days of the ARPANET, I’ve always thought that routers should manage flows rather than individual packets. Why hasn’t it been done before? The reason is that memory chips were too expensive until not long ago. You need lots of memory to store the hash table with the routing parameters of each flow. (A 1-gigabit-per-second data trunk often carries about 100 000 flows.) If you were to keep a flow table on one IMP of 40 years ago, you’d spend US $1 million on memory. But about a decade ago, as memory costs kept falling, it started to make sense economically to design flow-management equipment.

In 1999, I founded Caspian Networks to develop large terabit flow routers, which I planned to sell to the carriers that maintain the Internet’s core infrastructure. That market, however, proved hard to crack—the carriers seem satisfied with overprovisioning, as well as techniques like traffic caching and compression, which ameliorate congestion without addressing the roots of the problem. In early 2004, I decided to leave Caspian and start Anagran, focusing on smaller flow-management equipment to solve the overload and fairness problems. We designed the equipment to operate at the edge of networks, the point where an Internet service provider aggregates traffic from its broadband subscribers or where a corporate network connects to the outside world. Virtually all network overload occurs at the edge.

Anagran’s flow manager, the FR-1000, can replace routers and DPI systems or may simply be added to existing networks. It supports up to 4 million simultaneous flows—a combined 80 Gb/s in throughput. Its hardware consists of inexpensive, off-the-shelf components as opposed to ASICs, which increase development costs. We implemented our flow-routing algorithms in a field-programmable gate array, or FPGA, and the router’s memory consists of standard high-speed DRAM. The FR-1000 sells in different models, starting at less than $30 000.

Like a regular router, the FR-1000 has input and output ports. But the similarities end there. Recall that in a traditional router the routing and queuing chips consume 80 percent of the power and space. By routing only the first packet of a flow, the FR-1000’s chips do much less work, consuming about 1 percent of the power that a conventional router requires.

Even more significant, the FR-1000 does away entirely with the queuing chips. During congestion, it adjusts each flow rate at its input instead. If an incoming flow has a rate deemed too high, the equipment discards a single packet to signal the transmission to slow down. And rather than just delaying or dropping packets as in regular routers, in the FR-1000 the output provides feedback to the input. If there’s bandwidth available, the equipment increases the flow rates or accepts more flows at the input; if bandwidth is scarce, the router reduces flow rates or discards packets.
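The output-to-input feedback loop can be reduced to a toy rate-allocation step: if aggregate demand exceeds the available output bandwidth, every flow is scaled back proportionally, and otherwise flows are left alone or allowed to grow. This is only a sketch of the idea, not the FR-1000's actual algorithm:

```python
def adjust_flow_rates(rates, capacity):
    """Scale per-flow rates (in Mb/s) so their sum fits the output link."""
    demand = sum(rates.values())
    if demand <= capacity:
        return dict(rates)          # bandwidth to spare: leave flows alone
    scale = capacity / demand       # bandwidth scarce: slow every flow
    return {flow: rate * scale for flow, rate in rates.items()}

rates = {"voice": 0.1, "video": 4.0, "p2p": 95.9}
adjusted = adjust_flow_rates(rates, capacity=50.0)
assert abs(sum(adjusted.values()) - 50.0) < 1e-9
```

Real equipment would weight the scaling, protecting steady streams and trimming elastic flows harder, but the key property is the same: the input admits only what the output can carry, so queues never build up.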

By eliminating power-hungry circuitry, the FR-1000 consumes about 300 watts, or one-fifth the total power of a comparable router, and occupies one unit in a standard rack, a tenth of the space that other routers fill. We estimate that the equipment allows network operators to reduce their operating costs per gigabit per second by a factor of 10.

How Flow Routing Works

Flow managers keep track of streams of packets and can protect voice and video transmissions while reducing peer-to-peer traffic.

Measurements of the FR-1000 in our laboratories and by customers showed that networks equipped with the flow manager were able to carry many more streams of voice and video without quality degradation.

Another important capability we tested was whether the equipment could maintain quality of transmissions during congestion. The test involved a 100-Mb/s data trunk using a conventional router and another that included the Anagran flow manager. We progressively added TCP flows and measured the time required to load a specific Web page. The conventional router began to discard packets once traffic filled the trunk’s capacity, and the time to load the Web page increased exponentially as we kept adding flows. The Anagran flow manager was able to control the rate of the flows, slowing them down to accommodate new ones, and the load time increased only linearly. The result: At 1000 flows, the flow manager delivered the page in about 15 seconds, whereas the conventional router required nearly 65 seconds.

Another capability we tested was fairness controls. Currently, P2P applications consume an excessive amount of bandwidth, because they use multiple flows per user—from 10 to even 1000. But services like cloud computing, which rely on Web applications constantly accessing servers that store and process data, are likely to expand the problem. We conducted measurements at a U.S. university whose wireless network was overwhelmed by P2P traffic, with a small fraction of users consuming up to 70 percent of the bandwidth. Early attempts to solve the problem using DPI systems didn’t work, because P2P applications often encrypt packets, making them hard to recognize. The Anagran equipment was able to detect P2P by watching the number and duration of flows per user. And instead of simply shutting down the P2P connections, the flow manager adjusted their throughputs to a desired level. Once the fairness controls were active, P2P traffic shrank to less than 2 percent of the capacity.

The upshot is that directing traffic in terms of flows rather than individual packets improves the utilization of networks. By eliminating the excessive delays and random packet losses typical of traditional routers, flow management fills communication links with more data and protects voice and video streams. And it does all that without requiring changes to the time-tested TCP/IP protocol.

Best Buy has announced the launch of the Insignia HD Radio Portable Player, a first-of-its-kind product, which is available exclusively at Best Buy for $49.99. The introduction of the NS-HD01 marks an important milestone for HD Radio technology as consumers are now able to take the extra multicast stations on the go. Features on the first-ever portable HD Radio include a rechargeable Lithium-ion battery that ensures up to 10 hours of playing time, a full-color LCD screen that displays the radio station, artist and song, a jack for headphones and car stereo, and 10 preset memory channels.

"The sound quality and LCD screen features of the Insignia HD Radio portable are phenomenal," said Mike Dahnert, Insignia Portable HD Radio product manager. "Best Buy is proud to be the first to bring such a unique and quality product to our customers."

"We applaud Best Buy for setting a precedent in the audio entertainment marketplace by offering the first-ever portable HD Radio receiver," added Bob Struble, President and CEO of iBiquity Digital Corporation, the developer of digital HD Radio technology for AM/FM audio and data broadcasting. "With new HD2/HD3 digital channels, crystal-clear sound, no subscription fees, and now, thanks to Best Buy, the ability to take digital radio on the go, it's a total win for the consumer and one more indication that the HD Radio momentum is continuing."

To date, there are over 1,000 new HD2/HD3 channels on the air. There are 13 vehicle brands backing the technology, and more than 14,000 retailers carrying 100 different HD Radio devices.

http://www.fmqb.com/article.asp?id=1410533

New York Times to Sell NYC Radio Station

The New York Times Co said it will sell its New York City classical music radio station WQXR for $45 million, in a two-part sale that will help the struggling newspaper publisher pay off debt.

The station will continue to broadcast classical music, something it has done for 73 years, but at 105.9 on the FM radio dial instead of 96.3 FM, the Times said in a statement on Tuesday.

Under the terms of the deal, Spanish-language broadcaster Univision Radio will pay $33.5 million for the 96.3 slot on the FM radio dial that the Times uses to broadcast WQXR.

The Times in turn will get the U.S. government broadcast license for 105.9 FM, the slot that Univision currently owns. It plans to sell the license, its transmitting equipment and the WQXR call letters to WNYC Radio for $11.5 million.

The deal is expected to close in the second half of the year. The U.S. Federal Communications Commission, which supervises broadcast licenses, must approve the sales.

WQXR was founded in 1936 on AM radio as the first U.S. commercial classical music station, the Times said. Three years later, it debuted on FM radio. The newspaper publisher bought the stations in 1944. It sold the AM station to Radio Disney in 2006.

The Times is trying to sell off various properties, including its interest in the company that owns the Boston Red Sox baseball team, as it pays off debt and fights a steep decline in newspaper advertising revenue.

The biggest union at the company's second-largest paper, The Boston Globe, votes on July 20 on pay cuts and other concessions that the Times says are necessary to keep the paper alive. The Times is also courting offers for that paper.

New York Times shares were up 5 cents, or 1 percent, to $5 in afternoon trading on the New York Stock Exchange.

The Swiss postal service has started redirecting some mail from the letter box to the in-box.

A program introduced by the Swiss Post in June allows subscribers to receive scans of their unopened envelopes by e-mail message and then decide which ones they want opened and scanned in their entirety, to be read online.

Subscribers can also ask to have the contents archived, send unopened letters to another address or have them shredded and recycled.

The success of the program, called Swiss Post Box, will depend on how widely digital mail is accepted, said Mark Levitt, a former analyst at the International Data Corporation in Washington, a research firm.

“Even people who warmly embraced digital tools stopped short of giving up on paper,” he said. “In fact, the electronic age has generated even greater demand for printers, paper and ink because people have even more information that they feel the need to print out on paper to read.”

The program uses technology provided by Earth Class Mail, a company based in Seattle that has tens of thousands of individual subscribers worldwide, mostly in Britain, the United States, Canada and Mexico. Clients in those countries have mail sent to one of more than two dozen designated addresses for processing.

This is the first time that Earth Class Mail has licensed its technology to a postal service.

Earth Class Mail’s chairman, Ron Weiner, said the company was talking with other national postal services in Europe and Asia about similar partnerships. He would not elaborate.

Basic service for Swiss Post Box starts at 19.90 Swiss francs, which is about $18.35. In North America, clients pay $10 to $60 a month for Earth Class Mail’s service, depending on how much mail they want scanned.

Michael Laprade, who has used Earth Class Mail for two years, said he had few items forwarded to him, other than the occasional check, and he had confidential items like credit card statements shredded.

“There are very few things you get that you actually have to have in your hand,” said Mr. Laprade, who lives primarily in California but spends the winter in France.

Earth Class Mail says its users recycle 90 percent of their mail. By comparison, the United States Postal Service reported that 40 percent of the mail it processed was recycled.

The Swiss Post Box service is available in several cities in Switzerland and in Frankfurt. The postal service intends to add services in France, Italy and Austria.

At a later stage, Swiss Post expects to offer the service in all locations where Swiss Post International has a presence: Belgium, Britain, Denmark, Hong Kong, India, Malaysia, the Netherlands, Singapore, Spain, Sweden and the United States.

Mr. Weiner said Swiss Post Box would meet more rigorous standards for data handling than those required by the European Union. Nevertheless, some experts say, digitized mail could be more prone to abuse by a rogue employee.

Mr. Weiner said that Earth Class Mail had not had any security breaches, either by employees or by hackers, since its introduction. He said operational employees did not have access to mail that had been opened and scanned and that the digital images were encrypted.

The Associated Press is proposing that publishers attach descriptive tags to news articles online in hopes of taming the free-for-all of news and information on the Web and generating more traffic for established media brands.

Tags identifying the author, publisher and other information - as well as any usage restrictions publishers hope to place on copyright-protected materials - would be packaged with each news article in a way that search engines can more easily identify.

By doing so, the AP hopes to make it easier for readers to find articles from more established news providers amid the ever-expanding pool of content online. That, in turn, could lead to more traffic and more online advertising revenue for a beleaguered news industry.

If widely adopted, the tags should help computers better understand more information about a story, allowing Google and others to develop smarter search tools.

"As things stand, an awful lot of information on a news article is completely invisible," said Martin Moore, director of Media Standards Trust, which jointly developed the new rules with the AP. "A search engine is not able to tell a byline from someone who is referred to in an article."

Although it is difficult to know how the extra information will change the way readers find online news, Google Inc. and others could conceivably develop search tools that would allow users to identify stories by a specific writer or from a specific city. Web portals could use the data to add more detailed summaries under search results.

The AP, which is already testing the tags on its own stories, says it wants to make its proposed format the industry standard, to be used by anyone producing news content, including other news outlets and bloggers.

The formatting is part of a broad effort by the AP to shift the dynamics of news on the Web.

Traditional news outlets have complained that search portals like Google are too indiscriminate in their displayed results, leaving established news brands lost in the din of press releases, advertising and outdated material. As a result, traditional media sites are getting less traffic and less advertising revenue, which many companies sorely need as the recession accelerates the decline of print revenues.

Todd Martin, the AP's vice president for development, described the new tags as "a nutritional label for your news."

And the tag identifying usage rights could allow Web sites that aggregate content to automatically sort articles by copyright terms and let publishers more easily track how their stories are being used, said Srinandan Kasi, AP's general counsel.

Still, it is unclear exactly how publishers would use the extra information to enforce copyright terms. It's a touchy subject that has sometimes pitted news outlets and blogs against one another.

Robert Cox, president of Media Bloggers Association in New Rochelle, N.Y., said that tagging articles in general poses no threat to blogs, but that any copyright provisions will be viewed warily by sites that have already clashed with the AP.

In one case last year, the AP demanded blogger Rogers Cadenhead remove several postings that the AP claimed violated its terms of use. Other bloggers rallied behind Cadenhead, arguing his activity fell under the "fair use" provision of copyright law.

With any new proposal from the AP on copyright protections "the perception in the blogosphere is going to be that this is one more way for the AP to go to war," Cox said.

Kasi would not elaborate on how the tags might someday protect against unauthorized use of copyright-protected content.

Known as microformatting, the tags provide a prepackaged set of information that can be read by computers but would be largely invisible to users.

Rather than simply letting Web crawlers try to sift through pages to find the information on their own, the formatting would show where and when an article was published, what the article is about and what terms of use are attached. It could also carry a statement about the news and ethical policies of different outlets.
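The AP's actual tag schema isn't reproduced in this article, but machine-readable markup of this kind is typically consumed by walking the page and collecting class-labeled fields. The class names in this Python sketch are hypothetical stand-ins, not the AP's format:

```python
from html.parser import HTMLParser

# Hypothetical markup in the spirit of news microformats; the class
# names below are illustrative only.
article = """
<div class="hnews">
  <span class="author">Jane Doe</span>
  <span class="dateline">New York</span>
  <span class="usage-rights">no-commercial-reuse</span>
</div>
"""

class TagReader(HTMLParser):
    """Collect the text of any element carrying a class attribute."""
    def __init__(self):
        super().__init__()
        self.fields = {}
        self._current = None

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class")
        if cls and cls != "hnews":      # skip the wrapper element
            self._current = cls

    def handle_data(self, data):
        if self._current and data.strip():
            self.fields[self._current] = data.strip()
            self._current = None

reader = TagReader()
reader.feed(article)
assert reader.fields["author"] == "Jane Doe"
```

The fields are invisible to a human reader of the page but trivial for a crawler to extract, which is the point of the proposal.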

The standards the AP is proposing were developed with the Media Standards Trust, a British nonprofit aimed at supporting high journalistic standards.

Google, which licenses AP content for its Google News section, signaled recently that it would like to see more Web publishers adopting such formats. A post on a company blog said microformatting "will help people better understand the information you have on your page so they can spend more time there and less on Google."

In a statement, Google did not commit to using the tags but said it "welcomes all ideas for how publishers and search engines can better communicate about their content. We have had discussions with The Associated Press, as well as other publishers and organizations, about various formats for news. We look forward to continuing the conversation."

http://www.washingtonpost.com/wp-dyn...071002862.html

AP Settles Lawsuit with AHN Media
AFP

The US news agency the Associated Press announced Monday that it had settled an intellectual property lawsuit against AHN Media, an online company accused of misappropriating AP articles.

"AP is pleased to have successfully resolved the litigation through a principled settlement," Laura Malone, AP associate general counsel for intellectual property governance, said in a statement.

"AP invests hundreds of millions of dollars to gather and to distribute essential breaking news worldwide that customers legitimately access and use by payment of a license fee," Malone said.

"Unauthorized use of these proprietary news reports by copying or rewriting published AP news stories is inimical to the interests of AP and its legitimate licensees," she added.

The AP, a cooperative owned by 1,500 US daily newspapers, filed suit against AHN Media in January 2008 seeking unspecified damages and a permanent injunction against misappropriation of AP stories.

The AP alleged that AHN had instructed its staff to rewrite their stories and published them without crediting the agency.

The AP said the settlement includes payment by AHN to the agency of an "unspecified sum" and an agreement by AHN that it "would not make competitive use of content or expression from AP stories."

As a technology writer, I don’t tend to run into the Treaty of Versailles in my work.

But the agreement that ended World War I was among the myriad topics raised at a meeting held by the Internet Corporation for Assigned Names and Numbers in New York Monday to discuss its plan to add new top-level domains (the part of an Internet address after the period, such as the proposed .eco and .nyc).

Most of the time was spent on a proposal to help manage disputes when a new domain name contains what some company considers its trademark. Like much that Icann does, it was a raucous affair, with people from big companies complaining about the evils of cybersquatting while those representing domain name owners railed against abusive trademark lawyers. The Icann officials tried to be polite to all sides while answering all manner of trivia questions, including one stumper about how the Treaty of Versailles applies to their new plan.

Among the severe penalties the allies imposed under that treaty was a provision that stripped German companies of certain trademarks, such as Bayer’s control of the term “aspirin.” A lawyer asked in the meeting how the treaty would affect one of the proposals, a list of “globally protected marks,” i.e., well-known international brands.

To get on the list, which confers some extra protection for the trademark holder, a brand must be registered in all five of Icann’s regions. And since one region is made up of the United States and Canada, where trademarks voided by the Treaty of Versailles are not honored, some German companies might not qualify.

The Icann representatives were stumped and promised to look into the matter.

As for the central topic of the meeting — the proposal by a group of intellectual property lawyers about how to handle domains with trademarks — there was at least one agreement from nearly all sides: The current approach, which is based on an arbitration proceeding, is broken.

Trademark owners argue that taking action against someone using their names under the existing system, called the Uniform Domain-Name Dispute-Resolution Policy, can take months and cost $10,000 in fees and legal costs. Three-quarters of the domain owners never respond, which trademark owners interpret as a sign that the respondents know they are not in compliance with Icann’s rules.

The domain owners, meanwhile, counter that the system favors big companies with armies of lawyers that time their complaints to holidays and other periods when mom-and-pop Web site owners will miss the notice and won’t respond. They point to cases where corporations try to use the trademark dispute process to get access to Web addresses they covet even if the sites are entirely unrelated to their products. (In a classic case, Hearst tried and failed to challenge a legal bulletin board called esqwire.com as infringing on its trademark for Esquire magazine.)

The domain owner constituency, by the way, seems to be made up of two rather different groups. There are the democratic idealists who want to preserve the rights of the little guy to express opinions and do business on the Web. And there are organized “domainers” who see buying portfolios of Web sites as a digital form of investing in real estate. (Much of the value of that real estate comes from exploiting typing errors.)

In broad strokes, the new proposal would make it easier and cheaper for companies to stake claim to their trademarks. And thus not surprisingly, the corporations were more supportive than the domain owners.

Part of the proposal is meant to make sure that companies aren’t forced to buy their trademarks in every new domain that is started in order to keep someone else from registering them and then holding their brands hostage. The plan is to create a big database, called an I.P. clearinghouse, where any company that has a trademark registered anywhere in the world will be able to submit proof of that claim. The companies that manage the various domains — known as registries — would check that database to find potential conflicts.

The idea isn’t to ban outright any domain that uses a trademark, but to highlight potential conflicts. If someone wanted to buy a site called apple.seattle, they would have to promise they would sell cider and tarte tatin, not iPods.

The idea of a clearinghouse wasn’t controversial. But the second database — the special list of global trademarks — was a lightning rod for criticism because some said it would grant special status to certain brands of big companies.

The other major idea is to create a fast-track way for some disputes to be resolved without going to arbitration. This process is designed for “slam dunk” cases in which there are “no contestable issues.”

The proposal includes a variety of checks and balances meant to preserve the rights of domain holders. All the fees are paid by the trademark owner. If a domain holder is found in violation, its domain is disconnected, but it isn’t given to the trademark holder. In case of an appeal, the domain is reactivated immediately. And there are penalties for trademark owners who file more than three claims that are rejected.

Those protections aren’t enough for many domain owners, who said that they believe in practice the process would be stacked against them. The two-week time period to respond is too short, they said, and the standards applied are too vague.

Icann said it would take all the comments from Monday, as well as those from similar meetings in other cities, and publish a revised version of its plan for the new top-level domains in September. It hopes to approve a final version by the end of the year.

In late May, Microsoft unveiled Bing, its new Internet search engine, in front of an audience of skeptics: technology executives and other digerati who had gathered near San Diego for an industry conference.

To that crowd, Microsoft’s efforts to take on Google and Yahoo in the search business had become something of a laughingstock, and for good reason. Microsoft’s repeated efforts to build a credible search engine had fallen flat, and the company’s market share was hovering near its all-time low.

But Bing’s debut won over some of those skeptics. As a result, analysts say, the once-dubious prospect that Microsoft could shake up the dynamics of the search business, which is worth $12 billion in the United States alone, has become just a bit more likely.

The stakes could not be higher. With Google and others trying to challenge Microsoft’s traditional software business, Steven A. Ballmer, the chief executive, has made succeeding in search a top company priority. Last year, Mr. Ballmer bid a staggering $47.5 billion in an unsuccessful effort to take over Yahoo, the No. 2 player in search.

That defeat forced Microsoft to redouble its homegrown efforts, leading to the release of Bing. The new service received favorable write-ups from influential reviewers and technology bloggers for the quality of its results, as well as its features and design. Studies showed that many people preferred its look and feel to Google’s. And marketing experts said the Bing brand was a good choice that resonated with users.

“They have achieved a degree of respect they haven’t had,” said Danny Sullivan, a veteran search analyst and editor of the industry news site SearchEngineLand. With a tone that suggested surprise, Mr. Sullivan added: “They’ve rolled out a product that is good. When people spend time on it, they do like it.”

Anna Patterson, who helped design and build some of the foundations of Google’s search engine and later co-founded Cuil, a search start-up that has yet to attract much of an audience, said: “I think they put together something that is really compelling. They made significant progress.”

That is music to the ears of Microsoft’s long-maligned search team, which has watched the company’s market share in search fall by half, to about 8 percent in May, since it introduced its first search engine in 2005.

“We have had a great start and some good buzz,” said Yusuf Mehdi, senior vice president for Microsoft’s online audience business group. “We’re settling in for a big long run.”

But if succeeding in search is Microsoft’s Mount Everest, as some executives there have suggested, Bing’s success so far has merely put the company at base camp.

Reports from more than half a dozen companies that measure search and search advertising all point to upticks in Microsoft’s business since the release of Bing. Microsoft said on Monday that its internal numbers showed its search traffic growing 8 percent in June. (ComScore, whose reports are closely watched, is expected to release figures for June on Tuesday.)

Still, Bing remains a distant third in the search race. It would have to triple its audience to catch Yahoo — and grow eightfold to tie Google, which accounts for 65 percent of searches in the United States.

Sustaining Bing’s early momentum will be harder for Microsoft after the intense marketing campaign fades.

“It is going to be a difficult and long-term challenge,” said Scott Garell, president of Ask Networks, a subsidiary of IAC that includes the Ask.com search engine. Ask has long been praised for its innovations, and it too spent more than $100 million to market its search engine in 2006 and 2007, yet the company’s small market share has barely budged in recent years.

But analysts say that Bing’s solid start gives Microsoft a chance to finally sharpen its assault on the search business. No one suggests that Google faces any immediate threat. With many people using more than one search engine, however, some believe that Bing has a shot at dislodging Yahoo as the logical alternative to Google. (Google declined to comment for this article, other than to say in a statement that it takes all competitors seriously.)

“Yahoo doesn’t seem as aggressive as it has been in the past,” said Mark Mahaney, an analyst with Citigroup. Mr. Mahaney cautioned that whatever gains Bing achieves in coming months, he still expected Bing to trail Yahoo a year from now.

Yahoo disputed Mr. Mahaney’s characterization. Larry Cornett, the company’s vice president for consumer products for search, said that in the last year alone, Yahoo had unveiled technologies that allow publishers to better showcase their sites in search results and tools that make it easier to conduct extensive research. He said other companies were using an innovative Yahoo technology allowing them to build their own search services, which collectively garner nearly as many queries a day as Microsoft.

“What we have accomplished in the last year shows an incredible commitment and focus,” Mr. Cornett said.

Other analysts say that if Bing can sustain its early gains, it could have another important effect on the industry: Yahoo and Microsoft could be pushed into a search partnership. Since Microsoft dropped its takeover attempt more than a year ago, the two companies have discussed a more limited alliance to take on Google but have been unable to reach an agreement. The talks continue apace, according to a person briefed on them.

“If Bing can have some momentum, I think it makes a deal more likely,” said Benjamin Schachter, an analyst with Broadpoint AmTech. Mr. Schachter said continued momentum would make Bing a bigger threat and a more attractive partner for Yahoo.

For now, Microsoft continues to fight alone, but with more vigor than in years past, analysts said. Less than a month after Bing’s release, Microsoft beat Google and Yahoo to a hot new area in search: It became the first major search engine to index new postings from popular Twitter users almost immediately. The move helped amplify the buzz around Bing.

Microsoft Corp's chief executive attempted to laugh off the challenge of Google Inc's planned computer operating system on Tuesday, conceding only that it was "interesting".

"I will be respectful," Microsoft CEO Steve Ballmer said to laughs from the audience at a conference for the company's technology partners in New Orleans, which was broadcast over the Internet.

"Who knows what this thing is? To me, the Chrome OS thing is highly interesting," said Ballmer, choosing his words carefully and drawing more amusement from the largely pro-Microsoft crowd.

"It won't happen for a year and a half and they already announced an operating system," he added, referring to Google's Android system for smartphones.

Last week Google said it was planning a computer operating system based on its Chrome browser, aiming directly at the core business of Microsoft, the world's largest software company, whose Windows operating systems are used on more than 90 percent of personal computers.

Google's plan, based on the theory that access to the Internet is now the most important feature of any computing device, would be separate from its Android system already available for smartphones and soon for small PCs.

"I don't know if they can't make up their mind or what the problem is over there, but the last time I checked, you don't need two client operating systems," said Ballmer. "It's good to have one."

Despite the jovial tone of Ballmer's public remarks, Microsoft is taking Google's challenge seriously. Its new Bing search engine is a concerted attempt to take market share from dominant leader Google, and its announcement on Monday that it would offer some versions of its Office application on the Internet is a swipe back at Google's move into free, online software.

Ballmer's previous attempts to make light of new competition have not always been successful. He also derided Apple Inc's iPhone as too expensive, but it went on to take a significant share of the smartphone market.

Microsoft shares fell 15 cents to $23.08 on Tuesday afternoon on the Nasdaq.

The price of pre-ordering Windows 7 has shot up to £80 at the majority of retailers, with the promotional £50 copies already in short supply.

In order to ensure Windows 7 got off to a better start than Vista, Microsoft slashed the cost of the Home Premium and Professional editions by a third on promotional copies, which were sold on a "first come, first served basis while stocks last".

The promotion ensured Windows 7 shot to the top of Amazon's charts when it was released yesterday, with the online retailer claiming that "sales in the first eight hours outstripped those of Windows Vista's entire 17 week pre-order period," according to the BBC.

It was a similar story at other retailers including Play and Tesco. However, it appears Microsoft's generosity only stretches so far, with promotion copies already in short supply within a day of launch.

Ordering from Amazon, Play or Tesco now will cost you £80 for Home Premium and £100 for Professional.

Six in 10 companies in a survey plan to skip the purchase of Microsoft Corp's Windows 7 computer operating system, many of them to pinch pennies and others over concern about compatibility with their existing applications.

Windows 7 will be released October 22 and has already garnered good reviews, in contrast to the current, disappointing Windows Vista.

Many of the more than 1,000 companies that responded to a survey by ScriptLogic Corp say they have economized by cutting back on software updates and lack the resources to deploy Microsoft's latest offering.

ScriptLogic Corp, which helps companies manage their Microsoft Windows-based networks, sent out 20,000 surveys to information technology administrators to gauge the state of the market.

Many companies have rejected Windows Vista as unstable. For example, the chip maker Intel Corp, Microsoft's long-time partner in producing personal computers, has stayed with the older XP system.

The survey found about 60 percent of those surveyed have no plans to deploy Windows 7, 34 percent will deploy it by the end of 2010 and only 5.4 percent will deploy by year's end.

Forty-two percent said their biggest reason for avoiding Windows 7 was a "lack of time and resources."

That dovetailed with another part of the survey, which found that 35 percent had already skipped upgrades or delayed purchases to save money.

But there were reasons other than money for staying away from Windows 7. Another 39 percent of those surveyed said they were concerned about the compatibility of Windows 7 with their existing applications.

The survey quoted Sean Angus, a senior personal computer technician at Middlesex Hospital, as saying he would wait until the first "service pack" was released for Windows 7.

"The IT department must complete thorough testing to ensure that the applications we rely on each day, specifically radiology information systems and financial applications, will be compatible, before deploying any new platforms or software to our 1,500 desktops," he added.

In its bold march to become a credible collaboration and communication suite for businesses, Google Apps has encountered a frequent roadblock that has proven more vexing than expected to circumvent: good old Microsoft Outlook.

Google apparently underestimated how attached employees are to Outlook, the venerable e-mail program that epitomizes the "fat" collaboration and communication PC applications that Google despises and has vowed to eradicate from workplaces with its Web-hosted Apps suite.

Google announced Gmail For Your Domain -- the cornerstone for what would become Google Apps -- in February 2006, positioning its webmail service as an alternative hosted e-mail system for businesses vis-à-vis expensive and hard-to-manage internal messaging servers like Microsoft Exchange.

Although it gave Gmail support for POP3 (Post Office Protocol 3) and IMAP (Internet Message Access Protocol) so that end-users could synchronize messages with Outlook and other PC e-mail applications, Google resisted for years creating a specific Outlook synchronization tool.
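IMAP support is what let a desktop program like Outlook pull mail from Gmail without a dedicated plug-in. As a rough illustration of what such a client does under the hood, the sketch below uses Python's standard imaplib to log in over IMAPS and fetch unseen messages; the host, account and password are placeholders, not details from the article.

```python
# Sketch: how a desktop mail client might sync Gmail messages over IMAP.
# Account details below are placeholders; a real run needs valid credentials.
import imaplib
import datetime


def since_criterion(day: datetime.date) -> str:
    """Build an IMAP SEARCH criterion for unseen mail since a given day."""
    return f'(SINCE "{day.strftime("%d-%b-%Y")}" UNSEEN)'


def fetch_unseen(host: str, user: str, password: str, day: datetime.date):
    """Log in over IMAPS (port 993) and return raw bodies of unseen messages."""
    bodies = []
    with imaplib.IMAP4_SSL(host) as conn:
        conn.login(user, password)
        conn.select("INBOX", readonly=True)          # don't mark anything read
        _, data = conn.search(None, since_criterion(day))
        for num in data[0].split():                  # message sequence numbers
            _, msg = conn.fetch(num, "(RFC822)")     # full raw message
            bodies.append(msg[0][1])
    return bodies


if __name__ == "__main__":
    # Hypothetical account; substitute real credentials to actually connect.
    fetch_unseen("imap.gmail.com", "user@example.com", "app-password",
                 datetime.date(2009, 7, 1))
```

Polling over IMAP like this is exactly the "fall short" case the article later describes: it moves mail back and forth, but it cannot carry Exchange-style calendar items, contacts, notes or tasks, which is the gap the Apps Sync plug-in tried to close.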

For businesses, adopting Apps meant accepting a new way of communicating and collaborating in the workplace: Web-hosted applications, the software-as-a-service (SaaS) model that Google views as the future, in place of what it considers the passé, desktop-centric Outlook and Office.

In addition to POP3 and IMAP, Google also developed its Gears browser plug-in for providing offline access to Apps components like Gmail and the Docs office productivity suite.

Yet even when given the option of using Gmail as an e-mail front end both with and without an Internet connection, many workplace users bristled at the thought of giving up Outlook.

Apparently, the resistance grew more strident as Google tried to market Apps to larger businesses, those with 1,000 or more end-users.

Last month, Google rather surprisingly unveiled its Outlook synchronization tool for Apps, spinning the occasion as a happy one, when in fact it could just as well be viewed as a capitulation, a concession of defeat.

Google has found out that, yes, many companies are happy to ditch Exchange for Gmail if it means saving money and eliminating the grief of maintaining Exchange in-house.

However, and maybe to a degree unexpected by Google, it also discovered that many companies consider it a deal-breaker to lose the functionality that the Outlook-Exchange combo provides, thanks to the deep links that exist between this client-server tandem.

So Google embarked -- probably grudgingly -- down the path that other e-mail vendors have traveled with little success: trying to replicate the Outlook-Exchange experience with their back-end e-mail server and Outlook. Here was Google apparently getting dragged into the Microsoft way of doing things, creating -- gasp! -- a piece of PC software: an Outlook plug-in. The problems and complaints started immediately.

Right away, industry analysts cautioned CIOs and IT managers to examine the Google tool closely, warning them that it couldn't fully replicate in Gmail the functionality of the Outlook-Exchange combination, lacking basic features like the ability to synchronize Outlook notes and tasks, for example.

Barely a week after the tool's announcement, Google acknowledged it had several embarrassing bugs, including that it broke Windows Desktop Search, which is used to search Outlook data.

While Google scrambled for a fix, the Windows Desktop Search workaround sounded like an IT manager's nightmare: Uninstall the Google tool, unless you had version 1.0.22.1945, in which case you had to first install the latest version and then uninstall it to re-enable indexing.

Over in Redmond, Microsoft posted its own take on the problem in an official blog, characterizing the issue as "a serious bug / flaw" and overruling Google's workaround remedy. Uninstalling the Google tool wouldn't solve the problem, Microsoft said, providing step-by-step instructions for adjusting the affected registry keys.

It took Google two long weeks to deliver the fixes for the search problem and other bugs.

Bill Pray, a Burton Group analyst, thinks it was a strategic mistake for Google to build the Outlook sync tool. Google will never be able to offer full parity with Outlook-Exchange, so die-hard Outlook holdouts will never be happy, he said.

Meanwhile, Google will spend significant resources and effort not only to increase the plug-in's capabilities, but to also keep it up-to-date with the latest Outlook patches and upgrades, Pray said.

"It will take Google a lot of time, maintenance and continued effort to maintain the interoperability," Pray said.

A better strategy for Google would have been to play to the strengths of the Apps suite and of Gmail in particular, betting on winning the support of the new generation of enterprise end-users, he said.

"Strategically, it costs more than it is worth to keep that Outlook connector working well than it is to compete on the strength of your own e-mail client [software] alternative," Pray said.

Prior to the launch of Apps Sync for Microsoft Outlook, Google was on the offensive, finding new ways to compete against Outlook by highlighting the differences between the Microsoft fat client and the server-centric, hosted Gmail.

"What Google will find is that while it will initially satisfy some demand with the Outlook connector, the connector will ultimately fail against the enterprise expectation that it work perfectly," Pray said.

People joining the workforce increasingly are comfortable and familiar with webmail services like Gmail, a trend that is organically reducing Outlook's appeal, he said.

By bending over backwards to accommodate Outlook holdouts, Google is weakening its case for the use of Gmail and Apps, Pray said.

Google holds a different view. The goal for the first version of Apps Sync for Microsoft Outlook was to meet "90 percent to 95 percent" of Outlook users' needs, which was accomplished, said Rajen Sheth, Google Apps senior product manager.

That includes the synchronization of e-mail, calendar items and contacts between Outlook and Gmail in "much the same way" as it works between Outlook and Exchange, he said.

While acknowledging that the tool doesn't offer full feature parity right now, Sheth promised that Google will extend its functionality aggressively.

"As you know, with Google, our first release is never our last release. We have a strong philosophy of getting something out there in the market that is strong and meets the needs, but then continue to iterate on it to add more and more functionality," Sheth said.

"You're going to see us do that aggressively with this product, just like we do with everything else. We'll continue to add releases to it, to add features, to make it better and better and go from 90 [percent] to 95 percent to close to 100 percent of the use cases," Sheth added.

Many people embrace Gmail's end-user interface when their companies adopt Apps, but companies of all sizes have vocal contingents of workers with a deep attachment to Outlook, for which the IMAP synchronization falls short, he said.

"There's a specialized experience that Outlook users have when using it with Exchange Server, and we wanted to make their experience with Google Apps to be as close as possible to that experience," Sheth said.

Google remains convinced that what it views as the benefits of the Web-based Gmail user interface will continue to be recognized in workplaces and will win converts even among Outlook die-hards.

Outlook loyalists can be found even in places that help companies adopt Google Apps, such as systems integrator Epicentro in Milan, Italy.

Epicentro provides Apps implementation services and has also used the suite internally since 2006, when it migrated from Exchange to Gmail. After several years, it still has "a few active users" of Outlook, said Mauro Ginelli, a Google Enterprise Applications specialist at Epicentro. After testing the plug-in, Epicentro plans to install it on its Outlook users' computers.

Ginelli thinks Apps Sync for Outlook will help convince potential customers to switch from Exchange to Apps. It gives Outlook users synchronization for e-mail, calendar and contacts, and can smooth a progressive migration to the Gmail interface, as opposed to a sudden, forced transition, he said via e-mail.

Rob Ardill, IT consultant at Metro Wireless and Networking in Adelaide, South Australia, also foresees that Apps Sync for Outlook will help him convince clients to give up Exchange for Apps, but believes that Google needs to take the product further.

"If Google is serious about stealing Exchange customers, then they must offer native 1:1 features. This offering really targets those who have considered defecting to Google Apps and didn't want to jump without basic Outlook integration," he said via e-mail. "Those looking for more advanced features will certainly stay with Exchange."

Microsoft Corp warned that cybercriminals have attacked users of its Office software for Windows PCs, exploiting a programming flaw that the software giant has yet to repair.

The world's largest software maker issued the warning on Tuesday as it released patches to address nine other security holes in its software.

"Despite today's fixes, Windows users continue to be under attack. Microsoft is taking two steps forward, while attackers are putting it one step back," said Dave Marcus, McAfee Inc's Avert Labs director of security research.

Hackers booby-trap websites with malicious code that loads onto computers running the vulnerable Office software. Infected PCs are commandeered into a botnet, a network of hijacked computers. They are used for identity theft, spamming and other cybercrimes.

Microsoft did not say how many machines were attacked. It estimates that some 500 million people use its Office suite, which includes Word, Excel and PowerPoint software.

The software maker said in a security bulletin that it has developed a temporary workaround for the problem, which users must manually install on PCs to protect them from attack.

A company spokeswoman said that program would soon be available through Microsoft's website. Office XP, 2003 and 2007 are vulnerable to the attacks.

Sean McManus, the president of CBS News, learned of Walter Cronkite’s death while he was at the dinner table on Friday evening, sharing a meal with his two children, ages 8 and 10.

After taking the phone call, he tried to explain to his children — who have grown up bombarded with news and information — the value of Mr. Cronkite’s once-a-day news updates.

“There probably will never be anybody who has the presence and the stature and the importance that Walter Cronkite had in this country,” Mr. McManus said in a telephone interview, recalling what he told his children.

“I tried to explain to them that most people in America expected to get both good and bad news from one man, and that was Walter Cronkite,” he said. “That will never be duplicated again,” because of the fragmentation of the media.

Mr. McManus sensed that his children had a hard time comprehending what he meant.

“It’s really hard,” he acknowledged, “to remember just how influential and important he was.” He cited Mr. Cronkite’s famous declaration that the Vietnam War would end in a stalemate.

Viewers and Web readers now, he said, “are so used to being assaulted by so many streams of media that it’s hard for them to imagine that there were only three or four ways to get news and information on TV.”

On an evening when Mr. Cronkite was on the minds of the television industry, Mr. McManus sounded a sad note about the splintered media environment. TV executives are always looking for the next Cronkite, he said, “but I don’t think anybody will be in that position of prominence again.”

CBS News still operates out of the same building on West 57th Street in Manhattan where Mr. Cronkite anchored the “CBS Evening News.”

While he had not visited recently, Mr. McManus said, “his presence really is palpable in the halls of CBS News.” On Friday evening, the news division felt numb, even though Mr. Cronkite was known to be in ill health for some time.

A little more than a year ago, Mr. Cronkite paid a surprise visit to the news headquarters. Even the interns who weren’t yet born when Mr. Cronkite was anchor were “literally looking up to him,” Mr. McManus said.