After the ACS:Law debacle, one might think that potential claimants would be deterred from taking legal action against alleged file-sharers in the UK, or would at least have learned some lessons. Alas, no. His Honour Judge Birss QC, the judge who brought ACS:Law’s scheme to its knees, now has to deal with three cases filed on behalf of a UK porn outfit which, in common with the doomed law firm, tried to back out at the last minute.

As reported by TorrentFreak in March 2010, Golden Eye (International) Ltd, a company connected with the Ben Dover porn brand in the UK and one that had already been pursuing alleged file-sharers in Germany, decided to import their file-sharing settlement scheme to the UK.

Golden Eye, having already obtained the identities of file-sharers and retained the services of lawyers Tilly Bailey & Irvine (TBI), must’ve thought that the cash would come rolling in, but things quickly went wrong. Alongside client MediaCAT, the now-defunct ACS:Law was in the process of destroying the marketplace for would-be profiteers, and it all got too much for TBI, who announced they were pulling out.

But what do you do if you’re Golden Eye, and you’re in possession of a paid-for list of names of people who may, given the right pressure, send you several hundred pounds to settle a file-sharing case? The answer appears to be “keep trying”.

Readers will be familiar with His Honour Judge Birss QC of the Patents County Court, the judge who narrowly missed becoming 2011's Internet Hero after bringing ACS:Law’s scheme to an end. According to court documents, a trio of cases filed by Golden Eye against alleged file-sharers have caught his attention.

The first, filed in the Northampton County Court against a Mr Maricar, sets out the picture as follows:

On 27th November 2009 the Claimant [Golden Eye] believes the defendant unlawfully made all or part of the film [Fancy an Indian?] available from his IP address for downloading by third parties.

On 29th September 2010 the Claimant sent a letter before action to the Defendant setting out in full its claim for breach of copyright. The Defendant failed to reply. The Claimant sent another letter to the Defendant on 8th November 2010 to which no response was received. The Claimant claims £700 for breach of copyright.

Judge Birss notes that the particulars in a second case against a Mrs Vithlani (whose identity was handed over by her ISP BSkyB following a Norwich Pharmacal order made by Mr Justice Vos on 4th February 2010) are essentially the same, some details aside.

“Right away it will be seen that these claims bear some striking similarities to the claims in the litigation concerning the company Media CAT Ltd, the subject of my judgment Media CAT v Adams [2011]…,” Birss writes.

“However I should also make clear that there may very well be important differences between the present cases and the Media CAT cases. At this stage I do not know.”

Although Birss did not elaborate, one of the key reasons the ACS:Law/MediaCAT cases collapsed was that MediaCAT was not the rightsholder of the copyright material in question. From the details available it seems that Golden Eye aren’t the rightsholders of ‘Fancy an Indian?’ either; Ben Dover Productions are. The latter would need to be joined with Golden Eye as claimants for the case to proceed.

But, yet again, the path to judicial scrutiny in these file-sharing cases has been hindered by the claimants. After discovering that, against their wishes, the case would be heard not in a county court but at the Patents County Court under Judge Birss, on 8th August 2011 Golden Eye tried to discontinue the case against Mrs Vithlani. The status of the case against Mr Maricar is uncertain, but it too has been transferred to the Patents County Court.

Defendant Mrs Vithlani has now applied for Ben Dover Productions to be joined to the proceedings and for the case then to be struck out – but with costs awarded to her. In that event, those costs, which could already be substantial, would have to be met by Ben Dover Productions.

When Sheptonhurst/Darker Enterprises, the original copyright holders in the ACS:Law/MediaCAT partnership, refused to join the proceedings, MediaCAT went bankrupt to avoid picking up the bill. ACS:Law chose the same fate.

Judge Birss also notes there is a third case on file against a Mr Rajan. After travelling through the county courts, it too ended up at the Patents County Court during March 2011. Shortly afterwards, Golden Eye filed to discontinue the case.

“Finally, since it is apparent that the claimant has commenced and is pursuing copyright infringement proceedings in the county courts arising presumably from information provided as a result of the order of Mr Justice Vos, the claimant is invited to consider and make submissions as to how any other of its pending cases arising from that order might be dealt with conveniently,” Judge Birss concludes.

So, cards on Judge Birss’ table then for Golden Eye and Ben Dover, for all subscriber identities obtained so far.

There’s a sneaky feeling that, given Judge Birss’ experience in this field, the outcome won’t be favorable for Golden Eye and it will be them and their settlement partners who will have to bend over, not the three targets of their scheme. https://torrentfreak.com/theyre-back...haring-110927/

Can't Stop the Music ... or that Brand New Episode
Chris Gardner

A new law introducing fines of up to $15,000 for people who illegally download movies and music from the internet has so far proven ineffective.

Internet usage dropped by about 10 per cent the week the Copyright (Infringing File Sharing) Amendment Act 2011, aimed at a practice known as bittorrenting, came into effect on September 1.

Files containing movies and music are spread between different computers on the internet, and bittorrent software is used to find the file parts and reassemble them. Some files, such as the open source Linux operating system, can be shared legally, while files of music, movies and television shows belong to copyright holders.

The new law requires copyright holders to monitor bittorrenting services and send copyright infringement notices to internet service providers (ISPs), who can identify offenders through their internet protocol (IP) address.

The infringement notices must follow a set format and include a $25 fee but none of the internet service providers spoken to by the Waikato Times have received any that complied with the law.

Any internet connection caught being used to download the files can be terminated after three infringements, and the account holder fined.

This would make the providers of free internet services liable for actions of anyone who logs into their networks.

But one 26-year-old Hamilton man – who did not want to be named – said he and other tech-savvy users had found ways around the legislation.

"If you really want to get around the law you could download an IP scrambler or hider which can change your IP address. You can also use a proxy, which re-routes all your traffic through a server overseas so they can't trace it."

The problem, Telecommunications Users Association of New Zealand chief executive Paul Brislen says, is an out-of-date distribution system in which some television shows and films air in New Zealand months or even years after first release overseas.

Mr Brislen said he downloaded the latest episodes of Breaking Bad and Doctor Who after they had screened overseas because he couldn't wait for them to screen in New Zealand.

He'd happily pay to download the episodes, but there is currently no mechanism to let him do that.

Neither Telecom nor Vodafone New Zealand could provide any information around the level of infringements since the introduction of the law.

Orcon chief executive Scott Bartlett said his company had noticed a fall in New Zealand internet traffic in the first week.

"We have definitely seen a decrease of a little bit more than 10 per cent in relation to people using the internet.

"I don't think 10 per cent would be illegal traffic. There's a lack of understanding out there and it is having an impact. The people that are really infringing copyright, they are the ones that are getting around it."

He agreed with Mr Brislen's position.

"People want to pay for content and they can't get it. To me that's a bit perverse."

Inspire Net spokesman Dave Mill said copyright holders were not following the proper process.

"We're currently rejecting all automated notice emails we receive, and sending back information about how we're asking for notices to be submitted as covered under the law's regulations.

"Every notice we receive, we reply with a standard email which includes full instructions for how a rights holder can set up an account with us to lodge notices and be billed the $25 fee. No one has done this yet."

Mr Mill said it was probably too hard and expensive for copyright holders to follow the process outlined by the law.

"The rights holders aren't fully up to date with the new New Zealand laws."

Stu Fleming, who runs WIC NZ Ltd, said his company had received one allegation of copyright infringement via bittorrent.

TV and movies are not Reuben Austin's poison – the Hamiltonian downloads up to four albums a week through file sharing and the legislation hasn't changed his habits.

"I had a bit of a blitz on downloads last week because there wasn't anything on my shelves or hard drive that I really wanted to listen to, and I was in the mood to try new things. I may have downloaded between two and three dozen albums recently."

Mr Austin, 25, who works in retail, says he is willing to fight the law in court if necessary.

Downloading music from other people's music collections was like buying second-hand albums, he said.

"Pat Benatar, Dire Straits and Saxon have never received a cent from me, even though I have a number of LPs from each," he said. "Should I be prosecuted for that? The logic is much the same."

Mr Austin said he knew it was a cliched excuse, but downloading fostered a "try before you buy" mentality.

"I paid 60 (NZ$100) for a sweatshirt of a favourite band I discovered by blindly downloading an EP. Since postage was so high I made the most of it and got other stuff by bands on the same label too, who I had also downloaded for free.

"If I hear of an international band touring here soon I will often download a recent album of theirs. If I like it I will then spend $60 on a ticket, the same in gas to get up to Auckland, beers at the bar, maybe $45 on an overpriced T-shirt. The band are getting most of the door take and the cost of printing T-shirts is minimal. I am sure they wouldn't begrudge me the $2 of lost income from not buying the album in the first place."

$10 Music Piracy Fine: A Fair Deal Or Just Another Cheap Trick?
enigmax

Following a report yesterday that an anti-piracy company has been sending out emails asking that people pay a $10 fine after allegedly being caught sharing copyright material, we decided to take a closer look. Isn’t this tiny fine a good idea? Isn’t paying $10 literally 300 times better than paying $3000 to other companies in the same area?

Rather than asking for around $3,000 like many in this field, Digital Rights Corp are strictly at the budget end of the market. When they contact you there’s no need to panic, since they request a measly $10.00 to settle their complaint. It’s a system that’s been used before by PayArtists.com.

To the die-hard file-sharer, the fact Digital Rights Corp (DRC) have asked for just $10.00 will probably carry some comedy value. By only asking for such a small amount the company sends the message that they aren’t serious about pursuing infringers.

And the reality is, they aren’t.

First off, DRC have no idea who the recipients of their claims are and have absolutely no intention of finding out. Instead of going through the lengthy and expensive process of going to a court to force ISPs to hand over the names and addresses of their customers, DRC short-cut the system.

DRC contacts ISPs with a DMCA takedown notice (which they are obliged to pass on to subscribers) which contains a link to their website. Follow that and the target gets an offer to settle for $10.00, payable by credit card. It is only after people have responded to their email that DRC even know who they are.


But if we look at the current landscape, scarred by the punitive actions of the U.S. Copyright Group and the dozens of porn companies and their aggressive lawsuits demanding several thousand dollars in settlements, perhaps this $10.00 deal doesn’t look so bad, at least in comparison. It’s a couple of beers, a couple of sandwiches. What it clearly is not is a life ruiner.

Nevertheless, DRC have to go and spoil it.

The rhetoric in their emails and on their website consists of the same old anti-piracy scare tactics. Even though the company have absolutely no intention of suing, they give the impression they will, stating that: “The user who receives the notice, is liable for $150000 in damages, but if they click on the link supplied, they can enter a credit card and we will settle the matter.”

Now, bearing in mind that there is already a deal with the major US ISPs and the big music and movie studios to begin sending warnings which may, possibly, after more than half a dozen strikes, lead to the suspension of an Internet account, DRC lay it on thick in this department too.

Their initial email warnings state that the recipient risks having their internet cut off but their FAQ section on their website takes it a whole lot further.

“My Internet service has been shut off, how do I get it restored?” says the page’s first question. The supplied answer is simply ridiculous.

“Once you pay your settlement fee on this website or over the phone, we will notify the ISP that you have settled the matter with the copyright owner and they will restore your service,” is the response.

The notion that an ISP would cut off a subscriber based on the allegations of a company like this, following a simple, unsubstantiated DMCA notice, is unlikely to say the least. To suggest that the ISP would then switch that service back on after being notified that a $10.00 fine had been paid takes the statement into the absurd.

Another attempt at misdirection comes with the final statement on the page which declares: “Your ISP has verified that at the time your computer was used for copyright infringement, it was using the IP Address stated in the notice.” The implication here is that the ISP has verified that the email recipient has been infringing. They haven’t, they have simply forwarded an email.

The other problem with these settlement companies, whether they ask for $10.00 or $3,000, is that they always try to give the impression their work is about reducing piracy. According to figures quoted by PaidContent, “…unauthorized sharing of one client’s song decreased from 20,000 to 4,000 in the month after its settlement offers were issued.”

How is that achieved with a system like this? Until now there has been almost zero publicity for this company or its business model. So how does quietly sending emails to individuals who have already supposedly shared or downloaded the material in question reduce the number of new people doing the same? In fact, the entire model relies on new people coming aboard, or the revenue simply dries up.

So if there is no reason to pay these people, why are people doing so?

Well, as shown by the type of artist in the DRC client list, it could very well be that the older, more-easily scared generation is being targeted here, rather than the young and tech-savvy. The good news is, however, that most of the artists being ‘protected’ by DRC won’t have been hurt by any infringements.

Downloading content illegally online is no different from shoplifting or buying bootleg movies on the street, according to U.S. Immigration and Customs Enforcement (ICE) Director John Morton.

That’s music to the ears of firms that are struggling to protect their property from rampant online copyright violations.

The recording, film, software and video game industries have made fighting digital piracy and counterfeiting their top policy issue. Ditto for retailers and brand-name companies victimized by the knockoffs that proliferate on eBay and other websites. The U.S. Chamber of Commerce has an entire office devoted to the problem — as do the Department of Justice, the Secret Service and the State Department.

And then there’s ICE, which has made perhaps the largest impact by seizing a number of domains accused of flouting copyright laws.

The agency’s maiden domain seizures over the Thanksgiving holiday weekend last year made waves in the tech community and served notice that the government is taking a more aggressive approach to piracy.

“[Online copyright enforcement] is a very high priority for us. It’s a very high priority for the administration generally,” Morton told The Hill recently.

“American industry is literally under assault from counterfeiters. That’s not anything new, but the sheer size and sophistication of it have grown to levels that are really disconcerting for industries across the American economic perspective.”

While Morton is quick to note the leadership of Vice President Biden and the involvement of Attorney General Eric Holder and Homeland Security Secretary Janet Napolitano, he has become the face of the administration’s hawkish campaign against online copyright violations.

“I’m not trying to increase the number of cases by 20 percent and call that success,” Morton said. “I’m trying to change the face of IP enforcement. I’m trying to make a difference.”

A career law enforcement official and prosecutor, Morton was plucked from the Justice Department in 2009 and nominated by President Obama to lead a nascent agency (founded in 2003) more frequently associated with managing the many immigrant detainees across the U.S.

“It was my judgment when I first came into office that we needed to do much more. And I’d heard all of the criticisms, that it’s whack-a-mole, that there’s nothing you can do about it, and I just disagreed with that fairly negative assessment,” Morton said. “I thought there was a lot more we could do. I frankly don’t think we have a choice.”

Morton says the government must act to stop counterfeiting and piracy because they undermine innovation and threaten domestic job creation. He warned that drug companies and other firms would be reluctant to invest billions in research and development if their products are stolen before they turn a profit.

“We want people to shoot for the stars. We want people to be innovative. We want people to make America a great place for the latest cutting-edge idea or advance, and the only way you do that is have a system that protects people’s intellectual property investments,” Morton said, noting that such protections are a constitutional right.

But Morton and the entertainment industry must also contend with the still-prevailing impression that downloading or streaming copyrighted content online isn’t a crime. Despite the fact the sites linking to pirated content are, in Morton’s words, “criminal organizations,” the public still largely perceives the issue as a victimless crime.

Morton rejected that characterization, pointing to the severe decline in the recording industry over the past 10 years. He argued the piracy of music online has had a “very tangible effect on real people” working for recording studios, advertisers and the rest of the industry.

“The whole industry, one of America’s great industries, has seen a significant decline directly as a result of criminal behavior,” Morton said. Worse still, in his view, is that copyright violators are increasingly trying to mirror legitimate online retailers, increasing the amount they collect in fraud and lessening trust in the digital economy.

That’s where Operation In Our Sites comes in. Morton said the idea was to come up with an effective law enforcement strategy for sites based outside of the U.S. that are defrauding domestic intellectual property rights holders. The answer came in the form of domain names, which he said provide a tangible link to some of the worst offenders. The resulting seizures of over 140 domain names have made a significant dent in the online piracy market as other sites have gone offline or underground to avoid similar prosecution.

“We’ve taken care to focus on worst of the worst,” Morton said, adding that industry has been helpful in part because it’s the first time they’ve had a single government agency to whom they can report serial violators.

“John is an imaginative leader who thinks outside the box about new ways to address a complex enforcement problem,” said Recording Industry Association of America CEO Cary Sherman. “He’s not afraid to rock the boat to get results. I’m a big fan.”

“John is one of the best. He really gets it — he’s about change,” said Mike Robinson, the executive vice president of worldwide content protection for the Motion Picture Association of America. “He thinks outside the box and he’s very sincere.”

As for Operation In Our Sites, Robinson said, “It’s not an end-all, be-all, but it’s a step in the right direction.” He agreed that since Morton has taken over, it has become much easier to coordinate with government to combat piracy.

The son of an American father and a British mother, Morton grew up in Alexandria, Va., where he attended Episcopal High School before moving on to the Peace Corps. After graduating from the University of Virginia Law School, he joined the Justice Department, where he said he “never looked back” until the White House called with a job offer.

“It’s a good time for intellectual property rights enforcement,” Morton said. “There’s a lot going on; there has been and there will be in the coming years.”

With several bills in play on Capitol Hill that would significantly expand the government’s authority to go after violators, his prediction is likely to be true on a number of fronts going forward. While some privacy advocates may question the government’s methods, there’s no disputing that under Morton, ICE has managed to turn heads in the online world.

“Anyone who thinks this is about small-time crime on the corner of Fourth and Main is sadly mistaken,” Morton said, noting ICE and DOJ have never been more active on the issue than at present.

The Anti-Counterfeiting Trade Agreement (ACTA) will finally be signed this Saturday, October 1, in Japan.

The agreement has been years in the making, but its final passage comes only after a vociferous campaign by civil society and digital rights groups demanding an end to the secrecy, a place at the negotiating table, and a scaled-back set of copyright and patent provisions. They did pretty well—as we previously noted, US negotiators on ACTA were pushing for some of the toughest language on DRM, Internet disconnections, and more, but had to climb down in the face of international resistance and public pressure.

The secrecy was so intense—despite a blizzard of statements about transparency—that leaked diplomatic cables showed other countries objecting. An Italian official complained that it was "impossible for member states to conduct necessary consultations with IPR stakeholders and legislatures under this level of confidentiality." In Sweden, an ACTA negotiator told the US embassy that "the secrecy issue has been very damaging to the negotiating climate in Sweden."

Gigi Sohn, head of Public Knowledge, is still venting her discontent with the process. "Although the final version of the Agreement was an improvement from earlier versions, we continue to believe that the process by which it was reached was extremely flawed," she said in a statement today. "ACTA should have been considered a treaty, and subject to public Senate debate and ratification or, in the alternative, debated in an open and transparent international forum such as the World Intellectual Property Organization (WIPO). Instead, public interest groups and the tech industry had to expend enormous resources to force the process open to permit public views to be presented and considered."

But existing institutions with worldwide memberships wouldn't have gone along with increased intellectual property protections, so the ACTA countries—including the US, EU states, Mexico, Australia, Japan, and Canada—went it alone.

The milder ACTA won't be treated like a treaty—which requires Senate ratification in the US—but like an "executive agreement" that cannot alter US law. The US is sending Ambassador Miriam Sapiro, the deputy US Trade Representative, to Tokyo this weekend to sign the final document, though it's not yet clear how many of the countries that negotiated ACTA will actually sign it right away. Signing will be held open until May 1, 2013. http://arstechnica.com/tech-policy/n...s-saturday.ars

Last week's behind-the-scenes Bill C-32 post focused on the Ministerial Q & A prepared for the joint appearance of Canadian Heritage Minister James Moore and then-Industry Minister Tony Clement. With the next copyright bill coming very soon - possibly this week - today I am posting the more detailed clause-by-clause document provided to the Ministers, which reviews every provision in the bill, explains its rationale, and identifies changes to the current law.

There are few surprises here as the document provides a helpful analysis of the bill from the government's perspective. The exhaustive review provides a striking reminder that the government is extending liability under the Copyright Act for activities that may not even infringe copyright, thereby raising questions about the constitutionality of some provisions. This is the result of the digital lock rules, which necessitated a change in the infringement provision. The rationale notes (page 708):

The Bill introduces new causes of action (such as those relating to TPMs and RMIs) that could be used in civil lawsuits regardless of whether or not there has been an infringement of copyright.

The discussion of the digital lock provisions also emphasizes that the defences to copyright infringement are not available for circumvention of a digital lock (page 718):

Generally, an owner of copyright in a work or other subject matter for which this prohibition has been contrevened has the same remedies as if this were an infringement of copyright (proposed s.41(2)). However, a contravention of this prohibition is not an infringement of copyright and the defences to infringement of copyright are not defences to these prohibitions.

The government's own words on the digital lock provision confirm that they may be unconstitutional since they fall outside the boundaries of copyright.

The constitutionality of digital lock legislation has been examined in two articles by Canadian law professors. Both conclude that the provisions are constitutionally suspect if they do not contain a clear link to conventional copyright law. Their reasoning is that the constitution grants jurisdiction over copyright to the federal government, while jurisdiction over property rights is a provincial matter. Digital lock legislation that is consistent with existing copyright law - i.e. one that factors in existing exceptions - is more clearly a matter of copyright. The C-32 provisions are arguably far more about property rights: while they may be contained in the Copyright Act, they are focused primarily on the rights associated with personal property and expressly exclude copyright defences.

My colleague Jeremy deBeer conducted a detailed analysis of this issue in his article, Constitutional Jurisdiction over Paracopyright Laws. Many of his arguments were echoed in a 2009 article published in the Journal of Information Law and Technology by Professor Emir Aly Crowne-Mohammed and Yonatan Rozenszajn, both from the University of Windsor, which concluded that the anti-circumvention provisions found in Bill C-61 were unconstitutional. The authors argue that the DRM provisions were "a poorly veiled attempt by the Government to strengthen the contractual rights available to copyright owners, in the guise of copyright reform and the implementation of Canada’s international obligations."

The government's own analysis appears to confirm the constitutional concerns, as it points to reforms that expressly create liability even in the absence of copyright infringement. The solution is an easy one - by linking circumvention to actual copyright infringement (as education, consumer groups, and technology companies have advocated), the bill would more readily withstand a constitutional challenge. http://www.michaelgeist.ca/content/view/6026/125/

The Supreme Court’s 2011-2012 term begins Monday with arguments on the docket concerning everything from television profanity to warrantless GPS surveillance.

Cases we are tracking also include whether Congress may place public-domain works into copyright and whether “thought” can be patented.

The justices hear about six dozen cases annually and four dozen have been chosen so far. A number of crucial cases from the appellate courts are vying to be added.

The Justice Department, for instance, is asking the nine justices to review the constitutionality of a law making it a crime to lie about being a decorated military veteran. And artists want the high court to decide whether they should get “performance” royalties when a consumer purchases a digital download from iTunes. Those two petitions are pending.

Here is a summary of important cases that have been granted a hearing by the Supreme Court:

United States v. Jones
Oral Argument Nov. 8

At the Obama administration’s urging, the Supreme Court will decide whether the government, without a court warrant, may affix GPS devices on suspects’ vehicles to track their every move. The Justice Department told the court that “a person has no reasonable expectation of privacy in his movements from one place to another.” The administration is demanding that the justices undo a lower court decision that reversed the conviction and life sentence of a cocaine dealer whose vehicle was tracked via GPS for a month without a court warrant.

The issue is arguably one of the biggest Fourth Amendment cases in a decade — one weighing the collision of privacy, technology and the Constitution.

In 2001, the justices said thermal-imaging devices used to detect marijuana-growing operations inside a house amounted to a search requiring a court warrant.

The justices accepted the government’s petition to clear conflicting lower-court rulings on when warrants are required for GPS tracking. The administration, in its petition to the justices, said the U.S. Court of Appeals for the District of Columbia Circuit was “wrong” in August when it reversed the drug dealer’s conviction, which was based on warrants to search and find drugs in the locations where defendant Antoine Jones had traveled.

The government told the justices that GPS devices have become a common tool in crime fighting. Officers can even affix them to moving vehicles by shooting a dart, and recently, a student in California found a tracking device attached to the underside of his car, which the FBI later demanded back.

Three other circuit courts of appeal have already said the authorities do not need a warrant for GPS vehicle tracking.

Golan v. Holder
Oral Argument Oct. 5

The top court has agreed to rule on a petition by a group of orchestra conductors, educators, performers, publishers and film archivists about whether Congress may take works out of the public domain and grant them copyright status. A federal appeals panel, reversing a lower court, ruled against the group, which has relied on artistic works in the public domain for their livelihoods. The 10th U.S. Circuit Court of Appeals set aside arguments that their First Amendment rights were breached because they could no longer exploit those works without paying royalties.

For a variety of reasons, the works at issue, which are foreign and were produced decades ago, became part of the public domain in the United States but were still copyrighted overseas. In 1994, Congress adopted legislation to move the works back into copyright, so U.S. policy would comport with an international copyright treaty known as the Berne Convention.

Some of the works at issue include:
*H.G. Wells’ Things to Come
*Fritz Lang’s Metropolis
*The musical compositions of Igor Fyodorovich Stravinsky

The government argued in the long-running case that Congress adopted what was known as “Section 514” for its “indisputable compliance” with the convention and to remedy “historic inequities of foreign authors who lost or never obtained copyrights in the United States.”

“In other words, the United States needed to impose the same burden on American reliance parties that it sought to impose on foreign reliance parties. Thus, the benefit that the government sought to provide to American authors is congruent with the burden that Section 514 imposes on reliance parties. The burdens on speech are therefore directly focused to the harms that the government sought to alleviate,” the appeals court wrote.

Eric Schwartz, an intellectual property attorney with Mitchell Silberberg & Knupp in Washington, D.C., said the case boils down to whether Congress has the power under the Copyright Act to do what it did, and whether it was consistent with the First Amendment rights of the plaintiffs.

“I think the answer is ‘yes’ to both questions,” said Schwartz, former acting general counsel for the U.S. Copyright Office who helped draft the congressional legislation.

Anthony Falzone, executive director of the Fair Use Project at Stanford University and a plaintiff’s lawyer in the case, urged the justices to take the case.

“The point of copyright protection is to encourage people to create things that will ultimately belong to the public. While the scope and duration of copyright protection has changed over time, one aspect of the copyright system has remained consistent: once a work is placed in the public domain, it belongs to the public, and remains the property of the public – free for anyone to use for any purpose,” he wrote.

The justices have agreed to hear the government’s appeal of a lower court ruling invalidating the Federal Communications Commission’s broadcast decency rules. The 2nd U.S. Circuit Court of Appeals ruled last year that the regulations were “unconstitutionally vague” and produced a “chilling effect” on First Amendment speech.

The facts concern FCC rulings that “fleeting expletives” uttered during the 2002 and 2003 Billboard Music Awards were indecent. First Cher, then Nicole Richie, cursed during the shows aired on Fox. In the other dispute, the FCC said ABC violated decency standards when the network aired a brief nude shot of Charlotte Ross’ buttocks in NYPD Blue.

The FCC’s decency regulations are not enforced between 10 p.m. and 6 a.m., and only affect broadcast networks, not cable or internet programming.

The broadcasters claim the rules, which the government announced in 2004 would be strictly enforced, are so broad and vague that it’s unclear what is allowed, a position the government said was ridiculous. The appeals court in the Fox issue ruled that the FCC’s policy was unconstitutionally vague because “broadcasters are left to guess whether an expletive will be deemed ‘integral’ to a program or whether the FCC will consider a particular broadcast a ‘bona fide news interview.’”

In the ABC case, in which the FCC fined its affiliates $27,500 each, the appeals court said there was no “significant distinction” between the ABC and Fox cases, despite the ABC case dealing with scripted nudity. That’s because the appellate court said the FCC rules were “impermissibly vague.”

The government on appeal argues that “the court of appeals never asked what should have been the dispositive question: Whether Fox and ABC had fair notice that the expletives and nudity in the broadcasts under review could violate the commission’s indecency standards.”

Dennis Wharton, a vice president for the National Association of Broadcasters, said the government should not regulate broadcasters’ content.

“Responsible programming decisions by network and local station executives, coupled with program-blocking technologies like the V-chip and proper guidance of children by parents and caregivers, are far preferable to government regulation of program content,” Wharton said in a statement.

A highly nuanced and technical dispute between Mayo and Prometheus raises the question of whether “thought” is patentable. The issue surrounds a Prometheus patent concerning, in part, doctors’ subjective observations on how patients react to synthetic drug dosages to treat auto-immune diseases.

Prometheus holds patents to methods that assist doctors in figuring out — through observation and testing — the effective dosage of synthetic drugs to administer. The method includes performing drug tests with a Prometheus-patented kit.

Prometheus sued Mayo, arguing its use of the kits was patent infringement. The U.S. Federal Circuit Court of Appeals sided with Prometheus, saying the patents were valid because they outlined methods of altering a patient’s body chemistry with specific drugs.

Mayo claims that the patents, ultimately, are an observation of naturally occurring phenomenon — the body’s reaction to dosing levels.

Mayo told the Supreme Court that the patents at issue should be nullified. “The Prometheus patents claim a monopoly over consideration of a naturally occurring correlation between metabolites of a drug and the toxicity or efficacy of that drug,” the clinic said.

Steven Shapiro, the legal director for the American Civil Liberties Union, said Mayo should prevail.

“What they’re claiming a patent on is how you think about whether or not a drug is working. You can’t patent thought,” he said.

The government weighed in, too, arguing “provisions of the Patent Act permit the nuanced, fact-intensive distinction necessary to separate patentable from un-patentable inventions.”

Here is a summary of important cases awaiting the high court’s decision on whether to grant review:

American Society of Composers v. United States

The court has been petitioned to decide whether downloading a song from iTunes, for example, is a public performance that requires that artists get paid additional royalties — just as the rock band Queen gets extra royalties each time We Are the Champions is blasted over the public-address system at a football stadium.

The American Society of Composers, Authors and Publishers, better known as ASCAP, is asking the justices to review lower court decisions that said downloading songs from iTunes, Amazon, eMusic or even music-sharing services does not count as a public performance, and hence artists are not entitled to additional royalties.

The 2nd U.S. Circuit Court of Appeals ruled against ASCAP more than two years ago. The group, with 400,000 members, maintains that the Copyright Act demanded the extra royalties, which could amount to tens of millions of dollars in revenue annually. The appeals court said that downloading a music file is more aptly characterized as “reproducing” that file, and not subject to performance rights.

The appeals court said “perform,” as outlined in Section 101 of the Copyright Act, means to “recite, render, play, dance or act it either directly or by means of any device or process.”

ASCAP licenses the right to perform publicly the musical works of its members to a diverse array of music users, including internet and network-based sites and services, television and radio stations, restaurants, hotels and sports arenas.

The artists told the justices in their petition that the case was of “vital importance.”

“If the Second Circuit’s decision stands, songwriters and music publishers across the nation will be denied their statutory right to receive royalties for public performances when their works are downloaded over the internet — which is already one of the most prevalent means for the dissemination of copyrighted musical works,” they wrote.

The government, backed by Solicitor General Donald Verrilli Jr., a former Recording Industry Association of America attorney, urged the justices to reject ASCAP’s petition.

“Because the download itself involves no dancing, acting, reciting, rendering, or playing of the musical work encoded in the digital transmission, it is not a performance of that work,” the government told the justices.

United States v. Alvarez

The Justice Department is asking the justices to decide the constitutionality of a 2006 law making it a criminal offense to lie about being decorated for military service.

The Stolen Valor Act makes it unlawful to falsely represent, verbally or in writing, to have been “awarded any decoration or medal authorized by Congress for the Armed Forces of the United States, any of the service medals or badges awarded to the members of such forces, the ribbon, button, or rosette of any such badge, decoration, or medal, or any colorable imitation of such item.”

A federal appeals court declared the law unconstitutional last year. The measure imposes penalties of up to a year in prison.

The issue before the justices comes from the 9th U.S. Circuit Court of Appeals, which ruled that if it were to uphold the law, “then there would be no constitutional bar to criminalizing lying about one’s height, weight, age, or financial status on Match.com or Facebook, or falsely representing to one’s mother that one does not smoke, drink alcoholic beverages, is a virgin, or has not exceeded the speed limit while driving on the freeway.”

The case concerns defendant Xavier Alvarez. In 2007, he falsely claimed that, as a Marine, he had won the Medal of Honor. He made the statement publicly at a meeting of a suburban Los Angeles water board, to whose board of directors he had just been elected.

The government said Alvarez should be prosecuted because the speech fits into the “narrowly limited” classes of speech, such as defamation, that are historically unprotected by the First Amendment. In its petition, it told the justices that the act “plays a vital role in safeguarding the integrity and efficacy of the government’s military honors system.”

Congress, when adopting the law, said fraudulent claims about military honors “damage the reputation and meaning of such decorations and medals.”

Alvarez was the first person charged and convicted under the act, which has since ensnared dozens of defendants. Alvarez pleaded guilty, was fined $5,000 and ordered to perform 416 hours of community service. He appealed his conviction to the 9th Circuit.

While making a security-themed presentation on the new Copyright (Infringing File-Sharing) Amendment Act in Wellington, lawyer Michael Wigley was brief beyond the expectations of his audience.

If you are the account holder for an organisation running a network, he said, there are two things you can do to protect yourself from penalties under the Act. “Get all your IP addresses from APNIC [the Asia-Pacific Network Information Centre] or stop all peer-to-peer traffic,” he said. “That’s my talk, thank you very much.”

As APNIC is an overseas organisation rather than a local ISP, it will not be obliged by the law to act on a request made by a content owner (typically a music publisher or film company) about allegedly copyright-infringing downloads or uploads, Wigley said, elaborating on his simple message.

If three notices alleging offences are served and are not convincingly rebutted because the company cannot trace the offender, then five-figure fines can be levied. The penalty provisions also include a clause allowing an organisation’s network to be shut down, if that provision is activated at some stage by an Order in Council.

“And it’s all about the customer of the ISP, which in our case is the corporation, not the end user,” Wigley said. “So you get stuck with [the consequences of infringing actions by] the ratbags in your office.”

But with an APNIC address, “for various technical reasons connected with the legislation, you’re not at risk of getting caught out.

“If you’re a city council or a university, with transient users or students using your network, this is the only way you can deal with it,” he told the meeting.

There was some discussion of whether a content owner, even a major international movie or music company, would risk the bad publicity consequent on fining or disconnecting a local government body or university.

That was an unknown quantity, said Wigley.

Other attendees suggested that since the notice procedure hides the identity of an alleged offender from the accuser in the initial stages of an action, the content owner will not know it is dealing with a high-profile organisation until it is some way down the track towards prosecution.

Wigley acknowledged to Computerworld that IPv6 may offer a third solution, by allowing every staff member to be assigned an individual address, so the real culprit can be tracked and internal action taken before the matter gets to the Copyright Tribunal or court. http://computerworld.co.nz/news.nsf/...le-sharing-act

Judge: Righthaven Lacked Standing, Abused Copyright Act
Steve Green

Righthaven LLC of Las Vegas lacked standing to file copyright infringement lawsuits in Colorado under its lawsuit contract with the Denver Post and abused the Copyright Act in doing so, a federal judge ruled Tuesday.

Senior U.S. District Judge John L. Kane in Denver granted summary judgment for Righthaven lawsuit defendant Leland Wolf and the It Makes Sense Blog against Righthaven.

"In light of the need to discourage the abuse of the statutory remedies for copyright infringement, I exercise my discretion under the Copyright Act and order that Righthaven shall reimburse Mr. Wolf's full costs in defending this action, including reasonable attorney's fees," Kane's order said.

A Las Vegas attorney representing Righthaven, Shawn Mangano, said the company would appeal. Righthaven seems to be pinning its hopes on the fact that copyright case law isn't well established in the 10th U.S. Circuit Court of Appeals, which includes Colorado, something Kane noted in his ruling.

The Wolf case, litigated as a test case, was the lone active Righthaven case among the 34 still open in Colorado. Kane's ruling indicates the other 33 will now be dismissed as well.

In total, Righthaven had filed 57 lawsuits over Denver Post material in Colorado since Jan. 20. These suits alleged that websites, message board posters and bloggers had used a Post TSA pat-down photo without permission -- even though the photo had gone viral on the Internet and had been distributed to news outlets by The Associated Press. Many defendants said they had no idea it came from Denver or the Denver Post, or that it was subject to copyright protection.

Twenty-three of the lawsuits had been closed prior to Kane's ruling after they were settled, voluntarily dismissed or otherwise closed under undisclosed terms.

The best known of the settling defendants in Colorado was white supremacist David Duke. He settled under undisclosed terms.

Kane's order Tuesday suggests Righthaven had lacked standing in those 23 cases as well.

Righthaven, which also sues over Las Vegas Review-Journal material, has filed 275 lawsuits alleging copyright infringement since March 2010. The lawsuit campaign stalled this summer amid problems with its standing to sue in Nevada and charges that its suits were abusing the court system: they involved dubious legal claims, were filed without warning and generally targeted defendants who had no idea they might be infringing on copyrights.

Righthaven, however, has insisted the suits are needed to crack down on extensive ongoing infringements of newspaper content.

"The issue presented in this case (is) whether a party with a bare right to sue has standing to institute an action for infringement under federal copyright law," Kane's ruling said. "I hold that the answer to that question is a forceful, yet qualified, 'no.'"

This is the same problem Righthaven had with its lawsuits over Review-Journal material -- while Righthaven claimed to own the material it was suing over, the Review-Journal and the Denver Post maintained control of the content and were the true owners, five judges have now ruled.

Four federal judges in Las Vegas have dismissed seven Righthaven lawsuits because, under precedent in the 9th U.S. Circuit Court of Appeals including Nevada, copyright infringement plaintiffs must have actual ownership of the material they sue over, not just the bare right to sue.

Defense attorneys say this legal concept recognizes that copyrights have a special place in the law encouraging and protecting creativity and fostering the arts and political discussion, and shouldn't be bartered for lawsuit purposes. Under the concept of fair use, which has doomed three Righthaven lawsuits so far, people can use the copyrighted works of others within limits.

Righthaven says it has corrected its standing problem for the Review-Journal lawsuits with a new lawsuit contract with R-J owner Stephens Media LLC, though two judges have expressed skepticism it currently has the right to sue and there have been no definitive rulings on the issue.

Righthaven has not amended its lawsuit contract with the Denver Post, whose owner has chosen not to renew the contract for Righthaven's copyright protection services.

In looking at the Copyright Act, Righthaven's copyright assignment agreement with MediaNews Group Inc., owner of the Denver Post, and their copyright assignments, Kane found language in the agreement about a "purported assignment of 'rights requisite'" is "meaningless."

"MediaNews Group retained all rights to exploit the work; no legal interest ever changed hands," Kane wrote in his ruling. "Thus, when read together, the assignment and the copyright assignment agreement reveal that MediaNews Group has assigned to Righthaven the bare right to sue for infringement – no more, no less. Although the assignment of the bare right to sue is permissible, it is ineffectual.

"Righthaven is neither a ‘legal owner’ or a ‘beneficial owner’ ... and it lacks standing to institute an action for copyright infringement," his ruling said.

Copyright law expert Eric Goldman, associate professor at the Santa Clara University School of Law and director of its High Tech Law Institute, said Kane "did a careful `first-principles' review of the law on copyright assignment" rather than simply relying on precedent in the 9th U.S. Circuit Court of Appeals -- precedent that has been used against Righthaven in its Nevada cases.

"This makes the opinion more likely to survive an appeal. In addition to completely rejecting Righthaven's assignment agreement, the opinion bristles with hostility towards Righthaven's basic business model. It's hard to imagine a more resounding judicial rejection of Righthaven's efforts," Goldman said.

Tuesday's ruling by Kane suggests that Righthaven also lacked standing to file four more lawsuits over Denver Post content in Nevada – including two that were settled under undisclosed terms against deep-pocketed defendants Matt Drudge and Citadel Broadcasting Corp. of Las Vegas.

Kane’s ruling also suggested Righthaven didn’t have standing to sue Tea Party activist Dana Eiser in federal court in South Carolina over a Denver Post column. His ruling is not binding on that court, and her dismissal motion is pending.

"This ruling (by Kane) was even worse for Righthaven than many of the earlier rulings. And every court that has issued a final ruling has ruled against Righthaven. At some point you have to wonder how much longer they will continue fighting," said Todd Kincannon, Eiser's attorney who also has been agitating against Righthaven in its Colorado and Nevada cases.

It hasn't been determined whether defendants that settled with Righthaven -- under its now-invalidated Denver Post lawsuit contract -- have any recourse against Righthaven or the Post.

Kane's order that Righthaven pay Wolf's legal fees could turn out to be expensive for Righthaven, especially if it's expanded to include other Colorado cases where defendants had hired attorneys to fight Righthaven.

In the Wolf case alone, his attorneys have already asked Kane for an injunction barring Righthaven from dissipating assets until they receive their legal fees for representing him.

They've demanded Righthaven put up a $25,000 bond to ensure they get paid, something Kane has not yet ruled on.

Wolf was represented by Randazza Legal Group of Las Vegas as well as Denver attorney Andrew John Contiguglia.

Also fighting Righthaven in the case were friends of the court the Electronic Frontier Foundation of San Francisco and Kincannon's group Citizens Against Litigation Abuse.

Two counterclaims had been filed against Righthaven in the Colorado cases. It's unclear whether they'll be able to proceed after Kane's ruling Tuesday.

One was filed by defendant BuzzFeed. That claim sought to represent all the Colorado defendants in a class-action charging Righthaven’s lawsuits there were part of an extortion litigation business model.

The other was filed by Freedom Force Communications in a lawsuit involving the TSA pat-down photo allegedly showing up on the Minot, N.D.-based sayanythingblog.com website. That counterclaim charged Righthaven didn’t have standing to assert the copyright infringement claims against Freedom Force and its codefendants.

Kane's ruling Tuesday affects the open Righthaven cases in Denver against Wolf along with suits against David Rozzell, Shaquan Shamar Brown, Tripso Inc., Glenn Church, Ron Eldridge, A Small Corner of Sanity, BuzzFeed Inc. and Iconix Brand Group Inc.

(This list includes just the first named defendant in each case. In several cases, there are multiple defendants).

In another Righthaven development Tuesday, Righthaven filed an "urgent motion" with the 9th U.S. Circuit Court of Appeals asking it to stay action in Righthaven's lawsuit against Kentucky message board poster Wayne Hoehn over a Review-Journal column.

After Hoehn prevailed in the suit with standing and fair use victories, U.S. District Judge Philip Pro had, as of Tuesday, not acted on Righthaven's request that he stay his order that Righthaven pay Hoehn's $34,045 in legal fees while Righthaven appeals.

Attorneys for Hoehn, in the meantime, have asked that Pro find Righthaven in contempt for not paying, appoint a receiver over Righthaven, and that Pro or the court clerk issue an order allowing the U.S. Marshals Service to seize its assets.

"These sweeping contempt and judgment enforcement efforts unquestionably subject Righthaven to the immediate threat of irreparable harm by seeking to appoint a receiver over its affairs, as well as to seize and liquidate its tangible and intangible assets, which include the company’s intellectual property rights in and to copyright protected content that is directly at issue in this case, as well as those at issue in several other appeals pending before this court along with content at issue numerous cases pending in the district of Nevada and the district of Colorado," Mangano argued in Tuesday's motion.

In a new legal argument, he said that staying the Hoehn case while it's appealed is in the "public interest."

"Resolution of these issues impacts not just Righthaven and Hoehn, but it impacts a vast array of businesses and individuals utilizing the Internet on a daily basis. For instance, Righthaven’s appeal implicates the parameters under which non-content generating copyright holders can enforce rights in and to assigned content. Assignment of copyright-protected content occurs throughout the country on a daily basis. The public would unquestionably benefit from additional case law that sets forth the requirements for properly conveying ownership in and to copyright protected content together with the right to sue for accrued infringement claims," Mangano wrote in his order. "Granting a stay ensures that these issues will be presented to this court for a decision.

"Denying stay relief, however, necessarily raises the possibility that Righthaven may be forced to file bankruptcy to protect is intellectual property and propriety assets from seizure and liquidation, which would have grave implications for the company’s ability to prosecute the appeal in this case as well as its appeal in other cases from the District of Nevada," Mangano wrote in his brief.

Righthaven has now twice threatened bankruptcy even though it's financially backed by Arkansas investment banking billionaire Warren Stephens and his family, who also own the Review-Journal.

The company's financial prospects appear to have dimmed with action on its lawsuits stalled this summer, meaning Righthaven hasn't collected much settlement revenue. And now, it appears its entire investment in the 34 open Colorado cases was wiped out by Tuesday's ruling by Kane. http://www.vegasinc.com/news/2011/se...ed-copyright-/

Best-Selling Author Gives Away His Work
Julie Bosman

A publishing industry that is being transformed by all things digital could learn some things from Paulo Coelho, the 64-year-old Brazilian novelist. Years ago he upended conventional wisdom in the book business by pirating his own work, making it available online in countries where it was not easily found, using the argument that ideas should be disseminated free. More recently he has proved that authors can successfully build their audiences by reaching out to readers directly through social media. He ignites conversations about his work by discussing it with his fans while he is writing.

That philosophy has helped him sell tens of millions of books, most prominently “The Alchemist,” an allegorical novel that has been on the New York Times best-seller list for 194 weeks and is still a regular fixture in paperback on the front tables of bookstores.

This week Mr. Coelho releases his latest novel, “Aleph,” a book that tells the story of his own epiphany while on a pilgrimage through Asia in 2006 on the Trans-Siberian Railway. (Aleph is the first letter of the Hebrew alphabet, with many mystical meanings.) While Mr. Coelho spent four years gathering material for the book, he wrote it in only three weeks.

Spreading the word about the book should be easy; he has become a sort of Twitter mystic, writing messages in English and his native Portuguese and building a following of 2.4 million people. (A recent example: “When your legs are tired, walk with your heart.”) In 2010 Forbes named him the second-most-influential celebrity on Twitter, behind only Justin Bieber.

Mr. Coelho continues to give his work away free by linking to Web sites that have posted his books, asking only that if readers like the book, they buy a copy, “so we can tell to the industry that sharing contents is not life threatening to the book business,” as he wrote in one post.

From his home in Geneva, Mr. Coelho spoke about his new book, his feeling of connection to Jorge Luis Borges and his leisure time spent networking with his fans on Facebook and Twitter. Following are edited excerpts.

Q. The protagonist of your new novel, “Aleph,” sounds familiar: best-selling author, world traveler, spiritual seeker. How autobiographical is this book?

A. One hundred percent. These are my whole experiences, meaning everything that is real is real. I had to summarize much of it. But in fact I see the book as my journey myself, not as a fiction book but as a nonfiction book.

Q. The title of the book, “Aleph,” mirrors the name of a short story by Borges. Were you influenced by him?

A. He is my icon, the best writer in the world of my generation. But I wasn’t influenced by him, I was influenced by the idea of aleph, the concept. In the classic tradition of spiritual books Borges summarizes very, very well the idea of this point where everything becomes one thing only.

Q. When did you decide to become a writer?

A. It took me 40 years to write my first book. When I was a child, I was encouraged to go to school. I was not encouraged to follow the career of a writer because my parents thought that I was going to starve to death. They thought nobody can make a living from being a writer in Brazil. They were not wrong. But I still had this call, this urge to express myself in writing.

Q. Your most famous book, “The Alchemist,” has sold 65 million copies worldwide. Does its continuing success surprise you?

A. Of course. It’s difficult to explain why. I think you can have 10,000 explanations for failure, but no good explanation for success.

Q. You’ve also had success distributing your work free. You’re famous for posting pirated versions of your books online, a very unorthodox move for an author.

A. I saw the first pirated edition of one of my books, so I said I’m going to post it online. There was a difficult moment in Russia; they didn’t have much paper. I put this first copy online and I sold, in the first year, 10,000 copies there. And in the second year it jumped to 100,000 copies. So I said, “It is working.” Then I started putting other books online, knowing that if people read a little bit and they like it, they are going to buy the book. My sales were growing and growing, and one day I was at a high-tech conference, and I made it public.

Q. Weren’t you afraid of making your publisher angry?

A. I was afraid, of course. But it was too late. When I returned to my place, the first phone call was from my publisher in the U.S. She said, “We have a problem.”

Q. You’re referring to Jane Friedman, who was then the very powerful chief executive of HarperCollins?

A. Yes, Jane. She’s tough. So I got this call from her, and I said, “Jane, what do you want me to do?” So she said, let’s do it officially, deliberately. Thanks to her my life in the U.S. changed.

Q. And now you’re a writer with one of the most prominent profiles online. Are you a Twitter addict?

A. Yes, I confess, in public. I tweet in the morning and the evening. To write 12 hours a day, there is a moment when you’re really tired. It’s my relaxing time.

Q. That seems to be the opposite of the approach taken by writers like Jonathan Franzen, who blindfold themselves and write their books in isolation.

A. Back to the origins of writing, they used to see writers as wise men and women in an ivory tower, full of knowledge, and you cannot touch them. The ivory tower does not exist anymore. If the reader doesn’t like something they’ll tell you. He’s not or she’s not someone that is isolated.

Once I found this possibility to use Twitter and Facebook and my blog to connect to my readers, I’m going to use it, to connect to them and to share thoughts that I cannot use in the book. Today I have on Facebook six million people. I was checking the other day Madonna’s page, and she has less followers than I have. It’s unbelievable.

The ever-rising cost of textbooks is an unavoidable nightmare for many students and a hot topic for those who see the system as corrupt. Now, a site on a mission to dismantle what it says amounts to a publishing monopoly has come up with another solution to bring cheap and free textbooks to students. The publishers are going to hate it, but the site doesn’t care. It insists that it’s students who are being abused by publishers, not the other way round.

During August, just before the start of the new school term, we reported on LibraryPirate, a site with a mission of providing college students with an alternative to continuously rising textbook prices. Bemoaning what he sees as greedy profiteering, LibraryPirate’s admin says the year-old site’s aim is clear.

“Our mission is simple and specific,” he told TorrentFreak. “To revolutionize the digital e-textbook industry and change it permanently.”

Now the site is stepping up its assault against “textbook monopolists” by offering a brand new service to not only reduce the costs of digital textbook rentals, but to turn that temporary access to an educational necessity into permanent ownership.

Library Pirate

The initiative the site is running is called “Hire-a-Pirate” and the publishers aren’t going to like it one bit. Many students, on the other hand, won’t share their view. This is how it works.

First, the student lets LibraryPirate know the title of the book they’re looking for. Then, site staff locate the product on eTextbook rental services and advise the student of the current rental price. An example shown to us was a book costing $200, but with a time-limited digital rental copy also available at $118.50.

Participating students are then asked to purchase a gift certificate from the official seller for the full amount ($118.50 in our example) and send the gift code to LibraryPirate. Site staff then rent the book on the student’s behalf.

“After a little bit of this and a little of that, we strip the DRM from the PDF and contact the user letting them know the book is ready via torrent,” says LP’s admin. “The student can now carry the textbook with them anywhere for as long as they want, allowing the PDF to be easily read on any device.”

The idea is that not only does a rental copy get turned into the unrestricted real thing, but students can choose to split the cost of obtaining a book between friends – 10 friends contributing means just $11.85 each. For future students, however, the cost of obtaining the same book reduces to zero.

“Every textbook purchased through the Hire-a-Pirate program will be added to the LibraryPirate torrent database. If you do not have time to scan books, this is an excellent way to help the cause and save money at the same time,” adds LP.

However, for those who already have hard copies, the time to take a few photographs and a desire to share, LibraryPirate has just released a new tool to make eBook creation a lot simpler.

LPBR is a piece of software created by LP member RiddleRiot which turns any digital camera into “a lean mean textbook scanning machine.”

After placing the book on a black background and photographing its pages, a couple of clicks later and an eBook comes out the other end.

“LPBR will crop, sharpen and re-size the entire folder of camera scan images into one easily readable PDF book,” says LP’s admin. “It’s so easy to scan a textbook now, even a college student can do it. During our testing, we were able to scan and convert one 500-page book in under 2 hours.”

Of course, with both the Hire-a-Pirate service and the LPBR software, what we’re looking at here is copyright infringement, but LP’s admin insists that since students are being abused by a broken education system that leaves them no other option than to spend ridiculous sums of money on textbooks, there is only “one path to moral high ground.”

The “private theft of education” must be combated, he concludes, and that can only come about by striking the monopolies where it counts – in their pocketbooks.

So, is ripping DRM from textbooks and sharing them for the purposes of gaining an education more morally acceptable than doing the same with movies, music and games? Or is it just an elaborate excuse to frame copyright infringement in a righteous manner?

What comes first, the rights of the publishers or the need for a fairer system towards educational enlightenment?

As e-books continue to grow in popularity, there's a seemingly unwinnable debate over which is better: digital books or their paper-based counterparts. Both have their advantages, and it'll likely be quite a while before paper books come close to disappearing. But in addition to all of the benefits that e-books bring, is it possible that they may also make it more difficult for books to be banned in the future?

Out of the American Library Association's top 10 most-banned books of last year, all but three of them are available for purchase from Amazon's Kindle Store, notes a blog post from Beyond Black Friday.

Every year, the ALA publishes a list of the books that are most frequently "challenged" in public schools and libraries. Most of the objections are for sexual content or offensive language, while a few hundred are made simply for references to homosexuality or content considered by some groups to be "anti-family."

Last year's list includes mostly books that are readily available on Kindle and any platform for which there is a Kindle app, such as iOS or Android. The only ones that aren't are a children's story about gay penguins, a "multicultural queer youth anthology" (which appears to be rare in paperback as well) and a controversial book about a Native American boy. A few of the books included on the list are actually best-sellers in the Kindle Store, including Twilight and The Hunger Games.

Of course, the banned book list applies primarily to public schools and libraries. Most books forbidden from these venues could always be purchased elsewhere in paper format, but their inclusion on the Kindle platform makes them even more widely available, often at a lower price than the paperback or hardcover version. Presumably, Amazon's new Kindle library book lending program won't necessarily make a huge difference by itself, since those e-books are chosen by the library itself. https://www.readwriteweb.com/archive..._ban_books.php

Frequently Challenged Books of the 21st Century

Each year, the ALA's Office for Intellectual Freedom compiles a list of the top ten most frequently challenged books in order to inform the public about censorship in libraries and schools. The ALA condemns censorship and works to ensure free access to information.

A challenge is defined as a formal, written complaint, filed with a library or school requesting that materials be removed because of content or appropriateness. The number of challenges reflects only incidents reported. We estimate that for every reported challenge, four or five remain unreported. Therefore, we do not claim comprehensiveness in recording challenges.

Background Information from 2001 to 2010

Over the past ten years, American libraries were faced with 4,660 challenges.

1,536 challenges due to “sexually explicit” material;
1,231 challenges due to “offensive language”;
977 challenges due to material deemed “unsuited to age group”;
553 challenges due to “violence”; and
370 challenges due to “homosexuality”.

Further, 121 materials were challenged because they were “anti-family,” and an additional 304 were challenged because of their “religious viewpoints.”

1,720 of these challenges (approximately 37%) were in classrooms; 30% (or 1,432) were in school libraries; 24% (or 1,119) took place in public libraries. There were 32 challenges to college classes and 106 to academic libraries. There were isolated cases of challenges to materials made available in or by prisons, special libraries, community groups, and student groups. The majority of challenges were initiated by parents (almost exactly 48%), while patrons and administrators followed behind (10% each).
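As a quick sanity check on those figures (mine, not the ALA's), the venue percentages can be recomputed from the raw counts. Note that the per-reason counts listed above add up to more than the 4,660 total, presumably because a single challenge can cite more than one reason.

```python
# ALA challenge figures, 2001-2010, as quoted above.
total = 4660

# Venue breakdown: recompute the quoted percentages from the raw counts.
venues = {"classrooms": 1720, "school libraries": 1432, "public libraries": 1119}
for venue, count in venues.items():
    print(f"{venue}: {count} ({count / total:.1%})")
# → classrooms: 1720 (36.9%), school libraries: 1432 (30.7%),
#   public libraries: 1119 (24.0%)

# The per-reason counts overlap: they sum past the 4,660 total.
reasons = [1536, 1231, 977, 553, 370, 121, 304]
print(sum(reasons))  # → 5092, more than 4660
```

The 36.9% and 24.0% figures match the article's "approximately 37%" and "24%"; 30.7% rounds a little above the quoted "30%".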

2004: 1) The Chocolate War, by Robert Cormier; 2) Fallen Angels, by Walter Dean Myers; 3) Arming America: The Origins of a National Gun Culture, by Michael A. Bellesiles; 4) Captain Underpants (series), by Dav Pilkey; 5) The Perks of Being a Wallflower, by Stephen Chbosky; 6) What My Mother Doesn’t Know, by Sonya Sones; 7) In the Night Kitchen, by Maurice Sendak; 8) King & King, by Linda de Haan; 9) I Know Why the Caged Bird Sings, by Maya Angelou; 10) Of Mice and Men, by John Steinbeck

Students at Silver Creek High School in Longmont, Colo., held a “graffiti debate” on censorship on Wednesday: Should schools block Web sites? On sheets of white butcher paper hanging in the library, they wrote lists of the pros and cons of online access.

New Trier High School in the Chicago suburbs surveyed students about blocked Web sites after loosening its own Internet filters this year. And in New York City, students and teachers at Middle School 127 in the Bronx sent more than 60 e-mails to the Department of Education to protest a block on personal blogs and social media sites.

These were some of the efforts marking the first Banned Websites Awareness Day, organized by the American Association of School Librarians as an offshoot of Banned Books Week.

Carl Harvey, the association’s president, said that as more schools had embraced online technologies, there had been growing concern over schools that block much of the Internet.

But some school leaders and education advocates have argued that the Internet can be a distraction in the classroom, and that blocking social media is also a way to protect students from bullying and harassment at school.

“I think students should have unfettered access to the library,” said William Fitzhugh, editor of The Concord Review, which publishes history papers written by high school students, adding that many children already spend too much time on the Internet.

Phil Goerner, a Silver Creek librarian, said the focus on banned Web sites encouraged students to wrestle with the thornier issues of censorship. He asked his students to consider whether schools should block sites espousing neo-Nazi or racist ideas. “It makes them think about it in deeper ways than if they were just to say, ‘No, don’t block it,’ ” he said.

Mr. Goerner said he decided to organize the graffiti debate as a reminder to students that censorship takes away a person’s voice or, in this case, online privileges. Silver Creek unblocked many social media sites, including Facebook and Twitter, two years ago after recognizing that they could provide learning opportunities, he said.

Similarly, New Trier High School stopped blocking many sites this year after teachers voiced concerns that the filtering had grown oppressive.

Entire categories of Web sites had been blocked, including those that involved games, violence, weapons, even swimsuits, said Judy Gressel, a librarian. “It just got to the point that it became hard to conduct research,” she said, adding that students could not read sites about, say, military weapons for a history paper.

Deven Black, a librarian at Middle School 127 in the Bronx, also said that filters had blocked a range of useful Web sites. YouTube and personal blogs where educators share resources can have value, he said. “Our job is to teach students the safe use of the Internet. And it’s hard to do that if we can’t get to the sites.”

New Canaan High School, in Connecticut, cut off all access to Facebook, YouTube and Twitter just for the day to show solidarity with schools without access.

“It’s not even lunchtime, and I’m already dying,” said Michael DeMattia, 17, a senior, who carries a laptop to school.

In his Advanced Placement Biology class, where lab groups have created a Facebook thread to collaborate and share data, he could not log in. In honors comparative literature, his classmates were unable to show a YouTube video during a presentation.

Amid a tabloid phone hacking scandal that has shaken Britain, the country's main opposition Labour Party on Tuesday suggested reporters guilty of malpractice could be banned from working as media professionals.

Proposing tough reforms of the news industry's system of self-governance, Labour lawmaker Ivan Lewis said Britain's media needed new scrutiny similar to the rules which govern medicine.

"As in other professions, the industry should consider whether people guilty of gross malpractice should be struck off," Lewis told the party's annual convention.

The phone hacking scandal has already forced the closure of the 168-year-old News of the World tabloid, and seen the resignation of top police officers and media executives.

A police inquiry has so far arrested 16 people — though one person has already been exonerated — and seen media mogul Rupert Murdoch brought before a parliamentary committee to explain the actions of his employees.

Lewis said the crisis over reporters and private detectives illegally obtaining cell phone voicemail messages had exposed the limitations of current press scrutiny.

Britain's media watchdog, the Press Complaints Commission, is funded by the industry itself and can demand that a newspaper publish an apology, but has no power to issue fines.

"We need a new system of independent regulation including proper like-for-like redress which means mistakes and falsehoods on the front page receive apologies and retraction on the front page," Lewis told the rally in the northern England city of Liverpool.

Critics said that in order to "strike off" reporters, the media industry would need a professional register like the list held by the General Medical Council, which licenses and monitors the performance of doctors.

"This is another half-baked idea from a weak Labour leadership — we need a free, fair press, not some state registry for journalists," said Louise Mensch, a lawmaker with the governing Conservative Party and member of Parliament's culture, media and sport committee.

In his major address to the Labour Party convention, leader Ed Miliband told delegates that media malpractice was part of a wider problem in British society.

Bankers pursuing risky trades, rioters who joined looting during unrest in England last month and lawmakers exposed in 2009 over their wild expense claims all shared the same lack of regard for others, he said.

Miliband described what he termed a "something for nothing culture — take what you can, fill your boots, who cares as long as you can get away with it?"

He said Britain too often rewarded "not the right people with the right values, but the wrong people with the wrong values."

In addition to his party's plan to reform the media, Miliband vowed to overhaul his party's poor record on the economy and to offer an alternative plan to stimulate growth.

"I am determined to prove to you that the next Labour Government will only spend what it can afford. That we will live within our means, that we will manage your money properly," Miliband pledged.

The Labour Party was ousted from office in May 2010 after 13 years in power, wounded by the global financial crisis, public resentment over the wars in Iraq and Afghanistan and ex-prime minister Gordon Brown's unpopularity.

Miliband replaced Brown as party leader following a surprise narrow victory over his elder brother David — Britain's former foreign secretary — in a leadership contest last year.

He told the convention he would remold his party to represent ordinary people who were appalled by greedy legislators, sneaky journalists or rampaging rioters.

Suppression of Dead Sea Scrolls, Anti-Hacker Mentality Have More in Common Than You Think

Control of information, more than any other issue, defines our ethics, societies and chance to improve
Kevin Fogarty

I'm all for putting every scrap of arcane information online for the masses to enjoy. You never know when you'll find a Weiner Tweet or something really salacious about Afghan police trainers hiring "dancing boys" to entertain at banquets, or something of equal global importance.

The Dead Sea Scrolls – the oldest surviving copies of biblical text – were posted today in the most complete version ever published anywhere for any reason – though that's not difficult with this particular set of documents.

The scrolls have been translated and published before – any number of times, in academic journals, popular books and everywhere else.

They've even been translated and available online before. Here is the Gnostic Society's version; this one from the Library of Congress includes photos, translations and a lot of background on the scrolls, history and archeology of the region, especially the "settlement at Qumran," the designation that will stand in for the name of the town the Dead Sea Scroll authors lived in until archeologists find out what the settlers called it.

Control of content is the key to control

The content and decisions about who has access to the original copies of the scrolls – and how much of them – has always been tightly controlled. Some concerns were legitimate ones about preserving fragile documents from the damage they'd suffer from repeated bright-light photography, x-ray and other tests, not to mention constant pawing by visiting scholars and VIPs who wanted to drop by the lab just to satisfy their own curiosity.

The biggest controversy has been the fight over access rights, which has been warped by disputes between Israel and Jordan over who actually owns the scrolls, and by the scholars given control over them, who used national rivalries, rivalries between Christian and Jew and eventually even rivalries among different interpretations of Judaism to keep the scrolls in an environment that was highly defensive, both physically and intellectually.

The project that debuted today – a collaboration between Google and the Israel Museum – puts images of all of the available text online, along with translations in English.

At the time the scrolls were written, sometime around 70 A.D., the Temple in Jerusalem was not only the most holy place in Judea, it was also the center of religious, civil and divine power for Jews worldwide.

When the occupying Romans got sick of their version of the Middle Eastern Question and pulled the Temple down, they didn't just inconvenience worshipers who had to go find a local synagogue. They destroyed the place Jews considered the residence of god on Earth.

It could have killed Judaism.

Instead, as they were driven into exile everywhere but Jerusalem, Jews reinvented Jewishness as a culture, an educational process, a way to think analytically to discover Morals beneath the Laws, and ecumenically, centering their worship and societies around local synagogues and teachers rather than sacrifices of animals at one particular spot in one hot, small, dusty city.

Before the Temple was destroyed, the only place to make a legitimate sacrifice or worship was at the Temple, among the various levels of priesthood created by Moses during the Exodus, each of whom had specific rights and privileges – especially of access to the Temple's holy places – that no one else could violate "lest ye die."

As in every other oral culture – which Israel was before settling down to learn Aramaic – the rules were learned by rote and the word of the priest, Bard or law-reciter was, literally, law.

The adoption of a written testament disrupted that a little, but it was still pretty easy to keep the holy writings out of the hands of heretics, who would have to accept the word of the priests or be, well, heretics.

It worked in Judea for a while, but the Christian Church made it work so well for so long that when Gutenberg printed the first bible in 1456 – in Latin, which only the educated and extremely patient could read – it wasn't just the beginning of a technical revolution, it touched off religious and civil revolutions as well.

(This link goes to a good museum presentation of the Gutenberg, but don't bother unless you read Latin written in fancy script; the graphics in it contribute nothing.)

The first Information Technology revolution involved a lot more dying

Books of the Christian bible were first written in local languages – mostly Aramaic, with a smattering of Latin, Greek and other Mediterranean staples – though the early church dropped most of those languages and much of the text when it decided what Christians would consider "true" from then on, at the Council of Nicea in 325. (There were several towns named "Nicea," but the conference was in the one that is now the modern Turkish city of Iznik – a sleepy, out-of-the-way kind of place for Christian bishops when Roman Emperor Constantine sent them there to hammer together a dogma that could be enforced.)

The gospels they kept in, the ones they left out, and the ones they repeated, changed, reinterpreted, twisted and selected among – on the basis of credibility (some of the originals were kind of whacko), sense, historical significance, accuracy and adherence to a mainstream belief that may have had little to do with the preaching of the actual Jewish troublemaker they worshipped – made Christianity a straight and narrow path on which the Christian Church could require believers to walk, lest they die.

It's not too hard to keep control of your sources of information when they all have to be written by hand using pens made from dead animal parts.

Pope Innocent III (kind of a misnomer, honestly) banned unauthorized, localized translations of the bible in 1199, as a way to control the message while chopping up and burning those who carried the wrong one.

Translating the words of the being that created the world in which you live – whose principles you must live by or risk eternal damnation – into a language you could actually understand became heresy punishable by death.

It was a crazy decision, but hardly unique. Even now some Islamic governments and many conservative Muslim scholars consider it to be heresy (on pain of death, seriously) to translate the Quran from Arabic into other languages. If you look at translation as messing with the direct word of the being that created the universe, it probably makes sense.

If you figure the only reason to write a book is to make it possible for others to read what's in it, then making it a sin to distribute or translate a book is an even bigger sin.

That's certainly the conclusion of other schools of Islam, whose scholars can quote verse and verbally flower with the best of them in arguing that anything that keeps people away from the words in the Quran is sinful.

Either way, the lesson of the Dead Sea Scrolls is the same – both when they were written by outsiders denied entrance to the halls of power, and during the 50 years since, when access to them was controlled by those who valued the power of telling other scholars "no" more than they valued the information they protected.

Don't Hack holy data, lest ye die

By now there are few surprises left in the Scrolls. The few that existed managed to avoid popping any modern religions like a balloon. That could only be a risk if you believe the Dan Brown/Da Vinci Code view of history as a stack of lies that will collapse if an earlier lie is revealed to be less truthy than it was supposed to be.

That doesn't keep anyone from an overexuberant sense of righteousness, like the protesters who wanted a Swedish cartoonist killed for drawing Mohammed's face, or keep Turkey from trying to ban YouTube over a single video considered insulting to the country's founder, Mustafa Kemal Ataturk – or, you know, that whole putting-heretics-to-death thing both Catholic and Protestant Christians used to do. A lot.

Putting the Dead Sea Scrolls online isn't going to change anyone's beliefs any more than the rest of the convoluted history of how both Old and New Testaments were cobbled together over generations has shaken anyone's faith.

Most people don't have the language, historical or religious knowledge to read the scrolls, let alone contribute new interpretations to them. Lowering the bar to access makes it a lot more likely it will happen, though.

It will also give Google a little more credibility in the book-publishing industry – which still considers it a pirate for putting out-of-copyright books online, because publishers have learned nothing from what digital downloads have done to the physical-storage-medium-distribution business referred to as the Music Industry.

The important thing about the Scrolls isn't what's in them. It's in the story about how they were locked up, controlled and used as leverage to impose a set of interpretations and conditions on anyone outside the holy circle who wanted access.

The principle is the same – though the legalities and context are different – in controversies over WikiLeaks, Anonymous and the unauthorized release (outright theft) of information that is struggling to be free.

Information doesn't want to be free, but laws protecting all of us from some of us don't want information locked up solely for the benefit of those who can squeeze the most money or power from it.

That's why we have free libraries in the U.S. And free schools.

Information isn't free; it's valuable. Sometimes it's dangerous, so releasing it without proper care – publishing all the Cablegate cables without excising the names of secret sources who could be killed if they're discovered, for example – can be a disaster.

Information doesn't want to be free …

That tendency toward selfish secretiveness, not the thrill of seeing diplomats write in un-stilted language or read about the private lives of foreign dignitaries, is as good a justification for WikiLeaks and Anonymous as any other argument.

I can't agree with many of the methods of the crackers and hacktivists and fraudsters who have made cyberwar more of a reality than the people supposed to be fighting it want to admit.

Most of it is just criminal sabotage or theft or espionage.

Trying to counter it by spending so much energy putting whistleblowers like Julian Assange in prison, rather than doing more mundane things – like keeping East European Mafiosi and Chinese spies from walking into your data farm and snacking on everything they want – is counterproductive.

It may seem silly to urge balance in a crackdown against hackers that has barely started, but we don't do things by half measures in this country. Nothing counts unless it's extreme.

Shutting down every hacker and every whistleblower with the same harsh sentences ignores real differences: the kid doing self-serve Freedom of Information request fulfillment by exposing the illegal, un-Constitutional dirty tricks being planned by a security company employed by the government is a lot different from someone stealing a million credit card numbers from Sony.

…it wants to shine with use

I don't think all information wants to be free, as Hacker jingoist .sigs phrase it.

Not all information can be free, or should be.

It doesn't want to be shut up in one scholar's lab for 50 years like the Dead Sea Scrolls, though, either.

By the time it comes out, no one can read it or even really cares much what it says. Then what good is it?

That doesn't mean it should be taken and used whenever and however it can be. No one but predators benefits when script kids post 50,000 credit-card numbers just to show they can run a canned SQL injection against a soft site.

It does mean we shouldn't spend all our federal enforcement efforts trying to punish Julian Assange for embarrassing the State Dept., or ignore the implications of leaked information and spend all our time trying to track down the whistleblower.

Information is trouble, there's no question about that.

That's why priests and kings and, sometimes, hermit communities in the desert went to great lengths to control, correct and disseminate it carefully before, ultimately, saving it by burying it in the dirt where it could simply have died.

In one case, at least, it didn't, leading to a day that will undoubtedly thrill Aramaic speakers worldwide.

In too many cases it does, sometimes because the whistleblower is afraid to blow, the leaker is plugged or the site or service or hacker who was going to reveal it was prevented for reasons that seem trivial to the rest of us – like insulting Turkishness, or translating the words of the almighty into a language we can read.

Hackers have broken into the cellphones of celebrities like Scarlett Johansson and Prince William. But what about the rest of us, who might not have particularly salacious photos or voice messages stored in our phones, but nonetheless have e-mails, credit card numbers and records of our locations?

A growing number of companies, including start-ups and big names in computer security like McAfee, Symantec, Sophos and AVG, see a business opportunity in mobile security — protecting cellphones from hacks and malware that could read text messages, store location information or add charges directly to mobile phone bills.

On Tuesday, McAfee introduced a service for consumers to protect their smartphones, tablets and computers at once, and last week the company introduced a mobile security system for businesses. Last month, AT&T partnered with Juniper Networks to build mobile security apps for consumers and businesses. The Defense Department has called for companies and universities to come up with ways to protect Android devices from malware.

In an indication of investor interest, one start-up, Lookout, last week raised $40 million from venture capital firms, including Andreessen Horowitz, bringing its total to $76.5 million. The company makes an app that scans other apps that people download to their phones, looking for malware and viruses. It automatically tracks 700,000 mobile apps and updates Lookout whenever it finds a threat.

Still, in some ways, it’s an industry ahead of its time. Experts in mobile security agree that mobile hackers are not yet much of a threat. But that is poised to change quickly, they say, especially as people increasingly use their phones to exchange money, by mobile shopping or using digital wallets like Google Wallet.

“Unlike PCs, the chance of running into something in the wild for your phone is quite low,” said Charlie Miller, a researcher at Accuvant, a security consulting company, and a hacker who has revealed weaknesses in iPhones. “That’s partly because it’s more secure but mostly because the bad guys haven’t gotten around to it yet. But the bad guys are going to slowly follow the money over to your phones.”

Most consumers, though they protect their computers, are unaware that they need to secure their phones, he said, “but the smartphones people have are computers, and the same thing that can happen on your computer can happen on your phone.”

Cellphone users are more likely than computer users to click on dangerous links or download sketchy apps because they are often distracted, experts say. Phones can be more vulnerable because they connect to wireless networks at the gym or the coffee shop, and hackers can surreptitiously charge consumers for a purchase.

There have already been harmful attacks, most of which have originated in China, said John Hering, co-founder and chief executive of Lookout.

For example, this year, the Android market was hit by malware called DroidDream. Hackers pirated 80 applications, added malicious code and tricked users into downloading them from the Android Market. Google said 260,000 devices were attacked.

Also this year, people unwittingly downloaded other malware, called GGTracker, by clicking on links in ads, and on the Web site to which the links led. The malware signed them up, without their consent, for text message subscription services that charged $10 to $50.

Lookout says that up to a million people were afflicted by mobile malware in the first half of the year, and that the threat for Android users is two and a half times higher than it was just six months ago.

Still, other experts caution that fear is profitable for the security industry, and that consumers should be realistic about the small size of the threat at this point. AdaptiveMobile, which sells mobile security tools, found that 6 percent of smartphone users said they had received a virus, but that the actual number of confirmed viruses had not topped 2 percent.

Lookout’s founders are hackers themselves, though they say they are the good kind, who break into phones and computers to expose the risks but not to steal information or behave maliciously. “It’s very James Bond-type stuff,” Mr. Hering said.

A few years ago, he stood with a backpack filled with hacking gear near the Academy Awards red carpet and discovered that up to 100 of the stars carried, in their bejeweled clutches and tuxedo pockets, cellphones that he could break into. He did not break into the phones, but publicized his ability to do so.

He started Lookout in 2007, along with Kevin Mahaffey and James Burgess, to prevent such intrusions. It has free apps for Android, BlackBerry and Windows phones, but not for iPhones. iPhones are less vulnerable to attacks, security experts say, because Apple’s app store, unlike Android’s, screens every app before accepting it. Also, Android is the fastest-growing mobile platform, so it is more attractive to hackers.

Google says it regularly scans apps in the Android Market for malware and can rapidly remove malicious apps from the market and from people’s phones. It prevents Android apps from accessing other apps and alerts users if an app accesses its contact list or location, for instance.

Lookout also sells a paid version for $3 a month, which scans apps for privacy intrusions like accessing a user’s contact list, alerts users if they visit unsafe mobile Web sites or click on unsafe links in text messages, backs up a phone’s call history and photos, and lets people lock or delete information from lost devices.

T-Mobile builds Lookout into its Android phones, Verizon uses its technology to screen apps in its app store and Sprint markets the app to customers. The cellphone carriers and Lookout share the revenue when a user upgrades to the paid version.

“In mobile security circles, you never wait on it to become a problem and it’s too late,” said Fared Adib, vice president of product development at Sprint.

Meanwhile, because mobile phone attacks are still relatively rare, Lookout’s free app includes practical tools as well, such as a way to back up a user’s contacts and a feature that lets users set off an alarm on a lost phone.

“You’re way more likely to just leave it in a cab than you are going to be attacked by a hacker,” said Mr. Miller, the security researcher.

And in addition to collecting money from paying subscribers, Lookout plans to sell the service to businesses. It has a chance because consumers are increasingly bringing their own technologies into the workplace, and Lookout’s app is consumer-friendly, said Chenxi Wang, a security analyst at Forrester Research.

“It’s something a lot of I.T. guys are worried about because they have no control over what consumers are doing and what these apps are doing,” Ms. Wang said.

Giovanni Vigna, a professor at the University of California, Santa Barbara who studies security and malware, said it was only a matter of time before mobile security was as second nature to consumers as computer security.

Stuxnet, the cyberweapon that attacked and damaged an Iranian nuclear facility, has opened a Pandora's box of cyberwar, says the man who uncovered it. A Q&A about the potential threats.
Mark Clayton

One year ago a malicious software program called Stuxnet exploded onto the world stage as the first publicly confirmed cyber superweapon – a digital guided missile that could emerge from cyber space to destroy a physical target in the real world.

It took Ralph Langner about a month to figure that out.

While Symantec, the big antivirus company, and other experts pored over Stuxnet's inner workings, it was Mr. Langner, an industrial control systems security expert in Hamburg, who deciphered and tested pieces of Stuxnet's "payload" code in his lab and declared it a military-grade cyberweapon aimed at Iran's nuclear facilities.

Days later, he and other experts refined that assessment, agreeing Stuxnet was specifically after Iran's gas centrifuge nuclear fuel-enrichment program at Natanz.

After infiltrating Natanz's industrial-control systems, Stuxnet automatically ordered subsystems operating the centrifuge motors to spin too fast and make them fly apart, Langner says. At the same time, Stuxnet made it appear that random breakdowns were responsible so plant operators would not realize a nasty software weapon was behind it.

In the end, Stuxnet may have set back Iran's nuclear ambitions by years. But it also could prove a Pyrrhic victory for its still-unknown creator – a sophisticated cyberweapons nation state that Langner argues could be the US or Israel. Like the Hiroshima bomb, Stuxnet demonstrated for the first time a dangerous capability – in this case to hackers, cybercrime gangs, and new cyberweapons states, he says in an interview.

With Stuxnet as a "blueprint" downloadable from the Internet, he says, "any dumb hacker" can now figure out how to build and sell cyberweapons to any hacktivist or terrorist who wants "to put the lights out" in a US city or "release a toxic gas cloud."

What follows are excerpts of Langner's comments from an extended interview:

CSM: How would you characterize the year since Stuxnet – the response by nations, industry and government?

LANGNER: Last year, after Stuxnet was identified as a weapon, we recommended to every asset owner in America – owners of power plants, chemical plants, refineries and others – to make it a top priority to protect their systems.... That wakeup call lasted only about a week. Thereafter, everybody fell back into coma. The most bizarre thing is that even the Department of Homeland Security (DHS) and Siemens [maker of the industrial control system targeted by Stuxnet] talked about Stuxnet being a wakeup call, but never got into the specifics of what needed to be done.

CSM: What do you think has been the most important or dangerous development to emerge since you identified Stuxnet as a weapon?

LANGNER: The most dangerous development is that DHS and asset owners completely failed to identify and address the threat of copycat attacks.... With every day [that] cyber weapon technology proliferates, the understanding of how Stuxnet works spreads more and more. All the vulnerabilities exploited on the [industrial control system] level and [programmable logic controller] level are still there. Nobody cares.

CSM: How should nations and critical infrastructure owners deal with the threat of Stuxnet-like attacks or deter them?

LANGNER: There is no way to prevent the production and transfer of bits and bytes that can be transferred anywhere in the world by Internet. Arms control with satellite surveillance is impossible.... So I'm afraid cyber-arms control won't be possible. That's why the best option we have to start to counter this threat is to start protecting our systems – control systems, especially – in important facilities like power, water, and chemical facilities that process poisonous gases. Funny thing is, all these control systems, if compromised, could lead to mass casualties, but we still don't have any significant level of cybersecurity for them.

CSM: What's the hold up?

LANGNER: It will be costly to fix the vulnerabilities in industrial-control systems. But it will be definitely more costly if we wait until organized crime, terrorists, or nation states make their move first. Most engineers are aware of the problem, it's just that they don't get the budget to fix the problem. The risk is just discounted. As long as management doesn't see an immediate threat, there is a tendency to ignore it because it costs money to fix.

CSM: You warned a year ago that hackers would begin to explore how to modify Stuxnet – are you still worried about that? Should we be concerned about a "son of Stuxnet"?

LANGNER: Son of Stuxnet is a misnomer. What's really worrying are the concepts that Stuxnet gives hackers. The big problem we have right now is that Stuxnet has enabled hundreds of wannabe attackers to do essentially the same thing. Before, a Stuxnet-type attack could have been created by maybe five people. Now it's more like 500 who could do this. The skill set that's out there right now, and the level required to make this kind of thing, has dropped considerably simply because you can copy so much from Stuxnet.

CSM: But we haven't seen a follow-up to Stuxnet yet?

LANGNER: Not yet. But the clock is ticking. Parts of Stuxnet can simply be copied now. A cybersecurity researcher named Dillon Beresford this summer described to a hacker conference an industrial control system exploit that involved copying. His findings confirm my view that you don't have to be a genius to create a program that works on a control system exactly the way Stuxnet does. You just have to know how to copy parts of it. After that, you just need a little more knowledge to make a simple but effective digital dirty bomb. It may not be nearly as powerful as Stuxnet on a single system, but it could have a far broader effect on many systems. That's a digital dirty bomb.

CSM: But you yourself recently decided to demonstrate how simple a Stuxnet attack could be – just four lines of code – to make an industrial system freeze. A time bomb, really. Why did you do that?

LANGNER: I couldn't stand it any longer. We wasted a full year because nobody was listening. We published last September that parts of Stuxnet could be copied and that such a weapon would require zero insider knowledge. Nobody listened. What you still hear today from all kinds of people is how a Stuxnet-type attack requires so much insider knowledge. I finally had to publish this four-line attack just to make sure no smart-guy tells his boss that this is impossible. I left out some key parts of it so it could not be used.

CSM: Some describe Stuxnet as a "game changer" – do you think that's true?

LANGNER: It's certainly going to change the world. It already has in ways that not many people would recognize. The bottom line is that now we have a much better idea of what the future of war will look like – and what it would look like if certain military systems were a primary target.

CSM: What are the questions that Stuxnet has left behind?

LANGNER: It raises, for one, the question of how to apply cyberwar as a political decision. Is the US really willing to take down the power grid of another nation when that might mainly affect civilians? Could or should military contractors, instead of soldiers, wage cyberwar? What happens when cyberweapons dealers start selling sophisticated cyberweapons to terrorists? There is also the manner in which Stuxnet was used – which could be considered a textbook example of a "just war" approach. It didn't kill anyone. That's a good thing. But I am afraid this is only a short term view. In the long run it has opened Pandora's box.

http://www.csmonitor.com/USA/2011/09...one-year-later

Diebold Voting Machines Can Be Hacked By Remote Control

Exclusive: A laboratory shows how an e-voting machine used by a third of all voters can be easily manipulated
Brad Friedman

It could be one of the most disturbing e-voting machine hacks to date.

Voting machines used by as many as a quarter of American voters heading to the polls in 2012 can be hacked with just $10.50 in parts and an 8th grade science education, according to computer science and security experts at the Vulnerability Assessment Team at Argonne National Laboratory in Illinois. The experts say the newly developed hack could change voting results while leaving absolutely no trace of the manipulation behind.

"We believe these man-in-the-middle attacks are potentially possible on a wide variety of electronic voting machines," said Roger Johnston, leader of the assessment team. "We think we can do similar things on pretty much every electronic voting machine."

The Argonne Lab, run by the Department of Energy, has the mission of conducting scientific research to meet national needs. The Diebold Accuvote voting system used in the study was loaned to the lab's scientists by VelvetRevolution.us, of which the Brad Blog is a co-founder. Velvet Revolution received the machine from a former Diebold contractor.

Previous lab demonstrations of e-voting system hacks, such as Princeton's demonstration of a viral cyber attack on a Diebold touch-screen system -- as I wrote for Salon back in 2006 -- relied on cyber attacks to change the results of elections. Such attacks, according to the team at Argonne, require more coding skills and knowledge of the voting system software than is needed for the attack on the Diebold system.

Indeed, the Argonne team's attack required no modification, reprogramming, or even knowledge of the voting machine's proprietary source code. It was carried out by inserting a piece of inexpensive "alien electronics" into the machine.

The Argonne team's demonstration of the attack on a Diebold Accuvote machine is seen in a short new video shared exclusively with the Brad Blog [posted below]. The team successfully demonstrated a similar attack on a touch-screen system made by Sequoia Voting Systems in 2009.

The new findings of the Vulnerability Assessment Team echo long-ignored concerns about e-voting vulnerabilities issued by other computer scientists and security experts, the U.S. Computer Emergency Readiness Team (an arm of the Department of Homeland Security), and even a long-ignored presentation by a CIA official given to the U.S. Election Assistance Commission.

"This is a national security issue," says Johnston. "It should really be handled by the Department of Homeland Security."

The use of touch-screen Direct Recording Electronic (DRE) voting systems of the type Argonne demonstrated to be vulnerable to manipulation has declined in recent years due to security concerns, and the high cost of programming and maintenance. Nonetheless, the same type of DRE systems, or ones very similar, will once again be used by a significant part of the electorate on Election Day in 2012. According to Sean Flaherty, a policy analyst for VerifiedVoting.org, a nonpartisan e-voting watchdog group, "About one-third of registered voters live where the only way to vote on Election Day is to use a DRE."

Almost all voters in states like Georgia, Maryland, Utah and Nevada, and the majority of voters in New Jersey, Pennsylvania, Indiana and Texas, will vote on DREs on Election Day in 2012, says Flaherty. Voters in major municipalities such as Houston, Atlanta, Chicago and Pittsburgh will also line up in next year's election to use DREs of the type hacked by the Argonne National Lab.

Voting machine companies and election officials have long sought to protect source code and the memory cards that store ballot programming and election results for each machine as a way to guard against potential outside manipulation of election results. But critics like California Secretary of State Debra Bowen have pointed out that attempts at "security by obscurity" largely ignore the most immediate threat, which comes from election insiders who have regular access to the e-voting systems, as well as those who may gain physical access to machines that were not designed with security safeguards in mind.

"This is a fundamentally very powerful attack and we believe that voting officials should become aware of this and stop focusing strictly on cyber [attacks]," says Vulnerability Assessment Team member John Warner. "There's a very large physical protection component of the voting machine that needs to be addressed."

The team's video demonstrates how inserting the inexpensive electronic device into the voting machine can offer a "bad guy" virtually complete control over the machine. A cheap remote control unit can enable access to the voting machine from up to half a mile away.

"The cost of the attack that you're going to see was $10.50 in retail quantities," explains Warner in the video. "If you want to use the RF [radio frequency] remote control to stop and start the attacks, that's another $15. So the total cost would be $26."

The video shows three different types of attack, each demonstrating how the intrusion developed by the team allows them to take complete control of the Diebold touch-screen voting machine. They were able to demonstrate a similar attack on a DRE system made by Sequoia Voting Systems as well.

In what Warner describes as "probably the most relevant attack for vote tampering," the intruder would allow the voter to make his or her selections. But when the voter actually attempts to push the Vote Now button, which records the voter's final selections to the system's memory card, he says, "we will simply intercept that attempt ... change a few of the votes," and the changed votes would then be registered in the machine.

"In order to do this," Warner explains, "we blank the screen temporarily so that the voter doesn't see that there's some revoting going on prior to the final registration of the votes."

This type of attack is particularly troubling because the manipulation would occur after the voter has approved as "correct" the on-screen summaries of his or her intended selections. Team leader Johnston says that while such an attack could be mounted on Election Day, there would be "a high probability of being detected." But he explained that the machines could also be tampered with during so-called voting machine "sleepovers" when e-voting systems are kept by poll workers at their houses, often days and weeks prior to the election, or at other times when the systems are unguarded.

"The more realistic way to insert these alien electronics is to do it while the voting machines are waiting in the polling place a week or two prior to the election," Johnston said. "Often the polling places are in elementary schools or a church basement or some place that doesn't really have a great deal of security. Or the voting machines can be tampered with while they're in transit to the polling place. Or while they're in storage in the warehouse between elections." He notes that the Argonne team had no owner's manual or circuit diagrams for either the Diebold or Sequoia voting systems they were able to access in these attacks.

The team members are critical of election security procedures, which rarely, if ever, include physical inspection of the machines, especially their internal electronics. Even if such inspections were carried out, however, the Argonne scientists say the type of attack they've developed leaves behind no physical or programming evidence, if properly executed.

"The really nice thing about this attack, the man-in-the-middle, is that there's no soldering or destruction of the circuit board of any kind," Warner says. "You can remove this attack and leave no forensic evidence that we've been there."

Gaining access to the inside of the Diebold touch-screen is as simple as picking the rudimentary lock, or using a standard hotel minibar key, as all of the machines use the same easily copied key, available at most office supply stores.

"I think our main message is, let's not get overly transfixed on the cyber," team leader Johnston says. Since he believes they "can do similar things on pretty much every electronic voting machine," he recommends a number of improvements for future e-voting systems.

"The machines themselves need to be designed better, with the idea that people may be trying to get into them," he says. "If you're just thinking about the fact that someone can try to get in, you can design the seals better, for example."

"Don't do things like use a standard blank key for every machine," he warns. "Spend an extra four bucks and get a better lock. You don't have to have state of the art security, but you can do some things where it takes at least a little bit of skill to get in."

http://www.salon.com/news/2012_elect.../27/votinghack

VPN Service Snitched on Alleged LulzSec Member
Mike Lennon

Yesterday, Cody Kretsinger, a 23-year-old from Phoenix, Arizona, was arrested and charged with conspiracy and the unauthorized impairment of a protected computer, according to a federal indictment.

How did the Feds track down the alleged LulzSec member? It turns out that a VPN service reportedly used to mask his online identity and location was the one that handed over data to the FBI.

According to the federal indictment (embedded below), Kretsinger registered for a VPN account at HideMyAss.Com under the user name “recursion”. Following that, the indictment said, Kretsinger and other unknown conspirators conducted SQL injection attacks against Sony Pictures in an attempt to extract confidential data.

According to a blog post from HideMyAss, it realized that LulzSec members had been utilizing its service after seeing leaked IRC chat logs. The company said it took no action after discovering the hackers had been using its services to hide, saying there was no evidence to suggest wrongdoing and nothing to identify which accounts they were using.

“At a later date it came as no surprise to have received a court order asking for information relating to an account associated with some or all of the above cases,” they wrote in the post this morning. “As stated in our terms of service and privacy policy our service is not to be used for illegal activity, and as a legitimate company we will cooperate with law enforcement if we receive a court order (equivalent of a subpoena in the US).”

The blog post, titled “Lulzsec fiasco” also added the following: “Our VPN service and VPN services in general are not designed to be used to commit illegal activity. It is very naive to think that by paying a subscription fee to a VPN service you are free to break the law without any consequences. This includes certain hardcore privacy services which claim you will never be identified, these types of services that do not cooperate are more likely to have their entire VPN network monitored and tapped by law enforcement, thus affecting all legitimate customers.”

You can be sure that HideMyAss is not the only provider to be hit with subpoenas and essentially forced to hand over user data. It’s likely the FBI and other officials are digging deep and requesting similar information from other VPN providers and online services such as Pastebin, Twitter, and other tools and web services commonly used by hackers.

https://www.securityweek.com/vpn-ser...lulzsec-member

Tor and the BEAST SSL Attack
nickm

Today, Juliano Rizzo and Thai Duong presented a new attack on TLS <= 1.0 at the Ekoparty security conference in Buenos Aires. Let's talk about how it works, and how it relates to the Tor protocol.

Short version: Don't panic. The Tor software itself is just fine, and the free-software browser vendors look like they're responding well and quickly. I'll be talking about why Tor is fine; I'll bet that the TBB folks will have more to say about browsers sometime soon.

There is some discussion of the attack and responses to it out there already, written by seriously smart cryptographers and high-test browser security people. But I haven't seen anything out there yet that tries to explain what's going on for people who don't know TLS internals and CBC basics.

So I'll do my best. This blog post also assumes that I understand the attack. Please bear with me if I'm wrong about that.

Thanks to the authors of the paper for letting me read it and show it to other Tor devs. Thanks also to Ralf-Philipp Weinmann for helping me figure the analysis out.

The attack

How the attack works: Basic background

This writeup assumes that you know a little bit of computer stuff, and you know how xor works.

Let's talk about block ciphers, for starters. A block cipher is a cryptographic tool that encrypts a small chunk of plaintext data into a same-sized chunk of encrypted data, based on a secret key.

In practice, you want to encrypt more than just one chunk of data with your block cipher at a time. What do you do if you have a block cipher with a 16-byte block (like AES), when you need to encrypt a 256-byte message? When most folks first consider this problem, they say something like "Just split the message into 16-byte chunks, and encrypt each one of those." That's an old idea (it's called ECB, or "electronic codebook"), but it has some serious problems. Most significantly, if the same 16-byte block appears multiple times in the plaintext, the ciphertext will also have identical blocks in the same position. The Wikipedia link above has a cute demonstration of this.
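To make the ECB weakness concrete, here is a minimal Python sketch. The "block cipher" is a toy keyed hash transform standing in for AES; it is for illustration only and is not a real (or even invertible) cipher:

```python
import hashlib

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Toy 16-byte "block cipher": a deterministic keyed transform.
    # NOT a real cipher -- it is not even invertible. Demo only.
    assert len(block) == 16
    return hashlib.sha256(key + block).digest()[:16]

def ecb_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # ECB: encrypt each 16-byte chunk independently of the others.
    return b"".join(toy_block_encrypt(key, plaintext[i:i + 16])
                    for i in range(0, len(plaintext), 16))

key = b"secret key"
pt = b"ATTACK AT DAWN!!" * 2          # two identical 16-byte blocks
ct = ecb_encrypt(key, pt)
print(ct[:16] == ct[16:32])           # True: repeats leak through ECB
```

The repeated plaintext block shows up as a repeated ciphertext block, which is exactly the structure-leak that the penguin picture on the Wikipedia page illustrates.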

Okay, so nobody reasonable uses ECB. Some people, though, do use a mode called CBC, or "Cipher Block Chaining." With CBC, when you want to encrypt a message, you start your ciphertext message with a single extra random block, or IV ("initialization vector"). Now, when you go to encrypt each plaintext block to get its ciphertext block, you first xor the plaintext with the previous ciphertext. (So you xor the IV with the first plaintext block, encrypt that, and output it. Then you xor that ciphertext block with the second plaintext block, encrypt that, output it, and so on. The Wikipedia page has pretty good examples and illustrations here too.)

TLS and its earlier incarnation, SSL, are the encryption protocols that many applications, including Tor, use to send streams of encrypted data. They use CBC mode for most of their block ciphers. Unfortunately, before TLS version 1.1, they made a bad mistake. Instead of using a new random IV for every TLS message they sent, they used the ciphertext of the last block of the last message as the IV for the next message.

Here's why that's bad. The IV is not just supposed to be random-looking; it also needs to be something that an attacker cannot predict. If I know that you are going to use IV x for your next message, and I can trick you into sending a message that starts with a plaintext block of [x xor y], then you will encrypt y for me.
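A sketch of this IV-cancellation trick with the same toy stand-in cipher (the IV and plaintext values here are invented for the demo):

```python
import hashlib

def toy_block_encrypt(key, block):
    # Toy keyed transform standing in for a real block cipher.
    return hashlib.sha256(key + block).digest()[:16]

def xor(a, b):
    return bytes(p ^ q for p, q in zip(a, b))

def cbc_first_block(key, iv, p1):
    # In CBC, the first ciphertext block is E(key, IV xor P1).
    return toy_block_encrypt(key, xor(iv, p1))

key = b"secret key"            # victim's key, unknown to the attacker
x = b"predictable--IV!"        # next IV, known to the attacker
y = b"block to test---"        # plaintext the attacker cares about
# Trick the victim into starting the record with x xor y: the IV
# cancels out, so the victim has unknowingly encrypted y itself.
ct = cbc_first_block(key, x, xor(x, y))
print(ct == toy_block_encrypt(key, y))   # True
```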

That doesn't sound too bad. But Wei Dai and Gregory V. Bard both found ways to exploit this to learn whether a given ciphertext block corresponds to a given plaintext. The attacker makes the user encrypt C' xor p (by starting the next record with the known next IV xored into C' xor p, so the predictable IV cancels out), where p is the guessed plaintext and C' is the ciphertext block right before where the attacker thinks that plaintext was. If the attacker guessed right, then the user's SSL implementation outputs the same ciphertext as it did when it first encrypted that block.
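Here is a sketch of that guess check, again with the toy stand-in cipher; the "secret" plaintext and the ciphertext values are invented for the demo:

```python
import hashlib

def toy_block_encrypt(key, block):
    # Toy keyed transform standing in for a real block cipher.
    return hashlib.sha256(key + block).digest()[:16]

def xor(a, b):
    return bytes(p ^ q for p, q in zip(a, b))

def cbc_block(key, prev_ct, p):
    # One CBC step: ciphertext = E(key, previous_ciphertext xor plaintext).
    return toy_block_encrypt(key, xor(prev_ct, p))

key = b"secret key"                       # unknown to the attacker
# Traffic the attacker recorded earlier:
c_prime = bytes(range(16))                # block just before the target
secret = b"root:hunter2\x00\x00\x00\x00"  # hypothetical 16-byte plaintext
c_target = cbc_block(key, c_prime, secret)

iv_next = c_target   # pre-TLS-1.1: next record's IV is the last ct block

def check_guess(p_guess):
    # The victim is tricked into sending iv_next xor c_prime xor p_guess;
    # the IV cancels, so the cipher encrypts c_prime xor p_guess --
    # which matches c_target exactly when the guess is right.
    injected = xor(xor(iv_next, c_prime), p_guess)
    return cbc_block(key, iv_next, injected) == c_target

print(check_guess(b"root:password123"))   # False
print(check_guess(secret))                # True
```

Note that the attacker never touches the key: the equality of ciphertext blocks is what confirms or refutes the guess.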

And even that doesn't sound too bad, even in retrospect. In order to mount this attack, an adversary would need to be watching your internet connection, and be able to force you to start your next TLS record with a given string of his choice, and be able to guess something sensitive that you said earlier, and guess which part of your TLS stream might have corresponded to it.

Nevertheless, crypto people implemented workarounds. Better safe than sorry! In version 1.1 of the TLS protocol, every record gets a fresh IV, so the attacker can't know the IV of the next message in advance. And OpenSSL implemented a fix where, whenever they're about to send a TLS record, they send an empty TLS record immediately before, and then send the record with the message in it. The empty TLS record is enough to make the CBC state change, effectively giving the real message a new IV that the attacker can't predict.
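A toy model of the empty-record trick: even an "empty" record carries a MAC computed under a secret key, so sending it advances the CBC chaining state to a value the attacker cannot compute in advance. (All keys and record contents here are invented; real TLS record framing and MAC construction are omitted.)

```python
import hashlib, os

def toy_block_encrypt(key, block):
    # Toy keyed transform standing in for a real block cipher.
    return hashlib.sha256(key + block).digest()[:16]

def xor(a, b):
    return bytes(p ^ q for p, q in zip(a, b))

def send_record(key, chain_iv, blocks):
    # CBC-encrypt one record; return its blocks plus the new chaining
    # IV (the last ciphertext block, reused as the next record's IV).
    out, prev = [], chain_iv
    for p in blocks:
        prev = toy_block_encrypt(key, xor(p, prev))
        out.append(prev)
    return out, prev

enc_key, mac_key = b"secret key", os.urandom(16)
chain_iv = os.urandom(16)
# The "empty" record's only content is a MAC under the secret MAC key,
# so its ciphertext -- the effective IV for the real record that
# follows -- is not something the attacker can predict.
mac_block = hashlib.sha256(mac_key + b"record 0").digest()[:16]
_, new_iv = send_record(enc_key, chain_iv, [mac_block])
print(new_iv != chain_iv)   # True: the chaining state has moved on
```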

These fixes haven't gotten very far in the web world, though. TLS 1.1 isn't widely implemented or deployed, even though the standard has been out since 2006. And OpenSSL's "empty record" trick turns out to break some non-conformant SSL implementations, so lots of folks turn it off. (The OpenSSL manual page for the option in question even says that "it is usually safe" to do so.)

I suppose that, at the time, this seemed pretty reasonable. Guessing a plaintext is hard: there are something like 3.4 x 10^38 possible values. But as they say, attacks only get better.

How the attack works: What's new as of today

Juliano Rizzo and Thai Duong have two contributions, as I see it. (And please correct me if I'm getting this wrong; I have not read the literature closely!) First, they came up with a scenario to implement a pretty clever variation of Dai's original attack.

Here's a simplified version, not exactly as Rizzo and Duong present it. Let's suppose that, for some reason, the user has a secret (like a web cookie) that they send in every TLS record. And let's assume also that the attacker can trick the user into inserting any number of characters in their plaintext at the start of the record right before the secret. So the plaintext for every record is "Evil | Secret", where Evil is what the attacker chooses, and Secret is the secret message. Finally, let's suppose that the attacker can make the user send as many records as he wants.

The ability to decide where block boundaries fall turns out to be a big deal. Let's suppose that instead of filling up a full plaintext block with 16 bytes of Evil, the attacker only fills up 15 bytes... so the block will have 15 bytes the attacker controls, and one byte of the secret.

Whoops! There are only 256 possible values for a single byte, so the attacker can guess each one in turn, and use the older guess-checking attack to see if he guessed right. And once the attacker knows the first byte, he starts sending records with 14 attacker-controlled bytes, one byte that he knows (because he made a bunch of guesses and used the older attack to confirm which was right), and one byte that he doesn't. Again, this block has only 256 possible values, and so guessing each one in turn is efficient.
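The byte-at-a-time loop can be sketched as follows. The direct ciphertext comparison stands in for the Dai/Bard guess check (which, in the real attack, goes through injected records rather than any knowledge of the key), and the cookie value is invented:

```python
import hashlib

def toy_block_encrypt(key, block):
    # Toy keyed transform standing in for a real block cipher.
    return hashlib.sha256(key + block).digest()[:16]

KEY = b"secret key"             # victim's key; the attacker never sees it
SECRET = b"session=deadbeef"    # hypothetical 16-byte secret cookie

def observed_block(evil: bytes) -> bytes:
    # Victim sends evil || SECRET; the attacker observes the ciphertext
    # of the first 16-byte block. (The IV bookkeeping from the
    # guess-check step is abstracted away here.)
    return toy_block_encrypt(KEY, (evil + SECRET)[:16])

recovered = b""
for i in range(16):
    pad = b"A" * (15 - i)       # slide the block boundary one byte right
    target = observed_block(pad)  # pad + i known bytes + 1 unknown byte
    for b in range(256):          # at most 256 guesses, not 2**128
        guess = pad + recovered + bytes([b])
        # This comparison models the guess-check oracle:
        if toy_block_encrypt(KEY, guess) == target:
            recovered += bytes([b])
            break

print(recovered == SECRET)      # True
```

Each iteration leaves exactly one unknown byte in the block, which is why the attack's cost collapses from 2^128 guesses per block to at most 256 guesses per byte.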

They then show how to extend this to cases where the attacker doesn't actually control the start of the block, but happens to know (or is able to guess) it, and make other extensions to the attack.

The second neat facet of Duong and Rizzo's work is that they actually found some ways to make this attack work against web browsers. That's no mean feat! You need to find a way to trick a web client into sending requests where you control enough of the right parts of them to mount this attack, and you need to be able to do it repeatedly. It seems to have taken a lot of HTTP hackery, but it seems they managed to do it. There's more detailed information here at Eric Rescorla's blog, and of course once Juliano and Thai have their paper out, there will be even more to goggle at. This is really clever stuff IMO.

[Disclaimer: Again, I may have made mistakes above; if so, I will correct it when I wake up in the morning, or sooner. Please double-check me if you know this work better than I do.]

So, does this attack work on Tor?

Nope.

Tor uses OpenSSL's "empty fragment" feature, which inserts a single empty TLS record before every record it sends. This effectively randomizes the IV of the actual records, like a low-budget TLS 1.1. So the attack is simply stopped.

This feature has been in OpenSSL since 0.9.6d: see item 2 in Bodo Möller's good old CBC writeup for full details on how it works. It makes our SSL incompatible with some standards-non-compliant TLS implementations... but we don't really care there, since all of the TLS implementations that connect to the Tor network are OpenSSL, or are compatible with it.

Tor requires OpenSSL 0.9.7 or later, and has since 0.2.0.10-alpha. Amusingly, we were considering dropping our use of the empty fragment feature as "probably unnecessary" in 2008, but we never got around to it. I sure don't think we'll be doing that now!

Now, it's possible that clients we didn't write might be using other TLS implementations, but the opportunity for plaintext injection on client->relay links is much lower than on relay->relay links; see below. As far as I know, there are no Tor server implementations compatible with the current network other than ours.

Also, this only goes for the Tor software itself. Applications that use TLS need to watch out. Please install patches, and look for new releases if any are coming out soon.

But what if...

Okay, but would it work if we didn't use OpenSSL's empty-fragment trick?

I'm going to diverge into Tor protocol geekery here. All of the next several sections are probably irrelevant, since the OpenSSL trick above protects Tor's TLS usage already. I'm just into analyzing stuff sometimes... and at the time I originally wrote this analysis, we didn't have confirmation about whether the OpenSSL empty-record trick would help or not.

This part will probably be a little harder to follow, and will require some knowledge of Tor internals. You can find all of our documents and specifications online at our handy documentation page if you've got some free time and you want to come up to speed.

The attack scenario that makes sense is for an attacker to be trying to decrypt stuff sent from one Tor node to another, or between a Tor node and a client. The attacker is not one of the parties on the TLS link, obviously: if they were, they'd already know the plaintext and would not need to decrypt it.

First, let's make some assumptions to make things as easy for the attacker as possible. Let's assume that the attacker can trivially inject chosen plaintext, and can have the TLS records carve up the plaintext stream anywhere he wants.

(Are those assumptions reasonable? Well, it's easy to inject plaintext on a relay->relay link: sending a node an EXTEND cell will make it send any CREATE cell body that you choose, and if you have built a circuit through a node, you can cause a RELAY cell on that node to have nearly any body you want, since the body is encrypted/decrypted in counter mode. I don't see an obvious way to make a node send a chosen plaintext to a client or make a client send a chosen plaintext to a node, but let's pretend that it's easy to do that too.

The second assumption, about how it's easy to make the boundaries of TLS records fall wherever you want, is likely to be more contentious; see "More challenges for the attacker" below.)

Also let's assume that on the targeted link, traffic for only one client is sent. An active attacker might arrange this through a trickle attack or something.

I am assuming that the attacker is observing ciphertext at a targeted node or client, but not at two targeted places that allow him to see the same traffic enter and leave the network: by our threat model, any attacker who can observe two points on a circuit is assumed to win via traffic correlation attacks.

I am going to argue that even then, the earlier attack doesn't get the attacker anything, and the preconditions for the Duong/Rizzo attack don't exist.

CLAIM 1: The earlier (Dai/Bard) attacks don't get the attacker anything against the Tor protocol

There are no interesting guessable plaintexts sent by a well-behaved client or node[*] on a Tor TLS link: all are either random, nearly random, or boring from an attacker's POV.
[*] What if a node or client is hostile? Then it might as well just publish its plaintext straight to the attacker.

Argument: CREATE cell bodies contain hybrid-encrypted material that (except for its first bit) should be indistinguishable, or very nearly indistinguishable, from random bits. CREATED cell bodies have a DH public key and a hash: also close to indistinguishable from random. CREATE_FAST cell bodies *are* randomly generated, and CREATED_FAST cell bodies have a random value and a hash.

RELAY and RELAY_EARLY cell bodies are encrypted with at least one layer of AES_CTR, so they shouldn't be distinguishable from random bits either.

DESTROY, PADDING, NETINFO, and VERSIONS cells do not have interesting bodies. DESTROY and PADDING bodies are either 0s or random bytes, and NETINFO and VERSIONS provide only trivial information that you can learn just by connecting to a node (the time of day, its addresses, and which versions of the Tor link protocol it speaks).

That's cell bodies. What about their headers? A Tor cell's header has a command and a circuit ID that take up only 3 bytes. Learning the command doesn't tell you anything you couldn't notice by observing the encrypted link in the first place and doing a little traffic analysis. The link-local 2-byte circuit ID is random, and not interesting per se, but possibly interesting if you could use it to demultiplex traffic sent over multiple circuits. I'll discuss that more at the end of the next section.
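For concreteness, a link cell is just that 3-byte header plus a fixed-size payload. A minimal sketch (sizes and the RELAY command value as given in tor-spec for the link protocol of the day; the circuit ID here is made up):

```python
import struct, os

CELL_LEN = 512                 # fixed-size link cells
PAYLOAD_LEN = CELL_LEN - 3     # everything after the 3-byte header
CMD_RELAY = 3                  # RELAY command value from tor-spec

def pack_cell(circ_id, command, payload):
    # Header: 2-byte circuit ID, 1-byte command; the payload is
    # zero-padded out to the fixed cell length.
    assert len(payload) <= PAYLOAD_LEN
    return struct.pack("!HB", circ_id, command) + payload.ljust(PAYLOAD_LEN, b"\x00")

cell = pack_cell(0x1A2B, CMD_RELAY, os.urandom(PAYLOAD_LEN))
print(len(cell))   # 512
```

Only those first 3 bytes carry structure an attacker could hope to guess; the other 509 bytes are, per the argument above, effectively random.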

CLAIM 2: You can't do the Duong/Rizzo attack against the Tor protocol either

The attack requires that some piece of sensitive information M that you want to guess be re-sent repeatedly after the attacker's chosen plaintext. That is, it isn't enough to get the target to send one TLS record containing (evil | M) -- you need to get it to send a bunch of (evil | M) records with the same M to learn much about M.
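To see why repetition is essential, here is a toy sketch of the guess-checking step, using the same kind of HMAC-based stand-in cipher as above (names and values are made up). The key point: verifying each guess costs the attacker one injected block and one freshly observed record, so a secret that is sent only once can absorb at most one guess.

```python
import hmac, hashlib, os

BLOCK = 16

def enc(key, block):
    # Toy, encrypt-only block cipher (truncated HMAC); stands in for AES.
    return hmac.new(key, block, hashlib.sha256).digest()[:BLOCK]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

key = os.urandom(32)
secret = b"cookie=SECRET!!!"          # a 16-byte secret the victim re-sends
candidates = [b"cookie=WRONG....", b"cookie=SECRET!!!", b"cookie=GUESS####"]

# The victim encrypts the secret in CBC; the attacker observes both the
# ciphertext block C_secret and the block C_prev that was chained into it.
c_prev = os.urandom(BLOCK)            # last ciphertext block before the secret
c_secret = enc(key, xor(secret, c_prev))

# One probe per guess: inject a chosen block that cancels the chaining,
# then watch whether the resulting ciphertext block matches C_secret.
recovered = None
iv_next = c_secret                    # TLS 1.0: next IV = last ciphertext block
for guess in candidates:
    injected = xor(xor(guess, c_prev), iv_next)   # guess ^ c_prev ^ iv_next
    c_guess = enc(key, xor(injected, iv_next))    # what the victim would emit
    if c_guess == c_secret:
        recovered = guess
    iv_next = c_guess                 # the chain advances after every probe
print(recovered)   # b'cookie=SECRET!!!'
```

Each iteration consumes one record; with only three candidates this looks cheap, but guessing a block byte-by-byte (the Duong/Rizzo refinement) needs up to 256 probes per byte, all against the *same* recurring M.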

But in Tor's link protocol, nothing sensitive is sent more than once. Tor does not retry the same CREATE or CREATE_FAST cells, or re-send CREATED or CREATED_FAST cells. RELAY cells are encrypted with at least one layer of AES_CTR, and no RELAY cell body is sent more than once. (The same applies to RELAY_EARLY.)

PADDING cells have no interesting content to learn. VERSIONS and NETINFO cells are sent only at the start of the TLS connection (and their contents are boring too).

What about the cell headers? They contain a command and a circuit ID. If a node is working normally, you can already predict that most of the commands will be RELAY. You can't predict that any given circuit's cells will be sent reliably after yours, so you can't be sure that you'll see the same circuit ID over and over... unless you've done a trickle attack, in which case you already know that all the cells you're seeing are coming from the same place; or unless you've lucked out and one circuit is much louder than all the others, but you would already learn that from watching the node's network. So I think that anything sent frequently enough to be decryptable with this method is not in fact worth decrypting. But see the next section.

Hey, what if I'm wrong about those cell headers?

Let's pretend that I flubbed the analysis above, and that the attacker can read the command and circuit ID for every cell on a link. All this allows the attacker to do is demultiplex the circuits on the link, and better separate the traffic pattern flows that the link is multiplexing for different clients... but the attacker can't really use this info unless he can correlate it with a flow somewhere else. That requires the attacker to be watching the same traffic at two points. But if the attacker can do that, he can already (we assume) do a passive correlation attack and win.

And what if I'm wrong entirely?

Now let's imagine that I am totally wrong, and TLS is completely broken, providing no confidentiality whatsoever. (Assume that it's still providing authenticity.)

Of course, this is a really unlikely scenario, but it's neat to speculate.

The attacker can remove at most one layer of encryption from RELAY cells in this case, because every hop of a circuit (except sometimes the first) is created with a one-way-authenticated Diffie-Hellman handshake. (The first hop may be created with a CREATE_FAST handshake, which is not remotely secure against an attacker who can see the plaintext inside the TLS stream.) Anonymized RELAY traffic is encrypted in AES_CTR mode with keys based on at least one CREATE handshake, so the attacker can't beat it. Non-anonymized RELAY traffic (that is, tunneled directory connections) don't contain sensitive requests or information.
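A sketch of why CREATE_FAST in particular offers nothing against an attacker who reads inside the TLS stream. The KDF below is a simplified stand-in for Tor's KDF-TOR (iterated SHA-1 over the shared seed), and the key-material length is illustrative; the point is only that both inputs X and Y cross the wire protected solely by TLS:

```python
import hashlib, os

def kdf(seed, n):
    # KDF-TOR-style expansion sketch: H(seed|0) | H(seed|1) | ... truncated.
    out, i = b"", 0
    while len(out) < n:
        out += hashlib.sha1(seed + bytes([i])).digest()
        i += 1
    return out[:n]

# CREATE_FAST: the client sends random X; the relay replies with random Y.
# Neither value is hidden by anything except the TLS layer itself.
X = os.urandom(20)
Y = os.urandom(20)

client_keys = kdf(X + Y, 72)
relay_keys  = kdf(X + Y, 72)

# An eavesdropper who can read the TLS plaintext saw X and Y too, and so
# derives the exact same key material -- no secret survives.
eavesdropper_keys = kdf(X + Y, 72)
print(eavesdropper_keys == client_keys)   # True
```

A CREATE handshake, by contrast, mixes in a Diffie-Hellman exchange, so seeing both halves of the handshake on the wire does not yield the session keys.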

So if the attacker can use this attack to completely decrypt TLS at a single hop on a Tor circuit, they still don't win. (All they can do is demultiplex circuits, about which see above.) And for them to do this attack at multiple points on the circuit, they would have to observe those points, in which case they're already winning according to our threat model.

More challenges for the attacker

It's actually even a little harder to do this attack against Tor than I've suggested.

First, consider bandwidth-shaping: If a node or a link is near its bandwidth limit, it will split cells in order to stay under that limit.

Also, on a busy node, you can't really send a cell and expect it to get decrypted and put into the next output record on a given link: there are likely to be other circuits queueing other cells for that link, and by the time your cell is sent, it's likely that they'll have sent stuff too. You can get increased priority by making a very quiet circuit, though.

We dodged an interesting bullet on this one: part of it was luck, and part of it was a redundant protocol design. Nevertheless, let's "close the barn doors," even though the horse is still tied up, chained down, and probably sedated too.

First, as soon as OpenSSL 1.0.1 is out and stable, we should strongly suggest that people move to it. (It provides TLS 1.1.) We should probably build all of our packages to use it. But sadly, we'll have to think about protocol fingerprinting issues there, in case nobody else jumps onto the TLS 1.1 bandwagon with us. :/

We should add the OpenSSL empty-record trick to our spec, as a requirement or a strong recommendation. I'll add a note to that effect.

In future designs of replacements for the current RELAY and CREATE format, we should see if there are ways to prevent them from being used for chosen plaintext injection. This might not be the last attack of its kind, after all.

Perhaps we should put an upper bound on the circuit priority algorithm if we haven't already, so that no circuit can achieve more than a given amount of priority for being quiet, and so that you can't reliably know that your cell will be sent next on a link. But that's pretty farfetched.

And in closing

longpost is loooooooong

Thanks to everybody who followed along with me so far, thanks to Juliano and Thai for letting me read their paper ahead of time, and thanks to everybody who helped me write this up. And thanks especially to my lovely spouse for proofreading.

Firefox developers searching for a way to protect users against a new attack that decrypts sensitive web traffic are seriously considering an update that stops the open-source browser from working with Oracle's Java software framework.

The move, which would prevent Firefox from working with scores of popular websites and crucial enterprise tools, is one way to thwart a recently unveiled attack that decrypts traffic protected by SSL, the cryptographic protocol that millions of websites use to safeguard social security numbers and other sensitive data. In a demonstration last Friday, it took less than two minutes for researchers Thai Duong and Juliano Rizzo to wield the exploit to recover an encrypted authentication cookie used to access a PayPal user account.

Short for Browser Exploit Against SSL/TLS, BEAST injects JavaScript into an SSL session to recover secret information that's transmitted repeatedly in a predictable location in the data stream. For Friday's implementation of BEAST to work, Duong and Rizzo had to subvert a safety mechanism built into the web known as the same-origin policy, which dictates that data set by one internet domain can't be read or modified by a different address.

The researchers settled on a Java applet as their means to bypass SOP, leading Firefox developers to discuss blocking the framework in a future version of the browser.

“I recommend that we blocklist all versions of the Java Plugin,” Firefox developer Brian Smith wrote on Tuesday in a discussion on Mozilla's online bug forum. “My understanding is that Oracle may or may not be aware of the details of the same-origin exploit. As of now, we have no ETA for a fix for the Java plugin.”

“In the interest of keeping this bug updated with the latest status, this morning I asked Johnath for some help in understanding the balance between the horrible user experience this would cause and the severity/prevalence of the security issue and am waiting to hear back. We also discussed this in the Products team meeting today and definitely need better understanding of that before putting the block in place.”

He went on to say that Firefox already has a mechanism for “soft-blocking” Java that allows users to re-enable the plugin from the browser's addons manager or in response to a dialogue box that appears in certain cases.

“Click to play or domain-specific whitelisting will provide some measure of benefit, but I suspect that enough users will whitelist, e.g., facebook that even with those mechanisms (which don't currently exist!) in place, we'd have a lot of users potentially exposed to java weaknesses.”

The Draconian move under consideration is in stark contrast to the approach developers of Google's Chrome browser have taken. Last week, they updated the developer and beta versions of Chrome to split certain messages into fragments to reduce the attacker's control over the plaintext about to be encrypted. By adding unexpected randomness to the encryption process, the new behavior in Chrome is intended to throw BEAST off the scent of the decryption process by feeding it confusing information.

The update has created incompatibilities between Chrome and at least some websites, as this Chromium bug report shows. Google has yet to push out the update to the vast majority of Chrome users who rely on the stable version of the browser.

Microsoft, meanwhile, has recommended that users apply several workaround fixes while it develops a permanent patch. The company hasn't outlined the approach it plans to take.

The prospect of Firefox no longer working with Java could cause a variety of serious problems for users, particularly those in large corporations and government organizations that rely on the framework to make their browsers work with virtual private networks, intranet tools, and web-conferencing applications such as Cisco Systems' WebEx.

Presumably, Java would be blocked by adding it to the list maintained under the Mozilla Blocklisting Policy.

How long does your cell phone carrier retain information about your calls, text messages, and data use? According to data gathered by the Department of Justice, it can be as little as a few days or up to seven years, depending on your provider.

AT&T, for example, retains information about who you are texting for five to seven years. T-Mobile keeps the same data for five years, Sprint keeps it for 18 months, and Verizon retains it for one year. Verizon is the only one of the top four carriers that retains text message content, however, and it keeps that for three to five days.

Call detail records, meanwhile, are retained for one year by Verizon, five years for T-Mobile (two years for pre-paid), five to seven years for AT&T, and 18 to 24 months for Sprint.

The data was made public after the American Civil Liberties Union (ACLU) filed a Freedom of Information Act (FOIA) request related to an investigation into cell phone location tracking by police. In August, 35 ACLU affiliates filed 381 requests in 32 states with local law enforcement agencies, and the ACLU of North Carolina obtained the cell phone data retention document.
Carrier Data Retention

"All too often, the government is taking advantage of outdated privacy laws to get its hands on this valuable private information by demanding it without a warrant," the ACLU said in a statement. "The public has a right to know how and under what circumstances their location information is being accessed by the government – and that is exactly what we hope our information requests will uncover."

To that end, the carriers do retain data about the cell towers to which your phone has been associated: Verizon keeps it for one year; T-Mobile retains it for one year or more; AT&T has retained all data from July 2008; and Sprint keeps it for 18 to 24 months.

On the data front, Verizon also keeps your IP session information for one year. T-Mobile does not keep this information, AT&T only retains non-public IPs for 72 hours, and Sprint keeps it for 60 days. Similar policies are in place for IP destination information, except for Verizon, which keeps the information for 90 days.

According to Verizon's privacy policy, "sensitive records are retained only as long as reasonably necessary for business or legal purposes." At Sprint, the carrier holds on to data in order to "respond to legal process and emergencies" as well as "monitor, evaluate or improve our Services, systems, or networks." AT&T said the data it collects enables it to "address network integrity, quality control, capacity, misuse, viruses, and security issues, as well as for network planning, engineering and technical troubleshooting purposes," as well as to "take action regarding illegal activities." Finally, when it comes to T-Mobile, the company "will retain Your Personal Information for as long as necessary to fulfill the purpose(s) for which it was collected and to comply with applicable laws."

The data reveal comes several weeks after an appeals court ordered the DOJ to let the ACLU examine certain cell phone records obtained without a warrant. The U.S. Court of Appeals for the D.C. Circuit upheld a lower court ruling that requires the DOJ to turn over the names and docket numbers in numerous cases where the government accessed cell phone location data without a warrant.

At this point, access to tech-based records is governed by the Electronic Communications Privacy Act (ECPA). But the law was first enacted in 1986, well before the Internet, email, or smartphones. As a result, it is "significantly outdated and out-paced by rapid changes in technology and the changing mission of our law enforcement agencies after September 11," according to Sen. Patrick Leahy, a Vermont Democrat who introduced a bill in May that would update ECPA.

Leahy's update would apply to technologies like email, cloud services, and location data on smartphones. If the government wanted an ISP to hand over emails on a particular customer, for example, they would need to first obtain a warrant. At this point, the government abides by a rule that provides access to email after 180 days, depending on the circumstance. Leahy's bill would also extend to location-based data, and allow private companies to collaborate with the government in the event of a cyber attack.

The Electronic Frontier Foundation is calling on major tech companies to back an overhaul of ECPA. It recently got Apple and Dropbox to sign on, and the effort has also received support from Amazon, AT&T, Facebook, Google, and Microsoft.
http://www.pcmag.com/article2/0,2817,2393887,00.asp

Logging Out of Facebook is Not Enough
Nik Cubrilovic

Dave Winer wrote a timely piece this morning about how Facebook is scaring him, since the new API allows applications to post status items to your Facebook timeline without a user's intervention. It is an extension of Facebook Instant, and they call it frictionless sharing. The privacy concern here is that because you no longer have to explicitly opt in to share an item, you may accidentally share a page or an event that you did not intend others to see.

The advice is to log out of Facebook. But logging out of Facebook only de-authorizes your browser from the web application; a number of cookies (including your account number) are still sent along with all requests to facebook.com. Even if you are logged out, Facebook still knows and can track every page you visit. The only solution is to delete every Facebook cookie in your browser, or to use a separate browser for Facebook interactions.

Here is what is happening, as viewed by the HTTP headers on requests to facebook.com. First, a normal request to the web interface as a logged in user sends the following cookies:

Note: I have both fudged the values of each cookie and added line wraps for legibility

To make it easier to see the cookies being unset, the names are in italics. If you compare the cookies that have been set in a logged in request, and compare them to the cookies that are being unset in the logout request, you will quickly see that there are a number of cookies that are not being deleted, and there are two cookies (locale and lu) that are only being given new expiry dates, and three new cookies (W, fl, L) being set.

Now I make a subsequent request to facebook.com as a 'logged out' user:

The primary cookies that identify me as a user are still there (act is my account number), even though I am looking at a logged-out page. Logged-out requests still send nine different cookies, including the most important ones that identify you as a user.

This is not what 'logout' is supposed to mean - Facebook are only altering the state of the cookies instead of removing all of them when a user logs out.

With my browser logged out of Facebook, whenever I visit any page with a Facebook like button, or share button, or any other widget, the information, including my account ID, is still being sent to Facebook. The only solution to Facebook not knowing who you are is to delete all Facebook cookies.

You can test this for yourself using any browser with developer tools installed. It is all hidden in plain sight.
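For a programmatic version of the same check, here is a sketch using Python's stdlib cookie parser. The cookie names come from this post, but the values and the modelled logout response are made up; the point is just that a browser only drops the cookies the server explicitly expires:

```python
from http.cookies import SimpleCookie

# Cookies as they might look while logged in (names from the post; values fake).
jar = SimpleCookie()
jar.load('datr=abc123; lu=xyz789; act=1316962370811%2F2; c_user=500001234')

# A hypothetical "logout" response that expires only the session cookie,
# leaving the identifying browser cookies untouched.
logout = SimpleCookie()
logout.load('c_user=deleted; expires=Thu, 01 Jan 1970 00:00:00 GMT')

for name in logout:
    jar.pop(name, None)   # the browser deletes only what the server expires

survivors = sorted(jar.keys())
print(survivors)   # ['act', 'datr', 'lu'] -- still sent on every request
```

Everything left in the jar rides along on every subsequent request to facebook.com, logged out or not.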

An Experiment

This brings me back to a story that I have yet to tell. A year ago I was screwing around with multiple Facebook accounts as part of some development work. I created a number of fake Facebook accounts after logging out of my browser. After using the fake accounts for some time, I found that they were suggesting my real account to me as a friend. Somehow Facebook knew that we were all coming from the same browser, even though I had logged out.

There are serious implications if you are using Facebook from a public terminal. If you login on a public terminal and then hit 'logout', you are still leaving behind fingerprints of having been logged in. As far as I can tell, these fingerprints remain (in the form of cookies) until somebody explicitly deletes all the Facebook cookies for that browser. Associating an account ID with a real name is easy - as the same ID is used to identify your profile.

Facebook knows every account that has accessed Facebook from every browser and is using that information to suggest friends to you. The strength of the 'same machine' value in the algorithm that works out friends to suggest may be low, but it still happens. This is also easy to test and verify.

I reported this issue to Facebook in a detailed email and got the bounce around. I emailed somebody I knew at the company and forwarded the request to them. I never got a response. The entire process was so flaky and frustrating that I haven't bothered sending them two XSS holes that I have also found in the past year. They really need to get their shit together on reporting privacy issues, I am sure they take security issues a lot more seriously.

The Rise of Privacy Awareness

10-15 years ago, when I first got into the security industry, the awareness of security issues amongst users, developers and systems administrators was low. Microsoft Windows and IIS were swiss cheese in terms of security vulnerabilities. You could manually send malformed payloads to IIS 4.0 and have it crash with a stack or heap overflow, which would usually lead to a remote vulnerability.

A decade ago the entire software industry went through a reformation on awareness of security principles in administration and development. Microsoft re-trained all of their developers on buffer overflows, string formatting bugs, off-by-one bugs etc. and audited their entire code base. A number of high-profile security incidents raised awareness, and today vendors have proper security procedures, from reporting new bugs to hotfixes and secure programming principles (this wasn't just a Microsoft issue - but I had the most experience with them).

Privacy today feels like what security did 10-15 years ago - there is an awareness of the issues steadily building, and blog posts from prominent technologists are helping to steamroll public consciousness. The risks around privacy today are just as serious as security leaks were then - except that there is an order of magnitude more users online and a lot more private data being shared on the web.

Facebook are front-and-center in the new privacy debate just as Microsoft were with security issues a decade ago. The question is what it will take for Facebook to address privacy issues and to give their users the tools required to manage their privacy and to implement clear policies - not pages and pages of confusing legal documentation, and 'logout' not really meaning 'logout'.

Update: Contact with Facebook

To clarify, I first emailed this issue to Facebook on the 14th of November 2010. I also copied the email to their press address to get an official response on it. I never got any response. I sent another email to Facebook, press and copied it to somebody I know at Facebook on the 12th of January 2011. Again, I got no response. I have copies of all the emails, the subject lines were very clear in terms of the importance of this issue.

Two leading lawmakers on privacy -- a Republican and Democrat -- have asked the Federal Trade Commission to look into MSN.com and Hulu.com's installation of cookies onto users' computers that cannot be deleted.

In a letter dated on Monday, Representatives Joe Barton and Ed Markey asked FTC Chairman Jon Leibowitz what plans the agency had to probe the use of the so-called supercookies.

The supercookies are put on users' computers when they visit websites by companies that want to collect personal data, but unlike regular cookies, they cannot be deleted. And they can recreate a user's profile after less powerful cookies are deleted, the lawmakers said.

"We believe this new business practice raises serious privacy concerns and is unacceptable," they wrote. "We believe the usage of supercookies takes away consumer control over their own personal information, presents a greater opportunity for misuse of personal information, and provides another way for consumers to be tracked online."

Markey, a Democrat, and Barton, a Republican, are co-chairmen of the House Bi-Partisan Privacy Caucus.

In separate comments, Barton went further. "The constant abuse of online activity must stop," he wrote. "I think supercookies should be outlawed because their existence eats away at consumer choice and privacy."

The lawmakers urged the FTC to look into whether the practice of using supercookies is unfair or deceptive.

The FTC has backed the creation of a "do not track" option for the Internet that would limit the ability of advertisers to collect consumers' data in a preliminary staff report issued late last year. There has been legislation put forward in Congress to allow consumers to say they don't want to be tracked, but it has found little traction thus far.

The agency also proposed that company privacy policies be simpler, clearer and shorter, among other moves to strengthen consumers' clout in managing what companies know about them.

I wrote a post two days ago about privacy issues with the Facebook logout procedure which could lead to your subsequent web requests to third-party sites that integrate Facebook widgets being identifiable and linked back to your real account. Over the course of the past 48 hours since that post was published we have researched the issue further and have been in constant contact with Facebook on working out solutions and clarifying behavior on the site.

My goal was to both identify bugs in the logout process and see that they are fixed, and to communicate with Facebook in getting some of the unanswered questions answered so that the Facebook using public can be informed of how cookies are used on the site - especially with regard to third-party requests.

In summary, Facebook has made changes to the logout process and they have explained each part of the process and the cookies that the site uses in detail.

The Data

To help better understand the cookie data that we have collected, I have formatted it into a table that displays the lifetime of each cookie across a number of different web requests. The table can be found on a separate page here. You can find the raw output from my Firefox session here.

The rows of the table represent each cookie found throughout the debugging session. The first column is the name of the cookie. Each subsequent column shows how the value of the cookie was altered (or not) throughout the following four page requests:

A logged in request to facebook.com
A request to the 'logout' action within Facebook
The immediate request of the Facebook homepage
A subsequent request to the Facebook homepage after restarting the browser

The table is color coded so that it is easier to see which cookies are altered and which cookies never change. The data shows that five cookies retain value after the logout procedure and a browser restart, while a further two survive the logout procedure and remain as session cookies.

The Fix

The five cookies that persist are datr, lu, p, L and act. The two cookies that also persist after the logout procedure as session cookies are a_user and a_xs.

The most important of these is a_user, which is the user's ID. As of today, this cookie is now destroyed on logout. Facebook had the following to say about the a_user cookie:

“What you see in your browser is largely typical, except a_user which is less common and should be cleared upon logout (it is set on some photo upload pages). There is a bug where a_user was not cleared on logout. We will be fixing that today.”

The other 'a' cookie, a_xs, is now also deleted on logout. a_xs is used to prevent cross-site request forgery.

The Other Cookies

This leaves a number of other cookies, and I will be explaining the purpose of each one as per information from Facebook.

The datr cookie is set when a browser first visits facebook.com. The purpose of it, as per Facebook, is:

“We set the ‘datr’ cookie when a web browser accesses facebook.com (except social plugin iframes), and the cookie helps us identify suspicious login activity and keep users safe. For instance, we use it to flag questionable activity like failed login attempts and attempts to create multiple spam accounts.”

The lu cookie is also set the first time a browser visits facebook.com, and is used to pre-fill the user's email address in the login form. The purpose of it, as per Facebook again, is:

“the ‘lu’ cookie helps protect people using public computers. The data it contains is used to make subtle changes to the login form, such as prefilling your email address and unchecking the “Keep me logged in” option if we detect multiple users signing in with the same browser. If you log out, this cookie does not contain your user id and Facebook will not prefill the email field.”

These cookies, by the very purpose they serve, uniquely identify the browser being used - even after logout. As a user, you have to take Facebook at their word that these cookies are used only for what is described. The a_user cookie, which identified your user account, has now been fixed; these remaining cookies identify only the browser and are not re-associated with your logged-in account.

Most of the remaining cookies are not very interesting - they set things like the language of your browser and device dimensions. The most interesting cookie, for me (after the user ID, obviously), was act. The values for this cookie for the requests I logged were 1316962370811/2;, 1316972790935/11; and 1317032073811/0;. It is a timestamp for each request, in milliseconds since the UNIX epoch (1st January 1970). What interested me was that not only was the timestamp accurate to milliseconds (i.e. thousandths of a second) but that an additional number was being appended to it. My gut instinct was that the additional number (i.e. the /11, /0 and /2 in those examples) was being added to make the timestamp unique for each and every request. Facebook confirmed this:

“It is a monotonically increasing counter of actions since the start of logging. As we shared, this is for the collection of performance data -- nothing else.”

I understand the technical reason for that - they can store the timestamp as a primary key in their logging backend and not have to associate benchmarking of each request back to a user. I believe Facebook here when they say that although this is a unique identifier it isn't used to link back to a user id - but it is definitely being logged and it can be linked to a user.
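The format, as described, is easy to reproduce. This is a hypothetical reconstruction of how such a value could be generated, not Facebook's actual code:

```python
import itertools, re, time

counter = itertools.count()   # monotonically increasing per-session counter

def act_value():
    # Milliseconds since the UNIX epoch, then '/' and the action count --
    # the shape of the 'act' values observed above.
    return "%d/%d" % (int(time.time() * 1000), next(counter))

samples = [act_value() for _ in range(3)]
print(samples)   # e.g. ['1316962370811/0', '1316962370811/1', '1316962370812/2']
assert all(re.fullmatch(r"\d{13}/\d+", s) for s in samples)
```

Because the counter makes every value unique even within a single millisecond, each value can serve as a primary key for a logged request - which is exactly what makes it linkable if it is ever stored alongside a user ID.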

Where Now

Facebook has changed as much as they can change with the logout issue. They want to retain the ability to track browsers after logout for safety and spam purposes, and they want to be able to log page requests for performance reasons. I would still recommend that users clear cookies or use a separate browser, though. I believe Facebook when they describe what these cookies are used for, but that is no reason to be complacent on privacy issues; take the initiative in remaining safe.

I discovered a lot of other issues and interesting areas ripe for further investigation while researching the cookie logout issue - and I will be taking each one of them up on the blog here in the near future.

I must thank Gregg Stefancik, an engineer at Facebook who reached out (and also left the 'official' Facebook response as a comment on the previous post) and who worked with us on this issue. Thank you as well to other Facebook engineers who reached out. On my end, Ashkan Soltani and Brian Kennish (author of the excellent Disconnect browser plugins that every user should be running) were invaluable in providing tests, advice and additional sets of eyes.
https://nikcub.appspot.com/facebook-...plains-cookies

General Motors Co.’s OnStar vehicle navigation service said it won’t collect data on the driving habits of customers who cancel their subscriptions, reversing a policy shift that drew protests from three U.S. senators.

OnStar told customers in an e-mail last week that it would continue collecting information from vehicles of subscribers who drop the service. Customers would have been required to contact OnStar to halt data collection under the policy change, which had been due to go into effect Dec. 1.

“We realize that our proposed amendments did not satisfy our subscribers,” OnStar President Linda Marshall said in a news release. “We listened, we responded and we hope to maintain the trust of our more than 6 million customers.”

Senator Charles Schumer, a New York Democrat, yesterday called on the Federal Trade Commission to investigate OnStar over its data-collection policy, calling it “one of the most brazen invasions of privacy in recent memory.” Senators Al Franken of Minnesota and Christopher Coons of Delaware also objected to OnStar’s revised privacy policy in a letter to the company last week.

“OnStar’s decision is the right one and sets a good precedent for the future,” Schumer said in an e-mail today. “This announcement puts decisions about personal privacy back where they belong, in the hands of individuals.”

6 Million Subscribers

“I’m glad that OnStar heard our concerns and has decided to reverse course,” Coons said in a news release. “OnStar’s announcement today is an important step toward restoring the trust consumers had placed in it, and I hope that other companies learn from this experience.”

OnStar delivers navigation and security features such as emergency assistance to GM cars and other vehicles using the global-positioning system. The service has more than 6 million subscribers in the U.S., Canada and China, according to the company’s website.

The company, under its existing privacy policy, may still sell or share “anonymized” data on current customers’ vehicle location, speed and seat-belt use with third parties.

Claudia Bourne Farrell, an FTC spokeswoman, declined to comment. Marshall said OnStar had not been contacted by the agency over its privacy policy.

"I never forget a face," goes the Marx Brothers one-liner, "but in your case, I'll be glad to make an exception."

Unlike Groucho Marx, unfortunately, the cloud never forgets. That's the logic behind a new application developed by Carnegie Mellon University's Heinz College and Google that's designed to take a photograph of a total stranger and, using the facial recognition software PittPatt, track down their real identity in a matter of minutes. Facial recognition isn't that new -- the rudimentary technology has been around since the late 1960s -- but this system is faster, more efficient, and more thorough than any other system ever used. Why? Because it's powered by the cloud.

The logic of the new application is based on a series of studies designed to test the integration between facial recognition technology and the wealth of data accessible in the cloud (by which we basically mean the Internet). Facial recognition's law enforcement uses -- to identify criminals out of a surveillance video tape, say -- have always been limited by the criminal databases available for reference. When Florida deployed Viisage facial recognition software in January 2001 to search for potential troublemakers and terrorists in attendance at Super Bowl XXXV, police in Tampa Bay were only able to extract useful information on 19 people with minor criminal records who already existed in any database they had access to. But the Internet was a much smaller place in 2001; Google was in its infancy, and the sheer volume of data available in a simple search simply didn't exist.

Often, the problems with facial recognition are rooted in the need for greater processing power, human and machine. After revelers rioted in the streets of Vancouver following the Canucks' defeat in the Stanley Cup, Vancouver police received nearly 1,600 hours of footage from bystanders furious with their fellow citizens; the department was woefully ill-equipped to handle the sudden influx of data, anticipating that it would take nearly two years to analyze all the information. Vancouver's Digital Multimedia Evidence Processing Lab was able to cut the processing time to a mere three weeks with a relatively small 20-workstation lab.

With Carnegie Mellon's cloud-centric new mobile app, the process of matching a casual snapshot with a person's online identity takes less than a minute. Tools like PittPatt and other cloud-based facial recognition services rely on finding publicly available pictures of you online, whether it's a profile image for social networks like Facebook and Google Plus or from something more official from a company website or a college athletic portrait. In their most recent round of facial recognition studies, researchers at Carnegie Mellon were able to not only match unidentified profile photos from a dating website (where the vast majority of users operate pseudonymously) with positively identified Facebook photos, but also match pedestrians on a North American college campus with their online identities.

The repercussions of these studies go far beyond putting a name with a face; researchers Alessandro Acquisti, Ralph Gross, and Fred Stutzman anticipate that such technology represents a leap forward in the convergence of offline and online data and an advancement of the "augmented reality" of complementary lives. With the use of publicly available Web 2.0 data, the researchers can potentially go from a snapshot to a Social Security number in a matter of minutes:

“We use the term augmented reality in a slightly extended sense, to refer to the merging of online and offline data that new technologies make possible. If an individual's face in the street can be identified using a face recognizer and identified images from social network sites such as Facebook or LinkedIn, then it becomes possible not just to identify that individual, but also to infer additional, and more sensitive, information about her, once her name has been (probabilistically) inferred.

In our third experiment, as a proof-of-concept, we predicted the interests and Social Security numbers of some of the participants in the second experiment. We did so by combining face recognition with the algorithms we developed in 2009 to predict SSNs from public data. SSNs were nothing more than one example of what is possible to predict about a person: conceptually, the goal of Experiment 3 was to show that it is possible to start from an anonymous face in the street, and end up with very sensitive information about that person, in a process of data "accretion." In the context of our experiment, it is this blending of online and offline data - made possible by the convergence of face recognition, social networks, data mining, and cloud computing - that we refer to as augmented reality.”

Naturally, the development of such software inspires understandably Orwellian concerns. Jason Mick at DailyTech notes that PittPatt started as a Carnegie Mellon University research project, which spun off into a company after 9/11. "At the time, U.S. intelligence was obsessed with using advanced facial recognition to identify terrorists," writes Mick. "So the Defense Advanced Research Projects Agency (DARPA) poured millions into PittPatt." While Google purchased the company in July, the potential for such intrusive technology to be used against law-abiding citizens is cause for concern.

While private organizations may vie for a piece of PittPatt's proprietary technology for marketing or advertising purposes, the idea that such technology could be utilized by a tech savvy member of the public towards criminal, fraudulent, or extralegal ends is as alarming as the potential for governmental abuse. England saw this in the wake of the rioting, looting, and arson that swept across the country when a Google group of private citizens called London Riots Facial Recognition emerged with the aim of using publicly available records and facial recognition software to identify rioters as a form of digital vigilantism. The group eventually abandoned its efforts when its experimental app, based on the much maligned photo-tagging facial software Face.com, yielded disappointing results. "Bear in mind the amount of time and money that people like Facebook, Google, and governments have put into work on facial recognition compared to a few guys playing around with some code," the group's organizer told Kashmir Hill at Forbes. "Without serious time and money we would never be able to come up with a decent facial recognition system."

The research team at Carnegie Mellon understands the potential problems posed by this convergence of facial recognition technology and the vast Web of publicly available information. Alessandro Acquisti told Steve Hann at Marketwatch after a demonstration that the prospect of selling his new app or making it available to the public "horrifies" him. And while there are certainly limits to what software like PittPatt can distill from the cloud, the closing gap between life offline and life in the cloud is becoming more observable with each progressive breakthrough:

“So far, however, these end-user Web 2.0 applications are limited in scope: They are constrained by, and within, the boundaries of the service in which they are deployed. Our focus, however, was on examining whether the convergence of publicly available Web 2.0 data, cheap cloud computing, data mining, and off-the-shelf face recognition is bringing us closer to a world where anyone may run face recognition on anyone else, online and offline - and then infer additional, sensitive data about the target subject, starting merely from one anonymous piece of information about her: the face.”

I'm reminded in particular of this quote from Google's then-CEO Eric Schmidt during a 2009 CNBC special report on the company:

“I think judgment matters. If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place. If you really need that kind of privacy, the reality is that search engines -- including Google -- do retain this information for some time and it's important, for example, that we are all subject in the United States to the Patriot Act and it is possible that all that information could be made available to the authorities.”

The relevant point here is not Schmidt's thought on behavior and choice but the fact that, no matter what you choose to do or not do, your life exists in the cloud, indexed by Google, in the background of a photo album on Facebook, and across thousands of spammy directories that somehow know where you live and where you went to high school. These little bits of information exist like digital detritus. With software like PittPatt that can glean vast amounts of cloud-based data when prompted with a single photo, your digital life is becoming inseparable from your analog one. You may be able to change your name or scrub your social networking profiles to throw off the trail of digital footprints you've inadvertently scattered across the Internet, but you can't change your face. And the cloud never forgets a face. http://www.theatlantic.com/technolog...ifying/245867/

International Business Machines Corp. passed Microsoft Corp. to become the world’s second-most valuable technology company, a reflection of industry changes including the shift away from the personal computer.

IBM’s market value rose to $214 billion yesterday, while Microsoft’s fell to $213.2 billion, the first time IBM has exceeded its software rival based on closing prices since 1996, according to Bloomberg data. IBM is now the fourth-largest company by market value and, in technology, trails only Apple Inc., the world’s most valuable company.

Chief Executive Officer Sam Palmisano sold IBM’s PC business six years ago to focus on corporate software and services. Though Microsoft has expanded into online advertising and games, it gets most of its revenue and earnings from the Windows and Office software used primarily on PCs.

“IBM went beyond technology,” said Ted Schadler, an analyst with Forrester Research Inc. “They were early to recognize that computing was moving way beyond these boxes on our desks.”

IBM, based in Armonk, New York, has gained 22 percent this year, while Microsoft, based in Redmond, Washington, has dropped 8.8 percent. IBM rose $1.62 to $179.17 yesterday in New York Stock Exchange composite trading, and Microsoft fell 13 cents to $25.45 in Nasdaq Stock Market trading.

Apple, which long competed against IBM and Microsoft in the PC business, passed Microsoft in market value last year, on rising sales of iPhones, iPods and iPads. Apple’s market capitalization is now $362.1 billion.

Palmisano’s Strategy

Palmisano, who is also chairman, has spent his nine years at the helm sharpening the company’s focus on software and services for corporations and government. Once known as the world’s largest computer company, IBM in 2005 sold its PC unit to Lenovo Group Ltd., calling it “commoditized.” The company has spent more than $25 billion investing in its software, computer-services and consulting businesses.

The maneuvers have helped increase per-share profit for more than 30 straight quarters. Palmisano has boosted sales by 20 percent from 2001 through last year, while keeping the costs of the 426,000-employee company little changed. IBM pulled in more than half of its $99.9 billion in revenue last year from services and is now the world’s largest computer-services provider.

The company is betting it can add another $20 billion to revenue through 2015. Palmisano is investing in emerging markets and analytics, as well as cloud-computing and an initiative called Smarter Planet to connect roads, electrical systems and other infrastructure to the Internet.

Share Record

“Computing is now found in things that no one thinks of as ‘computers’,” said Palmisano at a trade show keynote in February. “Today, there are nearly a billion transistors per human, and each one costs one ten-millionth of a cent. Yes, some of these transistors are going into servers, PCs, smart phones, MP3 players and tablets. But an increasing number of them are going into appliances and automobiles, power grids, roadways, railways and waterways.”

IBM plans to almost double operating earnings to at least $20 a share in 2015. Investors have taken notice: Shares have climbed 35 percent since the company first announced the goal in May 2010.

Microsoft’s Slump

Microsoft, the world’s largest software company, was worth three times as much as IBM in January 2000 and hit a market capitalization of more than $430 billion in July 2000, according to Bloomberg data. Microsoft fell to about $135 billion in March 2009 during the economic downturn, before recovering with the market.

Microsoft, which had $69.9 billion in revenue for the fiscal year ending in June, got about 60 percent of its sales from the Windows and Office units in the most recent quarter.

“They were trapped in the classic ‘innovator’s dilemma’ because their software business was so good,” said Schadler. “The bet that Microsoft made in the PC business was to double down and double down and double down.”

CEO Steve Ballmer said investors may not appreciate the company’s progress in other businesses, including server software and online versions of Office, given the higher profile of its consumer businesses.

“People are saying, ‘Where do you go next?’,” said Ballmer at the company’s annual meeting in November. There probably isn’t “as much appreciation for the incredible growth and success we’ve had with enterprises since people relate better to the consumer market. But it’s great products with great earnings and particularly in some high-visibility categories.”

Xbox, Bing

The company’s server software and Office divisions boosted sales last quarter, as did the entertainment division, which includes its Xbox games business. Revenue at the online services division, including the Bing search engine, climbed to $662 million, while its operating loss widened to $728 million.

Microsoft also cut a deal with Nokia Oyj this year to make its Windows Phone the primary operating system for the company’s smartphones. The deal is designed to help both companies compete against Apple and Google Inc.’s Android operating system, which is available for free to handset makers such as Motorola Mobility Holdings Inc. and Samsung Electronics Co.

Still, mobile computing is unlikely to ever be as profitable for Microsoft as the PC business, said Forrester’s Schadler.

“They’re never going to win in that business the way they did in the PC business,” he said.

HP has delivered on its promise to offer its employees first refusal on the last ever - and we mean it this time, honest - batch of TouchPad webOS-powered tablets, and demand has been high enough to take the company's website offline.

In a small-scale mirror of the rush to buy the discontinued tablet when retailers dropped its price to £89, HP's Employee Purchase Programme site - a staff-only service where discounted goods can be procured by the lucky few - has been bogged down by demand.

According to an HP staffer speaking to company watcher webOSroundup, it took almost three hours to place an order for a single TouchPad once news broke of their availability. Worse still, many users were left with error messages complaining of high traffic and were unable to order their discount tablets at all.

With demand proving as high as ever, it seems increasingly unlikely that any of the limited production run - chosen by HP as the most logical choice to dispose of unused parts in its inventory while helping to cushion the blow for its OEM manufacturing partners - will make it out into the channel for retail sale.

That's bad news for those who were left out in the cold during the original sale. With efforts to port Google's Android platform - which enjoys a far larger developer ecosystem along with the seemingly guaranteed future which the discontinued webOS lacks - nearing completion, the TouchPad is fast becoming the tablet bargain of the year, even now that Amazon has gone public with its Kindle Fire device. http://www.thinq.co.uk/2011/9/29/hp-...ver-touchpads/

Amazon’s Tablet Leads to Its Store
Jenna Wortham and David Streitfeld

With a glossy 7-inch color touch screen and a dual-core processor, the Kindle Fire, a new mobile device introduced by Amazon on Wednesday, sure looks like a tablet, and one not so different from the Apple iPad.

But Jeffrey P. Bezos, Amazon’s founder and chief executive, has another word for it.

“I think of it as a service,” he said in an interview on Wednesday. “Part of the Kindle Fire is of course the hardware, but really, it’s the software, the content, it’s the seamless integration of those things.”

Amazon is counting on its vast online warehouse of more than 18 million e-books, songs, movies and television shows, as well as access to a selection of Android applications, to help it beat competitors like the iPad and the Nook from Barnes & Noble. Previous Kindles were only e-book readers with black-and-white screens.

The access to content is important as Amazon transforms its business into a digital retailer and responds to consumer demand for mobile devices, lest it wind up in a retail graveyard like Borders, a former peer.

“It will appeal to a different set of customers who are magazine readers and cinema fans,” Mr. Bezos said.

The other advantage Mr. Bezos is counting on is price: the Fire will sell for $199 while the cheapest iPad sells for $499. Amazon began taking orders for the Fire on its Web site on Wednesday; it will start shipping them Nov. 15.

Mr. Bezos took the stage on Wednesday at a news conference held in Manhattan to show off the Kindle Fire. The tablet, which weighs less than a pound and can fit comfortably in the palm of a hand, builds on the company’s popular line of e-readers.

Amazon is hoping it appeals to a broader audience that also wants to browse the Web and stream music, movies and video from a mobile device. The Kindle Fire also has access to a virtual newsstand that includes content from magazines like Wired, Vanity Fair and Cosmopolitan.

Amazon custom-built the Fire’s mobile Web browser, called Amazon Silk, so that it loads media-rich Web pages faster by shifting some of the work onto Amazon’s cloud computing engine, called EC2. “It’s truly a technical achievement,” Mr. Bezos said.

The Kindle Fire’s 8 gigabytes of storage can hold 80 apps plus either 10 movies, 800 songs or 6,000 books. The tablet also includes a free cloud-based storage system, meaning that no syncing with cables is necessary.

The Kindle Fire is missing some things the iPad 2 has — most notably, a camera and a microphone for video calls. The Fire can send and receive data only over Wi-Fi, not cellular networks.

The device’s $199 price tag is less than half that of many tablet computers on the market, including the HTC Flyer, which also features a 7-inch screen but sells for $499 at Best Buy. The Kindle Fire will also compete with the Color Nook e-reader, developed by Barnes & Noble, which has enjoyed healthy sales at $249.

Amazon can afford to charge less because it hopes to make up the difference by selling books, movies and popular television shows. Customers may also be more inclined to pay $79 a year for Amazon Prime, which gives them access to Amazon’s movie streaming service and free shipping, which, in turn, encourages more shopping at Amazon.com.

Because Amazon sells its family of Kindle devices through its own Web site, it does not need to share revenue with another retailer. And in most states, customers do not have to pay sales tax on those devices.

“If you price your products in such a way that no one can compete with you, that has to be a good thing in the end,” said Scott Devitt, an analyst at Morgan Stanley.

On Wednesday, Mr. Bezos also introduced two new touch-screen Kindles, and a slimmer monochrome-screen Kindle, that range in price from $79 to $149.

Apple has secured a strong lead in tablets, selling more than 29 million iPads in the product’s first 15 months on the market. Mr. Bezos says that he expects shoppers will put both Kindles and iPads in their carts.

By entering the magazine-selling business, Amazon has also planted a flag in a digital marketplace that has so far been dominated by Apple.

With another player — particularly one that is as large and influential with consumers as Amazon — magazine companies could suddenly find that they have a useful bargaining chip when it comes to negotiating with Apple.

The prices of magazine subscriptions on the Fire are higher than what readers would pay in print. Condé Nast, publisher of magazines like GQ, Vanity Fair and Glamour, is selling most of its publications for $20 a year, nearly twice what it charges in print.

Several magazines will be priced even higher, like The New Yorker, which will be $60 a year on the Fire. “It helps us establish that higher price point as our new benchmark,” said Bob Sauerberg, president of Condé Nast.

Mr. Bezos is confident in the company’s strategy. “Some of the tablets that have come on the market, the reason they haven’t been successful is because they weren’t services,” he said. “They were just tablets.”

Analysts say that the new family of devices will corral users into a tightly walled garden around Amazon’s content and devices and may secure a new dominance for Amazon as an online retailer and technology company. Music is streamed using Amazon’s Cloud Player, while movies and television shows are viewed through Amazon Instant Player. E-books rely on the Kindle app.

Owners will have access only to Android apps approved by Amazon and distributed through its Amazon Android Store. Even the Fire’s software, based on a Google Android framework, is disguised under a custom layer built by Amazon.

“From a customer point of view, it’s unrecognizable as Android,” said Mr. Bezos, who said the company chose not to work closely with Google to develop the Fire, unlike most hardware makers that build products on Android.

“The Kindle feels more locked down than the iPad,” said Ross Rubin, an analyst at the NPD Group, the market research firm.

More than most companies, Amazon thinks in terms of years and decades rather than quarters.

The original Kindle was meant to remove the retailer’s reliance on the physical book at a moment when a successful e-reader appeared inevitable. Amazon decided it was better to cannibalize its own future than let a competitor do it.

With the Fire, every dollar Amazon loses on the device could be more than made up for by the data gained. The Silk browser, by virtue of being situated in the cloud, will record every Web page that users visit. That has implications for privacy and commerce.

“Amazon now has what every storefront lusts for: the knowledge of what other stores your customers are shopping in and what prices they’re being offered there,” Chris Espinosa, an Apple engineer, wrote on his personal blog.

Amazon has created a new browser for the Kindle Fire tablet, one with a cloud-based architecture
Joab Jackson

While the Kindle Fire tablet consumed much of the focus at Amazon's launch event Wednesday in New York, the company also showed off a bit of potentially radical software technology as well, namely the new browser for the Fire, called Silk.

Silk is different from other browsers because it can be configured to let Amazon's cloud service do much of the work assembling complex Web pages. The result is that users may experience much faster load times for Web pages, compared to other mobile devices, according to the company.

Amazon CEO Jeff Bezos introduced Silk during his keynote after unveiling the company's US$199 Kindle Fire tablet, which will be available Nov. 15.

During the introduction, Bezos noted that most modern Web pages, such as Amazon's own or CNN's, are complex creations, with multiple photos, animations, and complex scripts and mark-up code. The CNN home page, for instance, is built by the browser from 53 static images, 39 dynamic images, three Flash files, 30 JavaScript files from seven different domains, 29 HTML files and seven CSS (Cascading Style Sheet) files.

"The modern Web has become a complicated place," Bezos said. As a result, "It is difficult -- challenging -- for mobile devices to display modern Web pages rapidly."

All the user's Web page requests will be sent through a service in the Amazon Elastic Compute Cloud (EC2) for processing. The service will act as a caching service, as well as a staging area where the more complex bits of Web pages can be pre-processed before being redirected to the user's browser.

Silk is fully functional as a stand-alone browser, explained Jon Jenkins, director of platform analysis at Amazon.com, at a demonstration booth after the event. It supports HTML5, JavaScript, CSS and associated next-generation Web standards. It also supports Flash. Amazon built the software from the ground up, using the WebKit open-source browser engine.

All the user's requests, however, are directed to the EC2 service, which then fetches the pages from the source and optimizes the content for the platform. Complex parts of JavaScript may be pre-processed and images may be downsized to a more manageable size. Many common but rarely updated elements of a popular Web page are served directly from the EC2 cache, such as the CNN.com logo.

"EC2 knows that logo hasn't changed for months, so it doesn't wait until getting the HTML file back before pushing that logo back to you," Jenkins said.

The site's original content, as well as content personalized for each user, will be requested from the content provider.

The service also uses content compression techniques, such as re-encoding video and images before sending them to a device. The service also keeps connections constantly open to popular websites, which reduces the time needed to negotiate connections on a one-to-one basis.
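The core idea described above - the cloud side serving rarely-changing resources from its own cache and only hitting the origin server for dynamic content - can be illustrated with a toy sketch. This is not Amazon's implementation; `CloudSideProxy` and everything else here are hypothetical names for illustration only.

```python
# Toy illustration of a split-browser caching proxy: the "cloud" side
# keeps rarely-changing resources (logos, stylesheets) in a cache and
# contacts the origin server only for content it has not seen before
# or that is marked non-cacheable.

class CloudSideProxy:
    def __init__(self, origin_fetch):
        self.origin_fetch = origin_fetch  # callable: url -> response bytes
        self.cache = {}                   # url -> cached response body

    def request(self, url, cacheable=False):
        if cacheable and url in self.cache:
            return self.cache[url]        # served straight from the cloud cache
        body = self.origin_fetch(url)     # round-trip to the origin server
        if cacheable:
            self.cache[url] = body
        return body

# Simulated origin server that counts how often it is actually contacted.
hits = {"count": 0}
def origin(url):
    hits["count"] += 1
    return b"payload for " + url.encode()

proxy = CloudSideProxy(origin)
proxy.request("http://example.com/logo.png", cacheable=True)   # cache miss
proxy.request("http://example.com/logo.png", cacheable=True)   # cache hit
proxy.request("http://example.com/index.html")                 # always fetched
print(hits["count"])  # the origin was contacted only twice for three requests
```

In the real service the cache sits in EC2 near the origin's content, so even a cache miss benefits from the persistent, pre-warmed connections described above; the sketch only captures the request-routing logic.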

Amazon also sped operations by doing away with the HTTP protocol, which is normally used to convey Web pages from the server to the user. The HTTP protocol "is not the most efficient protocol of the modern Web," Jenkins said. "It doesn't multiplex content well -- it is hard to get a bidirectional flow of content."

As an alternative, Silk uses a variant of the Google SPDY protocol. HTTP is still used between the content provider and the EC2.

The browser will determine whether to download the mobile or the static version of any given website, based on the capabilities of the hardware, as well as the richness of the site itself. "It learns effectively as you're browsing to get the best possible version of the content to you," Jenkins said. This works particularly well on popular sites, where many of the common elements can be cached.

Of course, the fact that all the user's Web browsing is being directed through Amazon will raise the interest of privacy advocates, who might see the technology as invasive. Jenkins denied that Amazon would be doing any personal traffic analysis, though. "There is no personal information stored on the EC2 at all," Jenkins said. He also noted that it is possible for users to turn off the EC2 service altogether and use the browser in a standard way.

The company also spent a lot of time making sure that one user would not accidentally be served another user's content when checking popular sites such as Facebook. "Some of the earlier efforts that other companies made at this did result in that. So we thought very very carefully about that. That was just unacceptable as an outcome."

Amazon's approach to the tablet is an "interesting spin" in a cluttered market, one analyst said.

"While the split browser architecture is not new, Opera having been a player for a couple of years, I find the overall strategy to be an interesting spin on the me-too Android software we have seen so far, and possibly a game changer," noted Al Hilwa, IDC analyst for applications development software. "In one fell swoop Amazon harnesses its commanding lead in cloud services, the content richness of a leading online retailer and its successful Kindle business strategy to deliver what might become one of the most effective antidotes to the mobile bandwidth crunch."

Amazon came out with their newest line of Kindle ebook readers today, including the appropriately named "Kindle Fire".

To quote their TV commercial: "The instruction we find in books is like fire. We fetch it from our neighbours, kindle it at home, communicate it to others, and it becomes the property of all."

This device does not kindle that fire -- it extinguishes it, with more of the same digital restrictions.

Let's look at the facts:

* Amazon claims you have no right to sell or share the books you buy. They advertise a "lending" feature which, at best, allows you to lend a book one time *ever*, to one person, who must also be a Kindle user. You don't get to make the decision about whether you can lend a book or not -- the publisher and Amazon do.

That's not sharing.

* In fact, when people tried to cooperate to make large-scale use of the lending function, Amazon shut them down. The most prominent example of this was the web site Lendle, which is back up now, albeit with fewer features: it lost a feature which made it easy to lend the books you have without typing in all the titles -- a move forced on them by Amazon to discourage sharing.

* Amazon is working its way into public libraries and schools now, subverting the functioning of the very places they, in the above quote, claim to support.

* Via the wireless connectivity of these devices, Amazon can hold data about everything you read.

* Also via the connectivity, Amazon can delete books from Kindles. They have already done this multiple times. They say they won't do it anymore, but they make users sign an agreement which still gives them the authority to do so. They have demonstrated only reasons to doubt their word.

* Although it is possible to use the Kindle for DRM-free materials, that is not the system that Amazon is promoting or working most actively toward. Funding Amazon's work in this area, even if you use it differently, is supporting their moves at limiting sharing and access to books.

**The result: More of the same: A major threat to the shareability -- like fire -- that has enabled human culture and knowledge to advance.**

## Take action!

* Send a message to Amazon's Kindle Team via Twitter -- @amazonkindle -- be wary of using Twitter directly, as it uses proprietary JavaScript. Using your Twitter account via identi.ca is a good choice.

* Contact Amazon customer service (chat, phone and email support are available) and ask them to drop DRM from the Kindle.

Joe Hewitt, one of the most important software developers in recent history, published a provocative and sad post on his personal blog today, predicting that unless the open and free Web gets someone to own and take responsibility for advancing it, it will inevitably fall into virtual obscurity in the dust of fast evolving platforms like iOS, Android and Windows. Chris White, one of the co-founders of Android, offers a compelling argument against Hewitt's perspective, though.

Hewitt was one of the primary co-creators of Firefox; he single-handedly built the Facebook iPhone app, and when he left Facebook, fed up with Apple's approval process for apps, he announced that his next aim was to build tools for mobile HTML5 developers. Apparently that work has led to some frustrating experiences trying to support the open web. It's not surprising, but it is pretty heartbreaking. It's hard to imagine a decentralized platform like the web evolving to make as many things possible, as quickly and at scale, as the big centralized app platforms.

"The Web has no one who can ensure that the platform acquires cutting edge capabilities in a timely manner (camera access, anyone?)," Hewitt writes. "The Web has no one to ensure that it is competitive with other platforms, and so increasingly we are seeing developers investing their time in other platforms that serve their needs better... I can easily see a world in which Web usage falls to insignificant levels compared to Android, iOS, and Windows, and becomes a footnote in history. That thing we used to use in the early days of the Internet."

Hewitt says standards bodies are debilitatingly slow, that Web-first evangelists are guilty of staggering arrogance that puts principles above relevance for users and developers and that apps just won't run on the web in the future unless something changes dramatically.

The web needs an owner, Hewitt argues. It needs a single code repository and a strong leader to push it forward.

So far it seems that most people are in disappointed agreement with Hewitt. One who's not is Portland, Oregon internet marketer Uriah Maynard. "Arguing for 'an owner' of the web is like winning the American revolution and then arguing that we need a king," Maynard says in articulating a counter-position well. "No owners, no masters. That is a killer feature of the web, and the reason it will never die, even if it fades in popularity. What we need is to learn how to efficiently run truly democratic organizations." Uriah's in Portland and clearly needs to put a bird on it.

Chris White, one of the co-founders of the Android OS, puts it a little bit differently. "The web is only interesting because it's a standard," White writes, on Google Plus.

"As new experiences become commonplace, they get rolled into the one standard platform Microsoft, Google, Apple, Facebook, et al agree on. The cutting edge will always occur on proprietary platforms first. Asking for a private entity to control the web is like asking for a sovereign country to control the United Nations (or the world)."

StatCounter's data points to a December 2011 takeover by Google's browser
Gregg Keizer

Google's Chrome is on the brink of replacing Firefox as the second-most-popular browser, according to one Web statistics firm.

Data provided by StatCounter, an Irish company that tracks browser usage using the free analytics tools it offers websites, shows that Chrome will pass Firefox to take the No. 2 spot behind Microsoft's Internet Explorer (IE) no later than December.

As of Wednesday, Chrome's global average user share for September was 23.6%, while Firefox's stood at 26.8%. IE, meanwhile, was at 41.7%.

The climb of Chrome during 2011 has been astonishing: It has gained eight percentage points since January 2011, representing a 50% increase.

During that same period, Firefox has dropped almost four percentage points, a decline of about 13%, while IE has also fallen four points, a 9% dip.

That means Chrome is essentially reaping all the defections from Firefox and IE.

If the trends established thus far this year continue, Chrome will come close to matching Firefox's usage share in November, then pass its rival in December, when Chrome will account for approximately 26.6% of all browsers and Firefox will have a 25.3% share.
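The December figures are simple linear extrapolation; a minimal sketch (using the article's own numbers, and assuming share keeps changing at the same constant monthly rate) reproduces the projections:

```python
# Minimal sketch of the linear extrapolation implied by the article's
# projections (assumption: constant monthly rate of change).

def project(share_now: float, monthly_delta: float, months_ahead: int) -> float:
    """Extrapolate a browser's usage share linearly."""
    return share_now + monthly_delta * months_ahead

# Chrome gained ~8 points over Jan-Sep 2011 (~1 point/month);
# Firefox lost ~4 points over the same 8 months (~0.5 point/month).
chrome_dec = project(23.6, 8 / 8.0, 3)    # September -> December
firefox_dec = project(26.8, -4 / 8.0, 3)

print(round(chrome_dec, 1), round(firefox_dec, 1))
```

Running the numbers this way lands Chrome at roughly 26.6% and Firefox at 25.3% in December, matching the projection above.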

StatCounter is not the only Web metrics company that publicly posts browser share statistics, however. Data provided by U.S.-based Net Applications, for example, shows a much bigger gap between Firefox and Chrome: In its numbers for August, Net Applications had Firefox with a 22.6% share of desktop browser usage, and Chrome at 15.5%.

Using Net Applications numbers, Chrome could have a 17.8% share by the end of 2011, short of Firefox's projected 22.3%. But if the pace of change lasts, Chrome should pass Firefox on Net Applications' chart by mid-2012.

Because Net Applications weights its numbers to better estimate usage share in countries from which relatively few users navigate to the sites it monitors, the company's data theoretically paints a more accurate picture: it factors in the huge Chinese market.

Some browser makers -- Microsoft in particular -- cite that as a reason why they regularly defer to Net Applications' numbers. Not coincidentally, Net Applications pegs IE with a much higher share -- 55.3% -- than do other metrics firms such as StatCounter.

The Free Press has filed a lawsuit against the Federal Communication Commission, challenging the net neutrality rules laid out by the regulator.

The lawsuit, which was filed in the U.S. Court of Appeals for the First Circuit in Boston, claims that the net neutrality rules laid down by the regulator are different for fixed line and mobile wireless broadband.

The net neutrality rules laid down by the FCC prevent fixed line broadband providers from discriminating between websites but the same rules don’t apply to mobile wireless carriers.

According to the rules, mobile wireless carriers are not allowed to block voice and other applications that compete with their own services, but other than that, they are free to do what they want.

“When the FCC first proposed the open Internet rules, they came with the understanding that there is only one Internet, no matter how people choose to reach it," Free Press Policy Director Matt Wood said in a statement.

"The final rules provide some basic protections for consumers, but do not deliver on the promise to preserve openness for mobile Internet access. They fail to protect wireless users from discrimination, and they let mobile providers block innovative applications with impunity," he added.

BitTorrent Offers Tech to Decongest ISPs' Networks
Stephen Shankland

BitTorrent, a company that's enabled network-crushing levels of file sharing, can be seen as Internet service providers' natural opponent. But the company's chief executive today entered the lion's den with a surprising message:

"I'm actually here to help."

How? In a speech at the Broadband World Forum here, BitTorrent CEO Eric Klinker tried to build enthusiasm for his company's Micro Transport Protocol, or μTP, an open-source technology that's built into the company's client software for sharing files over peer-to-peer connections. μTP increases network efficiency and addresses congestion--the biggest concern that ISPs raised a few years ago during the heated network neutrality debates, Klinker said.

Much data today is sent over the Internet with the Transmission Control Protocol, or TCP, but Klinker argued that its method of finding out when there are congestion troubles is too little, too late. TCP breaks information down into numerous individually addressed packets that are reassembled at the other end of the network link, monitoring constantly for packets that fail to arrive.

"TCP detects congestion based on lost packets," Klinker said. "This is a lot like driving your car through a school zone and only slowing down after you've struck your first pedestrian."

In contrast, μTP detects congestion earlier and steps out of the way when it discovers a problem, Klinker said.

"It was designed in its philosophy to yield to traffic," Klinker said. "μTP will no longer be the cause of any congestion on the Internet because of these mechanisms."
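The distinction Klinker draws can be sketched in a few lines of code. This is a hypothetical simplification, not the real μTP/LEDBAT implementation (the constants and function names are illustrative): a loss-based sender only backs off after a packet is dropped, while a delay-based sender shrinks its window as soon as measured queuing delay exceeds a target.

```python
# Hypothetical sketch of the two congestion signals (constants and
# function names are illustrative, not from the uTP specification).

TARGET_DELAY_MS = 100.0  # LEDBAT-style queuing-delay target
GAIN = 1.0               # how strongly the window tracks the target

def loss_based_update(window: float, packet_lost: bool) -> float:
    """TCP-style: react only after a loss -- after 'striking a pedestrian'."""
    if packet_lost:
        return max(1.0, window / 2)  # multiplicative decrease
    return window + 1.0              # additive increase

def delay_based_update(window: float, base_delay_ms: float,
                       current_delay_ms: float) -> float:
    """uTP-style: back off as queuing delay builds, before any loss occurs."""
    queuing_delay = current_delay_ms - base_delay_ms
    # Grow when below the delay target, shrink when above it.
    off_target = (TARGET_DELAY_MS - queuing_delay) / TARGET_DELAY_MS
    return max(1.0, window + GAIN * off_target)
```

With a 20 ms base delay, a sender observing 220 ms of delay (200 ms of queuing) shrinks its window immediately, while a loss-based sender keeps growing until packets actually drop, which is the "yield to traffic" behavior Klinker describes.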

And avoiding congestion lowers ISP costs, he argued. "If we could somehow tackle the network congestion problem, we end up tackling the network cost issue," he said.

Of course, he also predicted more data coming to the Net. "The Internet is going to evolve, to continue its development as a multimedia network. That means a lot more big files," he said.

Here, though, he thinks BitTorrent has a role to play by helping people transfer information from all the digital cameras and other devices that can easily produce gigabytes of data.

"You'll see us roll out applications that help liberate media from those devices and share it with family and friends," he said. "The content has no value until it's shared and seen. That's hard for today's networks. The devices at the edge of the network seem to miraculously increase in capability, but the networks don't seem to change."

Poll: Young People Say Online Meanness Pervasive
Connie Cass

Catherine Devine had her first brush with an online bully in seventh grade, before she'd even ventured onto the Internet. Someone set up the screen name "devinegirl" and, posing as Catherine, sent her classmates instant messages full of trashy talk and lies. "They were making things up about me, and I was the most innocent 12-year-old ever," Devine remembers. "I hadn't even kissed anybody yet."

As she grew up, Devine, now 22, learned to thrive in the electronic village. But like other young people, she occasionally stumbled into one of its dark alleys.

A new Associated Press-MTV poll of youth in their teens and early 20s finds that most of them — 56 percent — have been the target of some type of online taunting, harassment or bullying, a slight increase over just two years ago. A third say they've been involved in "sexting," the sharing of naked photos or videos of sexual activity. Among those in a relationship, 4 out of 10 say their partners have used computers or cellphones to abuse or control them.

Three-fourths of the young people said they consider these darker aspects of the online world, sometimes broadly called "digital abuse," a serious problem.

They're not the only ones.

President Barack Obama brought students, parents and experts together at the White House in March to try to confront "cyberbullying." The Education Department sponsors an annual conference to help schools deal with it. Teen suicides linked to vicious online bullying have caused increasing worry in communities across the country.

Conduct that rises to the point of bullying is hard to define, but the AP-MTV poll of youth ages 14 to 24 showed plenty of rotten behavior online, and a perception that it's increasing. The share of young people who frequently see people being mean to each other on social networking sites jumped to 55 percent, from 45 percent in 2009.

That may be partly because young people are spending more time than ever communicating electronically: 7 in 10 had logged into a social networking site in the previous week, and 8 in 10 had texted a friend.

"The Internet is an awesome resource," says Devine, "but sometimes it can be really negative and make things so much worse."

Devine, who lives on New York's Long Island, experienced her share of online drama in high school and college: A friend passed around highly personal entries from Devine's private electronic journal when she was 15. She left her Facebook account open on a University of Scranton library computer, and a prankster posted that she was pregnant (she wasn't). Most upsetting, when she was 18 Devine succumbed to a boyfriend's pressure to send a revealing photo of herself, and when they broke up he briefly raised the threat of embarrassing her with it.

"I didn't realize the power he could have over me from that," Devine said. "I thought he'd just see it once and then delete it, like I had deleted it."

The Internet didn't create the turmoil of the teen years and young adulthood — romantic breakups, bitter fights among best friends, jealous rivalries, teasing and bullying. But it does amplify it. Hurtful words that might have been shouted in the cafeteria, within earshot of a dozen people, now can be blasted to hundreds on Facebook.

Plus, 75 percent of youth think people do or say things online that they wouldn't do or say face to face.

The most common complaints were people spreading false rumors on Internet pages or by text message, or being downright mean online; more than a fifth of young people said each of those things had happened to them. Twenty percent saw someone take their electronic messages and share them without permission, and 16 percent said someone posted embarrassing pictures or video of them without their permission.

Some of these are one-time incidents; others cross into repeated harassment or bullying.

Sameer Hinduja, a cyberbullying researcher, said numerous recent studies taken together suggest a cyberbullying victimization rate of 20 to 25 percent for middle and high school students. Many of these same victims also suffer from in-person abuse. Likewise, many online aggressors are also real-world bullies.

"We are seeing offenders who are just jerks to people online and offline," said Hinduja, an associate professor of criminal justice at Florida Atlantic University and co-director of the Cyberbullying Research Center.

And computers and cellphones increase the reach of old-fashioned bullying.

"When I was bullied in middle school I could go home and slam my door and forget about it for a while," said Hinduja. "These kids can be accessed around the clock through technology. There's really no escape."

"Sexting," or sending nude or sexual images, is more common among those over 18 than among minors. And it hasn't shown much increase in the past two years. Perhaps young people are thinking twice before hitting "send" after publicity about adults — even members of Congress — losing their jobs over sexual images, and news stories of young teens risking child pornography charges if they're caught.

Fifteen percent of young people had shared a nude photo of themselves in some way or another; that stood at 7 percent among teens and 19 percent among young adults. But almost a fourth of the younger group said they'd been exposed to sexting in some way, including seeing images someone else was showing around. And 37 percent of the young adults had some experience with "sexting" images.

Many young people don't take sexting seriously, despite the potential consequences.

Alec Wilhelmi, 20, says girlfriends and girls who like him have sent sexual messages or pictures — usually photos of bare body parts that avoid showing faces. Once a friend made a sexual video with his girlfriend, and showed Wilhelmi on his cellphone.

"I thought that was funny, because I don't know what kind of girl would allow that," said Wilhelmi, a freshman at Iowa State University.

Technology can facilitate dating abuse. Nearly three in 10 young people say their partner has checked up on them electronically multiple times per day or read their text messages without permission. Fourteen percent say they've experienced more abusive behavior from their partners, such as name-calling and mean messages via Internet or cellphone.

The AP-MTV poll was conducted Aug. 18-31 and involved online interviews with 1,355 people ages 14-24 nationwide. The margin of sampling error is plus or minus 3.8 percentage points.

The poll is part of an MTV campaign, "A Thin Line," aiming to stop the spread of digital abuse.

The survey was conducted by Knowledge Networks, which used traditional telephone and mail sampling methods to randomly recruit respondents. People selected who had no Internet access were given it for free.

She mixed herself a mojito, added a sprig of mint, put on her sunglasses and headed outside to her friend’s pool. Settling into a lounge chair, she tapped the Skype app on her phone. Hundreds of miles away, her face popped up on her therapist’s computer monitor; he smiled back on her phone’s screen.

She took a sip of her cocktail. The session began.

Ms. Weinblatt, a 30-year-old high school teacher in Oregon, used to be in treatment the conventional way — with face-to-face office appointments. Now, with her new doctor, she said: “I can have a Skype therapy session with my morning coffee or before a night on the town with the girls. I can take a break from shopping for a session. I took my doctor with me through three states this summer!”

And, she added, “I even e-mailed him that I was panicked about a first date, and he wrote back and said we could do a 20-minute mini-session.”

Since telepsychiatry was introduced decades ago, video conferencing has been an increasingly accepted way to reach patients in hospitals, prisons, veterans’ health care facilities and rural clinics — all supervised sites.

But today Skype, and encrypted digital software through third-party sites like CaliforniaLiveVisit.com, have made online private practice accessible for a broader swath of patients, including those who shun office treatment or who simply like the convenience of therapy on the fly.

One third-party online therapy site, Breakthrough.com, said it has signed up 900 psychiatrists, psychologists, counselors and coaches in just two years. Another indication that online treatment is migrating into mainstream sensibility: “Web Therapy,” the Lisa Kudrow comedy that started online and pokes fun at three-minute webcam therapy sessions, moved to cable (Showtime) this summer.

“In three years, this will take off like a rocket,” said Eric A. Harris, a lawyer and psychologist who consults with the American Psychological Association Insurance Trust. “Everyone will have real-time audiovisual availability. There will be a group of true believers who will think that being in a room with a client is special and you can’t replicate that by remote involvement. But a lot of people, especially younger clinicians, will feel there is no basis for thinking this. Still, appropriate professional standards will have to be followed.”

The pragmatic benefits are obvious. “No parking necessary!” touts one online therapist. Some therapists charge less for sessions since they, too, can do it from home, saving on gas and office rent. Blizzards, broken legs and business trips no longer cancel appointments. The anxiety of shrink-less August could be, dare one say ... curable?

Ms. Weinblatt came to the approach through geographical necessity. When her therapist moved, she was apprehensive about transferring to the other psychologist in her small town, who would certainly know her prominent ex-boyfriend. So her therapist referred her to another doctor, whose practice was a day’s drive away. But he was willing to use Skype with long-distance patients. She was game.

Now she prefers these sessions to the old-fashioned kind.

But does knowing that your therapist is just a phone tap or mouse click away create a 21st-century version of shrink-neediness?

“There’s that comfort of carrying your doctor around with you like a security blanket,” Ms. Weinblatt acknowledged. “But,” she added, “because he’s more accessible, I feel like I need him less.”

The technology does have its speed bumps. Online treatment upends a basic element of therapeutic connection: eye contact.

Patient and therapist typically look at each other’s faces on a computer screen. But in many setups, the camera is perched atop a monitor. Their gazes are then off-kilter.

“So patients can think you’re not looking them in the eye,” said Lynn Bufka, a staff psychologist with the American Psychological Association. “You need to acknowledge that upfront to the patient, or the provider has to be trained to look at the camera instead of the screen.”

The quirkiness of Internet connections can also be an impediment. “You have to prepare vulnerable people for the possibility that just when they are saying something that’s difficult, the screen can go blank,” said DeeAnna Merz Nagel, a psychotherapist licensed in New Jersey and New York. “So I always say, ‘I will never disconnect from you online on purpose.’ You make arrangements ahead of time to call each other if that happens.”

Still, opportunities for exploitation, especially by those with sketchy credentials, are rife. Solo providers who hang out virtual shingles are a growing phenomenon. In the Wild Web West, one site sponsored a contest asking readers to post why they would seek therapy; the person with the most popular answer would receive six months of free treatment. When the blogosphere erupted with outrage from patients and professionals alike, the site quickly made the applications private.

Other questions abound. How should insurance reimburse online therapy? Is the therapist complying with licensing laws that govern practice in different states? Are videoconferencing sessions recorded? Hack-proof?

Another draw and danger of online therapy: anonymity. Many people avoid treatment for reasons of shame or privacy. Some online therapists do not require patients to fully identify themselves. What if those patients have breakdowns? How can the therapist get emergency help to an anonymous patient? “A lot of patients start therapy and feel worse before they feel better,” noted Marlene M. Maheu, founder of the TeleMental Health Institute, which trains providers and who has served on task forces to address these questions. “It’s more complex than people imagine. A provider’s Web site may say, ‘I won’t deal with patients who are feeling suicidal.’ But it’s our job to assess patients, not to ask them to self-diagnose.” She practices online therapy, but advocates consumer protections and rigorous training of therapists.

Psychologists say certain conditions might be well-suited for treatment online, including agoraphobia, anxiety, depression and obsessive-compulsive disorder. Some doctors suggest that Internet addiction or other addictive behaviors could be treated through videoconferencing.

Others disagree. As one doctor said, “If I’m treating an alcoholic, I can’t smell his breath over Skype.”

Cognitive behavioral therapy, which can require homework rather than tunneling into the patient’s past, seems another candidate. Tech-savvy teenagers resistant to office visits might brighten at seeing a therapist through a computer monitor in their bedroom. Home court advantage.

Therapists who have tried online therapy range from evangelizing standard-bearers, planting their stake in the new future, to those who, after a few sessions, have backed away. Elaine Ducharme, a psychologist in Glastonbury, Conn., uses Skype with patients from her former Florida practice, but finds it disconcerting when a patient's face becomes pixelated. Dr. Ducharme, who is licensed in both states, will not videoconference with a patient she has not met in person. She flies to Florida every three months for office visits with her Skype patients.

“There is definitely something important about bearing witness,” she said. “There is so much that happens in a room that I can’t see on Skype.”

Dr. Heath Canfield, a psychiatrist in Colorado Springs, also uses Skype to continue therapy with some patients from his former West Coast practice. He is licensed in both locations. “If you’re doing therapy, pauses are important and telling, and Skype isn’t fast enough to keep up in real time,” Dr. Canfield said. He wears a headset. “I want patients to know that their sound isn’t going through walls but into my ears. I speak into a microphone so they don’t feel like I’m shouting at the computer. It’s not the same as being there, but it’s better than nothing. And I wouldn’t treat people this way who are severely mentally ill.”

Indeed, the pitfalls of videoconferencing with the severely mentally ill became apparent to Michael Terry, a psychiatric nurse practitioner, when he did psychological evaluations for patients throughout Alaska’s Eastern Aleutian Islands. “Once I was wearing a white jacket and the wall behind me was white,” recalled Dr. Terry, an associate clinical professor at the University of San Diego. “My face looked very dark because of the contrast, and the patient thought he was talking to the devil.”

Another time, lighting caused a halo effect. “An adolescent thought he was talking to the Holy Spirit, that he had God on the line. It fit right into his delusions.”

Johanna Herwitz, a Manhattan psychologist, tried Skype to augment face-to-face therapy. “It creates this perverse lower version of intimacy,” she said. “Skype doesn’t therapeutically disinhibit patients so that they let down their guard and take emotional risks. I’ve decided not to do it anymore.”

Several studies have concluded that patient satisfaction with face-to-face interaction and online therapy (often preceded by in-person contact) was statistically similar. Lynn, a patient who prefers not to reveal her full identity, had been seeing her therapist for years. Their work deepened into psychoanalysis. Then her psychotherapist retired, moving out of state.

Now, four times a week, Lynn carries her laptop to an analyst’s unoccupied office (her insurance requires that a local provider have some oversight). She logs on to an encrypted program at Breakthrough.com and clicks through until she reads an alert: “Talk now!”

Hundreds of miles away, so does her analyst. Their faces loom, side by side on each other’s monitors. They say hello. Then Lynn puts her laptop on a chair and lies down on the couch. Just the top of her head is visible to her analyst.

Fifty minutes later the session ends. “The screen is asleep so I wake it up and see her face,” Lynn said. “I say goodbye and she says goodbye. Then we lean in to press a button and exit.”

However grumpy people are when they wake up, and whether they stumble to their feet in Madrid, Mexico City or Minnetonka, Minn., they tend to brighten by breakfast time and feel their moods taper gradually to a low in the late afternoon, before rallying again near bedtime, a large-scale study of posts on the social media site Twitter found.

Drawing on messages posted by more than two million people in 84 countries, researchers discovered that the emotional tone of people’s messages followed a similar pattern not only through the day but also through the week and the changing seasons. The new analysis suggests that our moods are driven in part by a shared underlying biological rhythm that transcends culture and environment.

The report, by sociologists at Cornell University and appearing in the journal Science, is the first cross-cultural study of daily mood rhythms in the average person using such text analysis. Previous studies have also mined the mountains of data pouring into social media sites, chat rooms, blogs and elsewhere on the Internet, but looked at collective moods over broader periods of time, in different time zones or during holidays.

“There’s just a torrent of new digital data coming into the field, and it’s transforming the social sciences, creating new lenses to look at all sorts of behaviors,” said Peter Sheridan Dodds, a researcher at the University of Vermont who was not involved in the new research. He called the new study “very exciting, because it complements previous findings” and expands on what is known about how mood fluctuates.

He and other outside researchers also cautioned that drawing on Twitter had its hazards, like any other attempt to monitor the fleeting internal states labeled as moods. For starters, Twitter users are computer-savvy, skew young and affluent, and post for a variety of reasons.

“Tweets may tell us more about what the tweeter thinks the follower wants to hear than about what the tweeter is actually feeling,” said Dan Gilbert, a Harvard psychologist, in an e-mail. “In short, tweets are not a simple reflection of a person’s current affective state and should not be taken at face value.”

The study’s authors, Scott A. Golder and Michael W. Macy, acknowledge such limitations and worked to correct for them. In the study, they collected up to 400 messages from each of 2.4 million Twitter users writing in English, posted from February 2008 through January 2010.

They analyzed the text of each message, using a standard computer program that associates certain words, like “awesome” and “agree,” with positive moods and others, like “annoy” and “afraid,” with negative ones. They included so-called emoticons, the face symbols like “:)” that punctuate digital missives.
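Lexicon-based scoring of this kind is simple to sketch. The word lists below are tiny illustrative stand-ins (drawn from the examples in the article), not the lexicon the study actually used:

```python
# Minimal sketch of lexicon-based mood scoring (word lists are
# illustrative stand-ins, not the study's actual lexicon).

POSITIVE = {"awesome", "agree", ":)"}
NEGATIVE = {"annoy", "afraid", ":("}

def mood_score(message: str) -> tuple[int, int]:
    """Count positive and negative tokens (including emoticons) in a message."""
    tokens = message.lower().split()
    pos = sum(1 for t in tokens if t in POSITIVE)
    neg = sum(1 for t in tokens if t in NEGATIVE)
    return pos, neg

print(mood_score("This is awesome and I agree :)"))  # (3, 0)
```

Aggregating such per-message counts by hour of posting is what lets a study like this trace a population's mood curve across the day, though, as the researchers note, simple word matching misses sarcasm and context.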

The researchers gained access to the messages through Twitter, using an interface that allows scientists as well as software developers to work with the data.

The pair found that about 7 percent of the users qualified as “night owls,” showing peaks in upbeat-sounding messages around midnight and beyond, and about 16 percent were morning people, who showed such peaks very early in the day.

After accounting for these differences, the researchers determined that for the average user in each country, positive posts crested around breakfast time, from 6 a.m. to 9 a.m.; they fell off gradually until hitting a trough between 3 p.m. and 4 p.m., then drifted upward, rising more sharply after dinner.

To no one’s surprise, people’s overall moods were lowest at the beginning of the workweek, and rose later, peaking on the weekend. (The pattern of peak moods on days off held for countries where Saturday and Sunday are not the weekend.)

The pattern on weekend days was shifted about two hours later — the morning peak closer to 9 a.m. and the evening one past 9 p.m., most likely because people sleep in and stay up later — but the shape of the curve was the same.

“This is a significant finding because one explanation out there for the pattern was just that people hate going to work,” Mr. Golder said. “But if that were the case, the pattern should be different on the weekends, and it’s not. That suggests that something more fundamental is driving this — that it’s due to biological or circadian factors.”

The researchers found no evidence for the winter blues, the common assumption that short winter days contribute to negative moods. Negative messages were as likely during the winter as in the summer.

But positively rated messages tracked the rate at which day length changed: that is, they trended upward around the spring equinox in late March, and downward around the fall equinox in late September. This suggests that seasonal mood changes are due more to a diminishing of positive emotions in anticipation of short days, the authors say.

Dr. Dodds, the University of Vermont researcher, has been doing text analysis of Twitter messages worldwide as well, to get a reading on collective well-being, among other things. He said the new study comported well with his own recent analysis. “We find that swearing goes up with negative mood in the very same way,” he said. “It tracks beautifully with the pattern they’re showing.”

Social scientists analyzing digital content agree that, for all its statistical appeal, the approach still needs some fine-tuning. On Twitter, people routinely savage others with pure relish and gush sarcastically — and the software is not sophisticated enough to pick up these subtleties.

Two men who worked on the hit movie “Black Swan” have mounted an unusual challenge to the film industry’s widely accepted practice of unpaid internships by filing a lawsuit on Wednesday asserting that the production company had violated minimum wage and overtime laws by hiring dozens of such interns.

The lawsuit, filed in federal court in Manhattan, claims that Fox Searchlight Pictures, the producer of “Black Swan,” had the interns do menial work that should have been done by paid employees and did not provide them with the type of educational experience that labor rules require in order to exempt employers from paying interns.

“Fox Searchlight’s unpaid interns are a crucial labor force on its productions, functioning as production assistants and bookkeepers and performing secretarial and janitorial work,” the lawsuit says. “In misclassifying many of its workers as unpaid interns, Fox Searchlight has denied them the benefits that the law affords to employees.”

Workplace experts say the number of unpaid internships has grown in recent years, in the movie business and many other industries. Some young people complain that these internships give an unfair edge to the affluent and well connected.

One plaintiff, Alex Footman, a 2009 Wesleyan graduate who majored in film studies, said he had worked as a production intern on “Black Swan” in New York from October 2009 to February 2010.

He said his responsibilities included preparing coffee for the production office, ensuring that the coffee pot was full, taking and distributing lunch orders for the production staff, taking out the trash and cleaning the office.

“The only thing I learned on this internship was to be more picky in choosing employment opportunities,” Mr. Footman, 24, said in an interview. “ ‘Black Swan’ had more than $300 million in revenues. If they paid us, it wouldn’t make a big difference to them, but it would make a huge difference to us.”

Russell Nelson, a Fox Searchlight spokesman, said Wednesday afternoon, “We just learned of this litigation and have not had a chance to review it so we cannot make any comment at this time.”

The lawsuit is seeking class-action status for what the plaintiffs say were more than 100 unpaid interns on various Fox Searchlight productions. In addition to seeking back pay under federal and state wage laws, the lawsuit seeks an injunction barring Fox Searchlight from improperly using unpaid interns.

Fox Searchlight acted illegally, the lawsuit asserts, because the company did not meet the federal labor department’s criteria for unpaid internships. Those criteria require that the position benefit the intern, that the intern not displace regular employees, that the training received be similar to what would be given in an educational institution and that the employer derive no immediate advantage from the intern’s activities.

Movie companies have defended using unpaid interns, saying the internships are educational, highly coveted and an important way for young people to break into the industry. Lawyers for numerous companies say the Labor Department’s criteria are obsolete, adding that department officials rarely enforce the rules against unpaid internships.

The other named plaintiff, Eric Glatt, 42, who has an M.B.A. from Case Western Reserve University, was an accounting intern for “Black Swan.” He prepared documents for purchase orders and petty cash, traveled to the set to obtain signatures on documents and created spreadsheets to track missing information in employee personnel files.

Mr. Glatt, who had been working at A.I.G. training new employees, said he took the position because he wanted to move into the film industry.

“When I started looking for opportunities in the industry, I saw that most people accept an ugly trade-off,” he said. “If you want to get your foot in the door on a studio picture, you have to suck it up and do an unpaid internship.”

Adam Klein, a lawyer for the plaintiffs, said this would be the first of several lawsuits that seek to fight these internships.

“Unpaid interns are usually too scared to speak out and to bring such a lawsuit because they are frightened it will hurt their chances of finding future jobs in their industry,” he said.

How to Automatically Download Movies as Soon as They’re Released with CouchPotato
Whitson Gordon

Windows/Mac/Linux: If you just saw an awesome movie in the theaters and want it on your computer as soon as possible, free app CouchPotato will look for it on Usenet and automatically download it as soon as a copy is available online.

So you've gotten started with Usenet, and maybe you've even turned your computer into an internet PVR with Sick Beard. If you're a movie buff, though, you're still stuck searching for and downloading movies manually—and that just won't do. CouchPotato automates the process: just tell it what movies you want to download, and it will search according to your quality and language specifications, find the perfect match, and download it for you. If the movie isn't out yet, it'll check back periodically until the movie is available, then download it for you right then and there.

Step One: Install CouchPotato

Installation on Windows is easy—just download the ZIP file and extract it somewhere on your computer. It's a portable app, so you don't need to install it or anything—just double click on it to start it up. Mac users will need to install Python, then drag the app into their Applications folder as usual. Linux users will need to do a bit more work. This guide also assumes you have SABnzbd installed as described in our original Usenet guide.

Step Two: Configure Your Settings

Once you've started up CouchPotato, it should open up the web interface in your browser. Hit the cog icon at the top of the page to edit your settings and get it set up.

Under the General tab, you can tell it how often you want it to search, and which search terms you want it to add (like "Bluray" or "DTS") or remove (like "dubbed" or "hardcoded") from your search. You can also add a username and password if you want to protect the web interface from others.

Under "NZBs/Torrents", type in your host IP for SABnzbd, the API key, and the username and password you use to access SAB (you can find all this info in SAB's settings under General). You'll also need to enter your username and API key for your NZB provider—like NZBMatrix or Newzbin—under the "Providers" tab. The Quality and Renaming tabs let you customize how you search for and save movies.
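Under the hood, the host, port, and API key you enter are what let CouchPotato talk to SABnzbd over its HTTP API. As a rough sketch, queuing an NZB boils down to a request like the one built below—`mode=addurl` is part of SABnzbd's standard API, but the host, port, key, and NZB URL here are placeholders for your own setup:

```python
from urllib.parse import urlencode

def build_sab_addurl(host, port, api_key, nzb_url, category="movies"):
    """Build the SABnzbd API call that queues an NZB for download.

    mode=addurl is SABnzbd's documented way to hand it a remote NZB;
    the connection details are placeholders, not real credentials.
    """
    params = urlencode({
        "mode": "addurl",
        "name": nzb_url,
        "cat": category,
        "apikey": api_key,
        "output": "json",
    })
    return f"http://{host}:{port}/sabnzbd/api?{params}"

url = build_sab_addurl("127.0.0.1", 8080, "YOUR_API_KEY",
                       "http://example.com/some-movie.nzb")
```

Fetching that URL (with your real host and key) is essentially what CouchPotato does each time it finds a match.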

Step Three: Add Movies to Download

To add a movie, just type a movie name into the box in the top right-hand corner of the page. Select your preferred quality and click "Add". You should see the movie show up on your wanted list, which you can access via the "Wanted" button in the upper left-hand corner of the interface. CouchPotato will check Usenet every so often to see if someone's uploaded that movie, and send it straight to SAB to be downloaded when it finds a copy. If the movie's been out for a while, it'll find the best version according to your specifications and download it for you.

It's actually very simple to set up, and takes a lot of the annoyances out of downloading movies on Usenet. Now, instead of wading through search results and checking back periodically for new releases, you can just punch your movie into CouchPotato and get on with your day. When your movie's ready, it'll show up in your download folder. http://lifehacker.com/5844853/how-to...h-couch-potato

NHL’s Montreal Canadiens Accused of Pirating The Hurt Locker
enigmax

Last month it became clear that having developed their pay-up-or-else file-sharing settlement scheme in the United States, the makers of the Hurt Locker had moved north. In their new phase of targeting Canadian IP addresses for cash settlements, Voltage Pictures have included an interesting target in their latest batch – the Montreal Canadiens hockey team.

Since last year, Voltage Pictures, the makers of Hurt Locker, have been working with the Dunlap, Grubb and Weaver law firm (better known as the U.S. Copyright Group) to target Internet users who allegedly shared their Oscar-winning movie online.

Give us thousands of dollars in settlement, they say, and we won’t ruin your life with an expensive lawsuit.

Last month Voltage exported their scheme north to Canada and, through the Federal Court in Montreal, they obtained an order which forced three Canadian ISPs – Bell Canada, Cogeco Cable Inc. and Videotron GP – to hand over the personal details of subscribers Voltage claim infringed their copyrights.

Following a review of the IP addresses provided to the first ISP, Bell Canada (shown below), an eyebrow-raising nugget of information has come to light.

The third IP on the list – 207.61.47.217 – looks much like any other. It is accused of sharing the movie using uTorrent v2.2.1.0 on May 4th 2011, and in itself that is nothing unusual. But further investigation shows that this particular IP address has a rather famous owner.

As shown here, not only is the IP provided by Bell Canada, it can also be traced back to the Bell Centre. That’s because it’s operated by none other than the Montreal Canadiens hockey team.
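Tracing an IP back to a hostname like this is a reverse DNS (PTR) lookup. A minimal sketch of how the query name is formed from an IPv4 address—the octets are reversed and `.in-addr.arpa` is appended; resolving that name (e.g. with `socket.gethostbyaddr`) then returns whatever hostname the address owner has published:

```python
def ptr_name(ipv4):
    """Build the reverse-DNS query name for an IPv4 address.

    A PTR lookup reverses the octets and appends .in-addr.arpa;
    no live DNS lookup is performed here.
    """
    octets = ipv4.split(".")
    return ".".join(reversed(octets)) + ".in-addr.arpa"

print(ptr_name("207.61.47.217"))  # → 217.47.61.207.in-addr.arpa
```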

The big question now is whether Goudreau Gage Dubuc LLP, the law firm hired by Voltage to carry out their Canadian shakedown, will send their usual settlement demands to Montreal Canadiens. If they do, this could get very interesting indeed.

It is highly likely that many individuals are able to obtain Internet access via 217.canadiens.com, the domain from which the infringement was allegedly logged. The problematic issue of pinning an infringement to an individual on a multiple access IP was highlighted perfectly in the Swedish Film Institute case recently. Furthermore, Montreal Canadiens have very, very deep pockets and lawyers on tap.

Hockey fans and opponents of these copyright shakedowns will be hoping that this particular Hurt Locker timebomb is dealt with by Canadiens via a boarding or their enforcer, rather than being subjected to an empty net goal, as Voltage might prefer. https://torrentfreak.com/nhls-montre...locker-110928/

Music Piracy Continues to Decline Thanks to Spotify
Ernesto

A new report looking into online music consumption habits shows that since 2009 the number of people who pirate music has dropped by 25 percent in Sweden. The sharp decrease coincides with a massive interest for the music streaming service Spotify. One of the main reasons why people switch to legal services is the wider range of material they can find there.

When Spotify launched their first beta in the fall of 2008, we branded it “an alternative to music piracy.”

Having the option to stream millions of tracks supported by an occasional ad, or free of ads for a small monthly fee, Spotify appeared to be a serious competitor to music piracy. Data just released by the Swedish music industry appears to support this theory.

Through quarterly surveys researchers have polled the music consumption habits of thousands of Swedes between the ages of 15 and 74, and in their most recent report they find that music piracy continues to drop.

Since 2009 the number of people who download music illegally has decreased by more than 25 percent, and over the last year alone it dropped by 9 percent. The data further suggests that this downward trend is caused by the availability of improved legal services such as Spotify.

When Spotify opened up to the public in early 2009, it took only three months before the number of Spotify users had outgrown the number of music pirates. In the months after that the number of downloaders continued to decline while Spotify expanded its user base.

Streaming services such as Spotify are now the most popular way to consume music. More than 40 percent of the participants in the survey now use a music streaming service, compared to less than 10 percent who say they download music legally.

About 23 percent continue to pirate music, but this number is dwindling.

“The long-term trend is a sharp increase in legal streaming while we see a reduction in illegal file sharing and downloading,” Music Sweden’s CEO Elizabet Widlund said, commenting on the results.

“When 800,000 Swedes are willing to pay for streaming music, there is clearly a market for more legal players in the digital music market. We encourage diversity of music services as it will provide better conditions for both those who create music and those who listen to it,” she added.

Looking at the motivations for people to switch to legal services, participants in the survey cited “the range of music that’s released” as the primary reason (40%). Other explanations were the absolute increase in available music (30%), and the fact that legal services have become cheaper (24%) and simpler (24%).

Although the above is certainly good news for the music industry, it has to be noted that the ‘change’ to legal services is ‘fragile.’ The survey shows a slight change in the ongoing trend during the second quarter of 2011, exactly when Spotify announced that its free service would have some new limitations.

Although this change motivated some (15%) to sign up with a paid Spotify account, the majority (31%) said they would leave Spotify to turn to other streaming services, like YouTube, or file-sharing sites.

There is no doubt that, contrary to what music industry bosses have claimed in the past, there are indeed ways to compete with free. However, time is needed to find the right balance between giving music fans what they want and securing a healthy revenue stream. https://torrentfreak.com/music-pirac...potify-110928/

How a Magnum Opus Took Shape

Box sets of material remastered by the three surviving members of Pink Floyd, 38 years after they recorded ‘The Dark Side of the Moon’, might signal a reunion, the band’s drummer, Nick Mason, tells BRIAN BOYD

ON A SLOW week, The Dark Side of the Moon still sells 10,000 copies. And if a band sells 10,000 copies of anything a week these days, you reach for the champagne and unfurl the bunting. But Pink Floyd’s magnum opus is a once-in-a-generation affair. Not even reaching 43 minutes in length and with the band having to beg for studio time to finish it off, it was not expected to amount to much on its release in March 1973. A few years earlier the band had lost their main hit writer, Syd Barrett, to mental illness; their last few albums were proggy-noodly affairs that saw them treading water and the bitter divisions that would soon rend the band asunder were beginning to appear.

Nick Mason, the band’s only permanent member since their inception, in 1965, and keeper of all their secrets, is in reminiscence mode as he welcomes and waves me into the studio where The Dark Side Of The Moon was recorded: the famed Abbey Road. “This is Studio 3; we couldn’t record it downstairs in Studio 2 (where The Beatles recorded their albums) because Paul McCartney was in there recording Band on the Run at the time,” he says.

“It’s funny being back here again because so many things keep coming back to me. I remember a time we actually recorded Paul and Linda speaking so we could use their voices on Dark Side; but for some reason we never used their contribution.”

As he stands in the middle of a studio that is no bigger than a one-bedroom apartment, he seems lost in thought. “That’s where I used to sit during recording,” he says, pointing to a corner.

Gesturing towards the control room of the studio, he says, “That’s where we all were when Syd wandered into Abbey Road one day. It was while we were making the Wish You Were Here album. We were all peering out from behind the glass to see who it was. I thought he was a technician come to fix something. We didn’t recognise him because he had bloated up to a huge weight and had shaved his hair off. I remember when it gradually dawned on us that it was in fact Syd, Roger just broke down in tears. We played him some of the Wish You Were Here mixes and he just said, ‘It goes on a bit, doesn’t it?’”

Now 67, and looking more like a kindly uncle than the percussive powerhouse of one of the most successful rock bands, Mason has become very wealthy thanks to royalties from Dark Side but he says they never made as much from the album as they should have because of their habit of “faffing around” in the studio.

“We weren’t some huge big band before The Dark Side of The Moon,” he says. “In many ways we were still finding our feet after Syd. EMI actually own the recording studios here at Abbey Road, and because we knew Dark Side was never the sort of album you would get recorded in a month” – it was closer to a year – “we renegotiated our contract with the label so that we would get more studio time but at the cost of a lower royalty rate on the album’s sales.”

Mason believes a band today wouldn’t be allowed to release an album that sounds like Dark Side. “It was so different, so strange-sounding,” he says. “Pink Floyd were always the outsiders, we were never a Beatles/Stones-type band because we were viewed as being a psychedelic band. And no band today would be allowed to more or less live in a studio for a year just for one album.”

Mason is back in Studio 3 putting the finishing touches to a reissue of The Dark Side of The Moon, due later this month. The “Immersion” edition of the album will have six CDs and will include the new remastered edition of the original album as well as other mixes, out-takes, rarities – including demos of the songs – and the original version by Alan Parsons, the engineer on the album. There are also DVDs featuring live performances, documentaries, footage from European and US tours of the album and many other things.

“It’s all going out under the title Why Pink Floyd?,” says Mason. “It starts with Dark Side, and then all the other albums get the same treatment over the following months. It’s a massive project, and the reason myself, Dave and Roger got together to do this was because we really think this will be the last throw of the CD dice.

“By putting every single thing of Pink Floyd’s on CD we’re just getting it out as perhaps the last great act of the physical disc age. . . What surprises us is that there is more interest now in these albums than there ever was first time around.

“When we went back to the original tapes of all the recordings, we found some incredible stuff – things we had forgotten about completely, or else just presumed that the tape had been lost,” says Mason.

“For example, we found a Wish You Were Here track with Stéphane Grappelli playing violin on it. He had been here in Abbey Road recording with Yehudi Menuhin, and we asked them both to be on the album, but only Grappelli did it.

“We found stuff from the very back of the cupboard for these releases – there’s early demo stuff from 1966 which sounds extraordinary, stuff with Syd in the very early days, and he sounds amazing – a real crystal-clear voice. Hearing it all again brought back loads of memories from that first year of Pink Floyd.”

Listening to all these demos, alternate versions and radically different remixes, you get a strong sense of the many creative rows that erupted during the recording sessions. Whereas on one version a vocal or guitar line is high up in the mix, on other versions it’s low down or is missing altogether. And the videos of those days show four identikit long-haired musicians hunched over their instruments in intense concentration as they went off on psychedelic flights of fancy. “We always were the most anonymous-looking of bands,” says Mason.

THE PRECIPITOUS SUCCESS of The Dark Side of The Moon left them all dazed and confused. “An album like this was the reason we had formed the band, back in 1965, and it really was difficult to even begin to think in terms of a follow-up to it because it sold in such huge amounts,” says Mason. “We did start a follow-up, and the idea was to make an album not using any musical instruments at all – just to use household objects. It seemed like a good idea at the time.” Two years later, though, they released Wish You Were Here, which some feel is superior to The Dark Side of The Moon.

Having had the most spectacular of fallings-out when Roger Waters left the band in 1985, the band have of late been giving signs that the impossible is possible and the three surviving members might record and tour again.

“It’s certainly not as painful now for us to be together in the same room,” Mason says carefully. “In the past we tended to be very critical of each other, but we’re more forgiving now. What happened was just a case of good old-fashioned artistic differences between Roger and David.

“There were bad things that people had done . . . We felt that there was no benefit to be gained from working with each other again. It was the same as The Beatles, really – people just went, ‘I want to do my own thing’. We have, though, managed to establish a relationship again – I had dinner with Roger a while back, but we didn’t talk about Pink Floyd. And we were all together for the Live 8 performance, which for me was a huge professional highlight, and the other week the three of us were on the same stage in London when Roger was doing his The Wall show. Dave was right up on top of the wall, playing guitar and singing Comfortably Numb – he told me before he went on that he was really, really nervous – and then I came on at the end and banged a tambourine.”

As for a full Pink Floyd reunion? “I’m ready. It’s up to the other two now.”

The Immersion and Experience box sets of The Dark Side of the Moon will be released on Monday; the Wish You Were Here and The Wall box sets will be out in November and February. pinkfloyd.com

________________________________________

1 The Irish voice you hear during the track The Great Gig in the Sky is that of then Abbey Road doorman, Gerry O’Driscoll. He also contributes the famous closing line: “There is no dark side of the moon, really. As a matter of fact it’s all dark.”

2 The only thing that would drag the band out of Studio 3 when they were recording was when Monty Python’s Flying Circus was on TV. Profits from The Dark Side of the Moon helped to get Monty Python’s Holy Grail film made.

3 The manic laughter you hear during Brain Damage and Speak To Me comes from the band’s long-standing roadie, Peter Watts, father of the actor Naomi Watts.

4 On its release, Rolling Stone magazine reviewed it as “a fine album with a textural and conceptual richness that not only invites, but demands involvement”. The reviewer was future TV presenter and celebrity chef Loyd Grossman.

5 At the very end of the album, if you listen really carefully, you’ll hear snatches of an orchestral version of The Beatles’ Ticket to Ride. It is thought the song was playing in Abbey Road reception and leaked into Studio 3 during recording. http://www.irishtimes.com/newspaper/...304647430.html