Swaid says he understands the need for security checks. "It's in my interest and that of all the other travelers," he said. But the screening should be done equally for both Arabs and Jews, he said.

Proponents of Israel's approach say checking all passengers equally would require manpower and resources many times greater than are needed today and would needlessly extend the time passengers spend waiting for flights.

Ariel Merari, an Israeli terrorism expert who has written about aviation security, said ethnic profiling is both effective and unavoidable.

"It's foolishness not to use profiles when you know that most terrorists come from certain ethnic groups and certain age groups," he said. "A bomber on a plane is likely to be Muslim and young, not an elderly Holocaust survivor. We're talking about preventing a lot of casualties, and that justifies inconveniencing a certain ethnic group."

Although SPOT is based in some respects on El Al’s aviation security program, El Al’s processes differ in substantive ways from those used by the SPOT program. In particular, El Al does not use a list of specific behaviors with numerical values for each, or a numerical threshold to determine whether or not to question a passenger; rather, El Al security officers use behavioral indicators as a basis for interviewing all passengers boarding El Al passenger aircraft, and for accessing relevant intelligence databases when deemed appropriate. In addition, El Al officials told us that they train all their personnel, not just security officers, in elements of behavior analysis, and conduct covert tests of their employees’ attentiveness at frequent intervals. According to these officials, El Al also permits what is termed “profiling,” in which passengers may be singled out for further questioning based on their nationality, ethnicity, religion, appearance, or other ascriptive characteristics, though these are not the only basis on which a passenger may be questioned. In addition, El Al security officers are empowered to bar any passenger from boarding an aircraft.

The scale of El Al’s operations is considerably smaller than that of major airlines operating within the United States. As of 2008, El Al had a fleet of 34 aircraft. In Israel, El Al operates out of one hub airport, Ben-Gurion International, and also flies to Eilat, a city in southern Israel; in contrast, there are 457 TSA-regulated airports in the United States. In 2008, El Al had passenger boardings of about 3.6 million; in contrast, Southwest Airlines alone flew about 102 million passengers in the same year.
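The contrast described above, a checklist of behaviors with point values and a numerical referral threshold on the SPOT side versus interviewing everyone on the El Al side, can be sketched in a few lines. Every behavior name, point value, and the threshold below are invented for illustration only; the actual SPOT indicators and weights have not been publicly released.

```python
# Hypothetical sketch of a SPOT-style additive scoring scheme.
# The behaviors, weights, and threshold are illustrative inventions;
# the real SPOT checklist is not public.
BEHAVIOR_POINTS = {
    "avoids_eye_contact": 1,
    "heavy_sweating": 2,
    "repeated_clock_checking": 1,
    "inconsistent_answers": 3,
}

REFERRAL_THRESHOLD = 4  # hypothetical cutoff for additional questioning


def spot_score(observed_behaviors):
    """Sum the point values of the behaviors an officer observed."""
    return sum(BEHAVIOR_POINTS.get(b, 0) for b in observed_behaviors)


def refer_for_questioning(observed_behaviors):
    """Refer a passenger only when the cumulative score meets the threshold."""
    return spot_score(observed_behaviors) >= REFERRAL_THRESHOLD
```

The point of the sketch is the structural difference: under a scheme like this, most passengers score below the threshold and are never questioned, whereas El Al officers interview every passenger and use behavioral indicators to shape that interview rather than to gate it.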

A key reason for Israel’s excellent air-safety record, many security experts agree, is the stringent screening of passengers before they even approach check-in counters. That procedure is now being changed, however, because the Israeli Supreme Court ruled in April that the security screenings were discriminatory.

The Association for Civil Rights in Israel (ACRI) filed suit in Israel’s Supreme Court in May 2007, arguing that airport security procedures wrongfully discriminate against Israeli Arabs, who make up 23 percent of Israel’s population of 7.1 million.

“This is an issue that we found across the board for Arab citizens. They are searched in a disproportionate way regardless of anything,” says Melanie Takefman, ACRI’s international media coordinator.

Security procedures begin as passengers approach the airport. Vehicles deemed to be a risk are ordered to stop for a search. At the terminal, agents closely question each passenger and run their names through databases. They tag passports and luggage with coded labels, according to each passenger’s ethnicity, essentially identifying Israeli Arabs as security risks.

Guards usually order more intensive searches for these passengers before they can proceed to check-in counters. Guards then escort them straight to their aircraft. The problem is that while few Israeli Jews are subjected to the extra scrutiny, nearly all Israeli Arabs have to undergo exhaustive checks, which can include body searches.

Palestinians from the West Bank and Gaza are not even permitted to use Israeli airports. They must travel through Jordan instead.

Nice. Ex-congress-critters lobby for this stuff, and current ones accept campaign contributions from those who peddle nudie scanners, yet they themselves are exempt from having their packages fondled. Only for the little people, I guess:

No Security Pat-Downs for Boehner
By JEFF ZELENY
3:37 p.m. | Updated

Representative John A. Boehner, soon to be the Speaker of the House, has pledged to fly commercial airlines back to his home district in Ohio. But that does not mean that he will be subjected to the hassles of ordinary passengers, including the controversial security pat-downs.

As he left Washington on Friday, Mr. Boehner headed across the Potomac River to Reagan National Airport, which was bustling with afternoon travelers. But there was no waiting in line for Mr. Boehner, who was escorted around the metal detectors and body scanners, and taken directly to the gate.

Mr. Boehner, who was wearing a casual yellow sweater and tan slacks, carried his own bags and smiled pleasantly at passengers who were leaving the security checkpoint inside the airport terminal. It was unclear whether any passengers waiting in the security line, including Representative Allen Boyd, a Florida Democrat who lost his re-election bid, saw Mr. Boehner.

At a Capitol Hill news conference after Election Day, as Mr. Boehner began laying out the changes he would make when he becomes House Speaker, he announced that he would continue to fly commercial airlines (usually Delta) back to Ohio. It was a not-so-subtle dig at the outgoing Democratic speaker, Nancy Pelosi of California, who had been criticized by Republicans for flying military airplanes when she returned home to San Francisco.

“Over the last 20 years, I have flown back and forth to my district on a commercial aircraft,” Mr. Boehner said at the time, “and I am going to continue to do that.”

And so on Friday, he did. But not without the perquisites of office, including avoiding those security pat-downs that many travelers are bracing for as holiday travel season approaches.

Michael Steel, a spokesman for the Republican leader, said in a statement that Mr. Boehner was not receiving special treatment. And a law enforcement official said that any member of Congress or administration official with a security detail is allowed to bypass security.

“The appropriate security procedures for all Congressional leaders, including Speaker Pelosi and Senator Reid, are determined by the Capitol Police working with the Transportation Security Administration,” Mr. Steel said.

(a) Flights for which screening is conducted. The provisions of §1544.201(d), with respect to accessible weapons, do not apply to a law enforcement officer (LEO) aboard a flight for which screening is required if the requirements of this section are met. Paragraph (a) of this section does not apply to a Federal Air Marshal on duty status under §1544.223.

(1) Unless otherwise authorized by TSA, the armed LEO must meet the following requirements:

(i) Be a Federal law enforcement officer or a full-time municipal, county, or state law enforcement officer who is a direct employee of a government agency.

(ii) Be sworn and commissioned to enforce criminal statutes or immigration statutes.

(iii) Be authorized by the employing agency to have the weapon in connection with assigned duties.

(2) In addition to the requirements of paragraph (a)(1) of this section, the armed LEO must have a need to have the weapon accessible from the time he or she would otherwise check the weapon until the time it would be claimed after deplaning. The need to have the weapon accessible must be determined by the employing agency, department, or service and be based on one of the following:

(i) The provision of protective duty, for instance, assigned to a principal or advance team, or on travel required to be prepared to engage in a protective function.

(ii) The conduct of a hazardous surveillance operation.

(iii) On official travel required to report to another location, armed and prepared for duty.

(iv) Employed as a Federal LEO, whether or not on official travel, and armed in accordance with an agency-wide policy governing that type of travel established by the employing agency by directive or policy statement.

(v) Control of a prisoner, in accordance with §1544.221, or an armed LEO on a round trip ticket returning from escorting, or traveling to pick up, a prisoner.

(vi) TSA Federal Air Marshal on duty status.

(3) The armed LEO must comply with the following notification requirements:

(i) All armed LEOs must notify the aircraft operator of the flight(s) on which he or she needs to have the weapon accessible at least 1 hour, or in an emergency as soon as practicable, before departure.

(ii) Identify himself or herself to the aircraft operator by presenting credentials that include a clear full-face picture, the signature of the armed LEO, and the signature of the authorizing official of the agency, service, or department or the official seal of the agency, service, or department. A badge, shield, or similar device may not be used, or accepted, as the sole means of identification.

(iii) If the armed LEO is a State, county, or municipal law enforcement officer, he or she must present an original letter of authority, signed by an authorizing official from his or her employing agency, service or department, confirming the need to travel armed and detailing the itinerary of the travel while armed.

(iv) If the armed LEO is an escort for a foreign official then this paragraph (a)(3) may be satisfied by a State Department notification.

(4) The aircraft operator must do the following:

(i) Obtain information or documentation required in paragraphs (a)(3)(ii), (iii), and (iv) of this section.

(ii) Advise the armed LEO, before boarding, of the aircraft operator's procedures for carrying out this section.

(iii) Have the LEO confirm he/she has completed the training program “Law Enforcement Officers Flying Armed” as required by TSA, unless otherwise authorized by TSA.

(iv) Ensure that the identity of the armed LEO is known to the appropriate personnel who are responsible for security during the boarding of the aircraft.

(v) Notify the pilot in command and other appropriate crewmembers, of the location of each armed LEO aboard the aircraft. Notify any other armed LEO of the location of each armed LEO, including FAM's. Under circumstances described in the security program, the aircraft operator must not close the doors until the notification is complete.

(vi) Ensure that the information required in paragraphs (a)(3)(i) and (ii) of this section is furnished to the flight crew of each additional connecting flight by the Ground Security Coordinator or other designated agent at each location.

(b) Flights for which screening is not conducted. The provisions of §1544.201(d), with respect to accessible weapons, do not apply to a LEO aboard a flight for which screening is not required if the requirements of paragraphs (a)(1), (3), and (4) of this section are met.

(c) Alcohol. (1) No aircraft operator may serve any alcoholic beverage to an armed LEO.

(2) No armed LEO may:

(i) Consume any alcoholic beverage while aboard an aircraft operated by an aircraft operator.

(ii) Board an aircraft armed if they have consumed an alcoholic beverage within the previous 8 hours.

(d) Location of weapon. (1) Any individual traveling aboard an aircraft while armed must at all times keep their weapon:

(i) Concealed and out of view, either on their person or in immediate reach, if the armed LEO is not in uniform.

It's not just Congress; it's anyone with a LEO security detail, such as mayors, governors, and the like. The assumption is that the NYPD cops assigned to Bloomberg will keep him from wearing a suicide vest onto the flight.

Senior European Union officials campaigned publicly for the first time Tuesday for an online “right to be forgotten.” Viviane Reding, EU commissioner for justice, fundamental rights and citizenship, introduced the idea earlier this month. Her proposed rules, which now face 12 to 18 months of debate before they can become EU law, would force companies like Facebook to offer users the right to permanently delete photos, contact info and messages posted on websites.

She was the keynote speaker on Tuesday morning at the 2010 European Data Protection and Privacy Conference.

Welcoming “an opportunity to explain this publicly for the first time,” Mrs. Reding, rather unusually for a European politician, invoked the Almighty: “God forgives and forgets, but the web never does.”

That should change, she said. “There are great sites where you can share information with friends, but it may be one day that you don’t want to share that information any more.”

Privacy lawyers say they aren’t so sure the EU is on firm legal ground. “If you voluntarily give information to a private company, it’s pretty clear they own that information,” says a senior partner at a major U.S. law firm.

“We still need to work out the details, but I support the right to be forgotten,” said Jacob Kohnstamm, chairman of the Article 29 Working Party, an alliance of national data supervisors. “Personally, I’ve done things, we’ve all done things we’d like to be forgotten.”

Like Mrs. Reding, he also argued the philosophical: “One of the most fundamental things in human life is to grow, to change, to be an individual, to remove the stamp that defines you.”

By JULIA ANGWIN

The Obama administration called Thursday for the creation of a Privacy Policy Office that would help develop an Internet "privacy bill of rights" for U.S. citizens and coordinate privacy issues globally.

The U.S. Commerce Department's report stopped short of calling directly for specific privacy legislation. Instead, it recommends a "framework" to protect people from a burgeoning personal data-gathering industry and fragmented U.S. privacy laws that cover certain types of data but not others.

The report marks a turning point for federal Internet policy. During the past 15 years of the commercial Internet, Congress and executive branch agencies have largely taken a hands-off approach to the Internet out of a concern that a heavy government hand would stifle innovation.

The report cites comments from some major technology companies, including Microsoft Corp. and Google Inc., expressing concerns about the current patchwork of rules and guidelines governing online privacy.

The 88-page Commerce Department report states that the use of personal information has increased so much that privacy laws may now be needed to restore consumer trust in the medium.

The report is preliminary and will be completed next year. At that time, the administration is expected to make more specific legislative recommendations.

The report rejects the current state of Internet privacy notices. It says people shouldn't be expected to read and understand the legal jargon contained in privacy policies "that nobody understands, if they say anything about privacy at all."

A better approach, the report suggests, might be for companies to conduct privacy impact assessments that would be available to the public. Such reports "could create consumer awareness of privacy risks in a new technological context," the report said.

The Commerce report says people should be notified when data about them is being used in a way that is different than the reason for which it was collected. "Consumers need to know that when their data are re-used, the re-use will not cause them harm or unwarranted surprise," the report says.

It calls for a Privacy Policy Office that would "serve as a center of commercial data privacy policy expertise." The agency wouldn't oversee government use of data or existing health and financial privacy laws. Instead, it would aim to help the personal data-gathering industry develop codes of conduct that could be enforced by the Federal Trade Commission.

The report also calls for the development of a national data breach law that would make it easier for companies to navigate the current patchwork of state data breach laws.

It also calls for strengthening the existing wiretapping law—written in 1986—to protect more types of data from government surveillance.

DECEMBER 18, 2010
Your Apps Are Watching You
A WSJ investigation finds that iPhone and Android apps are breaching the privacy of smartphone users

By SCOTT THURM and YUKARI IWATANI KANE

Few devices know more personal details about people than the smartphones in their pockets: phone numbers, current location, often the owner's real name—even a unique ID number that can never be changed or turned off.

These phones don't keep secrets. They are sharing this personal data widely and regularly, a Wall Street Journal investigation has found.

An examination of 101 popular smartphone "apps"—games and other software applications for iPhone and Android phones—showed that 56 transmitted the phone's unique device ID to other companies without users' awareness or consent. Forty-seven apps transmitted the phone's location in some way. Five sent age, gender and other personal details to outsiders.

The findings reveal the intrusive effort by online-tracking companies to gather personal data about people in order to flesh out detailed dossiers on them.

Among the apps tested, the iPhone apps transmitted more data than the apps on phones using Google Inc.'s Android operating system. Because of the test's size, it's not known if the pattern holds among the hundreds of thousands of apps available.

Apps sharing the most information included TextPlus 4, a popular iPhone app for text messaging. It sent the phone's unique ID number to eight ad companies and the phone's zip code, along with the user's age and gender, to two of them.

Both the Android and iPhone versions of Pandora, a popular music app, sent age, gender, location and phone identifiers to various ad networks. iPhone and Android versions of a game called Paper Toss—players try to throw paper wads into a trash can—each sent the phone's ID number to at least five ad companies. Grindr, an iPhone app for meeting gay men, sent gender, location and phone ID to three ad companies.

"In the world of mobile, there is no anonymity," says Michael Becker of the Mobile Marketing Association, an industry trade group. A cellphone is "always with us. It's always on."

The Journal's Cellphone Testing Methodology

The Wall Street Journal analyzed 50 popular applications, or "apps," on each of the iPhone and Android operating systems to see what information about the phones, their users and their locations the apps send to themselves and to outsiders.

iPhone maker Apple Inc. says it reviews each app before offering it to users. Both Apple and Google say they protect users by requiring apps to obtain permission before revealing certain kinds of information, such as location.

The Journal found that these rules can be skirted. One iPhone app, Pumpkin Maker (a pumpkin-carving game), transmits location to an ad network without asking permission. Apple declines to comment on whether the app violated its rules.

Smartphone users are all but powerless to limit the tracking. With few exceptions, app users can't "opt out" of phone tracking, as is possible, in limited form, on regular computers. On computers it is also possible to block or delete "cookies," which are tiny tracking files. These techniques generally don't work on cellphone apps.

The makers of TextPlus 4, Pandora and Grindr say the data they pass on to outside firms isn't linked to an individual's name. Personal details such as age and gender are volunteered by users, they say. The maker of Pumpkin Maker says he didn't know Apple required apps to seek user approval before transmitting location. The maker of Paper Toss didn't respond to requests for comment.

Many apps don't offer even a basic form of consumer protection: written privacy policies. Forty-five of the 101 apps didn't provide privacy policies on their websites or inside the apps at the time of testing. Neither Apple nor Google requires app privacy policies.

To expose the information being shared by smartphone apps, the Journal designed a system to intercept and record the data they transmit, then decoded the data stream. The research covered 50 iPhone apps and 50 on phones using Google's Android operating system. (Methodology at WSJ.com/WTK.)

The Journal also tested its own iPhone app; it didn't send information to outsiders. The Journal doesn't have an Android phone app.

Among all apps tested, the most widely shared detail was the unique ID number assigned to every phone. It is effectively a "supercookie," says Vishal Gurbuxani, co-founder of Mobclix Inc., an exchange for mobile advertisers.

On iPhones, this number is the "UDID," or Unique Device Identifier. Android IDs go by other names. These IDs are set by phone makers, carriers or makers of the operating system, and typically can't be blocked or deleted.

"The great thing about mobile is you can't clear a UDID like you can a cookie," says Meghan O'Holleran of Traffic Marketplace, an Internet ad network that is expanding into mobile apps. "That's how we track everything."

Ms. O'Holleran says Traffic Marketplace, a unit of Epic Media Group, monitors smartphone users whenever it can. "We watch what apps you download, how frequently you use them, how much time you spend on them, how deep into the app you go," she says. She says the data is aggregated and not linked to an individual.

The main companies setting ground rules for app data-gathering have big stakes in the ad business. The two most popular platforms for new U.S. smartphones are Apple's iPhone and Google's Android. Google and Apple also run the two biggest services, by revenue, for putting ads on mobile phones.

Apple and Google ad networks let advertisers target groups of users. Both companies say they don't track individuals based on the way they use apps.

Apple limits what can be installed on an iPhone by requiring iPhone apps to be offered exclusively through its App Store. Apple reviews those apps for function, offensiveness and other criteria.

Apple says iPhone apps "cannot transmit data about a user without obtaining the user's prior permission and providing the user with access to information about how and where the data will be used." Many apps tested by the Journal appeared to violate that rule, by sending a user's location to ad networks, without informing users. Apple declines to discuss how it interprets or enforces the policy.

Phones running Google's Android operating system are made by companies including Motorola Inc. and Samsung Electronics Co. Google doesn't review the apps, which can be downloaded from many vendors. Google says app makers "bear the responsibility for how they handle user information."

Google requires Android apps to notify users, before they download the app, of the data sources the app intends to access. Possible sources include the phone's camera, memory, contact list, and more than 100 others. If users don't like what a particular app wants to access, they can choose not to install the app, Google says.

"Our focus is making sure that users have control over what apps they install, and notice of what information the app accesses," a Google spokesman says.

Neither Apple nor Google requires apps to ask permission to access some forms of the device ID, or to send it to outsiders. When smartphone users let an app see their location, apps generally don't disclose if they will pass the location to ad companies.

Lack of standard practices means different companies treat the same information differently. For example, Apple says that, internally, it treats the iPhone's UDID as "personally identifiable information." That's because, Apple says, it can be combined with other personal details about people—such as names or email addresses—that Apple has via the App Store or its iTunes music services. By contrast, Google and most app makers don't consider device IDs to be identifying information.

A growing industry is assembling this data into profiles of cellphone users. Mobclix, the ad exchange, matches more than 25 ad networks with some 15,000 apps seeking advertisers. The Palo Alto, Calif., company collects phone IDs, encodes them (to obscure the number), and assigns them to interest categories based on what apps people download and how much time they spend using an app, among other factors.

By tracking a phone's location, Mobclix also makes a "best guess" of where a person lives, says Mr. Gurbuxani, the Mobclix executive. Mobclix then matches that location with spending and demographic data from Nielsen Co.

In roughly a quarter-second, Mobclix can place a user in one of 150 "segments" it offers to advertisers, from "green enthusiasts" to "soccer moms." For example, "die hard gamers" are 15-to-25-year-old males with more than 20 apps on their phones who use an app for more than 20 minutes at a time.
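The "die hard gamers" criteria quoted above are concrete enough to express as a simple predicate, which is roughly what a segmentation rule amounts to. The sketch below illustrates only that one published rule; the field names are invented, and Mobclix's actual 150-segment system is proprietary and certainly more involved.

```python
# Illustrative sketch of one Mobclix-style audience segment rule,
# following the "die hard gamers" criteria quoted in the article:
# males aged 15 to 25, more than 20 apps installed, and at least one
# single-app session longer than 20 minutes. Field names are hypothetical.
from dataclasses import dataclass


@dataclass
class PhoneProfile:
    age: int
    gender: str             # "male" or "female"
    app_count: int          # number of apps installed on the phone
    max_session_min: float  # longest observed single-app session, in minutes


def is_die_hard_gamer(p: PhoneProfile) -> bool:
    """Apply the quoted segment rule to one phone's profile."""
    return (p.gender == "male"
            and 15 <= p.age <= 25
            and p.app_count > 20
            and p.max_session_min > 20)
```

In a real system, a profile would be matched against many such rules at once, which is how a user can be dropped into one of 150 segments in a fraction of a second.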

Mobclix says its system is powerful, but that its categories are broad enough to not identify individuals. "It's about how you track people better," Mr. Gurbuxani says.

Some app makers have made changes in response to the findings. At least four app makers posted privacy policies after being contacted by the Journal, including Rovio Mobile Ltd., the Finnish company behind the popular game Angry Birds (in which birds battle egg-snatching pigs). A spokesman says Rovio had been working on the policy, and the Journal inquiry made it a good time to unveil it.

Free and paid versions of Angry Birds were tested on an iPhone. The apps sent the phone's UDID and location to the Chillingo unit of Electronic Arts Inc., which markets the games. Chillingo says it doesn't use the information for advertising and doesn't share it with outsiders.

Apps have been around for years, but burst into prominence when Apple opened its App Store in July 2008. Today, the App Store boasts more than 300,000 programs.

Other phone makers, including BlackBerry maker Research in Motion Ltd. and Nokia Corp., quickly built their own app stores. Google's Android Market, which opened later in 2008, has more than 100,000 apps. Market researcher Gartner Inc. estimates that world-wide app sales this year will total $6.7 billion.

Many developers offer apps for free, hoping to profit by selling ads inside the app. Noah Elkin of market researcher eMarketer says some people "are willing to tolerate advertising in apps to get something for free." Of the 101 apps tested, the paid apps generally sent less data to outsiders.

Ad sales on phones account for less than 5% of the $23 billion in annual Internet advertising. But spending on mobile ads is growing faster than the market overall.

Central to this growth: the ad networks whose business is connecting advertisers with apps. Many ad networks offer software "kits" that automatically insert ads into an app. The kits also track where users spend time inside the app.

Mr. Binshtok says he declined because of privacy concerns. But ads targeted by location bring in two to five times as much money as untargeted ads, Mr. Binshtok says. "We are losing a lot of revenue."

Other apps transmitted more data. The Android app for social-network site MySpace sent age and gender, along with a device ID, to Millennial Media, a big ad network.

In its software-kit instructions, Millennial Media lists 11 types of information about people that developers may transmit to "help Millennial provide more relevant ads." They include age, gender, income, ethnicity, sexual orientation and political views. In a re-test with a more complete profile, MySpace also sent a user's income, ethnicity and parental status.

A spokesman says MySpace discloses in its privacy policy that it will share details from user profiles to help advertisers provide "more relevant ads." MySpace is a unit of News Corp., which publishes the Journal. Millennial did not respond to requests for comment on its software kit.

App makers transmitting data say it is anonymous to the outside firms that receive it. "There is no real-life I.D. here," says Joel Simkhai, CEO of Nearby Buddy Finder LLC, the maker of the Grindr app for gay men. "Because we are not tying [the information] to a name, I don't see an area of concern."

Scott Lahman, CEO of TextPlus 4 developer Gogii Inc., says his company "is dedicated to the privacy of our users. We do not share personally identifiable information or message content." A Pandora spokeswoman says, "We use listener data in accordance with our privacy policy," which discusses the app's data use, to deliver relevant advertising. When a user registers for the first time, the app asks for email address, gender, birth year and ZIP code.

Google was the biggest data recipient in the tests. Its AdMob, AdSense, Analytics and DoubleClick units collectively heard from 38 of the 101 apps. Google, whose ad units operate on both iPhones and Android phones, says it doesn't mix data received by these units.

Google's main mobile-ad network is AdMob, which it bought this year for $750 million. AdMob lets advertisers target phone users by location, type of device and "demographic data," including gender or age group.

A Google spokesman says AdMob targets ads based on what it knows about the types of people who use an app, phone location, and profile information a user has submitted to the app. "No profile of the user, their device, where they've been or what apps they've downloaded, is created or stored," he says.

Apple operates its iAd network only on the iPhone. Eighteen of the 51 iPhone apps sent information to Apple.

Apple targets ads to phone users based largely on what it knows about them through its App Store and iTunes music service. The targeting criteria can include the types of songs, videos and apps a person downloads, according to an Apple ad presentation reviewed by the Journal. The presentation named 103 targeting categories, including: karaoke, Christian/gospel music, anime, business news, health apps, games and horror movies.

People familiar with iAd say Apple doesn't track what users do inside apps and offers advertisers broad categories of people, not specific individuals.

Apple has signaled that it has ideas for targeting people more closely. In a patent application filed this past May, Apple outlined a system for placing and pricing ads based on a person's "web history or search history" and "the contents of a media library." For example, home-improvement advertisers might pay more to reach a person who downloaded do-it-yourself TV shows, the document says.

The patent application also lists another possible way to target people with ads: the contents of a friend's media library.

How would Apple learn who a cellphone user's friends are, and what kinds of media they prefer? The patent says Apple could tap "known connections on one or more social-networking websites" or "publicly available information or private databases describing purchasing decisions, brand preferences," and other data. In September, Apple introduced a social-networking service within iTunes, called Ping, that lets users share music preferences with friends. Apple declined to comment.

Tech companies file patents on blue-sky concepts all the time, and it isn't clear whether Apple will follow through on these ideas. If it did, it would be an evolution for Chief Executive Steve Jobs, who has spoken out against intrusive tracking. At a tech conference in June, he complained about apps "that want to take a lot of your personal data and suck it up."

—Tom McGinty and Jennifer Valentino-DeVries contributed to this report.

There was a prediction, based on cell phone use in Japan, that by now cell phones and cell phone usage, including all internet access, would be free, at least to a decent consumer, because advertisers would pay your way in exchange for access. This is the opposite. If you were a train passenger in Tokyo and consented to the service, you could be alerted to what movies were playing or what the restaurant specials were at the next stop. Advertisers could hit consumers with precision instead of paying to broadcast to the whole metro, and the subscriber could benefit from timely, relevant, carefully placed info as well as receiving a free service for participating in the program. Key to that scenario (in a free society) is that you could opt in, but you could also opt out.

My older cell phone had a software switch where you could turn the GPS off and hide it except for emergency services like a 911 call. I can't find that option on my current 'smartphone' (Treo, not iPhone or Android), which means I assume a GPS record of me is running and transmitting all the time for anyone clever enough to track me, like a freeware or paidware app writer. I notice that Google searches from my cell phone tend to know where I am and give me local results first. Nice feature, up to a point. When they decide to sell the complete record of everywhere I've been to the highest bidder, or to every bidder, then it is not such a nice feature.

Opting out of privacy surrenders and unwanted advertising should always be a choice at a fair market price. Bad business behavior like this by an unregulated market gives the over-regulators another generation of life and energy, and gives the Democrats and RINOs who yearn for a more government-centric, fully-regulated society the winning side of another consumer issue. Free market conservatives and libertarians should get out in front of these privacy loss and disclosure issues. Like the Do Not Call list concept, some government protection can be a good thing. Give us the easy option of not being tracked or recorded.

Depending on gov't to protect your privacy is like expecting it to protect your person. Law enforcement in the US is mostly stuck reacting to crimes after the fact.

True, however in general I like to think that by having laws with penalties and law enforcement available to enforce those laws, most people are deterred from breaking those laws.

For example, the "do not call list" has not eliminated calls, but I do think the volume of calls is much less. If you put some sharp teeth into the penalty, and/or increase civil liability, even more people would be deterred.

This doesn't cover non-social-networking stuff, but if it didn't happen on Facebook, did it really happen???

http://www.eff.org/deeplinks/2010/05/bill-privacy-rights-social-network-users

A Bill of Privacy Rights for Social Network Users
Commentary by Kurt Opsahl

Social network service providers today are in a unique position. They are intermediaries and hosts to our communications, conversations and connections with loved ones, family, friends and colleagues. They have access to extremely sensitive information, including data gathered over time and from many different individuals.

Here at EFF, we've been thinking a lot recently about what specific rights a responsible social network service should provide to its users. Social network services must ensure that users have ongoing privacy and control over personal information stored with the service. Users are not just a commodity, and their rights must be respected. Innovation in social network services is important, but it must remain consistent with, rather than undermine, user privacy and control. Based on what we see today, therefore, we suggest three basic privacy-protective principles that social network users should demand:

#1: The Right to Informed Decision-Making

Users should have the right to a clear user interface that allows them to make informed choices about who sees their data and how it is used.

Users should be able to see readily who is entitled to access any particular piece of information about them, including other people, government officials, websites, applications, advertisers and advertising networks and services.

Whenever possible, a social network service should give users notice when the government or a private party uses legal or administrative processes to seek information about them, so that users have a meaningful opportunity to respond.

#2: The Right to Control

Social network services must ensure that users retain control over the use and disclosure of their data. A social network service should take only a limited license to use data for the purpose for which it was originally given to the provider. When the service wants to make a secondary use of the data, it must obtain explicit opt-in permission from the user. The right to control includes users' right to decide whether their friends may authorize the service to disclose their personal information to third-party websites and applications.

Social network services must ask their users' permission before making any change that could share new data about users, share users' data with new categories of people, or use that data in a new way. Changes like this should be "opt-in" by default, not "opt-out," meaning that users' data is not shared unless a user makes an informed decision to share it. If a social network service is adding some functionality that its users really want, then it should not have to resort to unclear or misleading interfaces to get people to use it.

#3: The Right to Leave

Users giveth, and users should have the right to taketh away.

One of the most basic ways that users can protect their privacy is by leaving a social network service that does not sufficiently protect it. Therefore, a user should have the right to delete data or her entire account from a social network service. And we mean really delete. It is not enough for a service to disable access to data while continuing to store or use it. It should be permanently eliminated from the service's servers.

Furthermore, if users decide to leave a social network service, they should be able to easily, efficiently and freely take their uploaded information away from that service and move it to a different one in a usable format. This concept, known as "data portability" or "data liberation," is fundamental to promote competition and ensure that users truly maintain control over their information, even if they sever their relationship with a particular service.

So, is there a real concern/demand for such protections in social networking? Say you started "Rachelbook" with the policies above as your selling point. Do you think that would be a winning business plan for attracting people who go out of their way to post pictures of themselves puking on spring break?

Facebook certainly has its problems. However, your description of Facebook is a couple of years old. The fastest growing group of Facebook users is 55+. Personally, Facebook is a very valuable part of my business (not to mention LinkedIn), and I use it to organize, or be informed about, community events for grown-up type stuff.

I am really grateful that I went to college before it was normal for pictures of me at every single party I attended to end up on Facebook.

The privacy-concerned social network already exists with Diaspora, but I don't see it actually becoming a Facebook competitor.

Hundreds of correctional officers from prisons across America descended last spring on a shuttered penitentiary in West Virginia for annual training exercises.

Some officers played the role of prisoners, acting like gang members and stirring up trouble, including a mock riot. The latest in prison gear got a workout — body armor, shields, riot helmets, smoke bombs, gas masks. And, at this year's drill, computers that could see the action.

Perched above the prison yard, five cameras tracked the play-acting prisoners, and artificial-intelligence software analyzed the images to recognize faces, gestures and patterns of group behavior. When two groups of inmates moved toward each other, the experimental computer system sent an alert — a text message — to a corrections officer that warned of a potential incident and gave the location.

The computers cannot do anything more than officers who constantly watch surveillance monitors under ideal conditions. But in practice, officers are often distracted. When shifts change, an observation that is worth passing along may be forgotten. But machines do not blink or forget. They are tireless assistants.
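The alerting step described above — flagging when two tracked groups close on each other — can be sketched roughly as follows. This is a minimal illustration, not the vendor's actual system: it assumes some upstream vision pipeline already supplies (x, y) positions for each tracked person, and the function names and distance threshold are hypothetical.

```python
import math

def group_centroid(points):
    """Average (x, y) position of one tracked group of people."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def proximity_alert(group_a, group_b, threshold=10.0):
    """Return an alert record if the two groups' centroids come within
    `threshold` distance units of each other, else None. A real system
    would also check that the groups are moving toward each other over
    successive frames; this sketch only tests instantaneous distance."""
    ax, ay = group_centroid(group_a)
    bx, by = group_centroid(group_b)
    dist = math.hypot(ax - bx, ay - by)
    if dist < threshold:
        return {"event": "potential incident",
                "location": ((ax + bx) / 2, (ay + by) / 2),
                "distance": round(dist, 1)}
    return None
```

The returned record would then be handed to a messaging layer (the text-message step the article mentions), which is outside the scope of this sketch.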

The enthusiasm for such systems extends well beyond the nation’s prisons. High-resolution, low-cost cameras are proliferating, found in products like smartphones and laptop computers. The cost of storing images is dropping, and new software algorithms for mining, matching and scrutinizing the flood of visual data are progressing swiftly.

A computer-vision system can watch a hospital room and remind doctors and nurses to wash their hands, or warn of restless patients who are in danger of falling out of bed. It can, through a computer-equipped mirror, read a man’s face to detect his heart rate and other vital signs. It can analyze a woman’s expressions as she watches a movie trailer or shops online, and help marketers tailor their offerings accordingly. Computer vision can also be used at shopping malls, schoolyards, subway platforms, office complexes and stadiums.

All of which could be helpful — or alarming.

“Machines will definitely be able to observe us and understand us better,” said Hartmut Neven, a computer scientist and vision expert at Google. “Where that leads is uncertain.”

Google has been both at the forefront of the technology’s development and a source of the anxiety surrounding it. Its Street View service, which lets Internet users zoom in from above on a particular location, faced privacy complaints. Google will blur out people’s homes at their request.

Google has also introduced an application called Goggles, which allows people to take a picture with a smartphone and search the Internet for matching images. The company’s executives decided to exclude a facial-recognition feature, which they feared might be used to find personal information on people who did not know that they were being photographed.

Despite such qualms, computer vision is moving into the mainstream. With this technological evolution, scientists predict, people will increasingly be surrounded by machines that can not only see but also reason about what they are seeing, in their own limited way.

The uses, noted Frances Scott, an expert in surveillance technologies at the National Institute of Justice, the Justice Department’s research agency, could allow the authorities to spot a terrorist, identify a lost child or locate an Alzheimer’s patient who has wandered off.

The future of law enforcement, national security and military operations will most likely rely on observant machines. A few months ago, the Defense Advanced Research Projects Agency, the Pentagon’s research arm, awarded the first round of grants in a five-year research program called the Mind’s Eye. Its goal is to develop machines that can recognize, analyze and communicate what they see. Mounted on small robots or drones, these smart machines could replace human scouts. “These things, in a sense, could be team members,” said James Donlon, the program’s manager.

Millions of people now use products that show the progress that has been made in computer vision. In the last two years, the major online photo-sharing services — Picasa by Google, Windows Live Photo Gallery by Microsoft, Flickr by Yahoo and iPhoto by Apple — have all started using face recognition. A user puts a name to a face, and the service finds matches in other photographs. It is a popular tool for finding and organizing pictures.

Kinect, an add-on to Microsoft’s Xbox 360 gaming console, is a striking advance for computer vision in the marketplace. It uses a digital camera and sensors to recognize people and gestures; it also understands voice commands. Players control the computer with waves of the hand, and then move to make their on-screen animated stand-ins — known as avatars — run, jump, swing and dance. Since Kinect was introduced in November, game reviewers have applauded, and sales are surging.

To Microsoft, Kinect is not just a game, but a step toward the future of computing. “It’s a world where technology more fundamentally understands you, so you don’t have to understand it,” said Alex Kipman, an engineer on the team that designed Kinect.

‘Please Wash Your Hands’

A nurse walks into a hospital room while scanning a clipboard. She greets the patient and washes her hands. She checks and records his heart rate and blood pressure, adjusts the intravenous drip, turns him over to look for bed sores, then heads for the door but does not wash her hands again, as protocol requires. “Pardon the interruption,” declares a recorded woman’s voice, with a slight British accent. “Please wash your hands.”

Three months ago, Bassett Medical Center in Cooperstown, N.Y., began an experiment with computer vision in a single hospital room. Three small cameras, mounted inconspicuously on the ceiling, monitor movements in Room 542, in a special care unit (a notch below intensive care) where patients are treated for conditions like severe pneumonia, heart attacks and strokes. The cameras track people going in and out of the room as well as the patient’s movements in bed.


The first applications of the system, designed by scientists at General Electric, are immediate reminders and alerts. Doctors and nurses are supposed to wash their hands before and after touching a patient; lapses contribute significantly to hospital-acquired infections, research shows.

The camera over the bed delivers images to software that is programmed to recognize movements that indicate when a patient is in danger of falling out of bed. The system would send an alert to a nearby nurse. If the results at Bassett prove to be encouraging, more features can be added, like software that analyzes facial expressions for signs of severe pain, the onset of delirium or other hints of distress, said Kunter Akbay, a G.E. scientist.

Hospitals have an incentive to adopt tools that improve patient safety. Medicare and Medicaid are adjusting reimbursement rates to penalize hospitals that do not work to prevent falls and pressure ulcers, and whose doctors and nurses do not wash their hands enough. But it is too early to say whether computer vision, like the system being tried out at Bassett, will prove to be cost-effective.

Mirror, Mirror

Daniel J. McDuff, a graduate student, stood in front of a mirror at the Massachusetts Institute of Technology’s Media Lab. After 20 seconds or so, a figure — 65, the number of times his heart was beating per minute — appeared at the mirror’s bottom. Behind the two-way mirror was a Web camera, which fed images of Mr. McDuff to a computer whose software could track the blood flow in his face.

The software separates the video images into three channels — for the basic colors red, green and blue. Changes to the colors and to movements made by tiny contractions and expansions in blood vessels in the face are, of course, not apparent to the human eye, but the computer can see them.

“Your heart-rate signal is in your face,” said Ming-zher Poh, an M.I.T. graduate student. Other vital signs, including breathing rate, blood-oxygen level and blood pressure, should leave similar color and movement clues.

The pulse-measuring project, described in research published in May by Mr. Poh, Mr. McDuff and Rosalind W. Picard, a professor at the lab, is just the beginning, Mr. Poh said. Computer vision and clever software, he said, make it possible to monitor humans’ vital signs at a digital glance. Daily measurements can be analyzed to reveal that, for example, a person’s risk of heart trouble is rising. “This can happen, and in the future it will be in mirrors,” he said.
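The core idea described above — the heart rate is recoverable from tiny periodic color changes in the face — can be sketched in a toy form. This is not the published algorithm (which adds source-separation and filtering steps omitted here); it assumes you already have the per-frame mean green-channel intensity over a face region, and it simply brute-forces a DFT peak search over the plausible heart-rate band.

```python
import math

def estimate_bpm(green_means, fps=30.0):
    """Estimate heart rate from the per-frame mean green-channel
    intensity of a face region. Scans candidate frequencies in the
    0.75-4.0 Hz band (45-240 bpm) and returns the dominant one,
    converted to beats per minute."""
    n = len(green_means)
    mean = sum(green_means) / n
    signal = [g - mean for g in green_means]  # remove the DC component

    best_freq, best_power = 0.0, -1.0
    f = 0.75
    while f <= 4.0:
        # power of the DFT at frequency f (brute force, no FFT)
        re = sum(s * math.cos(2 * math.pi * f * i / fps)
                 for i, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * f * i / fps)
                 for i, s in enumerate(signal))
        power = re * re + im * im
        if power > best_power:
            best_freq, best_power = f, power
        f += 0.01  # 0.01 Hz scan step
    return best_freq * 60.0  # Hz -> beats per minute
```

Fed a 10-second clip whose green channel pulses at 65 beats per minute, the function recovers a value close to 65; real face video is far noisier, which is why the actual research applies additional signal-processing stages.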

Faces can yield all sorts of information to watchful computers, and the M.I.T. students’ adviser, Dr. Picard, is a pioneer in the field, especially in the use of computing to measure and communicate emotions. For years, she and a research scientist at the university, Rana el-Kaliouby, have applied facial-expression analysis software to help young people with autism better recognize the emotional signals from others that they have such a hard time understanding.

The two women are the co-founders of Affectiva, a company in Waltham, Mass., that is beginning to market its facial-expression analysis software to manufacturers of consumer products, retailers, marketers and movie studios. Its mission is to mine consumers’ emotional responses to improve the designs and marketing campaigns of products.

John Ross, chief executive of Shopper Sciences, a marketing research company that is part of the Interpublic Group, said Affectiva’s technology promises to give marketers an impartial reading of the sequence of emotions that leads to a purchase, in a way that focus groups and customer surveys cannot. “You can see and analyze how people are reacting in real time, not what they are saying later, when they are often trying to be polite,” he said. The technology, he added, is more scientific and less costly than having humans look at store surveillance videos, which some retailers do.

The facial-analysis software, Mr. Ross said, could be used in store kiosks or with Webcams. Shopper Sciences, he said, is testing Affectiva’s software with a major retailer and an online dating service, neither of which he would name. The dating service, he said, was analyzing users’ expressions in search of “trigger words” in personal profiles that people found appealing or off-putting.

Watching the Watchers

Maria Sonin, 33, an office worker in Waltham, Mass., sat in front of a notebook computer looking at a movie trailer while Affectiva’s software, through the PC’s Webcam, calibrated her reaction. The trailer was for “Little Fockers,” starring Robert De Niro and Ben Stiller, which opened just before Christmas. The software measured her reactions by tracking movements on a couple of dozen points on her face — mostly along the eyes, eyebrows, nose and the perimeter of her lips.

To the human eye, Ms. Sonin appeared to be amused. The software agreed, said Dr. el-Kaliouby, though it used a finer-grained analysis, like recording that her smiles were symmetrical (signaling amusement, not embarrassment) and not smirks. The software, Dr. el-Kaliouby said, allows for continuous, objective measurement of viewers’ response to media, and in the future will do so in large numbers on the Web.

Ms. Sonin, an unpaid volunteer, said later that she did not think about being recorded by the Webcam. “It wasn’t as if it was a big camera in front of you,” she said.


Christopher Hamilton, a technical director of visual effects, has used specialized software to analyze facial expressions and recreate them on the screen. The films he has worked on include “King Kong,” “Charlotte’s Web” and “The Matrix Revolutions.” Using facial-expression analysis technology to gauge the reaction of viewers, who agree to be watched, may well become a valuable tool for movie makers, said Mr. Hamilton, who is not involved with Affectiva.

Today, sampling audience reaction before a movie is released typically means gathering a couple of hundred people at a preview screening. The audience members then answer questions and fill out surveys. Yet viewers, marketing experts say, are often inarticulate and imprecise about their emotional reactions. The software “makes it possible to measure audience response with a scene-by-scene granularity that the current survey-and-questionnaire approach cannot,” Mr. Hamilton said. A director, he added, could find out, for example, that although audience members liked a movie over all, they did not like two or three scenes. Or he could learn that a particular character did not inspire the intended emotional response.

Emotion-sensing software, Mr. Hamilton said, might become part of the entertainment experience — especially as more people watch movies and programs on Internet-connected televisions, computers and portable devices. Viewers could share their emotional responses with friends using recommendation systems based on what scene — say, the protagonists’ dancing or a car chase — delivered the biggest emotional jolt.

Affectiva, Dr. Picard said, intends to offer its technology as “opt-in only,” meaning consumers have to be notified and have to agree to be watched online or in stores. Affectiva, she added, has turned down companies, which she declined to name, that wanted to use its software without notifying customers.

Darker Possibilities

Dr. Picard enunciates a principled stance, but one that could become problematic in other hands.

The challenge arises from the prospect of the rapid spread of less-expensive yet powerful computer-vision technologies.

At work or school, the technology opens the door to a computerized supervisor that is always watching. Are you paying attention, goofing off or daydreaming? In stores and shopping malls, smart surveillance could bring behavioral tracking into the physical world.

More subtle could be the effect of a person knowing that he is being watched — and how that awareness changes his thinking and actions. It could be beneficial: a person thinks twice and a crime goes uncommitted. But might it also lead to a society that is less spontaneous, less creative, less innovative?

“With every technology, there is a dark side,” said Hany Farid, a computer scientist at Dartmouth. “Sometimes you can predict it, but often you can’t.”

A decade ago, he noted, no one predicted that cellphones and text messaging would lead to traffic accidents caused by distracted drivers. And, he said, it was difficult to foresee that the rise of Facebook and Twitter and personal blogs would become troves of data to be collected and exploited in tracking people’s online behavior.

Often, a technology that is benign in one setting can cause harm in a different context. Google confronted that problem this year with its face-recognition software. In its Picasa photo-storing and sharing service, face recognition helps people find and organize pictures of family and friends.

But the company took a different approach with Goggles, which lets a person snap a photograph with a smartphone, setting off an Internet search. Take a picture of the Eiffel Tower and links to Web pages with background information and articles about it appear on the phone’s screen. Take a picture of a wine bottle and up come links to reviews of that vintage.

Google could have put face recognition into the Goggles application; indeed, many users have asked for it. But Google decided against it because smartphones can be used to take pictures of individuals without their knowledge, and a face match could retrieve all kinds of personal information — name, occupation, address, workplace.

“It was just too sensitive, and we didn’t want to go there,” said Eric E. Schmidt, the chief executive of Google. “You want to avoid enabling stalker behavior.”

A number of years ago, I was unfortunately a part of a national news story. I went to great lengths to avoid being interviewed and filmed. Just what restrictions on the press do you propose?

I am a bit confused, but interested. Different questions/issues are being addressed. As GM has pointed out, and/or I did, photography on public property is pretty well open and protected. I do a lot of photography and know the laws. Except for rare exceptions, you do not have to ask "permission". Nor are there any age restrictions, etc., as long as there is not an expectation of privacy, whether the "press" is shooting the picture or not. "Street shooting" has been around for a long time. How you "feel" about being shot is not legally relevant. Now, how the picture can be used is another issue. Perhaps not for commercial purposes, but for "fine art" there are few restrictions. You can avoid being "interviewed"; you can probably avoid being filmed on private property, but on public property it is difficult if not impossible to avoid being filmed/shot. And you have little recourse. There can and should be no restrictions.

Crafty, are you saying that signs should be posted in public spaces that surveillance cameras are being used? On street corners? In front of ATM machines? Inside Office lobbies? Government buildings? Hotels? As long as you have no expectation of privacy, I don't think there should be any restrictions on cameras.

The key is expectation of privacy. For example, cameras in a hotel lobby/hallway/elevator seem reasonable. Cameras in your private room are not. Shots through your bedroom window from the street are legal; shots of you nude sunbathing in your enclosed back yard from a neighbor's tall tree are not.

"Crafty, are you saying that signs should be posted in public spaces that surveillance cameras are being used? On street corners? In front of ATM machines? Inside Office lobbies? Government buildings? Hotels?"

Yes.

"As long as you have no expectation of privacy, I don't think there should be any restrictions on cameras."

I have not said otherwise! I have said that people should be informed if they are systematically surveilled. To be perfectly clear, what I have in mind is different from, say, videoing someone on a workman's comp fraud case, or a politician or other public figure simply hiding cameras and recording every and anybody in sight.

@GM: I would love to hear about that little adventure of yours, either here or by email.

PS: It occurs to me that your . . . comfort with authority may come from your being surveilled all the time.

"Crafty, are you saying that signs should be posted in public spaces that surveillance cameras are being used? On street corners? In front of ATM machines? Inside Office lobbies? Government buildings? Hotels?"

Yes.

"As long as you have no expectation of privacy, I don't think there should be any restrictions on cameras."

I have not said otherwise! I have said that people should be informed if they are systematically surveilled.

So you have no objection to someone photographing you, without your permission, at will, in a public space? They find you "handsome"?

"Crafty, are you saying that signs should be posted in public spaces that surveillance cameras are being used? On street corners? In front of ATM machines? Inside Office lobbies? Government buildings? Hotels?"

Yes.

I guess what I am asking is: since I or anyone can photograph you at will in a public place, without telling you or asking your permission, why should the establishment, i.e. the hotel, etc., be required to give you notice?

Perhaps because if a human being is doing it, usually I can see them. Surveillance cameras are often quite sneaky. Also, with the accelerating technology in this area we are looking at levels of surveillance previously unimaginable.

If you have a speeding ticket in our state, I can already look up your birth date. If you write me a check, I know your bank account number. This new law could be posted under housing, tax policy or Glibness, but nobody cares politically about a landlord's paperwork issues, so let's turn it around the other way. If you want to mow a lawn, shovel a walk, change a light bulb or a faucet washer for me, fine, give me your Social Security number.

New law effective 5 days ago (who knew?) requires a rental property owner to file a 1099 for anyone who provided $600 of services in a year - that is $50/mo. The only way to know whether it will reach $600 per year is to track it from the first dollar and require a W-9 before the mower sets a wheel on the property and before the first dollar changes hands. Part I of the W-9 requires: exact name and exact matching Social Security number, not the last 4 digits or any effort at privacy protection.
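The bookkeeping burden described above — tracking every payee from the first dollar, because you cannot know in advance who will cross the threshold — amounts to something like this minimal sketch. The helper name is hypothetical and this is an illustration of the record-keeping logic, not tax advice; the $600 figure is the threshold discussed in the post.

```python
from collections import defaultdict

THRESHOLD = 600.00  # annual 1099 reporting threshold discussed above

def needs_1099(payments):
    """Given a list of (payee, amount) payments over a tax year,
    return the set of payees whose cumulative total reaches the
    reporting threshold. Every payee must be tracked from the first
    dollar, since any of them might eventually cross it."""
    totals = defaultdict(float)
    for payee, amount in payments:
        totals[payee] += amount
    return {payee for payee, total in totals.items() if total >= THRESHOLD}
```

Twelve monthly payments of $50 to the same lawn mower sum to exactly $600 and trigger the filing requirement, which is the point: even trivial recurring jobs force the owner to collect a W-9 up front.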

Those my age can look back and count how many people would have your SSN by now, as this new law carries over to every other area of money changing hands.

What could possibly go wrong? Besides bad landlords with info to sell, all the predator would have to do is stand in front of a vacant property, hire out small jobs, collect identity theft info and leave without paying while the work is in process.

When investigators discovered that Arizona gunman Jared Lee Loughner had been rejected by the Army (because of admitted drug use), it was just a matter of time before some politician connected the dots: Hey, let's require military recruiters to report anyone with a history of drug abuse to other federal agencies!

Senator Charles Schumer (D-NY), come on down. Earlier this week, Mr. Schumer proposed that federal officials who learn of an individual's illegal drug use must report that information to the FBI. The admission would then go into a federal database, and be used to deny the individual the right to purchase a gun.

From FoxNews.com:

Noting that the alleged shooter in the Tucson massacre had admitted to military recruiters that he had used drugs on several occasions, Schumer said he is proposing to the Justice Department and the Bureau of Alcohol, Tobacco, Firearms and Explosives that the military be required to notify federal officials about such admissions. He said such a process does not require new legislation.

[snip]

Schumer said if military recruiters or other officials report admissions of drug use to a national database, those individuals could be denied a gun.

"After Jared Loughner was interviewed by the military, he was rejected from the Army because of excessive drug use. Now by law, by law that's on the books, he should not have been allowed to buy a gun," Schumer told NBC.

"But the law doesn't require the military to notify the FBI about that and in this case they didn't. So I--this morning--I'm writing the administration and urging that be done and the military notify the FBI when someone is rejected from the military for excessive drug use and that be added to the FBI database."

Obviously, Schumer's "proposal" is little more than a thinly-veiled effort to restrict Second Amendment rights. But unfortunately, his suggestion may gain traction, given the fallout from the Tucson tragedy and the administration's own feelings on gun control. We can hear the arguments now: This is a reasonable proposal; it won't require any new laws and it might prevent a similar massacre in the future.

But even a cursory examination reveals that the Schumer suggestion is a horribly bad idea, on multiple levels. First, it places an undue burden on military recruiters, who talk to literally dozens of potential recruits during any given week. We're reasonably sure that Senator Schumer has no idea (read: doesn't care) how much work--and paperwork--is involved in processing a single person into the U.S. military.

Now, on top of all that effort, Schumer wants armed forces recruiters--who often work in a "one-deep" office, miles from the nearest military installation--to screen all of their contacts for illegal drug use and report it to the FBI. Memo to Mr. Schumer: in 21st Century America, most of the young men and women who express an interest in military service are ultimately rejected, for a variety of reasons. So, the recruiter must wade through his list of rejects, looking for individuals whose drug use might make them a future, crazed gunman.

Readers will also note that Senator Schumer didn't bother to define the level of illegal drug use that should be reported to the FBI. Why is that an issue? Because the U.S. military, thank God, has standards that are much tougher than society as a whole. By regulation, the armed services routinely reject applicants who fail a urinalysis test, or admit to the recreational use of marijuana (or other drugs) on more than 15 occasions. That's the way it should be. We don't want stoners (or drunks) handling classified information, or maintaining multi-billion dollar weapons systems.

But that doesn't necessarily mean those same individuals should be denied the right to own a gun. In many cases, that rejection by the military is a wake-up call, convincing young people to give up the weed or the booze and become responsible adults. Those individuals, with no arrest record or convictions on file, should not be penalized for what they told a military recruiter years ago. Under current laws, persons in that category are still eligible for gun ownership, and we see no reason to change.

Besides, the type of drug use in Loughner's case was not a clear predictor of his future rampage. We're guessing the marijuana didn't help, but no one can make the case that Loughner was pushed over the edge because of his drug use. Indeed, the type of activity that Loughner told the Army about is a misdemeanor offense in much of the country.

Ask yourself this question: Do we really need to create a national database of young people who have admitted to marijuana use, and send the FBI to pay them a visit--on the very remote chance they might buy a gun and go off the deep end? Personally, I'd rather see the FBI devote its resources to more important tasks, such as tracking down the thousands of individuals from terrorist havens who enter this country each year. That group poses a far greater menace than military rejects who admit to past recreational drug use and may choose to buy a gun some day.

Schumer's proposal creates civil liberties issues as well. Requiring military recruiters to report applicants' admitted drug use could be construed as a form of illegal domestic surveillance. There's also the matter of where the reporting might end. At some point, most recruits fill out a SF-86, which provides background information for their security clearance. Would Mr. Schumer like the military to hand over those as well? Compared to recruiter interview forms, the SF-86 is a veritable goldmine of information on past residences, associations and travels.

And while we're on that topic, what about notes from the Defense Investigative Service agents who interview the family and friends of those applying for a clearance? Did we mention that some of the claims made in those interviews are unsubstantiated? Now, imagine all that information making its way into a national database, accessible to legions of bureaucrats and available for all sorts of purposes. Gee, whatever happened to that supposed right to privacy that the left keeps harping about?

If it's any consolation, the Schumer proposal is still a ways from becoming a legal requirement. But don't discount that possibility, since it can be implemented without new legislation. Stroke of the pen, law of the land, as the Clintonistas used to say.

***ADDENDUM: Hard-core libertarians and the folks at NORML should not interpret this as an endorsement of legalizing drugs. Far from it. We still support the "zero tolerance" policy of the U.S. military and wish the same standard could be applied to military recruits. Unfortunately, the armed services have elected to tolerate certain levels of recreational drug use among prospective enlistees, due to the widespread use of marijuana among those in the primary recruiting cohort (18-25 year-olds).

Your Rx or your privacy

The Supreme Court will decide whether states can bar the buying and selling of prescription data.

IMS Health Inc. operates in the shadows of the healthcare industry, gathering data that drug makers can use to sell medications more effectively. The data, however, are taken from the prescriptions that doctors write for their patients. That information is at the heart of a dispute over how far states can go to protect privacy — a dispute that has reached the Supreme Court, and one that could broaden the reach of the 1st Amendment in troubling ways.

IMS and a handful of market research competitors pay pharmacists for the details contained in prescriptions, including the name of the doctor and the patient, the drug prescribed and the dosage. They compile that information into databases that track individual doctors' prescribing habits, replacing patients' names with "de-identified" numbers. Such databases can be valuable to the public, potentially helping to enforce drug laws, find patterns in the spread of disease and spot variations in how medications are used. But the main use — and the one that pays for the databases — is to help pharmaceutical companies persuade physicians to prescribe more of their products.

That's one of the reasons states across the country have proposed or enacted regulations governing prescription data mining. Drug makers hire legions of sales representatives to pitch physicians in person about new products and new applications for older medications. They pay market researchers millions of dollars for information on individual doctors' prescriptions because it helps them find sick people (chronically sick people in particular) who could be treated with their drugs or who are taking their competitors' medications.

Some doctors object to the disclosure of such arguably private information to drug company sales forces. And some consumer advocates argue persuasively that the marketing inevitably leads physicians to prescribe drugs too frequently, and to prescribe the newer and more expensive drugs that pharmaceutical companies hawk most aggressively. These drugs may have been approved by the Food and Drug Administration, but that doesn't mean they're necessarily the best choice for the patient; the FDA doesn't compare the effectiveness of new drugs against existing therapies.

In light of these concerns, Maine, New Hampshire and Vermont each adopted laws restricting the release of information on individual physicians' prescriptions. IMS, other market researchers and drug manufacturers challenged those laws in federal court, claiming that their 1st Amendment rights were violated. The plaintiffs contended that the information provided by market researchers to drug companies and from drug companies to physicians was a form of "speech" that the states could regulate only if there was a compelling state interest and only if they used the least restrictive means to do so. There was no evidence that drug marketing harmed physicians or patients, they argued, so there was no compelling state interest in limiting speech.

The U.S. 1st Circuit Court of Appeals upheld the strictures in New Hampshire (and later, Maine) but the 2nd Circuit overturned the law in Vermont. The divergent rulings reflected a split between the courts over whether regulating the sale of such data amounted to a restraint on speech. The 1st Circuit held that New Hampshire's law restricted market research companies' conduct — namely, their ability to aggregate and transfer information for drug-marketing purposes — not their speech. The 2nd Circuit held that Vermont restricted speech by data miners and pharmaceutical companies, but did so without demonstrating a compelling state interest.

This month the Supreme Court agreed to consider Vermont's appeal, and we hope the justices will be guided by the dissent written by 2nd Circuit Judge Debra Ann Livingston. As Livingston noted, pharmacies obtain sensitive information about doctors and prescriptions only because the state orders them to gather it for law enforcement reasons. Otherwise, doctors and patients might insist that the data be kept confidential. That information is every bit as sensitive as a hospital chart or a doctor's notes, and should be subject to equally effective protection.

Just because IMS doesn't supply patients' names to drug companies, that doesn't mean they can't be tracked individually. According to Meredith Jacob of the American University Washington College of Law, the databases assign unique numbers to pharmacies' customers that can be used to follow their prescriptions over time, helping drug makers spot the patients most likely to be customers for their new drugs and market those medicines to their physicians.

What's worse, the data about prescriptions could conceivably be combined with other records to reveal some patients' names. That's because "de-identified" data may provide clues that enable it to be matched against names in other databases. In one example of this technique cited in a brief by the Electronic Privacy Information Center, a researcher was able to use public records to name more than a third of the supposedly anonymized victims in Chicago's homicide database.
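The linkage technique described above can be sketched in a few lines. The records and field names below are entirely hypothetical, made up purely to illustrate how a "de-identified" row can be matched back to a name when both datasets share quasi-identifiers such as ZIP code, birth year and sex:

```python
# Hypothetical sketch of a linkage attack: joining a "de-identified" dataset
# to a public record (e.g. a voter roll) on shared quasi-identifier fields.

deidentified = [  # patient names replaced with opaque IDs
    {"id": "P-001", "zip": "30306", "birth_year": 1964, "sex": "F"},
    {"id": "P-002", "zip": "30309", "birth_year": 1981, "sex": "M"},
]

public_records = [  # a public dataset that still carries names
    {"name": "Alice Example", "zip": "30306", "birth_year": 1964, "sex": "F"},
    {"name": "Bob Example",   "zip": "30312", "birth_year": 1975, "sex": "M"},
]

def reidentify(deid_rows, public_rows, keys=("zip", "birth_year", "sex")):
    """Return {opaque_id: name} for rows uniquely matched on the key fields."""
    matches = {}
    for d in deid_rows:
        hits = [p for p in public_rows if all(p[k] == d[k] for k in keys)]
        if len(hits) == 1:  # a unique match defeats the anonymization
            matches[d["id"]] = hits[0]["name"]
    return matches

print(reidentify(deidentified, public_records))  # {'P-001': 'Alice Example'}
```

The point of the sketch is that no single field is identifying on its own; it is the unique *combination* of ordinary attributes across two datasets that lets an outsider recover a name, which is exactly the risk the Chicago homicide-database study demonstrated.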

Someday very soon, if you stroll through Piedmont Park, travel the Downtown Connector, hit one of the bars or restaurants in Midtown or visit the Georgia Dome or Philips Arena, you'll have an invisible companion: the Atlanta Police Department.

This spring, the department will open a video integration center designed to compile and analyze footage from thousands of public and private security cameras throughout the city. Images from as many as 500 cameras in downtown and Midtown are expected to be flowing into the center by mid-summer.

Several metro Atlanta police agencies use cameras to bolster public safety, but the city's new venture, which will integrate data supplied by private entities such as CNN, America's Mart and Midtown Blue as well as public agencies such as the Federal Reserve, MARTA and the Georgia Department of Transportation, represents a whole new level of electronic surveillance.

Atlanta police Chief George Turner pointed to the case of Charles Boyer, gunned down outside a Virginia-Highland apartment building in November, to show what cameras can do. Footage from a security camera, which captured images of men refueling a vehicle similar to one described by witnesses to the shooting, contributed to the arrest five days later of the three men charged with Boyer's murder.

"How successful were we in solving that crime because of the video we had?" Turner asked in an interview with the Atlanta Journal-Constitution. "That's an example of how this will work."

In fact, the technology installed in the new center will be capable of much more, according to David Wilkinson, president of the Atlanta Police Foundation, which funds a camera network operated by the private security agency Midtown Blue.

The foundation raised a half-million dollars to supplement the $2.6 million in federal funds the city will use to build its new center. The federal money came from Homeland Security grants and Justice Department seizure funds.

Wilkinson said the center will use software that can identify suspicious activity and guide officers right to the scene of a crime as it's occurring. In effect, the software will multiply the eyes and ears of the five to seven people per shift who will initially monitor video footage around the clock.

"Monitoring is somewhat of a fallacy," Wilkinson said. "Analytics will help control the cameras."

The software includes a program called "Gun Spotter," which automatically cues up cameras in the vicinity of the sound of gunfire, so dispatchers can get a quick jump on what happened. Other software will send images to the officers' in-car computers and even to the screens of web-enabled smart phones.

"The real goal is to prevent the crime," Wilkinson said. "You do that by setting up police patrols, cameras, things that deter criminals from ever committing crime."

Facial recognition systems, license plate reading and automatic tracking programs also are available, although cities such as Chicago, which has pioneered citywide video surveillance, have reported those technologies are not yet ready for prime time.

Atlanta is modeling its surveillance network after Chicago's, which integrates data from a 10,000-camera network. This week, the Illinois ACLU issued a report demanding a moratorium on further expansion of Chicago's system on the grounds that it represents an unacceptable threat to personal privacy.

"Cameras do not deter crime, they just displace it," said Adam Schwartz, a lawyer for the Illinois ACLU. "It's difficult to see where the benefits of using cameras outweigh the costs --- including a vast amount of money, potential privacy invasion and a potential chilling of free speech."

With the promise of integrated surveillance capabilities in the hands of Atlanta police, Georgia's ACLU is voicing similar concerns.

"We always hope for strong oversight and regulation to make sure there are no violations of privacy," Georgia ACLU attorney Chara Fisher Jackson said. "But until we see it [at work], we won't say what actions we might take."

Greg McGraw, who lives in East Cobb and works in Atlanta's Old Fourth Ward, isn't too worried about police looking over his shoulder.

"People expose themselves so much on Facebook, privacy is a joke," McGraw said. "If it's going to make people safer, I'm for it."

Megan Larion, who lives in Buckhead and manages a Virginia-Highland apartment complex, is OK with the cameras, too, especially when she thinks about Boyer's slaying.

"I guess those folks who think these cameras mark the end of the world will be upset, but that's all," Larion said. "I think it's a good thing. It'll improve our industry, and people will feel more safe."

For a preview of how Atlanta's proposed network will function, you just have to look at the nearly 50 video screens that flicker above the front office of Midtown Blue. When someone calls in to report suspicious activity, a video dispatcher can remotely pan, tilt or zoom any one of the $13,000 cameras, tracking the suspect and directing an officer to the spot.

"When you have a dispatcher sitting here, you can actually catch crimes before they occur," said Col. Wayne Mock, a retired Atlanta policeman who manages Midtown Blue.

If a crime does occur, the cameras make excellent witnesses, he said. "The video tells you what actually happened and doesn't get excited like the average witness might."

Other local police agencies also are using cameras to bolster the impact of their officers.

"We were convinced that this was an effective force multiplier," said Lilburn police Chief John Davidson.

But cities in other states have encountered glitches. Cincinnati is currently on its second video surveillance network; the first system, started in 2005, proved ineffective. And Orlando's system failed to deliver on its promise when the city ran short of funds for the necessary software.

In Chicago, even with cameras on every corner, as Mayor Richard M. Daley famously said he wants, video has its limits, said Jonathan Lewin, managing deputy director of the city's emergency management office.

"It provides an overall positive effect if you can saturate the area," Lewin said. "But it's not going to provide the panacea that will completely eliminate crime."

Prepare To Give Up All Private Data For Any Gold Purchase Over $100
Submitted by Tyler Durden on 02/18/2011 20:59 -0500
www.zerohedge.com

A week ago, when we reported on a move by the Dutch central bank that ordered a pension fund to forcibly reduce its gold holdings, we speculated that "this latest gold confiscation equivalent event is most certainly coming to a banana republic near you." And while we got the Banana republic right, the event that we are about to describe is not necessarily identical. It is much worse. A bill proposed in the State of Washington (House Bill 1716), by representatives Asay, Hurst, Klippert, Pearson, and Miloscia, whose alleged purpose is to regulate secondhand gold dealers, seeks to capture "the name, date of birth, sex, height, weight, race, and address and telephone number of the person with whom the transaction is made" or said otherwise, of every purchaser of gold in the state of Washington. Furthermore, if passed, Bill 1716 will record "a complete description of the property pledged, bought, or consigned, including the brand name, serial number, model number or name, any initials or engraving, size, pattern, and color or stone or stones" and of course price. But the kicker: if a transaction is made for an amount over $100, which means roughly a tenth of an ounce of gold, also required will be a "signature, photo, and fingerprint of the person with whom the transaction is made." In other words, very soon Washington state will know more about you than you know about yourself, if you dare to buy any gold object worth more than a C-note. How this proposal is supposed to protect consumers against vulture gold dealers we don't quite get. Hopefully someone will explain it to us. We do, however, get how Americans will part with any and all privacy if they were to exchange fiat for physical. And in a police state like America, this will likely not be taken lightly, thereby killing the gold trade should the proposed Bill pass, and be adopted elsewhere.

By JULIA ANGWIN and EMILY STEEL

As the surreptitious tracking of Internet users becomes more aggressive and widespread, tiny start-ups and technology giants alike are pushing a new product: privacy.

Companies including Microsoft Corp., McAfee Inc.—and even some online-tracking companies themselves—are rolling out new ways to protect users from having their movements monitored online. Some are going further and starting to pay people a commission every time their personal details are used by marketing companies.

"Data is a new form of currency," says Shane Green, chief executive of a Washington start-up, Personal Inc., which has raised $7.6 million for a business that aims to help people profit from providing their personal information to advertisers.

The Wall Street Journal's year-long What They Know investigation into online tracking has exposed a fast-growing network of hundreds of companies that collect highly personal details about Internet users—their online activities, political views, health worries, shopping habits, financial situations and even, in some cases, their real names—to feed the $26 billion U.S. online-advertising industry.

In the first nine months of last year, spending on Internet advertising rose nearly 14%, while the overall ad industry only grew about 6%, according to data from PriceWaterhouseCoopers LLP and WPP PLC's Kantar Media.

Testing the new privacy marketplace are people like Giles Sequeira, a London real-estate developer who recently began selling his own personal data. "I'm not paranoid about privacy," he says. But as he learned more, he says, he became concerned about how his data was getting used.

People "have no idea where it is going to end up," he says.

So in December, Mr. Sequeira became one of the first customers of London start-up Allow Ltd., which offers to sell people's personal information on their behalf, and give them 70% of the sale. Mr. Sequeira has already received one payment of £5.56 ($8.95) for letting Allow tell a credit-card company he is shopping for new plastic.

"I wouldn't give my car to a stranger" for free, Mr. Sequeira says, "So why do I do that with my personal data?"

As people are becoming more aware of the value of their data, some are seeking to protect it, and sometimes sell it. In January at the World Economic Forum in Davos, Switzerland, executives and academics gathered to discuss how to turn personal data into an "asset class" by giving people the right to manage and sell it on their own behalf.

"We are trying to shift the focus from purely privacy to what we call property rights," says Michele Luzi, a director at consulting firm Bain & Co. who led the Davos discussion.

Allow, the company that paid Mr. Sequeira, is just one of nearly a dozen start-ups hoping to profit from the nascent privacy market. Several promise to pay people a commission on the sale of their data. Others offer free products to block online tracking, in the hopes of later selling users other services—such as disposable phone numbers or email addresses that make personal tracking tougher. Still others sell paid services, such as removing people's names from marketing databases.

"Entrepreneurs smell opportunity," says Satya Patel, venture capitalist at Battery Ventures, which led a group of investors that poured $8 million in June into a start-up called SafetyWeb, which helps parents monitor their children's activities on social-networking sites and is rolling out a new privacy-protection service for adults, myID.com.

For the lightly regulated tracking industry, a big test of the new privacy marketplace is whether it will quiet the growing chorus of critics calling for tougher government oversight. Lawmakers this month introduced two separate privacy bills in Congress, and in December the Obama administration called for an online-privacy "bill of rights." The Federal Trade Commission is pushing for a do-not-track system inspired by the do-not-call registry that blocks phone calls from telemarketers.

The industry is hustling on several fronts to respond to regulatory concerns. Last week, Microsoft endorsed a do-not-track system. Microsoft also plans to add a powerful anti-tracking tool to the next version of its Web-browsing software, Internet Explorer 9. That's a reversal: Microsoft's earlier decision to remove a similar privacy feature from Explorer was the subject of a Journal article last year.

The online-ad industry itself is also rolling out new privacy services in hopes of heading off regulation. Most let users opt out of seeing targeted ads, though they generally don't prevent tracking.

The privacy market has been tested before, during the dot-com boom around 2000, a time when online tracking was just being born. A flurry of online-privacy-related start-ups sprang up, but only a few survived due to limited consumer appetite.

As recently as 2008, privacy was so hard to sell that entrepreneur Rob Shavell says he avoided even using the word when he pitched investors on his start-up, Abine Inc. , which blocks online tracking. Today, he says, Abine uses the word "privacy" again, and has received more than 30 unsolicited approaches from investors in the past six months.

In June, another company, TRUSTe, raised $12 million from venture capitalists to expand its privacy services. At the same time, Reputation.com Inc. raised $15 million and tripled its investments in new privacy initiatives, including a service that removes people's names from online databases and a tool to let people encrypt their Facebook posts.

"It's just night and day out there," says Abine's Mr. Shavell.

Online advertising companies—many of which use online tracking to target ads—are also jumping into the privacy-protection business. AOL, one of the largest online trackers, recently ramped up promotion of privacy services that it sells.

And in December, enCircle Media, an ad agency that works with tracking companies, invested in the creation of a privacy start-up, IntelliProtect. Last month IntelliProtect launched an $8.95-a-month privacy service that will, among other things, prevent people from seeing some online ads based on tracking data.

In its marketing material, IntelliProtect doesn't disclose its affiliation with the ad company, enCircle Media, that invested in it. When contacted by the Journal, IntelliProtect said it would never give or sell customer data to other entities, including its parent companies.

A cofounder of Allow, Justin Basini, also traces his roots to the ad industry. Mr. Basini came up with the idea for his new business when working as head of brand marketing for Capital One Europe. He says he was amazed at the "huge amounts" of data the credit-card companies had amassed about individuals.

But the data didn't produce great results, he says. The response rate to Capital One's targeted mailings was 1-in-100, he says—vastly better than untargeted mailings, but still "massively inefficient," Mr. Basini says. "So I thought, 'Why not try to incentivize the customer to become part of the process?'"

People feel targeted ads online are "spooky," he says, because people aren't aware of how much personal data is being traded. His proposed solution: Ask people permission before showing them ads targeted at their personal interests, and base the ads only on information people agree to provide.

In 2009, Mr. Basini left Capital One and teamed up with cofounder Howard Huntley, a technologist. He raised £440,000 ($708,400) from family, friends and a few investors, and launched Allow in December. The company has attracted 4,000 customers, he says.

Mr. Basini says his strategy is to first make individuals' data scarce, so it can become more valuable when he sells it later. To do that, Allow removes its customers from the top 12 marketing databases in the U.K., which Mr. Basini says account for 90% of the market. Allow also lists its customers in the official U.K. registries for people who don't want to receive telemarketing or postal solicitations.

Currently, Allow operates only in the U.K., which (unlike the U.S.) has a law that requires companies to honor individuals' requests to be removed from marketing databases.

Then, Mr. Basini asks his customers to create a profile that can contain their name, address, employment, number of kids, hobbies and shopping intent—in other words, lists of things they're thinking about buying. Customers can choose to grant certain marketers permission to send them offers, in return for a 70% cut of the price marketers pay to reach them. Allow says it has finalized a deal with one marketer and has five more deals it hopes to close soon.

Mr. Basini says Allow tries to prevent people from "gaming" the system by watching for people who state an intention to buy lots of things but don't follow through. Because Allow's data comes from people who have explicitly stated their interest in being contacted about specific products, it can command a higher price than data gathered by stealthier online-tracking technologies. For instance, online-tracking companies routinely sell pieces of information about people's Web-browsing habits for less than a penny per person. By comparison, Allow says it sells access to Mr. Sequeira for £5 to £10 per marketer.

Mr. Basini says Allow tries to prevent people from "gaming" the system by watching for people who state an intention to buy lots of things, but don't follow through.Because Allow's data comes from people who have explicitly stated their interest in being contacted about specific products, it can command a higher price than data gathered by stealthier online-tracking technologies. For instance, online-tracking companies routinely sell pieces of information about people's Web-browsing habits for less than a penny per person. By comparison, Allow says it sells access to Mr. Sequeira for £5 to £10 per marketer.

Mr. Sequeira, the London real-estate executive, says that after he filled out an "intention" to get a new credit card, he received a £15.56 credit in his Allow account: a £10 signing fee plus a £5.56 payment from the sale of his data to a credit-card marketer. So far, he says, he hasn't received a card offer from the company.

"I don't think it's going to make a life-changing amount of money," says Mr. Sequeira. But, he says he enjoyed the little windfall enough that he is now letting Allow offer his data to other advertisers. "I can see this becoming somewhat addictive."

Sens. John McCain and John Kerry are circulating proposed legislation to create an "online privacy bill of rights," according to people familiar with the situation, a sign of bipartisan support for efforts to curb the Internet-tracking industry.

Mr. McCain, an Arizona Republican, and Mr. Kerry, a Massachusetts Democrat, are backing a bill that would require companies to seek a person's permission to share data about him with outsiders. It would also give people the right to see the data collected on them. The bill is expected to be introduced ahead of a Senate Commerce Committee hearing next Wednesday on online privacy.

The move comes amid widening scrutiny of the tracking industry. In the past year, The Wall Street Journal's "What They Know" series has revealed that popular websites install thousands of tracking technologies on people's computers without their knowledge, feeding an industry that gathers and sells information on their finances, political leanings and religious interests, among other things.

In another sign of Washington's efforts to regulate tracking, the Obama administration is moving to fill two key jobs related to privacy policy. People familiar with the matter said the administration is in talks with Jules Polonetsky, who currently heads the Future of Privacy Forum, an industry-funded think tank, to run a new privacy office in the Commerce Department. Mr. Polonetsky was previously chief privacy officer at online-advertising companies AOL Inc. and DoubleClick, now part of Google Inc.

Daniel Weitzner, a Commerce Department official who pushed for creation of the agency's new privacy office, is expected to become deputy chief technology officer in the White House, where he would oversee a privacy task force, the people familiar with the matter said.

Sen. McCain's endorsement of privacy legislation adds a prominent Republican voice to the issue, indicating that concern over Internet tracking crosses party lines.

In December, the Federal Trade Commission urged Congress to authorize creation of a "do-not-track" system, modeled after the do-not-call list that governs telemarketers. Rep. Jackie Speier, a California Democrat, introduced such a bill in January.

The draft Kerry-McCain bill would create the nation's first comprehensive privacy law, covering personal-data gathering across all industries. That was a key recommendation of a recent Commerce Department report, developed in part by Sen. Kerry's brother Cameron, the department's general counsel. Current laws cover only the use of certain types of personal data, such as financial and medical information.

The Kerry-McCain bill would cover data ranging from names and addresses to fingerprints and unique IDs assigned to individuals' cellphones or computers. It would also establish a program to certify companies with high privacy standards. Those companies would be allowed to sell personal data to outsiders without seeking permission in each instance.

A spokeswoman for Sen. McCain confirmed that the two senators were "in discussion" but said "we don't have anything to announce at this time." A spokeswoman for Sen. Kerry declined to comment.

Last week, Florida Republican Rep. Cliff Stearns said he would introduce draft privacy legislation soon, although his approach would largely allow the industry to continue many current practices.

Speaking at the Technology Policy Institute, Rep. Stearns said his proposal would allow the FTC to approve a five-year self-regulatory program that would encourage companies to offer more information to consumers about how they were being tracked. "The goal of the legislation is to empower consumers to make their own privacy choices," he said.

The White House today proposed sweeping revisions to U.S. copyright law, including making "illegal streaming" of audio or video a federal felony and allowing FBI agents to wiretap suspected infringers.

In a 20-page white paper (PDF), the Obama administration called on the U.S. Congress to fix "deficiencies that could hinder enforcement" of intellectual property laws.

The report was prepared by Victoria Espinel, the first Intellectual Property Enforcement Coordinator, who received Senate confirmation in December 2009, and represents a broad tightening of many forms of intellectual property law, including ones that deal with counterfeit pharmaceuticals and overseas royalties for copyright holders. (See CNET's report last month previewing today's white paper.)

Some of the highlights:

• The White House is concerned that "illegal streaming of content" may not be covered by criminal law, saying "questions have arisen about whether streaming constitutes the distribution of copyrighted works." To resolve that ambiguity, it wants a new law to "clarify that infringement by streaming, or by means of other similar new technology, is a felony in appropriate circumstances."

• Under federal law, wiretaps may only be conducted in investigations of serious crimes, a list that was expanded by the 2001 Patriot Act to include offenses such as material support of terrorism and use of weapons of mass destruction. The administration is proposing to add copyright and trademark infringement, arguing that move "would assist U.S. law enforcement agencies to effectively investigate those offenses."

• Under the 1998 Digital Millennium Copyright Act, it's generally illegal to distribute hardware or software--such as the DVD-decoding software Handbrake, available from a server in France--that can "circumvent" copy protection technology. The administration is proposing that if Homeland Security seizes circumvention devices, it be permitted to "inform rightholders," "provide samples of such devices," and assist "them in bringing civil actions."

The term "fair use" does not appear anywhere in the report. But it does mention Web sites like The Pirate Bay, which is hosted in Sweden, when warning that "foreign-based and foreign-controlled Web sites and Web services raise particular concerns for U.S. enforcement efforts." (See previous coverage of a congressional hearing on overseas sites.)

The usual copyright hawks, including the U.S. Chamber of Commerce, applauded the paper, which grew out of a so-called joint strategic plan that Vice President Biden and Espinel announced in June 2010. Rob Calia, a senior director at the Chamber's Global Intellectual Property Center, said: "We strongly support the white paper's call for Congress to clarify that criminal copyright infringement through unauthorized streaming is a felony. We know both the House and Senate are looking at this issue and encourage them to work closely with the administration and other stakeholders to combat this growing threat."

In October 2008, President Bush signed into law the so-called Pro IP Act, which created Espinel's position and increased penalties for infringement, after expressing opposition to an earlier version.

Unless legislative proposals--like one nearly a decade ago implanting strict copy controls in digital devices--go too far, digital copyright tends not to be a particularly partisan topic. The Digital Millennium Copyright Act, near-universally disliked by programmers and engineers for its anti-circumvention section, was approved unanimously in the U.S. Senate.

At the same time, Democratic politicians tend to be a bit more enthusiastic about the topic. Biden was a close Senate ally of copyright holders, and President Obama picked top copyright industry lawyers for Justice Department posts. Last year, Biden warned that "piracy is theft."

No less than 78 percent of political contributions from Hollywood went to Democrats in 2008, which is broadly consistent with the trend for the last two decades, according to OpenSecrets.org.