Category: Digital Rights

DOVER, Del. (AP) — State lawmakers have given final approval to a bill prohibiting universities and colleges in Delaware from requiring that students or applicants for enrollment provide their social networking login information.

The bill, which unanimously passed the Senate shortly after midnight Saturday, also prohibits schools and universities from requesting that a student or applicant log onto a social networking site so that school officials can access the site profile or account.

The bill includes exemptions for investigations by police agencies or a school's public safety department if criminal activity is suspected.

Lawmakers approved the bill after deleting an amendment that would have expanded the scope of its privacy protections to elementary and secondary school students.

First of all, there was the realization that if lawmakers had to draft this law, it meant universities and colleges were already strong-arming students into giving up their social networking credentials. This descent into hell took my breath away.

But I groped my way back from the burning sulfur since the new bill seemed to show a modicum of common sense.

Until finally we learn that younger children won't be afforded the same protections… Can teachers and principals actually bully youngsters into logging in to Facebook so school officials can access their accounts? Can they make kids hand over their passwords? What are we teaching our young people about their identity?

It seems a number of people take the use of “real names” on the Internet as something we should all just accept without further thought. But a recent piece by Gartner Distinguished Analyst Bob Blakley shows very clearly why at least a bit of thought is actually called for – at least amongst those of us building the infrastructure for cyberspace:

… Google is currently trying to enforce a “common name” policy in Google+. The gist of the policy is that “your Google+ name must be ‘THE’ name by which you are commonly known”.

This policy is insane. I really mean insane; the policy is simply completely divorced from the reality of how names really work AND the reality of how humans really work, and it’s also completely at odds with what Google is trying to achieve with G+. (my emphasis – Kim)

The root of the problem is that Google suffers from the common – but false – belief that names are uniquely and inherently associated with people. I’ve already explained why this belief is false elsewhere, but for the sake of coherence, I’ll summarize here.

There isn’t a one-to-one correspondence between people and names. Multiple people share the same name (George Bush, for example, or even me: George Robert Blakley III), and individual people have multiple names (George Eliot, George Sand, George Orwell, or Boy George – or even me, George Robert “Bob” Blakley III). And people use different names in different contexts; King George VI was “Bertie” to family and close friends.

THERE IS NO SUCH THING AS A “REAL” NAME.

A name is not an attribute of a person; it is an identifier of a person, chosen arbitrarily and changeable at will. In England, I can draw up a deed poll in my living room and change my name at any time I choose, without the intervention or assistance of any authority. In California, I apparently don’t even need to write anything down: I can change my name simply by having people call me by the new name on the street.

COMMON NAMES ARE NOT SINGULAR OR UNIQUE.

Richard Garriott is COMMONLY known as “Richard Garriott” in some contexts (check Wikipedia), and COMMONLY known as Lord British in other contexts (go to a computer gaming convention). Bob Wills and Elvis are both “The King”.

Despite these complexities, Google wants to intervene in your choice of name. They want veto power over what you can call yourself.

Reversing the presumption that I choose what to be called happens – in the real world – only in circumstances which diminish the dignity of the individual. We choose the names of infants, prisoners, and pets. Imposing a name on someone is repression; free men and women choose their names for themselves.

Google+’s naming policy isn’t failing because it’s poorly implemented, or because Google’s enforcement team is stupid. It’s failing because what they’re trying to do is (1) impossible, and (2) antisocial.

(2) is critical. Mike Neuenschwander has famously observed that social software is being designed by the world’s least sociable people, and Google+ seems to be a case in point. Google wants to be in the “social” business. But they’re not behaving sociably. They’re acting like prison wardens. No one will voluntarily sign up to be a prisoner. Every day Google persists in their insane attempt to tell people what they can and can’t call themselves, Google+ as a brand becomes less sociable and less valuable. The policy is already being described as racist and sexist; it’s also clearly dangerous to some disadvantaged groups.

If you want to be the host of a social network, you’ve got to create a social space. Creating a social space means making people comfortable. That’s hard, because people don’t fit in any set of little boxes you want to create – especially when it comes to names. But that’s table stakes for social – people are complicated; deal with it. Facebook has an advantage here; despite its own idiotic real-names policy and its continual assaults on privacy, the company has real (i.e. human) sociability in its DNA – it was created by college geeks who wanted to get dates; Google+ wasn’t, and it shows.

If Google’s intention in moving into social networking is to sell ads, Google+’s common names policy gives them a lock on the North American suburban middle-aged conservative white male demographic. w00t.

The Google+ common name policy is insane. It creates an antisocial space in what is supposed to be a social network. It is at odds with basic human social behavior; its implementation is NECESSARILY arbitrary and infuriating, and it is actively damaging the Google+ brand and indeed the broader Google brand.

The problem is not flawed execution; it is that the policy itself is fundamentally unsound, unworkable, and unfixable.

Google can be a social network operator, or they can be the name police. They can’t be both. They need to decide – soon. If I were Google, I’d scrap the policy – immediately – and let people decide for themselves what they will be called.

If you are interested in social networks, don't miss the slick video about Max Schrems’ David and Goliath struggle with Facebook over the way they are treating his personal information. Click on the red “CC” in the lower right-hand corner to see the English subtitles.

Max is a 24-year-old law student from Vienna with a flair for the interview and plenty of smarts about both technology and legal issues. In Europe there is a requirement that entities holding data about individuals make it available to them on request. That's how Max ended up with a personalized CD from Facebook, the contents of which he printed out on a stack of paper more than a thousand pages thick (see image below). Analysing it, he came to the conclusion that Facebook is engineered to break many of the requirements of European data protection law. He argues that the record Facebook provided catches the company in flagrante delicto.

The logical next step was a series of 22 lucid and well-reasoned complaints that he submitted to the Irish Data Protection Commissioner (Facebook states that European users have a relationship with the Irish Facebook subsidiary). This was followed by another perfectly executed move: setting up a web site called Europe versus Facebook that does everything right in terms of using web technology to mount a campaign against a commercial enterprise that depends on its public relations to succeed.

Europe versus Facebook, which seems eventually to have become an organization, then opened its own YouTube channel. As part of the documentation, they publicised the procedure Max used to get his personal CD. Somehow this recipe found its way to reddit where it ended up on a couple of top ten lists. So many people applied for their own CDs that Facebook had to send out an email indicating it was unable to comply with the requirement that it provide the information within a 40 day period.

If that seems like enough, it wasn't all. As Max studied what had been revealed to him, he noticed that important information was missing and asked for the rest of it. The response ratchets the battle up one more notch:

Dear Mr. Schrems:

We refer to our previous correspondence and in particular your subject access request dated July 11, 2011 (the Request).

To date, we have disclosed all personal data to which you are entitled pursuant to Section 4 of the Irish Data Protection Acts 1988 and 2003 (the Acts).

Please note that certain categories of personal data are exempted from subject access requests.
Pursuant to Section 4(9) of the Acts, personal data which is impossible to furnish or which can only be furnished after disproportionate effort is exempt from the scope of a subject access request. We have not furnished personal data which cannot be extracted from our platform in the absence of disproportionate effort.

Section 4(12) of the Acts carves out an exception to subject access requests where the disclosures in response would adversely affect trade secrets or intellectual property. We have not provided any information to you which is a trade secret or intellectual property of Facebook Ireland Limited or its licensors.

Please be aware that we have complied with your subject access request, and that we are not required to comply with any future similar requests, unless, in our opinion, a reasonable period of time has elapsed.

For example, as I wrote here (and Max describes here), Facebook's “Like” button collects information every time an Internet user views a page containing the button, and a Facebook cookie associates that page with all the other pages with “Like” buttons visited by the user in the last 3 months.

If you use Facebook, records of all these visits are linked, through cookies, to your Facebook profile – even if you never click the “like” button. These long lists of pages visited, tied in Facebook's systems to your “Real Name identity”, were not included on Max's CD.
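To make the mechanics concrete, here is a toy sketch (in Python, with entirely hypothetical names – this is not Facebook's code) of the kind of server-side bookkeeping being described: every page that embeds the button triggers a request carrying the visitor's cookie and the embedding page's URL, and a later login ties the accumulated history to a named profile.

```python
from collections import defaultdict

class ButtonTracker:
    """Toy model of third-party button tracking: every page embedding the
    button fires a request that carries the visitor's cookie and the
    embedding page's URL, letting the server link browsing history to one ID."""

    def __init__(self):
        self.visits = defaultdict(list)  # cookie_id -> pages seen
        self.profiles = {}               # cookie_id -> real-name profile

    def record_button_load(self, cookie_id, page_url):
        # Fires on every page load; no click on the button is needed.
        self.visits[cookie_id].append(page_url)

    def link_profile(self, cookie_id, real_name):
        # Once the same cookie logs in, the history becomes personal data.
        self.profiles[cookie_id] = real_name

    def history_for(self, real_name):
        return [p for c, pages in self.visits.items()
                if self.profiles.get(c) == real_name for p in pages]

tracker = ButtonTracker()
tracker.record_button_load("cookie-123", "https://example.org/health-article")
tracker.record_button_load("cookie-123", "https://example.org/jobs")
tracker.link_profile("cookie-123", "Max Schrems")
print(tracker.history_for("Max Schrems"))
```

The point of the sketch is that the linkage needs no cooperation from the user beyond loading pages: the pages visited accumulate under the cookie first, and the name is attached afterward.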

Is Facebook prepared to argue that it need not reveal this stored information about your personal data because doing so would adversely affect its “intellectual property”?

It will be absolutely amazing to watch how this issue plays out, and see just what someone with Max's media talent is able to do with the answers once they become public.

The result may well impact the whole industry for a long time to come.

Meanwhile, students of these matters would do well to look at Max's many complaints:

Excessive processing of data.
Facebook is hosting enormous amounts of personal data and it is processing all data for its own purposes. It seems Facebook is a prime example of illegal “excessive processing”.

Like Button.
The Like Button is creating extended user data that can be used to track users all over the internet. There is no legitimate purpose for the creation of the data. Users have not consented to the use.

Obligations as Processor.
Facebook has certain obligations as a provider of a “cloud service” (e.g. not using third party data for its own purposes or only processing data when instructed to do so by the user).

According to this piece in Digital Trends, LinkedIn has “opted” 100 million of us into sharing private information within advertisements. This includes posting our names and photos as advertisers’ helpers.

“When a LinkedIn user views a third-party advertisement on the social network, they will see user profile pictures and names of connections if that connection has recommended or followed a brand. Any time that a user follows a brand, they unwittingly become a cheerleader for the company or organization if it advertises through LinkedIn.”

And in case that doesn't surprise you, how about this:

“In order to opt out of social advertising, the LinkedIn user has to take four steps to escape third-party advertisements:

“Hover over the user name in the top right hand corner of any LinkedIn page and click ‘Settings’. On the Settings page, click ‘Account’. On the Account tab, click ‘Manage Social Advertising’. Uncheck the box next to “LinkedIn may use my name, photo in social advertising.” and click the save button.”

What a mistake.

I know there are many who think that if Facebook can take the huddled masses to the cleaners, why shouldn't everyone?

It seems obvious that the overwhelming majority of people who participate in Facebook are still a few years away from understanding and reacting to what they have got themselves into.

But LinkedIn's membership is a lot more savvy about the implications of being on the site – and about why they are sharing information there. Much of their participation has to do with future opportunities, and everyone is sensitive about the need to control and predict how they will be evaluated later in their career. Until yesterday I for one had been convinced that LinkedIn was smart enough to understand this.

But apparently not. And I think it will turn out that many of the professionals who until now have been happy to participate will choke on the potential abuse of their professional information and reputation – and on LinkedIn's disregard for their trust.

My conclusion? LinkedIn has just thrown down the gauntlet and challenged us, as a community of professionals, to come up with safe and democratic ways to network.

This much is obvious: we need a network that respects the rights of the people in it. LinkedIn just lost my vote.

Skud at Geek Feminism Blog has created a wiki documenting work she and her colleagues are doing to “draft a comprehensive list” of those who would be harmed by a policy banning pseudonymity and requiring “real names”.

The result is impressive. The rigour Skud and colleagues have applied to their quest has produced an information payload that is both illuminating and touching.

Those of us working on identity technology have to internalize the lessons here. Over-identification is ALWAYS wrong. But beyond that, there are people who are especially vulnerable to it. They have to be treated as first class citizens with clear rights and we need to figure out how to protect them. This goes beyond what we conventionally think of as privacy concerns (although perhaps it sheds light on the true nature of what privacy is – I'm still learning).

Often people argue in favor of “Real Names” in order to achieve accountability. The fact is that technology offers us other ways to achieve accountability. By leveraging the properties of minimal disclosure technology, we can allow people to remain anonymous and yet bar them from given environments if their behavior gets sufficiently anti-social.
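As a toy illustration of that idea – real minimal-disclosure systems rest on anonymous-credential cryptography such as U-Prove or Idemix, and this hypothetical sketch only shows the control flow – accountability can attach to an anonymous token rather than to a name:

```python
import secrets

class AnonymousForum:
    """Toy sketch of accountability without identification: users act under
    anonymous tokens; a misbehaving token can be barred, but the operator
    never learns a legal name."""

    def __init__(self):
        self.banned = set()
        self.posts = []

    def post(self, token, message):
        if token in self.banned:
            return False          # barred from the environment
        self.posts.append((token, message))
        return True

    def ban(self, token):
        # The sanction attaches to the credential, not to a "real name".
        self.banned.add(token)

forum = AnonymousForum()
token = secrets.token_hex(16)     # user-held anonymous credential
forum.post(token, "a constructive comment")
forum.ban(token)                  # anti-social behaviour gets the token barred
print(forum.post(token, "spam"))  # -> False: barred without ever being identified
```

The design point: exclusion from an environment requires only a stable credential, not knowledge of who holds it.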

But enough editorializing. Here's Skud's intro. Just remember that in this case the real enlightenment is in the details, not the summary.

This page lists groups of people who are disadvantaged by any policy which bans Pseudonymity and requires so-called “Real names” (more properly, legal names).

This is an attempt to create a comprehensive list of groups of people who are affected by such policies.

The cost to these people can be vast, including:

harassment, both online and offline

discrimination in employment, provision of services, etc.

actual physical danger of bullying, hate crime, etc.

arrest, imprisonment, or execution in some jurisdictions

economic harm such as job loss, loss of professional reputation, etc.

social costs of not being able to interact with friends and colleagues

possible (temporary) loss of access to their data if their account is suspended or terminated

The groups of people who use pseudonyms, or want to use pseudonyms, are not a small minority (some of the classes of people who can benefit from pseudonyms constitute up to 50% of the total population, and many of the others are classes of people that almost everyone knows). However, their needs are often ignored by the relatively privileged designers and policy-makers who want people to use their real/legal names.

Wait a minute. Just got a note from the I Can't Stop Editorializing Department: the very wiki page that brings us Skud's analysis contains a Facebook “Like” button. It might be worth removing it given that Facebook requires “Real Names”, and then transmits the URL of any page with a “Like” button to Facebook so it can be associated with the user's “Real Name” – whether or not they click on the button or are logged into Facebook.

In Europe there has been a lot of discussion about “the Right to be Forgotten” (see, for example, Le droit à l’oubli sur Internet). The notion is that after some time, information should simply fade away (counteracting digital eternity).

Whatever words we use, the right, if recognized, would be a far-reaching game-changer – and as I wrote here, represent a “cure as important as the introduction of antibiotics was in the world of medicine”.

Against this backdrop, the following report by Ciaran Giles of the Associated Press gives us much to think about. It appears Google is fighting head-on against “the Right to be Forgotten”. It seems willing to take on any individual or government who dares to challenge the immutable right of its database and algorithms to define you through something that was once written – forever, and whether it's true or not.

MADRID – Their ranks include a plastic surgeon, a prison guard and a high school principal. All are Spanish, but have little else in common except this: They want old Internet references about them that pop up in Google searches wiped away.

In a case that Google Inc. and privacy experts call a first of its kind, Spain's Data Protection Agency has ordered the search engine giant to remove links to material on about 90 people. The information was published years or even decades ago but is available to anyone via simple searches.

Scores of Spaniards lay claim to a “Right to be Forgotten” because public information once hard to get is now so easy to find on the Internet. Google has decided to challenge the orders and has appealed five cases so far this year to the National Court.

Some of the information is embarrassing, some seems downright banal. A few cases involve lawsuits that found life online through news reports, but whose dismissals were ignored by media and never appeared on the Internet. Others concern administrative decisions published in official regional gazettes.

In all cases, the plaintiffs petitioned the agency individually to get information about them taken down.

And while Spain is backing the individuals suing to get links taken down, experts say a victory for the plaintiffs could create a troubling precedent by restricting access to public information.

The issue isn't a new one for Google, whose search engine has become a widely used tool for learning about the backgrounds of potential mates, neighbors and co-workers. What it shows can affect romantic relationships, friendships and careers.

For that reason, Google regularly receives pleas asking that it remove links to embarrassing information from its search index or at least ensure the material is buried in the back pages of its results. The company, based in Mountain View, Calif., almost always refuses in order to preserve the integrity of its index.

A final decision on Spain's case could take months or even years because appeals can be made to higher courts. Still, the ongoing fight in Spain is likely to gain more prominence because the European Commission this year is expected to craft controversial legislation to give people more power to delete personal information they previously posted online.

“This is just the beginning, this right to be forgotten, but it's going to be much more important in the future,” said Artemi Rallo, director of the Spanish Data Protection Agency. “Google is just 15 years old, the Internet is barely a generation old and they are beginning to detect problems that affect privacy. More and more people are going to see things on the Internet that they don't want to be there.”

Many details about the Spaniards taking on Google via the government are shrouded in secrecy to protect the privacy of the plaintiffs. But the case of plastic surgeon Hugo Guidotti vividly illustrates the debate.

In Google searches, the first link that pops up is his clinic, complete with pictures of a bare-breasted woman and a muscular man as evidence of what plastic surgery can do for clients. But the second link takes readers to a 1991 story in Spain's leading El Pais newspaper about a woman who sued him for the equivalent of €5 million over a breast job that she said went bad.

By the way, if it really is true that nothing should ever interfere with the automated pronouncements of the search engine – not even the truth – does that mean robots have the right to pronounce any libel they want, even though we don't?

In my view the Commercial Privacy Bill of Rights drafted by US Senators McCain and Kerry would significantly strengthen the identity fabric of the Internet through its proposal that “a unique persistent identifier associated with an individual or a networked device used by such an individual” must be treated as personally identifiable information (Section 3(4)(vii)). This clear and central statement marks a real step forward. Amongst other things, it covers the MAC addresses of wireless devices and the serial numbers and random identifiers of mobile phones and laptops.

From this fact alone the bill could play a key role in limiting a number of the most privacy-invasive practices used today by Internet services – including location-based services. For example, a company like Apple could no longer glibly claim, as it does in its current iTunes privacy policy, that device identifiers and location information are “not personally identifying”. Nor could it profess, as iTunes also currently does, that this means it can “collect, use, transfer, and disclose” the information “for any purpose”. Putting location information under the firm control of users is a key legislative requirement addressed by the bill.

The bill also contributes both to the security of the Internet and to individual privacy by unambiguously embracing “Minimal Disclosure for a Constrained Use” as set out in Law 2 of the Laws of Identity. Title III explicitly establishes a “Right to Purpose Specification; Data Minimization; Constraints on Distribution; and Data Integrity.”

Despite these real positives, the bill as currently formulated leaves me eager to consult a bevy of lawyers – not a good sign. This may be because it is still a “working draft”, with numerous provisions that must be clarified.

For example, how would the population at large ever understand the byzantine interlocking of opt-in and opt-out clauses described in Section 202? At this point, I don't.

And what does the list of exceptions to Unauthorized Use in Section 3 paragraph 8 imply? Does it mean such uses can be made without notice and consent?

I'll be looking for comments by legal and policy experts. Already, EPIC has expressed both support and reservations:

Senators John Kerry (D-MA) and John McCain (R-AZ) have introduced the “Commercial Privacy Bill of Rights Act of 2011,” aimed at protecting consumers’ privacy both online and offline. The Bill endorses several “Fair Information Practices,” gives consumers the ability to opt-out of data disclosures to third-parties, and restricts the sharing of sensitive information.

But the Bill does not allow for a private right of action, preempts better state privacy laws, and includes a “Safe Harbor” arrangement that exempts companies from significant privacy requirements.

If you have kept up with the excellent Wall Street Journal series on smartphone apps that inappropriately collect and release location information, you won't be surprised at their latest chapter: Federal Prosecutors are now investigating information-sharing practices of mobile applications, and a Grand Jury is already issuing subpoenas. The Journal says, in part:

Federal prosecutors in New Jersey are investigating whether numerous smartphone applications illegally obtained or transmitted information about their users without proper disclosures, according to a person familiar with the matter…

The criminal investigation is examining whether the app makers fully described to users the types of data they collected and why they needed the information—such as a user's location or a unique identifier for the phone—the person familiar with the matter said. Collecting information about a user without proper notice or authorization could violate a federal computer-fraud law…

Online music service Pandora Media Inc. said Monday it received a subpoena related to a federal grand-jury investigation of information-sharing practices by smartphone applications…

… 56 transmitted the phone's unique device identifier to other companies without users’ awareness or consent. Forty-seven apps transmitted the phone's location in some way. Five sent a user's age, gender and other personal details to outsiders. At the time they were tested, 45 apps didn't provide privacy policies on their websites or inside the apps.

In Pandora's case, both the Android and iPhone versions of its app transmitted information about a user's age, gender, and location, as well as unique identifiers for the phone, to various advertising networks. Pandora gathers the age and gender information when a user registers for the service.

Legal experts said the probe is significant because it involves potentially criminal charges that could be applicable to numerous companies. Federal criminal probes of companies for online privacy violations are rare…

The probe centers on whether app makers violated the Computer Fraud and Abuse Act, said the person familiar with the matter. That law, crafted to help prosecute hackers, covers information stored on computers. It could be used to argue that app makers “hacked” into users’ cellphones.

The elephant in the room is Apple's own approach to location information, which should certainly be subject to investigation as well. The user is never presented with a dialog in which Apple's use of location information is explained and permission is obtained. Instead, the user's agreement is gained surreptitiously, hidden away on page 37 of a 45-page policy that Apple users must accept in order to use… iTunes. Why iTunes requires location information is never explained. The policy simply states that the user's device identifier and location are non-personal information and that Apple “may collect, use, transfer, and disclose non-personal information for any purpose”.

Any purpose?

Is it reasonable that companies like Apple can proclaim that device identifiers and location are non-personal and then do whatever they want with them? Informed opinion seems not to agree with them. The International Working Group on Data Protection in Telecommunications, for example, asserted precisely the opposite as early as 2004. Membership of the Group included “representatives from Data Protection Authorities and other bodies of national public administrations, international organisations and scientists from all over the world.”

More empirically, I demonstrated in Non-Personal information, like where you live that the combination of device identifier and location is in very many cases (including my own) personally identifying. This is especially true in North America where many of us live in single-family dwellings.
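That demonstration is easy to reproduce. A hypothetical sketch: given a stream of (hour, coarse-location) pings tied to a single device identifier, the device's most frequent night-time cell is almost always the owner's home – and for a single-family dwelling that amounts to a personal identifier.

```python
from collections import Counter

def likely_home(pings):
    """pings: (hour_of_day, lat, lon) tuples, coordinates rounded to ~1 km.
    The device's most common night-time location is, for most people, home."""
    night = [(lat, lon) for hour, lat, lon in pings if hour >= 22 or hour < 6]
    if not night:
        return None
    return Counter(night).most_common(1)[0][0]

# A device that spends nights in one cell and afternoons in another:
pings = [(23, 47.64, -122.13)] * 30 + [(14, 47.61, -122.33)] * 20
print(likely_home(pings))  # -> (47.64, -122.13): "non-personal" data points home
```

Nothing in the input is nominally personal – just a device identifier and coordinates – yet the output, joined with a property record, names a household.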

[BTW, I have not deeply investigated the approach to sharing of location information taken by other smartphone providers – perhaps others can shed light on this.]

Australia's CRN reports that former Australian Privacy Commissioner Malcolm Crompton has called for the establishment of a formal privacy industry to rethink identity management in an increasingly digital world:

Addressing the Cards & Payments Australasia conference in Sydney this week, Crompton said the online environment needed to become “safe to play” from citizens’ perspective.

While the internet was built as a “trusted environment”, Crompton said governments and businesses had emerged as “digital gods” with imbalanced identification requirements.

“Power allocation is where we got it wrong,” he said, warning that organisations’ unwarranted emphasis on identification had created money-making opportunities for criminals.

Malcolm puts this well. I too have come to see that the imbalance of power between individual users and Internet business is one of the key factors blocking the emergence of a safe Internet.

Currently, users were forced to provide personal information to various email providers, social networking sites, and online retailers in what Crompton described as “a patchwork of identity one-offs”.

Not only were login systems “incredibly clumsy and easy to compromise”; centralised stores of personal details and metadata created honeypots of information for identity thieves, he said…

Refuting arguments that metadata – such as login records and search strings – was unidentifiable, Crompton warned that organisations hoarding such information would one day face a user revolt…

He also recommended the use of cloud-based identification management systems such as Azigo, Avoco and OpenID, which tended to give users more control of their information and third-party access rights.

User-centricity was central to Microsoft chief identity architect Kim Cameron’s ‘Laws of Identity’ (pdf), as well as Canadian Privacy Commissioner Ann Cavoukian’s seven principles of ‘Privacy by Design’ (pdf).

I hadn't noticed the UK's new Protection of Freedoms Bill until I heard cabinet minister Damian Green talk about it as he pulverized the UK's centralized identity database recently. Naturally I turned to Ray Corrigan for comment, only to discover that the political housecleaning had also swept away the assumptions behind widespread fingerprinting in Britain's schools, reinstating user control and consent.

The new Protection of Freedoms Bill gives pupils in schools and colleges the right to refuse to give their biometric data and compels schools to make alternative provision for them. The several thousand schools that already use the technology will also have to ask permission from parents retrospectively, even if their systems have been established for years…

It turns out that Britain's headmasters, apparently now a lazy bunch, have little stomach for trivialities like civil liberties. And writing about this, Ray's tone seems that of a judge who has had an impetuous and over-the-top barrister try to bend the rules one too many times. It is satisfying to see Ray send them home to study the Laws of Identity as scientific laws governing identity systems. I hope they catch up on their homework…

The Association of School and College Leaders (ASCL) is reportedly opposing the controls on school fingerprinting proposed in the UK coalition government's Protection of Freedoms Bill.

I always understood the reason that unions existed was to protect the rights of individuals. That ASCL should give what they perceive to be their own members’ managerial convenience priority over the civil rights of kids should make them thoroughly ashamed of themselves. Oh dear – now head teachers are going to have to fill in a few forms before they abuse children's fundamental right to privacy – how terrible.

Although headteachers and governors at schools deploying these systems may be typically ‘happy that this does not contravene the Data Protection Act’, a number of leading barristers have stated that the use of such systems in schools may be illegal on several grounds. As far back as 2006 Stephen Groesz, a partner at Bindmans in London, was advising:

“Absent a specific power allowing schools to fingerprint, I'd say they have no power to do it. The notion you can do it because it's a neat way of keeping track of books doesn't cut it as a justification.”

The recent decisions of the European Court of Human Rights in cases like S. and Marper v UK (2008 – retention of DNA and fingerprints) and Gillan and Quinton v UK (2010 – s44 police stop and search) mean schools have to be increasingly careful about the use of such systems anyway. Not that most schools would know that.

Again the question of whether kids should be fingerprinted to get access to books and school meals is not even a hard one! They completely decimate Kim Cameron's first four laws of identity.

1. User control and consent – many schools don't ask for consent, child or parental, and don't provide simple opt out options

2. Minimum disclosure for constrained use – the information collected, children's unique biometrics, is disproportionate for the stated use

3. Justifiable parties – the information is in control of or at least accessible by parties who have absolutely no right to it

4. Directed identity – a unique, irrevocable, omnidirectional identifier is being used when a simple unidirectional identifier (eg lunch ticket or library card) would more than adequately do the job.

It's irrelevant how much schools have invested in such systems or how convenient school administrators find them, or that the Information Commissioner's Office soft-pedalled its advice on the matter (in 2008) in relation to the Data Protection Act. They should all be scrapped, and if the need for schools to wade through a few more forms before they use these systems causes them to be scrapped, then that's a good outcome from my perspective.

In addition, although school fingerprint vendors have conned schools into parting with ridiculous sums of money (in school budget terms) to install these systems, promising that they are not really storing fingerprints and that the prints cannot be recreated, there is no doubt that it is possible to reconstruct the image of a fingerprint from the data stored on such systems. Ross, A. et al., ‘From Template to Image: Reconstructing Fingerprints from Minutiae Points’, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 29, No. 4, April 2007, is just one example of how university researchers have reverse-engineered these systems. The warning caveat emptor applies emphatically to digital technology systems that buyers don't understand, especially when it comes to undermining the civil liberties of our younger generation.
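The distinction drawn above under "directed identity" is easy to sketch in code. Assuming (hypothetically) a per-pupil secret held by the school's card system, a distinct identifier can be derived for each service, so the cafeteria and the library cannot correlate the same child and a leaked identifier can simply be reissued – neither property is possible with a fingerprint:

```python
import hmac
import hashlib

def unidirectional_id(pupil_secret: bytes, service: str) -> str:
    """Derive a distinct, revocable identifier per service from one secret.
    Unlike a fingerprint, it reveals nothing biometric and can be reissued."""
    return hmac.new(pupil_secret, service.encode(), hashlib.sha256).hexdigest()[:16]

secret = b"randomly issued per pupil; replaceable at any time"
lunch_id = unidirectional_id(secret, "cafeteria")
library_id = unidirectional_id(secret, "library")
print(lunch_id != library_id)  # -> True: the two services cannot correlate the child
```

A lunch ticket or library card is exactly this kind of cheap, scoped, revocable identifier; a biometric is the opposite on every count.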