But it's not as simple as some court decisions make it appear. Even passwords can be considered testimonial, as they may indicate ownership of a locked device or compel production of evidence to be used against the device's owner. The passcode argument has gone both ways in court, with the outcome usually coming down to the individual judge's definition of "foregone conclusion." Does the foregone conclusion refer to the device's ownership or the evidence contained in it? The latter is harder to prove, and raising the burden of proof to this level tends to result in courts finding the compelled production of passwords to be a Fifth Amendment violation.

[I]n a more significant part of the ruling, Judge Westmore declared that the government did not have the right, even with a warrant, to force suspects to incriminate themselves by unlocking their devices with their biological features.

As the court points out [PDF], when the fingerprint IS the password, the Fifth Amendment is implicated despite these features normally being considered non-testimonial.

The Court finds that utilizing a biometric feature to unlock an electronic device is not akin to submitting to fingerprinting or a DNA swab, because it differs in two fundamental ways. First, the Government concedes that a finger, thumb, or other biometric feature may be used to unlock a device in lieu of a passcode. In this context, biometric features serve the same purpose of a passcode, which is to secure the owner's content, pragmatically rendering them functionally equivalent.

The court notes law enforcement is well aware of jurisprudence surrounding device security. In this case, the more time that passed between the seizure of the devices and their compelled unlocking, the less likely it was that law enforcement would be able to evade the Fifth Amendment. Judge Westmore doesn't find this reasoning acceptable.

[A] passcode is generally required "when a device has been restarted, inactive, or has not been unlocked for a certain period of time." This is, no doubt, a security feature to ensure that someone without the passcode cannot readily access the contents of the phone. Indeed, the Government expresses some urgency with the need to compel the use of the biometric features to bypass the need to enter a passcode. This urgency appears to be rooted in the Government's inability to compel the production of the passcode under the current jurisprudence. It follows, however, that if a person cannot be compelled to provide a passcode because it is a testimonial communication, a person cannot be compelled to provide one's finger, thumb, iris, face, or other biometric feature to unlock that same device.

The court goes on to say the government had other options to access messages -- like approaching Facebook with a warrant -- rather than intrude on the Fifth Amendment (and the Fourth Amendment -- more on that in a moment), but it chose to do it this way. Just because it's easier and faster to do it via compelled production doesn't make it right. In fact, in the court's eyes, all this effort did was violate the Constitution in multiple ways.

An attempted assault on the Fourth Amendment also occurred in this case. Investigators looking for evidence of extortion via Facebook sought to have every device and person at a residence seized and searched, with every resident compelled to unlock devices found during the search. As the judge points out in the rejection of the search warrant application, the Fourth Amendment requires far more specificity.

This request is overbroad. There are two suspects identified in the affidavit, but the request is neither limited to a particular person nor a particular device.

Thus, the Court finds that the Application does not establish sufficient probable cause to compel any person who happens to be at the Subject Premises at the time of the search to provide a finger, thumb or other biometric feature to potentially unlock any unspecified digital device that may be seized during the otherwise lawful search.

This is a far better answer to this sort of request than others we've seen. Searching someone's home and digging through their electronics is one of the scariest powers the government has. The Fourth Amendment is in place to limit these exercises of immense government power to those that are justifiable and necessary. When judges grant overbroad orders, they're doing more than failing to act as a check against government abuse. They're normalizing abuse of citizens' rights via judicial precedent.

from the just-wait-until-they-know-your-citizen-score-too dept

Surveillance using facial recognition is sweeping the world. That's partly for the usual reason that the underlying digital technology continues to become cheaper, more powerful and thus more cost-effective. But it's also because facial recognition can happen unobtrusively, at a distance, without people being aware of its deployment. In any case, many users of modern smartphones have been conditioned to accept it unthinkingly, because it's a quick and easy way to unlock their device. This normalization of facial recognition is potentially bad news for privacy and freedom, as this story in the South China Morning Post indicates:

Beijing is speeding up the adoption of facial recognition-enabled smart locks in its public housing programmes as part of efforts to clamp down on tenancy abuse, such as illegal subletting.

The face-scanning system is expected to cover all of Beijing's public housing projects, involving a total of 120,000 tenants, by the end of June 2019.

Although a desire to stop tenancy abuses sounds reasonable enough, it's important to put the move in a broader context. As Techdirt reported back in 2017, China is creating a system storing the facial images of every Chinese citizen, with the ability to identify any one of them in three seconds. Although the latest use of facial recognition with "smart" locks is being run by the Beijing authorities, such systems don't exist in isolation. Everything is being cross-referenced and linked together to ensure a complete picture is built up of every citizen's activities -- resulting in what is called the "citizen score" or "social credit" of an individual. China said last year that it would start banning people with "bad" citizen scores from using planes and trains for up to a year. Once the "smart" locks are in place, it would be straightforward to make them part of the social credit system and its punishments -- for example by imposing a curfew on those living at an address, or only allowing certain "approved" visitors.

Even without using "smart" locks in this more extreme way, the facial recognition system could record everyone who came visiting, and how long they stayed, and transmit that data to a central monitoring station. The scope for abuse by the authorities is wide. If nothing else, it's a further reminder that if you are not living in China, where you may not have a choice, installing "smart" Internet of Things devices voluntarily may not be that smart.

from the TOP.-TECH. dept

Facial recognition tech isn't working quite as well as the agencies deploying it have hoped, but failure after failure hasn't stopped them from rolling out the tech just the same. I guess the only way to improve this "product" is to keep testing it on live subjects in the hope that someday it will actually deliver on advertised accuracy.

The DHS is shoving it into airports -- putting both international and domestic travelers at risk of being deemed terrorists by tech that just isn't quite there yet. In the UK -- the Land of Cameras -- facial recognition tech is simply seen as the logical next step in the nation's sprawling web o' surveillance. And Amazon is hoping US law enforcement wants to make facial rec tech as big a market for it as cloud services and online sales.

Thanks to its pervasiveness across the pond, the UK is where we're getting most of our data on the tech's successes. Well... we haven't seen many successes. But we are getting the data. And the data indicates a growing threat -- not to the UK public from terrorists or criminals, but to the UK public from its own government.

London cops have been slammed for using unmarked vans to test controversial and inaccurate automated facial recognition technology on Christmas shoppers.

The Metropolitan Police are deploying the tech today and tomorrow in three of the UK capital's tourist hotspots: Soho, Piccadilly Circus, and Leicester Square.

The tech is basically a police force on steroids -- capable of demanding ID from thousands of people per minute. Big Brother Watch says the Metro tech can scan 300 faces per second, running them against hot lists of criminal suspects. The difference is no one's approaching citizens to demand they identify themselves. The software does all the legwork and citizens have only one way to opt out: stay home.

Given these results, staying home might just be the best bet.

In May, a Freedom of Information request from Big Brother Watch showed the Met's facial recog had a 98 per cent false positive rate.

The group has now said that a subsequent request found that 100 per cent of the so-called matches since May have been incorrect.

A recent report from Cardiff University questioned the technology's abilities in low light and crowds – which doesn't bode well for a trial in some of the busiest streets in London just days before the winter solstice.

The tech isn't cheap, but even if it were, it still wouldn't be providing any return on investment. To be fair, the software isn't misidentifying people hundreds of times a second. In a great majority of scans, nothing is returned at all. The public records response shows the Metro Police racked up five false positives during their June 28th deployment. This led to one stop of a misidentified individual.
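To make the arithmetic behind those headline figures concrete, here's a minimal sketch in Python. The five false positives and zero correct matches come from the public records responses described above; the total number of faces scanned is a purely hypothetical figure, since the Met hasn't published one.

```python
# Illustrative only. The match counts are from the FOI responses discussed above;
# the total scan count is an assumption made for the sake of the example.
false_matches = 5       # alerts that pointed at the wrong person (June 28th deployment)
true_matches = 0        # alerts that correctly identified someone on a hot list
total_scans = 50_000    # HYPOTHETICAL number of faces scanned that day

total_matches = false_matches + true_matches
share_of_matches_wrong = false_matches / total_matches   # 5 / 5 = 100%
share_of_scans_with_alert = total_matches / total_scans  # a tiny sliver of all scans

print(f"Matches that were wrong: {share_of_matches_wrong:.0%}")
print(f"Scans that produced any match at all: {share_of_scans_with_alert:.3%}")
```

Both numbers describe the same deployment: the first is the figure Big Brother Watch's requests surfaced; the second reflects the point above that most scans return nothing at all.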

But even if the number of failures is small compared to the number of faces scanned, the problem is far from minimal. A number of unknowns make this tech a questionable solution for its stated purpose. We have no idea how many hot list criminals were scanned and not matched. We don't know how many scans the police performed in total. We don't know how many of these scans are retained and what the government does with all this biometric data it's collecting. About all we can tell is the deployment led to zero arrests and one stop instigated by a false positive. That may be OK for a test run (it isn't) but it doesn't bode well for the full-scale deployment the Met Police have planned.

The public doesn't get to opt out of this pervasive scanning. Worse, it doesn't even get to opt in. There's no public discussion period for cop tech even though, in the case of mass scanning systems, the public is by far the largest stakeholder. Instead, the public is left to fend for itself as law enforcement agencies deploy additional surveillance methods -- not against targeted suspects, but against the populace as a whole. This makes the number of failures unacceptable, even if the number is a very small percentage of the whole.
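The unknowns listed above are exactly why these deployments are hard to evaluate, but the underlying base-rate problem can be sketched with assumed numbers: when genuine hot-list targets are rare in a crowd, even a small per-face error rate means most alerts point at innocent people. Every figure below is hypothetical and purely illustrative.

```python
# Hypothetical base-rate sketch -- none of these numbers come from the Met.
crowd_size = 100_000          # faces scanned during a deployment (assumed)
targets_in_crowd = 10         # genuine hot-list matches present (assumed)
false_positive_rate = 0.001   # 0.1% of innocent faces wrongly flagged (assumed)
true_positive_rate = 0.80     # 80% of real targets correctly flagged (assumed)

innocent_faces = crowd_size - targets_in_crowd
false_alerts = innocent_faces * false_positive_rate    # ~100 innocent people flagged
true_alerts = targets_in_crowd * true_positive_rate    # ~8 real targets flagged

precision = true_alerts / (false_alerts + true_alerts)
print(f"Alerts pointing at innocent people: {false_alerts:.0f}")
print(f"Alerts pointing at real targets:    {true_alerts:.0f}")
print(f"Chance a given alert is a real target: {precision:.1%}")  # roughly 7%
```

Under these assumptions, more than nine out of ten alerts would be wrong despite 99.9% per-face accuracy on innocent passers-by, which is why a small error percentage offers little comfort once scanning happens at population scale.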

As the article's title informs the reader, camera footage could be scanned for face matches using skin tone as a search constraint. Considering this was pushed by IBM as a tool to prevent the next 9/11, it's easy to see why the NYPD -- given its history of surveilling Muslim New Yorkers -- might be willing to utilize a tool like this to pare down lists of suspects to just the people it suspected all along (Muslims).

There are a number of surprises contained in the long, detailed article, but the first thing that jumps out is IBM's efforts and statements, rather than the NYPD's. We all know the government capitalizes on tragedies to expand its power, but here we see a private corporation appealing to this base nature to make a sale.

In New York, the terrorist threat “was an easy selling point,” recalled Jonathan Connell, an IBM researcher who worked on the initial NYPD video analytics installation. “You say, ‘Look what the terrorists did before, they could come back, so you give us some money and we’ll put a camera there.’”

From this pitch sprang an 8-year program -- deployed in secrecy by the NYPD to gather as much footage as possible of New Yorkers for dual purposes: serving its own law enforcement needs and acting as a testing ground for IBM's new facial recognition tech. Needless to say, New Yorkers were never made aware of their lab rat status in IBM's software development process.

Even though the software could search by skin tone (as well as by "head color," age, gender, and facial hair), the NYPD claims it never used that feature in a live environment, despite IBM's urging.

According to the NYPD, counterterrorism personnel accessed IBM’s bodily search feature capabilities only for evaluation purposes, and they were accessible only to a handful of counterterrorism personnel. “While tools that featured either racial or skin tone search capabilities were offered to the NYPD, they were explicitly declined by the NYPD,” Donald, the NYPD spokesperson, said. “Where such tools came with a test version of the product, the testers were instructed only to test other features (clothing, eyeglasses, etc.), but not to test or use the skin tone feature. That is not because there would have been anything illegal or even improper about testing or using these tools to search in the area of a crime for an image of a suspect that matched a description given by a victim or a witness. It was specifically to avoid even the suggestion or appearance of any kind of technological racial profiling.”

It's easy to disbelieve this statement by the NYPD, given its long history of racial profiling, but it may be that those handling the secret program's deployment actually understood no program remains secret forever and sought to head off complaints and lawsuits by discouraging use of a controversial search feature. It may also be that the NYPD was super-sensitive to these concerns following the partial dismantling of its stop-and-frisk program and the outing of its full-fledged, unconstitutional surveillance of local Muslims.

The thing is, IBM is still selling the tech it beta tested live in New York. The same features the NYPD rejected are used to sell other law enforcement agencies on the power of its biometric profiling software.

In 2017, IBM released Intelligent Video Analytics 2.0, a product with a body camera surveillance capability that allows users to detect people captured on camera by “ethnicity” tags, such as “Asian,” “Black,” and “White.”

And there's a counter-narrative that seems to dispute the NYPD's assertions about controversial image tagging features. The IBM researcher who helped develop the skin tone recognition feature is on record stating the company doesn't develop features unless there's a market for them. In his estimation, the NYPD approached IBM to ask for this feature while the 8-year pilot program was still underway. The NYPD may have opted out after the feature went live, but it may have only done so to steer clear of future controversy. An ulterior motive doesn't make it the wrong move, but it also shouldn't be assumed the NYPD has morphed into heroic defenders of civil liberties and personal privacy.

What's available to other law enforcement agencies not similarly concerned about future PR black eyes is "mass racial profiling" at their fingertips. IBM has built a product that appeals to law enforcement's innate desire to automate police work, replacing officers on the street with cameras and software. Sure, there will be some cameras on patrol officers as well, but those are just for show. The real work of policing is done at desks using third-party software that explicitly allows -- if not encourages -- officers to narrow down suspect lists based on race. In a country so overly concerned about terrorism, this is going to lead to a lot of people being approached by law enforcement simply because of their ethnicity.

An additional problem with IBM's software -- and with those produced by competitors -- is that a lot of the markers used to identify potential suspects can easily net a long list of probables who share nothing but similar body sizes or clothing preferences. Understandably, more work is done by investigators manning these systems before cops start rounding people up, but the potential for inadvertent misuse (never mind actual misuse) is still incredibly high.

The secrecy of these programs is also an issue. Restrictive NDAs go hand-in-hand with private sector partnerships and these are often translated by police officials to mean information must be withheld from judges, criminal defendants, and department oversight. When that happens, due process violations gather atop the privacy violation wreckage until the whole thing collapses under its own audacity. Nothing stays secret forever, but entities like the NYPD and IBM could do themselves a bunch of favors by engaging in a little proactive transparency.

from the ALL-YOUR-FACE-ARE-BELONG-TO-US dept

The DHS is moving forward with the deployment of facial recognition tech at ports of entry, including US airports hosting international flights. The tech is still in its infancy, more prone to ringing up bogus hits than removing criminals and terrorists from circulation. But the DHS -- like many other government agencies -- isn't afraid to let a mere toddler do an adult's job. Faces will be scanned, whether travelers like it or not.

The DHS has issued an updated Privacy Impact Assessment [PDF] meant to unruffle the feathers of Americans it informed last year that not traveling internationally was the only way to opt out of this collection. The next phase of the facial rec tech deployment dials things back a bit, offering a bit more in the way of data collection/retention constraints.

In an effort to mitigate the impacts of this expanded collection, CBP seeks to minimize the data it maintains by purging facial images as quickly as possible after use. Each traveler’s biographic and biometric data is deleted from the TSA-issued device, either at the time of the next passenger’s transaction or after two minutes, whichever occurs first. All PII collected for the TVS transaction is stored in a secure database within the CBP network. CBP does not retain images of U.S. Citizens in ATS-UPAX but does retain images of non-U.S. Citizens for up to 14 days for confirmation of the match, as well as evaluation and audit purposes. CBP deletes all photos, regardless of immigration or citizenship status, from the TVS cloud matching service within 12 hours of the match.

[...]

TSA will only use these photos for identity verification at the checkpoint and cannot access the photos after the inspection is completed.

This is one of the better data retention policies the government has rolled out, especially considering it pertains to a border security program. But the DHS is far more vague -- and in some cases appears to be fudging the truth -- when it comes to details about data sharing. DHS claims this data won't be shared with airlines because the airlines have no interest in the data. As Edward Hasbrouck of Papers, Please points out, this statement runs contrary to the DHS's actions.

The joint government/industry interest and intent to develop and deploy a shared system of automated facial image tracking and control of travelers is made clear in white papers on government/industry biometric strategy and in the agendas for events at which CBP, TSA, and industry executives get together to discuss their plans.

Next month’s Future Travel Experience Global 2018 conference, for example, includes presentations by the planning and implementation directors for CBP’s “Biometric Exit” program, followed later the same day by a “working session” with CBP, airline, and airport executives on “Implementing advanced passenger processing with automation and biometrics”.

As for allowing American travelers to opt out of the program, the DHS's stance has softened from its earlier "just don't travel" posture. Travelers can bypass face-scanning kiosks, but that just routes them towards CBP secondary inspection. All the CBP/DHS has to do to encourage more opting-in is make the non-scan option as laborious and invasive as possible. As has already been observed by Papers, Please, opting out sends American citizens to the same line as foreign international travelers, ensuring the wait for clearance is much longer than utilizing the facial rec option. Bottlenecks are a good way of routing traffic where you want it to go, rather than where it would naturally flow.

The end goal isn't more surveillance of international travelers. The DHS ultimately wants to harvest faces from every domestic traveler in the US. It's not stated in the Impact Assessment, but there are already signs the agency has no interest in limiting this to those arriving from foreign countries.

Use of automated facial recognition is intended by the DHS to become a routine element of the surveillance and control of all air travelers, foreign and domestic. As the head of the CBP, Commissioner Kevin McAleenan, said in a press release in June 2018, “We are at a critical turning point in the implementation of a biometric entry-exit system, and we’ve found a path forward that transforms travel for all travelers.” [emphasis in the original]

The best way to fight this is to opt out. It may subject travelers to longer waits at checkpoints, but it also forces CBP agents to process more people the old fashioned way, without the aid of the DHS's new tech. Annoying the government by refusing to be traffic-shaped by deliberate bottlenecks can be its own small victory, even if the War on Terror machinery continues to rack up loss after loss to terrorists.

from the don't-be-complicit dept

While we were still in the middle of the firestorm over Donald Trump's decision to enact a zero-tolerance border policy that resulted in children being separated from their parents at the border in far greater numbers than previous administrations, there was some interesting background coverage about big tech companies like Microsoft receiving backlash from their employees and customers for contracting with ICE. While much of that backlash came from outside those companies, there was plenty coming from within as well. Microsoft in particular saw throngs of employees outraged that the technology they had helped to develop was now being turned on the innocent children of migrants and asylum-seekers.

In an open letter to Microsoft CEO Satya Nadella sent today, employees demanded that the company cancel its $19.4 million contract with ICE and instate a policy against working with clients who violate international human rights law. The text of the employee letter was first reported by the New York Times and confirmed by Gizmodo.

“We believe that Microsoft must take an ethical stand, and put children and families above profits,” the letter, signed by Microsoft employees, states. “We request that Microsoft cancel its contracts with ICE, and with other clients who directly enable ICE. As the people who build the technologies that Microsoft profits from, we refuse to be complicit. We are part of a growing movement, comprised of many across the industry who recognize the grave responsibility that those creating powerful technology have to ensure what they build is used for good, and not for harm.”

The 300 employees who signed the open letter represent a fraction of Microsoft's total workforce, of course, but you can bet that those willing to sign such a letter also represent only a fraction of the staff who share the letter's viewpoint. For its part, Microsoft condemned the Trump separation policy (how brave!), but the company has also refused thus far to acknowledge whether the ICE contract includes facial recognition software or AI. Such powerful tools would seem to be in the wheelhouse of what ICE would want as it carries out this ridiculous policy, and Microsoft's refusal to say such tools are not included in its contract with the agency sure seems to suggest that they are.

Of course, Microsoft is neither the only tech company going through this, nor the one that has seen the largest employee backlash. That distinction likely goes to Google, where employees not only voiced displeasure over the company's contract to provide AI technology for the Pentagon's drone warfare program, but where many people actually up and quit.

The resigning employees’ frustrations range from particular ethical concerns over the use of artificial intelligence in drone warfare to broader worries about Google’s political decisions—and the erosion of user trust that could result from these actions. Many of them have written accounts of their decisions to leave the company, and their stories have been gathered and shared in an internal document, the contents of which multiple sources have described to Gizmodo.

Google has long had a culture that encouraged employee feedback on the products it produces, with that influence in some cases resulting in real policy shifts. The employees protesting Google's drone contract say that has changed recently, with upper management far less transparent about what work the company is doing and far more deaf to the opinions of the employees who actually carry that work out. Combine it all with the public's growing distrust of Google and it can appear that Google is trying to pantomime the caricature it is so often painted to be: faceless corporate greed-hounds without soul or morality.

And then there is Amazon, where the company's AI contracts with the government and its granting of access to data-mining company Palantir also resulted in anger from within.

Amazon employees objected to the Trump administration’s “zero-tolerance” policy at the U.S. border, which has resulted in thousands of children being separated from their parents.

“Along with much of the world we watched in horror recently as U.S. authorities tore children away from their parents,” the letter, distributed on a mailing list called ‘we-won’t-build-it,’ states. “In the face of this immoral U.S. policy, and the U.S.’s increasingly inhumane treatment of refugees and immigrants beyond this specific policy, we are deeply concerned that Amazon is implicated, providing infrastructure and services that enable ICE and DHS.”

Amazon employees want the company out of the policing and immigration business, and have gone further by calling on the company to boot customers working with ICE off of its platform. Leadership at Amazon, as elsewhere, has been mostly silent, but it's worth noting that Amazon shareholders actually kicked off the angry protests even before its employees did so. Whatever shakes out of this, this isn't something Jeff Bezos is going to be able to ignore.

This is a good time to remind people that companies, including big tech companies, are not comprised of the steel and glass that makes up their offices, but of the people that run and work within them. It's also worth acknowledging that the government has been after big tech firms for some time over the very tools that are likely in this contract. The lesson in this is that the government needs tech companies to carry out this disaster of a policy more than tech companies need the government for anything at all.

In other words, if these companies decided to put some moral courage on display en masse, it would have an effect. If they elect to do otherwise, their employees may force their hand. After all, the people signing these government contracts are certainly not the ones fulfilling them. That work is being done by the very employees revolting in protest. Given that there is pressure coming from not just within these companies to get out of the immigration business, but from outside as well, business interests may be lining up to give these companies an excuse to show a little backbone.

from the Citizen-Suspect dept

Law enforcement agencies have embraced facial recognition. And contractors have returned the embrace, offering up a variety of "solutions" that are long on promise, but short on accuracy. That hasn't stopped the mutual attraction, as government agencies are apparently willing to sacrifice people's lives and freedom during these extended beta tests.

The latest example of widespread failure comes from the UK, where the government's embrace of surveillance equipment far exceeds that of the United States. Matt Burgess of Wired obtained documents detailing the South Wales Police's deployment of automated facial recognition software. What's shown in the FOI docs should worry everyone who isn't part of UK law enforcement. (It should worry law enforcement as well, but strangely does not seem to bother them.)

During the UEFA Champions League Final week in Wales last June, when the facial recognition cameras were used for the first time, there were 2,470 alerts of possible matches from the automated system. Of these, 2,297 turned out to be false positives and 173 were correctly identified – 92 per cent of matches were incorrect.

That's the most gaudy number returned in response to the records request. But the other numbers -- even though they contain smaller sample sets -- are just as terrible. The following table comes from the South Wales Police FOI response [PDF]:

In all but three cases, the number of false positives outnumbered positive hits. (And in one of those cases, it was a 0-0 tie.) The police blame the 2,300 false positives on garbage intake.

A spokesperson for the force blamed the low quality of images in its database and the fact that it was the first time the system had been used.

"We don't notice it, we don't see millions of people in one shot ... but how many times have people walked down the street following somebody that they thought was somebody they knew, only to find it isn't that person?" NEC Europe head of Global Face Recognition Solutions Chris de Silva told ZDNet in October.

I think most people who see someone they think they know might wave or say "Hi," but only the weirdest will follow them around attempting to determine if they are who they think they are. Even if everyone's a proto-stalker like NEC's front man seems to think, the worst that could happen is an awkward (and short) conversation. The worst case scenario for false positives triggered by law enforcement software is some time in jail and an arrest record. The personal stakes for citizens wrongly identified simply aren't comparable to anything in de Silva's analogy.

If large watchlists are the problem, UK law enforcement is actively seeking to make things worse. Wired reports the South Wales Police are looking forward to adding the Police National Database (19 million images) to its watchlist, along with others like driver's license data stores.

No matter what the real issue is here, the South Wales Police believe there are no adverse effects to rolling out facial recognition tech that's wrong far more often than it's right. The force states it has yet to make a false arrest based on bogus hits, but its privacy assessment shows it's not all that concerned about the people swept up by poorly-performing software.

South Wales Police, in its privacy assessment of the technology, says it is a "significant advantage" that no "co-operation" is required from a person.

Sure, it's an "advantage," but one that solely serves law enforcement. It allows them to gather garbage images and run them against watchlists while hoping the false hits won't result in the violation of an innocent person's rights. But that's all they have: hope. The tech isn't ready for deployment. But it has been deployed and UK citizens are the beta testing group.

So, it will come as an unpleasant non-surprise that Axon (Taser's body cam spinoff) is looking to add facial recognition tech to cameras officers are supposed to deploy only in certain circumstances. This addition will repurpose them into always-on surveillance devices, gathering up faces with the same efficiency as their automated license plate readers. False positives will continue to be a problem and deployment will scale far faster than tech advancements.

UPDATE: Axon apparently takes issue with the final paragraph of this post. It has demanded a correction to remove an unspecified "error" and to smooth the corners off some "bold claims." Here's Axon's full statement:

At this point in time, we are not working on facial recognition technology to be deployed on body cameras. While we do see the value in this future capability, we also appreciate the concerns around privacy rights and the risks associated with misidentification of individuals. Accordingly, we have chosen to first form an AI Ethics Board to help ensure we balance both the risks and the benefits of deploying this technology. At Axon we are committed to ensuring that the technology we develop makes the world a better, and a safer place.

If there's anything to be disputed in the last paragraph of the post, it might be "looking to add facial recognition tech to its cameras." But more than one source (including the one linked in the paragraph) makes the same claim about Axon looking at the possibility of adding this tech to its body camera line, so while Axon may not be currently working on it, it appears to be something it is considering. The addition of an ethics board is certainly the right way to approach this issue and its privacy concerns, but Axon's statement does not actually dispute the assertions I made in the post.

As for the rest of the paragraph, I will clarify that I did not mean Axon specifically will push for body cameras to become the facial equivalent of ALPRs. Axon likely won't. But police departments will. If the tech is present, it will be used. And history shows the tech will be deployed aggressively under minimal oversight, with apologies and policies appearing only after some damage has been done. To be certain, accuracy will improve as time goes on. But as the UK law enforcement efforts show, deployment will far outpace tech advancements, increasing the probability of wrongful arrests and detentions.

from the even-orwell-would-have-said-this-goes-too-far dept

Via Josh Taylor, we learn of the recently released "Intergovernmental Agreement on Identity Matching Services", which is a fancy way of saying that the federal government and Australian state and territory governments have agreed to work together on a big face recognition surveillance system. But the truly incredible thing is that these Australian governments have decided to try to out-Orwell Orwell, by arguing that pervasive facial recognition is actually... good for privacy.

The Identity Matching Services will help promote privacy by strengthening the integrity and security of Australia’s identity infrastructure—the identity management systems of government Agencies that issue Australia’s core identity documents such as driver licences and passports. These systems play an important role in preventing identity crime. Identity crime is one of the most common and costly crimes in Australia and is a key enabler of serious and organised crime. Identity crime is also a threat to privacy when it involves the theft or assumption of the identity of an individual. The misuse of personal information for criminal purposes causes substantial harm to the economy and individuals each year.

We often see people make the silly claim that security and privacy are at odds with one another, which we believe is generally not true. In fact, there are strong arguments that greater privacy increases security by better protecting everyone (go encryption!). But here, Australia appears to be trying to flip that rationale totally on its head by arguing that the more security you have, the better it is for privacy, because they'll catch those nasty criminals who aim to do harm to your privacy. But... that's not privacy. Indeed, it says nothing of how governments, for example, might violate everyone's privacy with such a system (which is a larger concern than your everyday criminal).

It's difficult to take such a system seriously, when this is how they approach the privacy question.

from the in-the-land-of-the-no-eyed-cop,-the-civil-liberties-barrister-is-king dept

UK law enforcement has proudly been using facial recognition tech for a few years now. As is the case with any new law enforcement tech advancement, it's being deployed as broadly as possible with as little oversight as agencies can get away with.

As of 2015, UK law enforcement had 18 million faces stashed away in its databases. Presumably, the database did not contain 18 million criminals and their mugshots. Concerns were raised but waved away with promises to put policies in place at some point in the future and with grandiose claims of 100% reliability. The latter, naturally, came from the police inspector who headed the facial recognition department. Caveat: this had only been tested on a limited set using "clear images."

The controversial trial of facial recognition equipment at Notting Hill Carnival resulted in roughly 35 false matches and an erroneous arrest, highlighting questions about police use of the technology.

The system only produced a single accurate match during the course of Carnival, but the individual had already been processed by the justice system and was erroneously included on the suspect database.

Yeah, that's going to keep UK citizens from being menaced by terrorists, drug dealers, and whatever else was cited to keep the facial recognition program from being derailed by concerned legislators and citizens. And, while the tech was busy failing to do its job, a few thousand photos of people engaged in nothing more than being criminally underdressed were added to the pot of randomly-drawn faces for the next round of facial recognition roulette.

Supposedly, this was a trial run. The false positives were apparently derived from a list of faces of suspects wanted on rioting-related charges. Fortunately, those who were approached by officers as the result of bogus tech tip-offs had their identification documents on them. Nothing in the law requires you to carry them wherever you go, but if the law's going to use tech as faulty as this, it may as well be a criminal offense to leave home without them. You're going to get rung up -- at least temporarily -- if you can't prove you aren't who the software says you are.

Undeterred by this resounding lack of success, the Metropolitan police are planning to test the software again. This will give another set of UK citizens the chance to be wrongfully arrested at some point in the near future. Until the bugs are worked out -- which means violating the rights and freedoms of UK citizens during the beta testing phase -- UK law enforcement facial recognition tech will still be remembered as the thing that caught that shoplifter that one time.

from the HAL-would-be-proud dept

Techdirt has written a number of stories about facial recognition software being paired with CCTV cameras in public and private places. As the hardware gets cheaper and more powerful, and the algorithms underlying recognition become more reliable, it's likely that the technology will be deployed even more routinely. But if you think loss of public anonymity is the end of your troubles, you might like to think again:

Lip-reading CCTV software could soon be used to capture unsuspecting customers' private conversations about products and services as they browse in high street stores.

Security experts say the technology will offer companies the chance to collect more "honest" market research but privacy campaigners have described the proposals as "creepy" and "completely irresponsible".

That story from the Sunday Herald in Scotland focuses on the commercial "opportunities" this technology offers. It's easy to imagine the future scenarios as shop assistants are primed to descend upon people who speak favorably about goods on sale, or who express a wish for something that is not immediately visible to them. But even more troubling are the non-commercial uses, for example when applied to CCTV feeds supposedly for "security" purposes.

How companies and law enforcement use CCTV+lip-reading software will presumably be subject to legislation, either existing or introduced specially. But given the lax standards for digital surveillance, and the apparent presumption by many state agencies that they can listen to anything they are able to grab, it would be naïve to think they won't deploy this technology as much as they can. In fact, they probably already have.