Techdirt: Stories filed under "accuracy"
https://www.techdirt.com/

Drug Dog Testing Process Eliminates Handler Bias. Unsurprisingly, Cops Don't Like It.
by Tim Cushing | Wed, 29 Nov 2017
https://www.techdirt.com/articles/20171124/11044338675/drug-dog-testing-process-eliminates-handler-bias-unsurprisingly-cops-dont-like-it.shtml
When a cop needs an excuse to search something (but can't manage to talk the citizen into consenting) there's almost always a four-legged cop waiting in the wings to give the cop permission to do what he wanted to do anyway. You will rarely hear testimony given in any court case where a K9 hasn't "alerted" to the smell of drugs. Once this "alert" is delivered, officers are free to override objections to warrantless searches under the theory that a dog's permission is all that's needed.

What's willfully ignored by law enforcement officers is the nature of the beasts they deploy: dogs like pleasing handlers and will react to unconscious cues and/or do the thing they're expected to do: "find drugs." If the dog knows it can perform an act for a reward, it will perform that act, whether or not drugs are present. Unfortunately, there's a deliberate dearth of data when it comes to drug-sniffing dog fallibility. Tracking this data would undercut the dogs' raison d'etre: to act as probable cause for warrantless searches. This lack of data makes challenging drug dog "alerts" in court almost impossible.

One organization trying to address handler bias is the Pacific Northwest Police Detection Dog Association. In the U.S., a drug-sniffing dog team — the dog and its handler — has to be periodically retested and certified, usually by one of the many regional K9 associations. Some groups have tougher testing methods than others; the PNWK9 has a method that aspires to scientific levels of impartiality.

"It's a double-blind," says Fred Helfers, the retired police K9 handler and trainer who designed the system. "No outside influence."

In Helfers' tests, nobody in the room knows where the drugs are hidden; not the handler, not even the test administrator. That's to eliminate the possibility of someone unconsciously telegraphing signals to the dog as it gets close to the target.
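Helfers' setup is, in effect, the same double-blind randomization that clinical trials use: the answer key is generated and sealed by someone who never enters the room. As a minimal sketch of how such a certification could be assembled (the function names, hide spots, and blank-trial odds below are illustrative assumptions, not PNWK9's actual procedure):

```python
import random

def generate_blind_trials(n_trials, hide_spots, blank_odds=0.25, seed=None):
    """Generate hide assignments for a double-blind K9 certification.

    Neither the handler nor the on-site administrator sees this list;
    only a third party who scores the results afterward does. Some
    trials are deliberately blank (nothing hidden at all), so a dog
    that alerts on every trial is exposed as cueing off its handler.
    """
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        if rng.random() < blank_odds:
            trials.append(None)            # blank trial: no drugs hidden
        else:
            trials.append(rng.choice(hide_spots))
    return trials

def score(trials, alerts):
    """Compare the sealed answer key against the dog's recorded alerts."""
    correct = sum(1 for truth, alert in zip(trials, alerts) if truth == alert)
    return correct / len(trials)
```

The blank trials are the part handlers reportedly struggle with: a team that "knows" drugs must be present will keep circling until the dog produces the expected alert, and the sealed key catches exactly that.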

Why this hasn't been done before is a mystery. (I mean, it's a mystery if we pretend there aren't a million reasons law enforcement agencies prefer the status quo.) As NPR points out, a study published seven years ago showed drug dogs respond more to handler cues than to the presence of drugs. Researcher Lisa Lit's tests found dogs alerting in areas where researchers had indicated scents were likely to be, rather than where scents were actually located. What was presented as a test of drug dogs was actually a test of the dogs' handlers. The dogs failed because their handlers failed.

Needless to say, the study was unpopular in the law enforcement community. Law enforcement K9 trainers denounced the study and refused to provide any further assistance to researchers. Lit calls this study -- one that pointed out the Clever Hans-esque performance of drug-sniffing dogs -- a "career killer." This is what happens to research that doesn't conform to law enforcement's self-image.

Occasionally, the dice used to randomize the tests determine that there will be no drugs hidden at all, sometimes for several tests in a row. Helfers recalls that happening at another certification event.

"There were some new teams that failed that sequence," Helfers says. "Because they didn't trust their dog."

He says those handlers couldn't get past their expectation that drugs should be there. "I think they 'overworked' the car. Instead of going around once or twice and trusting their dog and watching their dog work, maybe they'd seen something that wasn't there," Helfers says.

This shows there's no question drug dogs respond to handlers. If dogs fail to respond, the animals are treated as untrustworthy by the same officers who refer to them as "probable cause on four legs." This is part of the problematic law enforcement mindset. A cop would never stop anyone who isn't a criminal… at least according to cops. This likely isn't a conscious thought, but rather the expected outcome of years of instruction that leads officers to view a wide swath of innocent behavior as inherently suspicious. (See also: too nervous, too calm, moving too much, moving too little, not looking directly at officers, looking directly at officers, traveling on any major highway, driving too fast/too slow/too perfectly, ad nauseam.)

There is no room in this mindset for the possibility that the person being questioned isn't a criminal. If a cop can't find anything, it's time for a drug dog to do a few laps around the person's car, luggage, etc. If there's still no "hit," the problem must be the dog rather than the lack of contraband. Why? Because the only reason a cop would be interested in this particular person is because this person is doing something illegal. All other possibilities are discarded. This is clearly and disturbingly illustrated by this statement from another K9 officer:

"There's been cars that my dog's hit on... and just because there wasn't a product in it, doesn't mean the dog can't smell it," says Gunnar Fulmer, a K9 officer with the Walla Walla Police Department. "[The drug odor] gets permeated in clothing, it gets permeated in the headliners in cars."

[...]

"The dogs are mainly used to confirm what we already suspect," says Fulmer. "When the dogs come out, about 99 percent of the time we get an alert. And it's because we already know what's in the car; we just need that confirmation to help us out with that."

Confirmation bias, plain as day, and yet Officer Fulmer seems completely unaware of the underlying thrust of his statement. Worse, officers like Fulmer remain opposed to tracking K9 false hits or to introducing any form of scientific rigor into the process.

Handlers also point out that scientific neutrality is not something you can reasonably expect during traffic stops, since police are trained to act on their suspicions.

In short, officers want free rein to allow their hunches to develop into warrantless searches with the assistance of animals prone to responding to handlers' cues rather than to the presence of contraband. Better an innocent man have his vehicle tossed than an officer admit his K9 partner might be more interested in giving him what he wants (a warrantless search) than in detecting the presence (or absence) of drugs.

This mindset permeates the entire process. When testing methods eliminate officers' involuntary cues or point out how frequently dogs respond to their handlers, it's the process that's wrong. Or the dogs. But never, under any circumstances, are the officers wrong. Law enforcement is willingly operating in its own massive blind spot, unable to fathom the slim possibility that the person they thought had drugs on them might not actually possess any drugs.

And this doesn't even address the bottom feeders of law enforcement: officers who knowingly use K9s to skirt warrant requirements, telling citizens the dog "alerted" even when it hasn't or has only done so in response to the officer's prompts. All of this is excused when officers actually find drugs and the times they don't are waved away with tired Drug War cliches about the sacrifice of a few people's rights for the greater good.

What this testing method shows is dogs (and their handlers) aren't to be trusted -- not without more data. If law enforcement can't admit to being wrong, they'll never look for ways to improve. Given what's been shown, drug dogs should not be treated as "probable cause on four legs." At best, they're walking confirmation bias -- self-serving tools of civil liberties circumvention.

from the accurate-data-is-fake-news dept

As Broadband Usage Caps Expand, Nobody Is Checking Whether Usage Meters Are Reliable
by Karl Bode | Thu, 28 Sep 2017
https://www.techdirt.com/articles/20170927/09491438298/as-broadband-usage-caps-expand-nobody-is-checking-whether-usage-meters-are-reliable.shtml
Despite the hype surrounding Google Fiber and gigabit connections, vast swaths of the U.S. broadband industry are actually becoming less competitive than ever. As large telcos like Windstream, Frontier, CenturyLink, and Verizon refuse to upgrade aging DSL lines at any scale, they're effectively giving cable providers a growing monopoly over broadband in countless markets. And these companies are quickly rushing to take advantage of this dwindling competition by imposing entirely arbitrary, confusing and unnecessary usage caps and overage fees in these captive markets.

The benefits of these pricey limitations are twofold: they not only let cable providers jack up the price of service, but also serve as an incredible weapon against the looming threat of streaming video competition. Caps and overage fees make using streaming alternatives notably more expensive, helping to protect legacy TV revenues. But cable operators are also exempting their own streaming services from these caps (as Comcast did with the launch of its own new streaming platform this week), while still penalizing competitors. This kind of behavior is just one of several reasons why net neutrality rules are kind of important.

Oddly though, you'd be hard-pressed to find politicians or regulators from either party who give much of a damn that this massive distortion of the level internet playing field is occurring. Which is why, unlike in other sectors, nobody anywhere is verifying whether ISP usage meters are accurate. As a result, there have been countless instances where users say they've been billed for bandwidth despite their modem being off or the power being out. And numerous studies have indicated ISPs routinely abuse this lack of oversight by overcharging for service.

Comcast has, of course, been at the forefront of imposing these usage limitations and overage fees. And unsurprisingly, consumers pretty consistently state that the cable giant -- already world renowned for historically-abysmal customer service -- isn't tracking usage or billing these customers accurately. Users who were billed for usage while away on vacation have had no real ability to challenge Comcast's meter readings. And Ars Technica documented another user this week who says he battled with Comcast for months over errant meter readings before cancelling fixed-line broadband service entirely:

"At one point, Weaver says he left town for three days and had left his wireless router unplugged, though the modem itself was plugged in. After his trip, Comcast's meter showed that he "used 500GB in three days of not even being home and not having a Wi-Fi network running," Weaver said. He then tried disconnecting the modem for three days and found that Comcast's meter finally stopped counting data usage, he said.

"I have been told no less than eight times that I can rest easy if I would just buy the $50 unlimited data plan," he said. "This whole thing reeks of scam."

In short it goes something like this: lobby to keep the broadband industry uncompetitive, use that lack of competition to impose arbitrary and unnecessary limits that hinder competitors, then charge users $50 more per month if they want to enjoy the same, unlimited connection they used to enjoy. It is a scam, but again, you'd be hard pressed to find absolutely anybody in government that gives much of a damn, despite the ploy's negative impact on competition and the health of the internet. What a wonderful time to dismantle some of the only rules we have protecting consumers from this kind of behavior, don't you think?
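None of this verification is technically hard, which makes its absence all the more telling. As a rough sketch of what an independent check could look like, the snippet below assumes a Linux gateway and simply diffs the byte counters from /proc/net/dev between two snapshots (counter wrap-around, multiple WAN interfaces, and LAN-only traffic are ignored for brevity):

```python
def parse_counters(proc_net_dev_text):
    """Parse /proc/net/dev-style output into {iface: (rx_bytes, tx_bytes)}."""
    counters = {}
    for line in proc_net_dev_text.splitlines():
        if ":" not in line:
            continue  # skip the two header lines
        iface, rest = line.split(":", 1)
        fields = rest.split()
        # Field 0 is received bytes; field 8 is transmitted bytes.
        counters[iface.strip()] = (int(fields[0]), int(fields[8]))
    return counters

def usage_gb(before, after, iface):
    """Bytes moved on one interface between two snapshots, in GB."""
    rx = after[iface][0] - before[iface][0]
    tx = after[iface][1] - before[iface][1]
    return (rx + tx) / 1e9
```

A month of such snapshots won't settle a billing dispute on its own, but a tally like this would have flagged the 500GB-while-unplugged reading described above immediately.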

from the what-could-possibly-go-wrong dept

More Prosecutors Refuse To Accept Guilty Pleas Based On Faulty $2 Field Drug Tests
by Tim Cushing | Mon, 12 Dec 2016
https://www.techdirt.com/articles/20161206/15130336213/more-prosecutors-refuse-to-accept-guilty-pleas-based-faulty-2-field-drug-tests.shtml
Thanks to ProPublica's research -- and a high-profile article in the New York Times -- prosecutors in Oregon will no longer accept guilty pleas based solely on the results of often-inaccurate field drug tests.

Last July, shortly after ProPublica and The New York Times Magazine published an article detailing that the kits are prone to error and years earlier had helped account for roughly 300 wrongful convictions in Houston, the Multnomah County District Attorney’s Office in Portland decided to change the way it secured guilty pleas in drug possession cases. Today, when a defendant pleads guilty before the lab analysis is performed, prosecutors must still have the field test results double-checked.

J. Russell Ratto, the head of conviction integrity at the district attorney’s office, had asked his colleagues whether it might be wise to change the policy after the article’s publication.

“Our DDAs [deputy district attorneys] are always looking to make sure we’re using the very best practices,” said J.R. Ujifusa, the deputy district attorney who oversees drug prosecutions.

It's a good start. The $2 drug tests are great for law enforcement drug warriors, but not much good for anyone else. The cheap tests, performed in the field by officers, have been known to call everything from baking soda to donut glaze illegal substances. This is often good enough for government work, especially when the government work involves obtaining convictions.

Harris County, Texas -- where the 300 wrongful convictions were uncovered -- is also no longer accepting pleas based on field test results. But that only solves part of the problem. The other problem is what to do with those accused of drug possession. Treating the tests as fallible helps prevent wrongful convictions, but those facing drug charges remain locked up while waiting for lab test results.

Thomas Johnson of Fault Lines points out that the Portland DA's office has its head and heart in the right place, but it won't do much for people picked up by cops utilizing $2 field drug tests, not until the rest of the system is overhauled.

You have to give up what the court orders for bail. You can post a percentage of the bail, but it’s the judge who decides the bail amount. This bail amount can vary depending on the type of drug you’re suspected of possessing and your past record. For instance, if you have a violent incident on your record, it will increase the bail for your new charge. The same goes for a previous drug conviction.

Bail can vary from the $10,000 on a charge of unlawful meth possession, for example, to $75,000 for someone accused of sex abuse.

If someone can't come up with $1,000, they're stuck in jail while awaiting lab results. They can't plead guilty but they can't be proven innocent either. They're in limbo -- the sort of limbo that can ruin someone's life before they even have a chance to be exonerated.

As of 2015, the goal at the state crime lab was to have evidence tested within 30 days, but the average turnaround was 65 days. For many defendants, if you can’t afford bail, you are already living hand to mouth and 65 days in jail will effectively finish you off. You’ll be starting from scratch once you hit the street.

Fault Lines' Johnson suggests another solution: if the DA's office concedes these drug tests are often inaccurate, it should follow that assumption to its logical conclusion and work on changing this part of the system as well. If it's really interested in not ruining people's lives over a $2 drug test, it needs to push for greatly-reduced bail or no bail at all in cases where the only evidence at the time of booking is subject to a mandatory second pass.

from the at-long-last,-some-forward-momentum dept

Field Drug Tests: The $2 Tool That Can Destroy Lives
by Tim Cushing | Mon, 18 Jul 2016
https://www.techdirt.com/articles/20160712/15543134951/field-drug-tests-2-tool-that-can-destroy-lives.shtml
It only takes $2 and a few minutes to ruin someone's life. Field tests for drugs are notoriously unreliable and yet they're still considered to be evidence enough to deprive someone of their freedom and start a chain of events that could easily end in joblessness and/or homelessness.

Ryan Gabrielson and Topher Sanders -- writing for The New York Times Magazine -- take a detailed look at these field tests, filtered through the experience of Amy Albritton, who spent 21 days in jail thanks to a false positive.

A traffic stop that resulted in a vehicle search turned up an empty syringe and a "suspicious" crumb of something on the floor. The field test said it was crack cocaine. Albritton was taken to a county jail where she spent the next three weeks after pleading guilty to possession, rather than face a trial and a possible sentence of two years.

The crumb, whatever it was, had been sent on to a lab for verification, but with Albritton's guilty plea, there was no hurry to ensure the substance retrieved from her car was actually illegal. In fact, with the case adjudicated and closed, the evidence could simply have been destroyed. It wasn't. Long after Albritton had been released, the substance was tested.

On Feb. 23, 2011 — five months after Albritton completed her sentence and returned home as a felon — one of Houston’s forensic scientists, Ahtavea Barker, pulled the envelope up to her bench. It contained the crumb, the powder and the still-unexplained syringe. First she weighed everything. The syringe had too little residue on it even to test. It was just a syringe. The remainder of the “white chunk substance” that Officer Helms had tested positive with his field kit as crack cocaine totaled 0.0134 grams, Barker wrote on the examination sheet, about the same as a tiny pinch of salt.

[...]

The powder was a combination of aspirin and caffeine — the ingredients in BC Powder, the over-the-counter painkiller, as Albritton had insisted.

[...]

The crumb’s fragmentation pattern did not match that of cocaine, or any other compound in the lab’s extensive database. It was not a drug. It did not contain anything mixed with drugs. It was a crumb — food debris, perhaps. Barker wrote “N.A.M.” on the spectrum printout, “no acceptable match,” and then added another set of letters: “N.C.S.” No controlled substance identified.

Albritton was innocent, but with a guilty plea, she now had a criminal record. And three weeks in jail turned her life upside down.

Albritton had managed the Frances Place Apartments, a well-maintained brick complex, for two years, and a free apartment was part of her compensation. But as far as the company knew, Albritton had abandoned her job and her home. She was fired, and her furniture and other belongings were put out on the side of the road. “So I lost all that,” she says.

[...]

Albritton gave up trying to convince people otherwise. She focused instead on Landon [her son]. Using a wheelchair, he needed regular sessions of physical and occupational therapy, and Albritton’s career managing the rental complex had been an ideal fit, providing a free home that kept her close to her son while she was at work, and allowing her the flexibility to ferry him to his appointments. But now, because of her new felony criminal record, which showed up immediately in background checks, she couldn’t even land an interview at another apartment complex. With a felony conviction, she couldn’t be approved as a renter either.

As the authors point out, 90% of jurisdictions will allow prosecutors to accept a guilty plea based on nothing more than highly unreliable field test results. The test used in Albritton's case contains a chemical that turns blue when exposed to cocaine. Unfortunately, it also turns blue when exposed to 80 other legal substances, including acne medicine and household cleaners.

The tests are about as accurate as you'd expect for a $2 test. Differences in ambient temperature can affect test results, as can the alteration of the order in which the three tubes in each test are used. A positive field test is still a long way from being a credible indication of an illegal substance.

In Las Vegas, authorities re-examined a sampling of cocaine field tests conducted between 2010 and 2013 and found that 33 percent of them were false positives. Data from the Florida Department of Law Enforcement lab system show that 21 percent of evidence that the police listed as methamphetamine after identifying it was not methamphetamine, and half of those false positives were not any kind of illegal drug at all. In one notable Florida episode, Hillsborough County sheriff’s deputies produced 15 false positives for methamphetamine in the first seven months of 2014.

But they're just good enough to destroy lives.
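Those percentages scale into a lot of ruined lives. A back-of-the-envelope sketch using the Florida rates quoted above makes the point (the batch size of 1,000 identifications is an illustrative assumption, not a figure from the article):

```python
def meth_id_breakdown(listings):
    """Apply the Florida lab-system rates quoted above: 21% of police
    'methamphetamine' identifications weren't meth, and half of those
    false positives weren't any kind of illegal drug at all."""
    not_meth = listings * 21 // 100
    not_any_drug = not_meth // 2
    return not_meth, not_any_drug

# Hypothetical batch of 1,000 police meth identifications:
not_meth, not_any_drug = meth_id_breakdown(1000)
# 210 people wrongly labeled as carrying meth, 105 of them
# carrying nothing illegal whatsoever.
```

Every one of those false positives is a person who, like Albritton, could be pressured into a plea before the lab ever weighs in.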

Fortunately, the article reports a few positive developments. Some people who have pled guilty to possession charges based on field tests have had their convictions overturned when lab tests come back clean. This is after the fact -- sometimes years after the fact -- so it does little to undo the damage already done.

This relief, in addition to only letting someone whose life has been drastically altered maybe finally make some forward progress, is limited to jurisdictions where crime labs are required to test every incoming sample, whether or not a conviction has already been obtained. Very few labs have this requirement. The standard M.O. is to simply destroy "unneeded" evidence if the case has been closed.

The best fix would be to discard the faulty tests and develop something far more precise for field testing. But until that occurs, it seems unlikely law enforcement will abandon a product that allows officers to develop probable cause for drug possession arrests. A more immediately achievable route toward ensuring fewer wrongful convictions would be to institute a requirement that all field-tested substances be tested by a lab before the prosecution can move forward. Otherwise, the system is basically convicting people on suspicion, rather than actual guilt.

In the county where Albritton was arrested, this change has been made.

Last year, Devon Anderson, the current Harris County district attorney, prohibited plea deals in drug-possession cases before the lab has issued a report.

That's still not enough to prevent the accused's world from falling apart while waiting for a lab test.

The labs issue reports in about two weeks, but defendants typically wait three weeks before they can see a judge — enough time to lose a job, lose an apartment, lose everything.

But it's still better than the alternative: doing nothing. Since this policy was implemented, dismissals are up 31% in the county, thanks to lab tests showing substances seized were not illegal.

from the life-is-cheap dept

FBI's Facial Recognition Database Still Huge, Still Inaccurate, And DOJ Shows Zero Interest In Improving It
by Tim Cushing | Mon, 20 Jun 2016
https://www.techdirt.com/articles/20160618/09314834742/fbis-facial-recognition-database-still-huge-still-inaccurate-doj-shows-zero-interest-improving-it.shtml
The FBI's biometric database continues to grow. Its Next Generation Identification system (NGI) is grabbing everything it can from multiple sources, compiling millions of records containing faces, tattoos, fingerprints, etc. from a blend of criminal and non-criminal databases. It went live in 2014, but without being accompanied by the Privacy Impact Assessment (PIA) it promised to deliver back in 2012.

Lawsuits and pressure from legislators finally forced the FBI to comply with government requirements. That doesn't mean the FBI has fully complied, not even two years past the rollout. And it has no interest in doing so in the future. It's currently fighting to have its massive database exempted from federal privacy laws.

Much of the information we have about the FBI's NGI database has come from outside sources. The EFF and EPIC have forced documentation out of the agency's hands via FOIA lawsuits. And now, the Government Accountability Office (in an investigation prompted by Sen. Al Franken) is turning over more information to the public with its review of the system.

The FBI’s system searches not just its own database, but also photo databases maintained by seven participating states, the US Department of State – which issues passports – and the US Department of Defense, shared among federal law enforcement agencies and the participating agencies, though access on the state level is obtained through the FBI.

This is only part of the NGI. To amass the 411 million photos it has collected to this point, the FBI dumps in the contents of a national criminal database.

[T]he GAO report found a much larger program, run by the criminal justice information services division of the FBI (CJIS), called Facial Analysis, Comparison and Evaluation, or Face, which “conducts face recognition searches on NGI-IPS and can access external partners’ face recognition systems to support FBI active investigations”.

The multiple inputs -- which allow criminal and non-criminal biometric data to intermingle -- still return an alarmingly high number of false positives. According to data obtained by EPIC, the facial recognition portion showed an error rate of 15-20% in the top 50 results returned from searches. That was the error rate in 2010. We might assume accuracy has improved since then, but we have no way of knowing the current error rate, because the FBI is uninterested in policing the accuracy of its own database.

Prior to deploying NGI-IPS, the FBI conducted limited testing to evaluate whether face recognition searches returned matches to persons in the database (the detection rate) within a candidate list of 50, but has not assessed how often errors occur. FBI officials stated that they do not know, and have not tested, the detection rate for candidate list sizes smaller than 50, which users sometimes request from the FBI… Additionally, the FBI has not taken steps to determine whether the face recognition systems used by external partners, such as states and federal agencies, are sufficiently accurate for use by FACE Services to support FBI investigations

The GAO report also points out the FBI has been severely delinquent in its obligations to the public. Reports it was supposed to deliver prior to rollout have only just recently appeared, including one release apparently prompted by the GAO's assessment of the NGI program.

NGI-IPS has been in place since 2011, but DOJ did not publish a System of Records Notice (SORN) that addresses the FBI's use of face recognition capabilities, as required by law, until May 5, 2016, after completion of GAO's review. The timely publishing of a SORN would improve the public's understanding of how NGI uses and protects personal information.

The GAO has made six recommendations to the agency, three of which are being disputed by the DOJ. According to the DOJ, the reason for the mandatory reports being delivered after-the-fact doesn't need to be examined because the FBI "has established practices that protect privacy and civil liberties beyond the requirements of the law." This sounds like the FBI has "nothing to hide," which is at odds with the lack of responsiveness by the agency to demands for updated PIAs and SORNs over the last eight years.

The DOJ also disagrees that it should have to audit the facial recognition database's "hit rate," something that was only 80-85% accurate five years ago. (In fact, the FBI's specifications consider 85% accuracy to be acceptable when returning lists of possible suspects.) The DOJ claims the database can never return a false positive because it apparently has enough manpower and resources to chase down every bogus lead.

In its response, DOJ stated that because searches of NGI-IPS produce a gallery of likely candidates to be used as investigative leads instead of for positive identification, NGI-IPS cannot produce false positives and there is no false positive rate for the system.

The GAO understandably disagrees. Accuracy is important, especially if the FBI is going to put innocent people under investigation… or overlook potentially dangerous suspects.

Without actual assessments of the results from its state and federal partners, the FBI is making decisions to enter into agreements based on assumptions that the search results may provide valuable investigative leads. In addition, we disagree with DOJ’s assertion that manual review of automated search results is sufficient. Even with a manual review process, the FBI could miss investigative leads if a partner does not have a sufficiently accurate system.

The DOJ apparently still feels a 20% chance of putting the wrong person under investigation is acceptable. And it still believes it's so far ahead of the privacy curve that it doesn't need to apprise the public of the potential privacy implications of its massive biometric database. The information forced out of its hands by litigants and outside agencies shows the FBI is far more interested in collection than dissemination -- that it should be able to take all it wants from the public without having to hand out anything in return.
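The arithmetic behind the GAO's objection is straightforward: if every search returns a gallery of up to 50 "likely candidates," at most one of whom can actually be the subject, then every search necessarily puts dozens of non-matching people in front of investigators, and the detection rate bounds how often the true match is even in the list. A quick sketch (the search volume of 1,000 is an illustrative assumption; the 85% detection rate is the figure the article says the FBI's own specs treat as acceptable):

```python
def gallery_consequences(searches, gallery_size=50, detection_rate=0.85):
    """Two costs of the 'it's only a candidate list' framing.

    Each gallery contains at most one true match, so the remainder are
    non-matching people surfaced as investigative leads; the detection
    rate bounds how often the true match appears in the gallery at all.
    """
    innocents_surfaced = searches * (gallery_size - 1)
    expected_misses = searches * (1 - detection_rate)
    return innocents_surfaced, expected_misses

# For a hypothetical 1,000 searches: 49,000 non-matching candidates
# surfaced as leads, and about 150 galleries that omit the real person.
innocents, misses = gallery_consequences(1000)
```

Which is exactly the GAO's point: "no false positives" only holds if you define every non-matching candidate in a gallery as not being a false positive.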

from the ALL-YOUR-FACE-ARE-BELONG-TO-US dept

Drug Dogs Don't Even Have To Be Right Half The Time To Be Considered 'Reliable' By The Courts
by Tim Cushing | Thu, 11 Feb 2016
https://www.techdirt.com/articles/20160209/09322733559/drug-dogs-dont-even-have-to-be-right-half-time-to-be-considered-reliable-courts.shtml
All in all, this motion to suppress evidence worked out for the defendant, but it does little to address concerns that drug dogs are basically blank permission slips for inquisitive cops.

The defendant -- Emile Martin -- was in a vehicle driven by another person (simply referred to as "Montgomery" in the opinion). This vehicle crossed the centerline multiple times and was pulled over by Deputy Brandon Williams. The driver could not produce registration or proof of insurance, which led to the issuance of a citation… eventually. But the citation process was unnecessarily prolonged to provide the deputy with a chance to have a K9 unit brought in to sniff the car for drugs.

Based on its findings of fact, the court agrees that the stop was unduly prolonged in order to allow time for the canine and its handler to reach the scene. Prior to the point that the dog alerted, at 3:37 a.m., there was merely a hunch, but neither probable cause nor reasonable articulable suspicion, that criminal conduct was afoot. The lapse of 33 minutes from 3:04 a.m. to 3:37 a.m. for the stop in this case constituted a plainly unjustifiable seizure for that length of time under the Fourth Amendment. As noted above, when Deputy Williams returned to his cruiser with Montgomery’s driver’s license and the Grand Prix title at or shortly after 3:11 a.m., he had everything he needed to begin writing the traffic citations.

However, Williams did not begin writing the citations until 3:21 a.m., and had not completed them when Dul alerted on the vehicle following the open-air sniff at 3:37 a.m. While Deputy Williams spent some time awaiting confirmation from dispatch of the license’s validity and the results of the warrant search, that does not excuse his failure to even begin writing the citations until ten minutes after he could have done so. The stop here was unduly prolonged far beyond the time reasonably required to complete the stop’s mission.

Under the Supreme Court's Rodriguez decision, officers cannot artificially prolong traffic stops in hopes of stumbling across something "better" than a traffic violation. Once the stop's "mission" has reached its conclusion, drivers are free to go, no matter how many more questions -- or dog sniffs -- the officer might wish to pursue.

Still, a drug dog was brought in and it did alert during its "search" of the vehicle. This alert was also challenged, presumably in case the defendant's citation of Rodriguez failed to result in suppression. Data was obtained on the "hit" rate of the dog ("Dul"). The data wasn't exactly a confirmation of Dul's superlative skills.

The defendant has not presented any evidence challenging the adequacy of Dul’s training and certification regimen. However, he questions Dul’s reliability based on a review of the dog’s performance record, both in training sessions and in the field. The defendant argues that Dul’s training and field performance records suggest a failure rate of up to 25%. The evidence offered on this phase of the motion is generally undisputed.

Considering law enforcement officers "ask" dogs for permission to effect warrantless searches, one would hope 75% wouldn't be an acceptable success rate. Of course, many arguments were presented by the government as to why being right only three-fourths of the time is nigh unto infallibility. According to law enforcement testimony, there are any number of reasons why a drug sniff might result in a false positive, but none of those are reasons to doubt a dog's assertions.

This is the case because officers are unable to confirm false negatives in the field (as no search is conducted), may fail to find drugs where a dog correctly alerts, and may not realize a dog has alerted based on a residual odor of drugs no longer present.

This would be one thing if law enforcement was alone in finding this acceptable. Unfortunately, the court also finds this lack of accuracy to be of little import when discussing the justification of a search. Dul may only be right 75% of the time, but the bar has been set so low by previous decisions that drug dogs whose intuition is worse than a coin flip are considered to be trustworthy generators of probable cause. (h/t Brad Heath)

Notwithstanding the dispute regarding Dul’s failure rate, the court is satisfied that in conjunction with his training and certification, his performance record amply supports the officers’ reliance on his alert to support probable cause to conduct a search. Dul’s performance record is superior to that of dogs which have been found to be reliable by other courts. See Green, 740 F.3d at 283-284 (affirming district court’s finding that dog with 43% success rate was reliable); United States v. Bentley, 795 F.3d 630, 636 (7th Cir. 2015) (accepting field detection rate of 59.5%); United States v. Holleman, 743 F.3d 1152, 1157 (8th Cir.) (57%).
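To see why a 75% hit rate is such thin support for probable cause, it helps to run the numbers. A quick Bayes'-rule sketch (with illustrative figures of our own, not numbers from the opinion) shows what an alert is actually worth when only a small fraction of stopped cars carry drugs:

```python
def alert_ppv(sensitivity, false_positive_rate, prevalence):
    """Probability that a dog's alert actually indicates drugs (Bayes' rule)."""
    true_alerts = sensitivity * prevalence
    false_alerts = false_positive_rate * (1 - prevalence)
    return true_alerts / (true_alerts + false_alerts)

# Hypothetical: the dog alerts on 75% of cars that carry drugs, falsely
# alerts on 25% of cars that don't, and 1 in 20 stopped cars carries drugs.
print(round(alert_ppv(0.75, 0.25, 0.05), 3))  # 0.136
```

Even taking the government's numbers at face value, under those assumptions roughly six of every seven alerts would be false. That base-rate problem is exactly what the cited decisions keep waving away.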

The only upside here is that the Rodriguez decision will provide a remedy for those whose stops have been artificially extended to bring in drug dogs whose "alert" means nothing more than ¯\_(ツ)_/¯.

In this case, it was the unduly extended stop, not the drug dog's questionable reliability, that got the evidence suppressed. At some point, drug dogs may start being mentioned in the same breath as other law enforcement pseudoscience -- like bite mark evidence or hair comparison. But until then, dogs that can't even manage a 50% hit rate will still be allowed to give officers permission to perform warrantless searches.

Comcast Cap Blunder Highlights How Nobody Is Ensuring Broadband Meters Are Accurate
Karl Bode, Mon, 28 Dec 2015 08:31:00 PST
https://www.techdirt.com/articles/20151218/05401433119/comcast-cap-blunder-highlights-how-nobody-is-ensuring-broadband-meters-are-accurate.shtml

...nobody is checking to confirm that ISP meters are accurate. The result has been user network hardware that reports usage dramatically different from an ISP's meters, or users who are billed for bandwidth usage even when the power is out or the modem is off. Not only have regulators historically failed to see the anti-innovation, anti-competitive impact of usage caps, you'd be hard pressed to find a single official who has even commented on the problem of inaccurate broadband usage meters.

Enter Comcast, which has, of course, been slowly but surely expanding its usage caps into more and more noncompetitive markets. And given that Comcast continues to have among the worst customer service in any U.S. industry, the combined end result is about what you'd expect. Like users who say they've been repeatedly over-billed for broadband consumption that never actually occurred:

"Oleg received warnings in September and another in October, the latter while he was overseas for a multiple-week vacation with his wife. When they returned home on November 9th, Comcast’s data meter was “showing I used 120 gigs of data, like, while I was gone,” he wrote. Customers can check their usage on Comcast’s website."

...Calls with Comcast customer service agents didn’t clear up the problem. "I called Comcast... and was patronizingly informed that 'it must be somebody stealing your Wi-Fi,'" he wrote. "Possible, but highly unlikely. I’m a software developer, Linux kernel contributor, and I take my home security very seriously."

This being Comcast, the user was ignored when he told the ISP he was being billed for 120 GB of usage that supposedly occurred while he was away on vacation. So the user set about documenting his problem over at YouTube, noting how he spent a few months using only cellular data to try to prove to Comcast that its billing system was broken:

Note this is actually a Comcast user with some technical skills; many Comcast users likely wouldn't know they were being over-billed, or if they did, wouldn't know how to measure their own usage. And of course in traditional Comcast fashion, it once again took somebody in the press to get Comcast to fix its screw up. Ultimately, Comcast admitted that it had accidentally swapped the user's MAC address, and was charging the customer for somebody else's usage:

Oleg provided us his full name and address so we could check into his situation with Comcast. The company investigated the problem after being contacted by Ars and confirmed that its meter readings were inaccurate. “We have reached out and resolved this,” a Comcast spokesperson told Ars. “There was a technical error associated with his account, which we have since corrected.”

"Comcast told Oleg that its system had him confused with another customer, he said. “It turns out their system had my modem MAC address entered incorrectly, there was an off-by-one typo that was hard to see so they were counting data from some modem who knows where,” Oleg told Ars.

So yeah, you've got multiple problems at play all creating a supernova of dysfunction. One, Comcast is taking advantage of the lack of broadband competition to impose usage caps (the user above says he'd leave, but has no other choices). Two, Comcast is using these usage caps to give its own content a leg up by exempting it from said caps (zero rating). Three, Comcast's dismal customer service means that even if you can prove you're being over-billed, you may not be able to get it fixed. Four, nobody in government can be bothered to make sure ISPs are metering accurately.

ISPs are incredibly eager to bill like utilities, but they've fought tooth and nail against being regulated as such. And despite all of the ISP hand wringing over net neutrality rules saddling them with draconian "utility regulations," regulators have by and large avoided truly applying most utility-grade regulations and price controls on ISPs. Should Comcast and other U.S. ISPs keep pushing their luck with usage caps, all of that may eventually need to change.


Turns Out Cell Phone Location Data Is Not Even Close To Accurate, But Everyone Falls For It
Mike Masnick, Tue, 9 Sep 2014 07:56:37 PDT
https://www.techdirt.com/articles/20140908/04435128452/turns-out-cell-phone-location-data-is-not-even-close-to-accurate-everyone-falls-it.shtml

...it doesn't need a warrant for such data, while the NSA has tested a pilot program recording all such data, and says it has the legal authority to collect it, even if it's not currently doing so.

However, as anyone with even a basic geometry education recognizes, which cell tower you're connected to does not give you a particularly exact location. It can be useful in putting someone in a specific (wide) area -- or, much more useful in detailing where someone is traveling over long distances as they repeatedly switch towers in a particular direction. But a single reading does not give you particularly exact location details. I had naturally assumed that most people understood this -- including law enforcement, lawyers, prosecutors and judges -- but it turns out they do not. A rather depressing story in The Economist notes that, thanks to this kind of ignorance (combined with bogus cop shows on TV that pretend cell site data is good for pinpointing locations), cell site location data is frequently used to convict innocent people. The story opens with a ridiculous example, in which a woman was pressured into a plea bargain based on totally false claims about tower location data:

SOMEONE strangled a prostitute in Portland, Oregon in 2002. The police arrested Lisa Roberts, the victim’s ex-lover, who spent more than two years in custody awaiting trial. Shortly before the trial the prosecutor told Ms Roberts, via her lawyer, that tower data collected by Verizon, her mobile-telephone network, showed precisely where she was at the time of the murder. As her lawyer recalled, the prosecutor said Ms Roberts could be “pinpointed” in a park shortly before the victim’s naked and sexually assaulted corpse was found there. She was told she faced 25 years to life in prison. She accepted a deal to plead guilty and serve 15 years.

But the high-tech evidence against her was bunk. Routinely collected tower data can place a mobile phone in a broad area, but it cannot “pinpoint” it. That would require a special three-tower “triangulation”, which cannot reveal past locations. It took a decade for Ms Roberts’s guilty plea to be thrown out. On May 28th she left prison, her criminal record clean, after nearly 12 years in custody.

Obviously, things like GPS do allow for much more precise targeting of location (which may be why the NSA is focusing on that instead of cell site location data), but too many people confuse cell site location data with GPS. What's ridiculous is that this mistake isn't just being made by random people -- but prosecutors and lawyers responsible for criminal cases that can destroy an innocent person's life.
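The geometry here is easy to make concrete. A single tower reading only says the phone was somewhere inside one antenna sector of that tower's coverage circle, an area measured in square kilometers, while a GPS fix narrows things to tens of meters. A quick sketch with illustrative figures (not numbers from the Roberts case):

```python
import math

def sector_area_km2(radius_km: float, sector_degrees: float = 120.0) -> float:
    """Area of one antenna sector: a wedge of the tower's coverage circle."""
    return math.pi * radius_km**2 * (sector_degrees / 360.0)

# A rural tower with a 3 km range and standard 120-degree sectors:
print(round(sector_area_km2(3.0), 1))        # ~9.4 km^2 of possible locations

# Versus a GPS fix accurate to roughly 10 m (0.01 km):
print(round(math.pi * 0.01**2, 4))           # ~0.0003 km^2
```

Tens of thousands of times more ground: "somewhere in this neighborhood-sized wedge" is simply not the same claim as "pinpointed in a park."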

This really points to a larger issue: people have this tendency to believe that technology can answer all questions. The NSA's fetishism of surveillance via technology is an example of this. There's data there, so it becomes all too tempting to assume that the data must answer any possible question (thus, the desire to collect so much of it). But the data and the interpretations it can lead to are often misleading or simply wrong. And that's especially true when dealing with newer technologies or forms of data collection. That the criminal justice system could go decades without everyone recognizing the basic geometric limits of cell site location data based on a single cell is... both astounding and depressing. But it's also a reminder that we shouldn't assume that just because some evidence comes from some new-fangled data source it's automatically legitimate and accurate.


As ISPs Look To Charge Per Byte... How Accurate Are Their Meters?
Mike Masnick, Tue, 12 Jan 2010 20:33:00 PST
https://www.techdirt.com/articles/20100108/0309327670.shtml

...who will monitor the broadband meters themselves to make sure they're accurate? After all, things like energy meters are carefully regulated and audited to make sure they're accurate. But no such luck with broadband meters, for the most part. Broadband Reports points out that it looks like individuals will be on their own to check whether or not their ISP is being honest with them concerning how much bandwidth they use.

Maryland Testing E-Voting System That Lets People Verify Their Votes Counted
Mike Masnick, Wed, 4 Nov 2009 22:26:00 PST
https://www.techdirt.com/articles/20091104/1339256801.shtml

For many years, David Chaum has been pushing for a voting system that he claims will be a lot more reliable. Basically, after you vote, you get a coded number, and then after the election, you can go to an election website, punch in your code and make sure that your vote counted, and was for whom you meant to vote. On top of this, there's a system for auditors to check that votes were counted accurately, with information released publicly so people can "audit" the election without being able to connect voters to their votes. This system tends to generate a lot of controversy (though some of it appears to come from people who just don't like David Chaum, rather than from people who really have a problem with his system). However, the system hadn't really been tested in an actual US election... until now. The municipal elections in Takoma Park, Maryland used the system, despite the state recently signing a big deal with Diebold. It's not clear how the overall election went yet -- or how many people actually checked their votes online (approximately 30% in an exit poll said they copied down the code). However, it's good to see that some gov'ts are not just accepting what the big e-voting firms give them, and are willing to explore more sophisticated voting systems that aren't based on pure faith in the e-voting company to get the system right.

Latest Techno Moral Panic: Texting Is 'Rewiring Young Brains'
Mike Masnick, Tue, 18 Aug 2009 13:17:58 PDT
https://www.techdirt.com/articles/20090818/0112385912.shtml

...series of alarmist studies that have gotten lots of press lately, with titles about how social networks or other technologies are somehow negatively impacting people's brains. Nearly all of these failed to hold up under much scrutiny, as they almost all took things out of context or greatly extrapolated a finding and misinterpreted the results. The latest to add to the pile? A report claiming that texting may be "rewiring young brains." The evidence? Kids who used mobile phones a lot finished a variety of tests much faster, but tended to be "less accurate." That's about it. From there, the guy who did the study concludes that it must be the fact that many mobile phones use "predictive texting" that's training kids to be fast but inaccurate, assuming something else will come in and fix the mess. Now, perhaps that's true, but it seems like the study doesn't actually show that at all. Also, it's not clear from the report what sort of mistakes are being made. The article talks about spelling mistakes, which are common in texting, but the real question is whether that really matters. It may very well depend on context. In a text message, a spelling mistake isn't a big deal. In a resume, it's a different story. But where on that spectrum did these tests land? More importantly, even if we grant the premise that kids who text a lot are a lot sloppier on certain tests... how do you go from that to immediately concluding that their brains are being wired differently? It sounds a lot more like what they've been trained to do, rather than any serious neurological shift.

More Votes Lost By Diebold; Discovered By Unique Voting Transparency Project
Mike Masnick, Wed, 10 Dec 2008 07:44:00 PST
https://www.techdirt.com/articles/20081210/0114213067.shtml

...Diebold finally admitted to a glitch with some of its machines, but the company still downplayed the significance of this, claiming that it didn't believe the glitch (which loses votes) had actually impacted any elections.

Yet, even after this glitch was officially revealed, we're now finding out that Diebold machines lost 200 votes in an election in California just last month. Even worse, no one would know about this at all if it weren't for a highly ambitious and unique program set up by some voting activists to ensure there was real transparency. They convinced the local government to let them scan every single ballot and put it online for anyone to view. It was that separate process where they discovered the ballot counts didn't match, and that Diebold seemed to have absolutely no records of the missing ballots, despite having scanned them.
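The activists' cross-check reduces to set arithmetic over two independent records of the same ballots: anything present in the public scans but absent from the machine tally is a lost vote. A sketch with hypothetical ballot IDs:

```python
def unaccounted_ballots(scanned_ids, machine_ids):
    """Ballots in the independent scans that are missing from the machine tally."""
    return sorted(set(scanned_ids) - set(machine_ids))

# Hypothetical records: four ballots scanned, the machine only reports two.
scans = {"B-1001", "B-1002", "B-1003", "B-1004"}
machine_tally = {"B-1001", "B-1003"}
print(unaccounted_ballots(scans, machine_tally))  # ['B-1002', 'B-1004']
```

The point of the story is that this comparison is only possible when a second, independent record exists; a county that relies solely on the vendor's own counts has nothing to diff against.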

Makes you kinda wonder how many other areas lost votes that absolutely no one knows about because they didn't have such a system in place, huh?

Making Sure Bets On Online Prediction Markets
Dennis Yang, Wed, 13 Feb 2008 16:52:33 PST
https://www.techdirt.com/articles/20080213/114007246.shtml

...prediction markets still have many inefficiencies in their current state, allowing shrewd traders to make tidy profits as a result. In the Intrade market, a political future is worth $10 if the political outcome occurs, and $0 if it does not. Therefore, a $5 market price on a particular future is supposed to correlate with a 50% prediction that the outcome will occur. However, in practice, certain factors push these prices out of the range of their realistic probability. For example, contracts for Ron Paul implied as high as a 9% chance of his being selected as the Republican nominee, when in reality his chances were probably closer to nil. Perhaps driven by a small cadre of Paul supporters, the Intrade market was swayed by a small number of trades. Even today, Intrade shows Paul at 1.2% -- a great opportunity for someone to make money by taking the short side of that contract. On the Democratic side, supporters have put a 1% chance on Al Gore, who never even appeared on the ballot -- wishful thinking indeed. That said, the limited volume on these contracts precludes anyone from actually making a crazy amount of money on them, but it does remind us of an important fact about markets: while they do tend to come up with the right answer in the long term, in the short term they are incredibly susceptible to very human factors like optimism and groupthink.