This has to be the most disturbing thing I’ve seen in all my political watching. Original found at Slate.

Seriously, were y’all watching the same debate as me? Trump is and was an absolute blathering idiot. I swear I was watching it thinking, “after this, people will finally see how truly unfit Trump is.” Then this poll?

Benjamin Franklin said, “Those who would give up essential liberty to purchase a little temporary safety deserve neither liberty nor safety.” So when do we step back and ask what is working and what is not – and also what freedoms are we giving up to gain this temporary security?

In his latest bestseller, Data and Goliath, world-renowned security expert and author Bruce Schneier goes deep into the world of surveillance, investigating how governments and corporations alike monitor nearly our every move. In this excerpt, Schneier explains how we are fed a false narrative about how our surveillance state is able to stop terrorist attacks before they happen. In fact, Schneier argues, the idea that our government is able to parse all the invasive and personal data it collects on us is laughable. The data mining conducted every day only seems to take valuable resources and time away from the tactics that should be used to fight terrorism.

The NSA repeatedly uses a connect-the-dots metaphor to justify its surveillance activities. Again and again — after 9/11, after the Underwear Bomber, after the Boston Marathon bombings — the government is criticized for not connecting the dots.

However, this is a terribly misleading metaphor. Connecting the dots in a coloring book is easy, because they’re all numbered and visible. In real life, the dots can only be recognized after the fact.

That doesn’t stop us from demanding to know why the authorities couldn’t connect the dots. The warning signs left by the Fort Hood shooter, the Boston Marathon bombers, and the Isla Vista shooter look obvious in hindsight. Nassim Taleb, an expert on risk engineering, calls this tendency the “narrative fallacy.” Humans are natural storytellers, and the world of stories is much more tidy, predictable, and coherent than reality. Millions of people behave strangely enough to attract the FBI’s notice, and almost all of them are harmless. The TSA’s no-fly list has over 20,000 people on it. The Terrorist Identities Datamart Environment, also known as the watch list, has 680,000, 40% of whom have “no recognized terrorist group affiliation.”

Data mining is offered as the technique that will enable us to connect those dots. But while corporations are successfully mining our personal data in order to target advertising, detect financial fraud, and perform other tasks, three critical issues make data mining an inappropriate tool for finding terrorists.

The first, and most important, issue is error rates. For advertising, data mining can be successful even with a large error rate, but finding terrorists requires a much higher degree of accuracy than data-mining systems can possibly provide.

Data mining works best when you’re searching for a well-defined profile, when there are a reasonable number of events per year, and when the cost of false alarms is low. Detecting credit card fraud is one of data mining’s security success stories: all credit card companies mine their transaction databases for spending patterns that indicate a stolen card. There are over a billion active credit cards in circulation in the United States, and nearly 8% of those are fraudulently used each year. Many credit card thefts share a pattern — purchases in locations not normally frequented by the cardholder, and purchases of travel, luxury goods, and easily fenced items — and in many cases data-mining systems can minimize the losses by preventing fraudulent transactions. The only cost of a false alarm is a phone call to the cardholder asking her to verify a couple of her purchases.

Similarly, the IRS uses data mining to identify tax evaders, the police use it to predict crime hot spots, and banks use it to predict loan defaults. These applications have had mixed success, based on the data and the application, but they’re all within the scope of what data mining can accomplish.

Terrorist plots are different, mostly because whereas fraud is common, terrorist attacks are very rare. This means that even highly accurate terrorism prediction systems will be so flooded with false alarms that they will be useless.

The reason lies in the mathematics of detection. All detection systems have errors, and system designers can tune them to minimize either false positives or false negatives. In a terrorist-detection system, a false positive occurs when the system mistakenly identifies something harmless as a threat. A false negative occurs when the system misses an actual attack. Depending on how you “tune” your detection system, you can increase the number of false positives to assure you are less likely to miss an attack, or you can reduce the number of false positives at the expense of missing attacks.

Because terrorist attacks are so rare, false positives completely overwhelm the system, no matter how well you tune. And I mean completely: millions of people will be falsely accused for every real terrorist plot the system finds, if it ever finds any.
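The base-rate arithmetic behind this point can be sketched with a few lines of code. The population size, number of plotters, and detector accuracy figures below are illustrative assumptions (not numbers from the excerpt, except the credit card figures, which come from the fraud example above); even so, the imbalance they produce is the point:

```python
# Base-rate sketch: why a detector for a rare event drowns in false positives,
# while the same detector for a common event (credit card fraud) is useful.
# Accuracy rates and population figures are illustrative assumptions.

def expected_alarms(population, true_cases, true_positive_rate, false_positive_rate):
    """Return (expected real alarms, expected false alarms) for a screening system."""
    real_alarms = true_cases * true_positive_rate
    false_alarms = (population - true_cases) * false_positive_rate
    return real_alarms, false_alarms

# Credit card fraud: over a billion active cards, ~8% used fraudulently (from the text).
cards = 1_000_000_000
fraud_cases = int(cards * 0.08)
real, false = expected_alarms(cards, fraud_cases, 0.99, 0.01)
print(f"fraud:  {real:,.0f} real alarms vs {false:,.0f} false alarms")
# Real alarms outnumber false ones roughly 8 to 1 — the system earns its keep.

# Terrorism: same (very generous) 99%-accurate detector, but a vanishingly rare event.
people = 300_000_000   # rough US population (assumption)
plotters = 10          # assumed handful of actual plotters
real, false = expected_alarms(people, plotters, 0.99, 0.01)
print(f"terror: {real:,.1f} real alarms vs {false:,.0f} false alarms")
# About 3 million innocent people flagged for every handful of genuine suspects —
# false positives overwhelm the system no matter how you tune it.
```

Note that making the detector more accurate barely helps: even at a 0.1% false-positive rate, the rare-event case still produces hundreds of thousands of false alarms per handful of real ones.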

We might be able to deal with all of the innocents being flagged by the system if the cost of false positives were minor. Think about the full-body scanners at airports. Those alert all the time when scanning people. But a TSA officer can easily check for a false alarm with a simple pat-down. This doesn’t work for a more general data-based terrorism-detection system. Each alert requires a lengthy investigation to determine whether it’s real or not. That takes time and money, and prevents intelligence officers from doing other productive work. Or, more pithily, when you’re watching everything, you’re not seeing anything.

The US intelligence community also likens finding a terrorist plot to looking for a needle in a haystack. And, as former NSA director General Keith Alexander said, “you need the haystack to find the needle.” That statement perfectly illustrates the problem with mass surveillance and bulk collection. When you’re looking for the needle, the last thing you want to do is pile lots more hay on it. More specifically, there is no scientific rationale for believing that adding irrelevant data about innocent people makes it easier to find a terrorist attack, and lots of evidence that it does not. You might be adding slightly more signal, but you’re also adding much more noise. And despite the NSA’s “collect it all” mentality, its own documents bear this out. The military intelligence community even talks about the problem of “drinking from a fire hose”: having so much irrelevant data that it’s impossible to find the important bits.

We saw this problem with the NSA’s eavesdropping program: the false positives overwhelmed the system. In the years after 9/11, the NSA passed to the FBI thousands of tips per month; every one of them turned out to be a false alarm. The cost was enormous, and ended up frustrating the FBI agents who were obligated to investigate all the tips. We also saw this with the Suspicious Activity Reports — or SAR — database: tens of thousands of reports, and no actual results. And all the telephone metadata the NSA collected led to just one success: the conviction of a taxi driver who sent $8,500 to a Somali group that posed no direct threat to the US — and that was probably trumped up so the NSA would have better talking points in front of Congress.

The second problem with using data-mining techniques to try to uncover terrorist plots is that each attack is unique. Who would have guessed that two pressure-cooker bombs would be delivered to the Boston Marathon finish line in backpacks by a Boston college kid and his older brother? Each rare individual who carries out a terrorist attack will have a disproportionate impact on the criteria used to decide who’s a likely terrorist, leading to ineffective detection strategies.

The third problem is that the people the NSA is trying to find are wily, and they’re trying to avoid detection. In the world of personalized marketing, the typical surveillance subject isn’t trying to hide his activities. That is not true in a police or national security context. An adversarial relationship makes the problem much harder, and means that most commercial big data analysis tools just don’t work. A commercial tool can simply ignore people trying to hide and assume benign behavior on the part of everyone else. Government data-mining techniques can’t do that, because those are the very people they’re looking for.

Adversaries vary in the sophistication of their ability to avoid surveillance. Most criminals and terrorists — and political dissidents, sad to say — are pretty unsavvy and make lots of mistakes. But that’s no justification for data mining; targeted surveillance could potentially identify them just as well. The question is whether mass surveillance performs sufficiently better than targeted surveillance to justify its extremely high costs. Several analyses of all the NSA’s efforts indicate that it does not.

The three problems listed above cannot be fixed. Data mining is simply the wrong tool for this job, which means that all the mass surveillance required to feed it cannot be justified. When he was NSA director, General Keith Alexander argued that ubiquitous surveillance would have enabled the NSA to prevent 9/11. That seems unlikely. He wasn’t able to prevent the Boston Marathon bombings in 2013, even though one of the bombers was on the terrorist watch list and both had sloppy social media trails — and this was after a dozen post-9/11 years of honing techniques. The NSA collected data on the Tsarnaevs before the bombing, but hadn’t realized that it was more important than the data they collected on millions of other people.

This point was made in the 9/11 Commission Report. That report described a failure to “connect the dots,” which proponents of mass surveillance claim requires collection of more data. But what the report actually said was that the intelligence community had all the information about the plot without mass surveillance, and that the failures were the result of inadequate analysis.

Mass surveillance didn’t catch underwear bomber Umar Farouk Abdulmutallab in 2009, even though his father had repeatedly warned the U.S. government that he was dangerous. And the liquid bombers (they’re the reason governments prohibit passengers from bringing large bottles of liquids, creams, and gels on airplanes in their carry-on luggage) were captured in 2006 in their London apartment not due to mass surveillance but through traditional investigative police work. Whenever we learn about an NSA success, it invariably comes from targeted surveillance rather than from mass surveillance. One analysis showed that the FBI identifies potential terrorist plots from reports of suspicious activity, reports of plots, and investigations of other, unrelated, crimes.

This is a critical point. Ubiquitous surveillance and data mining are not suitable tools for finding dedicated criminals or terrorists. We taxpayers are wasting billions on mass-surveillance programs, and not getting the security we’ve been promised. More importantly, the money we’re wasting on these ineffective surveillance programs is not being spent on investigation, intelligence, and emergency response: tactics that have been proven to work. The NSA’s surveillance efforts have actually made us less secure.

I don’t get these people who eagerly try to eliminate everything that might be offensive. Seriously though, if you try hard enough couldn’t you find offense in everything? Original article found here.

IRVINE, Calif. — The student government at University of California, Irvine has voted to ban display of the American flag — or any flag — from its lobby.

A resolution that was narrowly approved by the legislative council of the campus’ Associated Students bans all flags from the common lobby area of student government offices, according to the Orange County Register. It prompted removal of the American flag from a lobby wall.

The student council approved the resolution on a 6-4 vote Thursday, with two abstentions. The executive cabinet was expected to consider a veto on Saturday.

The resolution authored by student Matthew Guevara of the university’s social ecology school lists 25 reasons for the ban, saying that the American flag has been flown in times of “colonialism and imperialism” and could symbolize American “exceptionalism and superiority.” The resolution says “freedom of speech, in a space that aims to be as inclusive as possible, can be interpreted as hate speech.”

The American flag had hung on a wall in the student government suite. A few weeks ago, someone removed the flag and put it on the desk of Reza Zomorrodian, the Associated Students’ president, with an anonymous note saying it shouldn’t be in the lobby.

The executive members decided to put up the flag again. Then the resolution was brought to the council.

Zomorrodian, an opponent of the ban, said the American flag was “an iconic and symbolic representation of our values in the U.S.”

On Friday, state Sen. Janet Nguyen, R-Santa Ana, said she and other legislators may introduce a state constitutional amendment to prohibit “state-funded universities and college campuses from banning the United States flag.”

“There’s a set of biracial twins in the UK who are turning heads because one is black and the other is white.” That’s how the New York Post introduced a profile of Lucy and Maria Aylmer, 18-year-olds whose father identifies as white and whose mother is “half-Jamaican” (and, we’re to assume, thinks of herself as black).

It’s just the most recent story of fraternal twins born with such dramatic variations in complexion they’re seen by many — and even see themselves — as members of two different racial groups.

Each of these situations, with its accompanying striking images, is a reminder of how fluid and subjective the racial categories we’re all familiar with are.

What “black and white twins” can teach us about race: it’s not real

Lucy and Maria’s story, and all the other sensational tales in the “Black and White Twins: born a minute apart” vein, are actually just overblown reports on siblings who, because of normal genetic variations that show up in more striking ways in their cases, have different complexions.

But they’re fascinating because they highlight just how flimsy and open to interpretation the racial categories we use in the US and around the world are.

Even the Post’s description of the Aylmer twins is clumsy, asserting that they’re each “biracial,” but stating in the very same sentence that one is white and the other is black.

And the fact that the two, despite having the same parents, see themselves as belonging to two different racial groups (“I am white and Maria is black,” Lucy told the Post) proves that there’s a lot more than biology or heritage informing racial identity.

It’s a reminder that the racial categories we use are fickle, flexible, open to interpretation, and have just as many exceptions as they do rules when it comes to their criteria for membership — that’s why they have been described as “not real,” meaning:

They’re not based on facts that people can even begin to agree on. (If we can’t even get a consensus that people with the same parents are the same race, where does that leave us?)

They’re not permanent. (If Lucy decides one day, like many other people with similar backgrounds, that her Jamaican mother is black and therefore, so is she, who’s to stop her?)

They’re not scientific. (There’s no blood test or medical assessment that will provide a “white” result for Lucy and a “black” one for Maria.)

They’re not consistent. (Other twins with the same respective looks and identical parentage as these twins might both choose to call themselves black or biracial.)

“Not real” doesn’t mean not important

Of course, none of this changes the fact that the concept of race is hugely important in our lives, in the United States, in the UK where the twins live, and around the world.

There’s no question that the way people categorize Lucy and Maria, and the way they think of themselves, will affect their lives.

That’s because, even though race is highly subjective, racism and discrimination based on what people believe about race are very real. The racial categories to which we’re assigned, based on how we look to others or how we identify, can determine real-life experiences, inspire hate, drive political outcomes, and make the difference between life and death.

But it’s still important to remember that these consequences are a result of human-created racial categories that are based on shaky reasoning and shady motivations. This makes the borders of the various groups impossible to pin down — as the “black and white” twins demonstrate — and renders modern debates about how particular people should identify futile.

Ticketing cameras have been popping up in increased numbers over the last decade. Some of them only measure speed, others red-light running, some actively scan license plates, and some do a combination of all those. To be honest, we all want to be safe on the road. Nobody likes it when someone runs a red light, and certainly not when someone runs a red light and causes an accident. The question remains, however, whether these ticketing cameras help curb the problem of speeding or red-light running. In fact, some argue that the cameras do more harm than good.

I agree with the sentiment that these cameras do more harm than good, and I think anyone who lives with them would agree. I remember here in Arizona we had speed cameras on the highway and red light cameras on the corners. It was nearly a death trap on the highway where everyone would be cruising along (above the speed limit) and when you got in the area of a speed camera everyone would slam on their brakes. Of course, logic should tell you that if everyone is suddenly slamming on their brakes, there is eventually going to be an uptick in rear-end collisions.

Whiplash anyone?

So while the cameras may have stopped people from speeding, did it actually make us more safe? If we traded decreased speed for an increase in rear-end collisions then I’d personally say that the safety of our community was degraded and I think that many would agree.

Likewise with red-light cameras and safety. We may have stopped people from running red lights, but we have also increased the likelihood that people slam on their brakes at the sight of a yellow light instead of safely proceeding through and avoiding a rear-end collision. I personally know where the cameras are in my neighborhood and I try to avoid them. If I can’t avoid them then I approach them with caution – I’m always super paranoid that if I stop on yellow (to avoid running the red light) then I will be rear-ended by someone behind me who isn’t paying attention or simply isn’t expecting me to stop. I literally go through these intersections staring at my rear-view mirror! Scary – shouldn’t my eyes be forward and scanning the road in front of me?

We have all heard the reports, back when these cameras were being put in, about how safe they made people – how people drove slower and ran fewer red lights. But were those “studies” done by independent organizations, by lobbyists, or by the camera companies themselves? It seems to me that these studies very well may have been done by the latter two groups. That suspicion only grows given all the scandals and judgments handed down against these camera companies – everything from bribery to changing the yellow light timing to ensure more captures.

Here in Arizona we are pushing to finally rid ourselves of this cancer and return to a more sane, logical, and Constitutional way of nabbing those who break traffic laws by passing SB1167, entitled “Photo radar; prohibition.” So far the bill has cleared every step and has picked up some notable endorsements, including:

Richard Mack, a former two-term Graham County Sheriff and current candidate for Navajo County Sheriff, has been calling Arizona Senate members in support of SB 1167. (citation needed)

Tomorrow, 23 February 2015, is the day that the Arizona Senate votes on SB1167. Please consider contacting your Senator and telling them to vote YEA for SB1167. If you do not know their contact information then click here. The email I sent simply said “Please represent me by voting YEA for SB1167.”

Other items that you may want to consider that make red light cameras Unconstitutional:

4th Amendment: The cameras scan the license plate and run the MVD data (your personal information) of every motorist in Arizona who passes by them, tracking people like cattle. This is an unwarranted search.

5th Amendment: Photo tickets demand a fine be paid, or you face the seizure of capital and possessions, without offering due process. It’s simply a rubber stamp by an employee of the company that is collecting the fine.

6th Amendment: There is no way to exercise your right to face your accuser, when the accuser is a machine.

7th Amendment: There is no option for a trial because they’ve taken that right away from you with photo tickets, even though the fines can go as high as $350 in the state of Arizona.

14th Amendment: Two sets of standards have been created for the same offenses. Red light and speed camera tickets are treated completely differently by the courts, which is a clear violation of your right to equal protection under the laws. And no machine can replace a sworn peace officer conducting traffic stops.

Below is a link with a collection of studies on whether or not red-light cameras increase public safety.

Arizona can do this. I reported in May 2010 about how Arizona got rid of the speed cameras on the highways – so this is totally possible, especially if we all call our politicians and tell them to support this bill!

Quotes:

"We are apt to shut our eyes against a painful truth... For my part, I am willing to know the whole truth; to know the worst; and to provide for it." - Patrick Henry

"Politicians and diapers both need to be changed, and for the same reason." - Anonymous

"Right is right, even if everyone is against it, and wrong is wrong, even if everyone is for it." - William Penn

"Naturally the common people don't want war; neither in Russia, nor in England, nor in America, nor in Germany. That is understood. But after all, it is the leaders of the country who determine policy, and it is always a simple matter to drag the people along, whether it is a democracy, or a fascist dictatorship, or a parliament, or a communist dictatorship. Voice or no voice, the people can always be brought to the bidding of the leaders. That is easy. All you have to do is to tell them they are being attacked, and denounce the pacifists for lack of patriotism and exposing the country to danger. It works the same in any country" - Hermann Goering

"I know that nothing good lives in me, that is, in my sinful nature. For I have the desire to do what is good, but I cannot carry it out. For what I do is not the good I want to do; no, the evil I do not want to do this I keep on doing." - Romans 7:18-19

"Twenty years from now you will be more disappointed by the things you didn't do than by the ones you did do. So throw off the bowlines. Sail away from the safe harbor. Catch the trade winds in your sails. Explore. Dream. Discover." - Mark Twain