unambiguous message from the European Data Protection Board (EDPB), which has published updated guidelines on the rules around online consent to process people’s data.

The regional cookie wall has been crumbling for some time, as we reported last year — when the Dutch DPA clarified its guidance to ban cookie walls.

As the EDPB puts it, “actions such as scrolling or swiping through a webpage or similar user activity will not under any circumstances satisfy the requirement of a clear and affirmative action.”
++++++++++++++++++++++
More on privacy on this IMS blog: https://blog.stcloudstate.edu/ims?s=privacy

Until now, technology that readily identifies everyone based on his or her face has been taboo because of its radical erosion of privacy. Tech companies capable of releasing such a tool have refrained from doing so; in 2011, Google’s chairman at the time said it was the one technology the company had held back because it could be used “in a very bad way.” Some large cities, including San Francisco, have barred police from using facial recognition technology.

But without public scrutiny, more than 600 law enforcement agencies have started using Clearview in the past year, according to the company, which declined to provide a list.

Facial recognition technology has always been controversial. It makes people nervous about Big Brother. It has a tendency to deliver false matches for certain groups, like people of color. And some facial recognition products used by the police — including Clearview’s — haven’t been vetted by independent experts.

Clearview deployed current and former Republican officials to approach police forces, offering free trials and annual licenses for as little as $2,000. Mr. Schwartz tapped his political connections to help make government officials aware of the tool, according to Mr. Ton-That.

“We have no data to suggest this tool is accurate,” said Clare Garvie, a researcher at Georgetown University’s Center on Privacy and Technology, who has studied the government’s use of facial recognition. “The larger the database, the larger the risk of misidentification because of the doppelgänger effect. They’re talking about a massive database of random people they’ve found on the internet.”

Part of the problem stems from a lack of oversight. There has been no real public input into adoption of Clearview’s software, and the company’s ability to safeguard data hasn’t been tested in practice. Clearview itself remained highly secretive until late 2019.

The software also appears to explicitly violate policies at Facebook and elsewhere against collecting users’ images en masse.

While there’s underlying code that could theoretically be used for augmented reality glasses that could identify people on the street, Ton-That said there were no plans for such a design.

Facial recognition bans are the wrong way to fight against modern surveillance. Focusing on one particular identification method misconstrues the nature of the surveillance society we’re in the process of building. Ubiquitous mass surveillance is increasingly the norm. In countries like China, a surveillance infrastructure is being built by the government for social control. In countries like the United States, it’s being built by corporations in order to influence our buying behavior, and is incidentally used by the government.

People can be identified at a distance by their heartbeat or by their gait, using a laser-based system. Cameras are so good that they can read fingerprints and iris patterns from meters away. And even without any of these technologies, we can always be identified because our smartphones broadcast unique numbers called MAC addresses.
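The MAC-address tracking described above is also why modern phone operating systems randomize the address they broadcast. A minimal sketch of the mechanism, assuming standard IEEE 802 addressing (the example addresses are invented): bit 1 of the first octet is the “locally administered” bit, which randomized MACs set, while factory-burned MACs leave it clear and so remain globally unique and trackable.

```python
# Sketch: why a broadcast MAC address identifies a device, and how to spot
# the "locally administered" bit that OS-level MAC randomization sets to
# resist exactly this kind of tracking. Example addresses are hypothetical.

def parse_mac(mac: str) -> list[int]:
    """Parse 'aa:bb:cc:dd:ee:ff' into six integer octets."""
    return [int(part, 16) for part in mac.split(":")]

def is_locally_administered(mac: str) -> bool:
    """Bit 1 of the first octet is set on randomized (locally administered) MACs."""
    first_octet = parse_mac(mac)[0]
    return bool(first_octet & 0b10)

# A factory-burned ("universally administered") MAC is globally unique, so any
# receiver that logs Wi-Fi probe requests can re-identify the same phone later.
print(is_locally_administered("00:1a:2b:3c:4d:5e"))  # False: stable, trackable
print(is_locally_administered("da:1a:2b:3c:4d:5e"))  # True: randomized address
```

Randomization helps against passive Wi-Fi logging, but once a phone joins a network or runs tracking apps, the other identifiers discussed in this post take over.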

The data broker industry is almost entirely unregulated; there’s only one law — passed in Vermont in 2018 — that requires data brokers to register and explain in broad terms what kind of data they collect.

This startup claims its deepfakes will protect your privacy

The upside for businesses is that this new, “anonymized” video no longer gives away the exact identity of a customer—which, Perry says, means companies using D-ID can “eliminate the need for consent” and analyze the footage for business and marketing purposes. A store might, for example, feed video of a happy-looking white woman to an algorithm that can surface the most effective ad for her in real time.

Three leading European privacy experts who spoke to MIT Technology Review voiced their concerns about D-ID’s technology and its intentions. All say that, in their opinion, D-ID actually violates GDPR.

Behind the One-Way Mirror: A Deep Dive Into the Technology of Corporate Surveillance
BY BENNETT CYPHERS DECEMBER 2, 2019

Corporations have built a hall of one-way mirrors: from the inside, you can see only apps, web pages, ads, and yourself reflected by social media. But in the shadows behind the glass, trackers quietly take notes on nearly everything you do. These trackers are not omniscient, but they are widespread and indiscriminate. The data they collect and derive is not perfect, but it is nevertheless extremely sensitive.

A data-snorting company can just make low bids to ensure it never wins while pocketing your data for nothing. This is a flaw in the implied deal where you trade data for benefits.

You can limit what you give away by blocking tracking cookies. Unfortunately, you can still be tracked by other techniques. These include web beacons, browser fingerprinting and behavioural data such as mouse movements, pauses and clicks, or sweeps and taps.
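Browser fingerprinting works because many individually weak signals, combined, are nearly unique. A minimal sketch of the idea (the attribute names and values are illustrative, not any specific tracker’s method): hash a canonical serialization of the attributes a browser exposes, and the same visitor produces the same identifier with no cookie involved.

```python
# Illustrative sketch of browser fingerprinting: many weak signals are
# combined and hashed into one identifier that survives cookie deletion.
# The attribute set below is a hypothetical example.
import hashlib
import json

def fingerprint(attributes: dict) -> str:
    """Hash a dict of browser attributes into a short, stable identifier."""
    canonical = json.dumps(attributes, sort_keys=True)  # order-independent
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

visitor = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "screen": "1920x1080x24",
    "timezone": "America/Chicago",
    "fonts": ["Arial", "DejaVu Sans", "Liberation Serif"],
    "canvas_hash": "3fb2a1",  # rendering quirks of this GPU/driver combo
}
print(fingerprint(visitor))  # same attributes -> same ID, no cookie needed
```

This is why clearing cookies alone doesn’t stop tracking: as long as the attribute set stays stable, the derived identifier does too.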

Tor, the original anti-surveillance browser, is based on an old, heavily modified version of Firefox.

Most other browsers are now, like Chrome, based on Google’s open source Chromium. Once enough web developers started coding for Chrome instead of for open standards, it became arduous and expensive to sustain alternative browser engines. Chromium-based browsers now include Opera, Vivaldi, Brave, the Epic Privacy Browser and next year’s new Microsoft Edge.

The American Library Association said in a statement Monday that the planned changes to Lynda.com, which are slated to happen by the end of September 2019, “would significantly impair library users’ privacy rights.” That same day, the California State Library recommended that its users discontinue Lynda.com when it fully merges with LinkedIn Learning if it institutes the changes.

The library groups argue that by requiring users to create LinkedIn accounts to watch Lynda videos, the company is going from following best practices about privacy and identity protection to potentially asking libraries to violate a range of ethics codes they have pledged to uphold. The ALA’s Library Bill of Rights, for instance, states that: “All people, regardless of origin, age, background, or views, possess a right to privacy and confidentiality in their library use. Libraries should advocate for, educate about, and protect people’s privacy, safeguarding all library use data, including personally identifiable information.”

The change will not impact most college and university libraries or corporate users of Lynda.com services, which will not be required to make their users set up a LinkedIn profile. LinkedIn officials say that’s because colleges and corporations have more robust ways to identify users than public libraries do.

The increasing availability of these kinds of tools raises concerns and questions for Doug Levin, founder of EdTech Strategies. Facial-recognition police tools have been decried as “staggeringly inaccurate.”

School web filters can also impact low-income families inequitably, he adds, especially those that use school-issued devices at home. #equity

As in the insurance industry, much of the impetus (and sales pitches) in the school and online safety market can be driven by fear. But voicing such concerns and red flags can also steer the stakeholders toward dialogue and collaboration.

In a recent article for The Verge titled “The Trauma Floor: The secret lives of Facebook moderators in America,” a dozen current and former employees of one of the company’s contractors, Cognizant, talked to Newton about the mental health costs of spending hour after hour monitoring graphic content.

Perhaps the most surprising finding from his investigation, the reporter said, was that the majority of the employees he talked to started to believe some of the conspiracy theories they reviewed.

FBI Warns Educators and Parents About Edtech’s Cybersecurity Risks

The FBI has released a public service announcement warning educators and parents that edtech can create cybersecurity risks for students.

In April 2017, security researchers found a flaw in Schoolzilla’s data configuration settings. And in May 2017, a hacker reportedly stole 77 million user accounts from Edmodo.

Amelia Vance, the director of the Education Privacy Project at the Future of Privacy Forum, writes in an email to EdSurge that the FBI likely wanted to make sure that as the new school year starts, parents and schools are aware of potential security risks. And while she thinks it’s “great” that the FBI is bringing more attention to this issue, she wishes the public service announcement had also addressed another crucial challenge.

“Schools across the country lack funding to provide and maintain adequate security,” she writes. “Now that the FBI has focused attention on these concerns, policymakers must step up and fund impactful security programs.”

According to Vance, a better approach might involve encouraging parents to have conversations with their children’s school about how it keeps student data safe.

Schools are using AI to track what students write on their computers

Under the Children’s Internet Protection Act (CIPA), any US school that receives federal funding is required to have an internet-safety policy. As school-issued tablets and Chromebook laptops become more commonplace, schools must install technological guardrails to keep their students safe. For some, this simply means blocking inappropriate websites. Others, however, have turned to software companies like Gaggle, Securly, and GoGuardian to surface potentially worrisome communications to school administrators.

Over 50% of teachers say their schools are one-to-one (the industry term for assigning every student a device of their own), according to a 2017 survey from Freckle Education.

But even in an age of student suicides and school shootings, when do security precautions start to infringe on students’ freedoms?

When the Gaggle algorithm surfaces a word or phrase that may be of concern—like a mention of drugs or signs of cyberbullying—the “incident” gets sent to human reviewers before being passed on to the school. Using AI, the software is able to process thousands of student tweets, posts, and status updates to look for signs of harm.
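The surfacing step described above can be sketched in a few lines. This is a minimal, generic keyword-matching illustration (the watch terms and field names are invented for the example, not Gaggle’s actual rules): the software only queues candidate “incidents”, and human reviewers decide what, if anything, reaches the school.

```python
# Minimal sketch of keyword-based incident surfacing, as described above.
# The watch terms and record fields are hypothetical examples; real systems
# use far richer models, but the review pipeline has the same shape.

WATCH_TERMS = {"hurt myself", "bring a gun"}  # invented illustrative terms

def surface_incidents(messages: list[str]) -> list[dict]:
    """Return messages matching a watch term, queued for human review."""
    incidents = []
    for text in messages:
        hits = [term for term in WATCH_TERMS if term in text.lower()]
        if hits:
            incidents.append({
                "text": text,
                "matched": hits,
                "status": "needs_human_review",  # not yet sent to the school
            })
    return incidents

queue = surface_incidents([
    "Study group at 4pm in the library",
    "I want to hurt myself",
])
print(len(queue))  # 1: only the flagged message goes to reviewers
```

Even in this toy form, the tension in the article is visible: every student message passes through the matcher, whether or not it is ever flagged.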

SMPs help normalize surveillance from a young age. In the wake of the Cambridge Analytica scandal at Facebook and other recent data breaches from companies like Equifax, we have the opportunity to teach kids the importance of protecting their online data.

In an age of increased school violence, bullying, and depression, schools have an obligation to protect their students. But the protection of kids’ personal information is also a matter of their safety.