We should not deny the horror of January 6. But, in its aftermath, rather than uncritically reaffirm French national identity and wring our hands about Muslims’ refusal to integrate, we should use this moment of reflection to understand the various ways in which Muslims are consistently excluded from the nation, and to reassess the narrow bases of what it means to be French.

People have a mental model of shopping that is based on experiences from brick-and-mortar stores. We intuitively understand how this process works: all available products are displayed around the store and the prices are clearly marked. Many stores offer deals via coupons, membership cards, or discounts for special classes of people such as students or AARP members. Typically, everyone is aware of these discounts and has an equal opportunity to use them.

Many people assume this same mental model of shopping applies just as well to e-commerce websites. However, as we are discovering, this is not the case.

In 2010, shoppers realized that Amazon was charging different users different prices for the same DVD, a practice known as price discrimination or price differentiation. In 2012, the Wall Street Journal revealed that Staples was charging users different prices based on their geographic location. The paper also reported that travel retailer Orbitz was showing more expensive hotels to users browsing from Mac computers, a practice known as price steering.

By comparing the search results shown to automated control accounts with those shown to real users, we identified several cases of personalization. We saw price steering from Sears, with the order of search results varying from user to user. We saw price discrimination from Home Depot, Sears, Cheaptickets, Orbitz, Priceline, Expedia, and Travelocity, with product prices varying from user to user.
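The comparison methodology described above can be sketched in a few lines. This is a minimal illustration, not the researchers' actual tooling: assume we have captured, for the same query, the ordered list of (product, price) results shown to a control account and to a real user. Steering then shows up as the same products in a different order; discrimination shows up as different prices for the same product. All product names and prices below are hypothetical.

```python
def detect_personalization(control, user):
    """Compare search results from a control account and a real user.

    Each argument is a list of (product_id, price) tuples in the
    order the site displayed them. Flags price steering (same
    products, different order) and price discrimination (same
    product, different price).
    """
    control_order = [pid for pid, _ in control]
    user_order = [pid for pid, _ in user]

    control_prices = dict(control)
    user_prices = dict(user)
    shared = set(control_prices) & set(user_prices)

    # Steering: identical product set, but a different ranking.
    steering = (set(control_order) == set(user_order)
                and control_order != user_order)
    # Discrimination: any shared product whose price differs.
    discrimination = {pid: (control_prices[pid], user_prices[pid])
                      for pid in shared
                      if control_prices[pid] != user_prices[pid]}
    return {"steering": steering, "discrimination": discrimination}

# Hypothetical results for the same hotel search from two accounts:
control = [("econo-inn", 79.0), ("midtown", 129.0), ("grand", 249.0)]
user    = [("grand", 249.0), ("midtown", 139.0), ("econo-inn", 79.0)]

report = detect_personalization(control, user)
print(report["steering"])        # order differs, so steering is flagged
print(report["discrimination"])  # "midtown" is priced differently
```

Real measurement studies control for noise (e.g., inventory churn and A/B tests) by comparing multiple identical control accounts first; this sketch omits that step.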

Data brokers have created lists of victims of sexual assault, and lists of people with sexually transmitted diseases. Lists of people who have Alzheimer’s, dementia and AIDS. Lists of the impotent and the depressed.

There are lists of “impulse buyers.” Lists of suckers: gullible consumers who have shown that they are susceptible to “vulnerability-based marketing.” And lists of those deemed commercially undesirable because they live in or near trailer parks or nursing homes. Not to mention lists of people who have been accused of wrongdoing, even if they were not charged or convicted.

Typically sold at a few cents per name, the lists don’t have to be particularly reliable to attract eager buyers — mostly marketers, but also, increasingly, financial institutions vetting customers to guard against fraud, and employers screening potential hires.

There are three problems with these lists.

+ First, they are often inaccurate.
+ Second, even when the information is accurate, many of the lists have no business being in the hands of retailers, bosses or banks. Having a medical condition, or having been a victim of a crime, is simply not relevant to most employment or credit decisions.

+ Third, people aren’t told they are on these lists, so they have no opportunity to correct bad information.

It’s unrealistic to expect individuals to inquire, broker by broker, about their files. Instead, we need to require brokers to make targeted disclosures to consumers. Uncovering problems in Big Data (or decision models based on that data) should not be a burden we expect individuals to solve on their own.

Employees and potential employees are now under intense scrutiny, in the hope that statistical analysis will reveal those with desirable workplace traits. Factors such as choice of web browser, or when and where someone eats lunch, could affect their chances.

This process runs up against anti-discrimination laws in countries like Australia, where employers can't base their decisions on attributes such as race, sex, disability, age, and marital status.
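To see why such decisions can collide with anti-discrimination law even when no protected attribute is used directly, consider a toy scoring model. Everything here is hypothetical (the features, the weights, the very idea that any vendor scores this way): the point is only that a model built from innocuous-looking signals can act as a hidden proxy, and neither the employer nor the candidate can easily tell.

```python
# Hypothetical "talent analytics" score built only from
# innocuous-looking signals. No protected attribute appears in the
# model, yet any feature may correlate with one (age, disability,
# family status), so the decision can discriminate indirectly.
FEATURE_WEIGHTS = {
    "uses_non_default_browser": 0.4,   # claimed proxy for initiative
    "eats_lunch_at_desk": 0.3,
    "applied_after_midnight": -0.2,
}

def talent_score(candidate):
    """Sum the weights of whichever features the candidate exhibits."""
    return sum(weight for feature, weight in FEATURE_WEIGHTS.items()
               if candidate.get(feature))

candidate = {"uses_non_default_browser": True,
             "applied_after_midnight": True}
print(talent_score(candidate))
```

A rejected candidate sees only the outcome, not the features or weights, which is exactly the opacity Burdon and Harpur describe: there is no way to allege, let alone prove, that a score like this disadvantaged a protected group.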

"Burdon and Harpur argue that it's almost impossible for these laws to be applied when the decisions are made on the basis of talent analytics, because it's usually almost impossible for either data users (employers), or data subjects, to know even what data is being used to make decisions," Greenleaf said.

"This is very important if we're to preserve the hard-won social policies represented by anti-discrimination laws, and prevent the hidden heuristics and emerging employment practices starting to mean that 'data is destiny'."

Big data's approach of collecting as much data as possible, even if it seems irrelevant, on the chance it reveals a previously unknown correlation, also collides with the "data minimisation" principle of data privacy laws, which says you should collect only the data you need to do the job.

I'm pretty sure the people who create advertisements are robots, because I'd rather not think about the type of person you'd have to be to decide that any of these were good ideas.

Don't go thinking sexist advertising only affects women: At 1:37, you'll find out how it's changed the way men see themselves, and at 2:49, there's an eye-opening experiment that you don't want to miss.

If you're on the wrong side of the class divide, recent advances in retail tech will make for depressing reading. For example, some years ago Britain's class system was automated. Now, as you shop, machines can discriminate against you far more efficiently.

When you phone a big retailer, a machine decides what level of service you get, depending on your status. The software that makes this social judgement switches all incoming phone calls. The system identifies your phone number and cross-references that with its customer database. It then discovers, having looked up your address, what class of person you are and will route your call according to the class of service it thinks you merit.
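The routing logic described above is simple enough to sketch. This is a hypothetical reconstruction, not any vendor's actual system: the phone numbers, postcodes, spend thresholds, and tier names are all invented for illustration.

```python
# Hypothetical customer database keyed by caller ID.
CUSTOMER_DB = {
    "+44 20 7946 0000": {"postcode": "SW1A 1AA", "spend": 12000},
    "+44 161 496 0000": {"postcode": "M1 1AA", "spend": 150},
}

AFFLUENT_PREFIXES = ("SW1", "W1", "EC1")   # invented postcode proxy

def classify(postcode, spend):
    """Assign a 'class of service' from address and spending history."""
    if spend > 5000 or postcode.startswith(AFFLUENT_PREFIXES):
        return "premium"
    if spend > 500:
        return "standard"
    return "basic"

def route_call(caller_id):
    """Look up the caller and route to the queue their 'class' merits."""
    record = CUSTOMER_DB.get(caller_id)
    if record is None:
        return "basic-queue"               # unknown callers wait longest
    tier = classify(record["postcode"], record["spend"])
    return f"{tier}-queue"

print(route_call("+44 20 7946 0000"))      # premium-queue
print(route_call("+44 161 496 0000"))      # basic-queue
```

Note that the postcode check is doing the socially loaded work: two callers with identical spending histories can be routed to different queues purely because of where they live.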

You don’t see male heroes wearing these costumes or posing like this. Outside of statistical outliers like Namor, their costumes tend to have full coverage, and when they pose, it’s to inspire fear, not boners.