A column in the New York Times by Claire Cain Miller asserts that algorithms reflect discriminatory attitudes: "There is a widespread belief that software and algorithms that rely on data are objective. But software is not free of human influence. Algorithms are written and maintained by people, and machine learning algorithms adjust what they do based on people’s behavior. As a result, say researchers in computer science, ethics and law, algorithms can reinforce human prejudices." http://www.nytimes.com/2015/07/10/upshot/when-algorithms-discriminate.html

Many companies -- most notably, Facebook -- have taken to making "suggestions" to users about what to see, read, and buy, using algorithms that combine what you've looked at with where you live, how old you are, who you associate with, what your job and income are, how you vote, and other personal factors. How much credence do you give to these "suggestions"? Do you notice discriminatory behaviors in Facebook, Twitter, job searches, Google, or other common social and search programs?

If the algorithms have been done well, they will closely align with my prejudices, and so I probably won't notice.


DACREE

July 13, 2015 09:49 AM

I have a friend, Leon Jackson, who constantly gets ads targeted at the black community even though he is not black. To some, his name sounds black, however, and he lives in a largely black neighborhood, so the ad-targeting algorithms seem to reflect that. That would seem to support the thesis; however, I can't help but think that saying the algorithm is discriminating shifts the focus away from the prejudice of its creator. The algorithm is neutral. It only does what you make it do.

I have never seen a targeted ad that appeals to me, not even from Amazon Books, so I think most of these algorithms are pretty unimaginative. But I am concerned that the immature and simple-minded might take these suggestions as if they came from a wise source telling them what they Should be reading, watching or buying, thus reinforcing their biases and interfering with the healthy desire to branch out, to explore other possible ways of thinking or being.

As @Dacree said, "The algorithm is neutral. It only does what you make it do."
That is the key thing. It's not the algorithm; it's the application/use of the algorithm.


Bryan Watson

July 14, 2015 04:12 PM

To reiterate the quoted section of the article:

"Algorithms are written and maintained by people, and machine learning algorithms adjust what they do based on people’s behavior. ... algorithms can reinforce human prejudices."

If a machine learning algorithm reflects the prejudices of its author, it is likely to infuse its decision-making with those same prejudices.

Example: if an online store's "recommendation" algorithm incorporates the belief that someone with no college education will have little discretionary income, that algorithm may withhold recommendations for high-priced toys based not on the user's actual income but on their education level.
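A minimal sketch of that proxy logic (the field names, prices, and budget cutoff here are all invented for illustration, not taken from any real recommender):

```python
# Hypothetical recommendation filter illustrating proxy discrimination:
# education level stands in for income, so the user's actual budget is
# never consulted -- the proxy decides for them.

def recommend(products, user):
    # Biased assumption baked into the algorithm: no degree => low budget.
    assumed_budget = 500 if user["college_degree"] else 50
    return [p for p in products if p["price"] <= assumed_budget]

products = [
    {"name": "toy drone", "price": 300},
    {"name": "puzzle", "price": 20},
]
# A well-off user who happens to have no degree...
wealthy_no_degree = {"college_degree": False, "actual_income": 95000}

# ...is shown only the cheap item, despite their real income.
print([p["name"] for p in recommend(products, wealthy_no_degree)])  # ['puzzle']
```

The discrimination is invisible from the outside: the code contains no protected attribute, only a correlated stand-in.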

Example: if a real-estate algorithm is created to promote ethnic homogeneity in a community, then it will not disclose properties for sale to a user whose ethnicity does not match the surrounding neighbors.

These are egregious examples, but there are likely other smaller and more subtle instances of non-objective but data-driven, discriminatory practices.

I avoid responding to all online advertising or any other "suggestions," as I automatically assume that the suggested reading will ultimately lead me to an ad or some tracking site. The more in-your-face the advertising is, the more likely I am to avoid it. But it must work, because lots of money is spent on it.


VoIP Desk

July 16, 2015 11:48 AM

These days people are going out of their way to manufacture discrimination. Get a life. These are companies that are trying to make money and are going to make generalizations in the way they target the market. This equates to accusing a high-end furniture store, jewelry store, fine-dining restaurant, etc. of discrimination because they choose to open their retail location in the affluent suburbs as opposed to the inner city. They want to be close to the majority of their clientele. Of course there are people from the inner city who could afford to shop/eat there, just as there are people in the suburbs who cannot, but the majority of their clientele would come from the affluent suburbs.

I cannot believe that someone wasted their time to research and write this article. I cannot believe that I am wasting my own time responding to this, but every time you turn around these days people are looking for new ways to feel discriminated against.

GET A LIFE


VoIP Desk

July 16, 2015 11:56 AM

You can't swing a dead cat these days without hitting someone trying to manufacture discrimination or whining that they're offended because someone ordered their coffee "Black".

I don't know whether this is algorithm-related or not, but a couple of weeks ago I was making an on-line inquiry about BMWs (i.e. cars), but I mistakenly typed BBW and found myself on a website that appeared to be a dating site for large women. (I gather that BBW means 'big, beautiful women'). Since then I have been getting endless spam about hookup sites for BBWs. Is anyone aware of an algorithm for getting rid of this spam?


VoIP Desk

July 16, 2015 12:09 PM

Every ad I see is for things I already own and therefore do not need. It seems the MBAs fueling this nonsense have more dollars than sense.

As data analysis, algorithms should be neutral (i.e., objective). Unfortunately, owing to the practice of "getting the right conclusions," that should has been dissolved by data-massaging. The algorithms a company designs or uses usually lean toward the company's own interests and orientation.

Sadly, there are algorithms that reflect the will rather than the reality.

Below is the beginning of Dr. James L. Mills' article on "data torturing"; apply its ideas to algorithms. That is one reason the conclusions backed by Google, Facebook & Co. are not trusted.

"If you torture your data long enough, they will tell you whatever you want to hear" has become a popular observation in our office.
In plain English, this means that study data, if manipulated in enough different ways, can be made to prove whatever the investigator wants to prove.
Unfortunately, this is generally true. Because every investigator wants to present results in the most exciting way, we all look for the most dramatic, positive findings in our data.
When this process goes beyond reasonable interpretation of the facts, it becomes data torturing. The unfortunate result of torturing data is the dissemination of incorrect information to the research community and to patients.
It is impossible to tell how widespread data torturing is.
Like other forms of torture, it leaves no incriminating marks when done skillfully, and like other forms of torture, it may be difficult to prove even when there is incriminating evidence."
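Mills' point translates directly to code. Here is a toy sketch (the group count, sample size, and seed are arbitrary choices, not from his article) showing how pure noise, sliced into enough subgroups, always yields a "dramatic, positive finding" somewhere:

```python
# "Data torturing" in miniature: the outcomes are pure coin flips -- there
# is no real effect in any group -- yet reporting only the best-looking
# slice makes noise look like a discovery.
import random

random.seed(0)
NUM_GROUPS, GROUP_SIZE = 100, 20

# Every observation is an independent fair coin flip.
data = [(group, random.random() < 0.5)
        for group in range(NUM_GROUPS)
        for _ in range(GROUP_SIZE)]

best_group, best_rate = None, 0.0
for g in range(NUM_GROUPS):
    outcomes = [hit for grp, hit in data if grp == g]
    rate = sum(outcomes) / len(outcomes)
    if rate > best_rate:
        best_group, best_rate = g, rate

# With 100 slices of 20 flips, some slice "responds" well above 50%
# by chance alone.
print(f"group {best_group} 'responds' at {best_rate:.0%}")
```

Correcting for this (e.g., by penalizing for the number of comparisons made) is exactly the discipline that data torturing skips.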


Bryan Watson

June 26, 2016 10:59 AM

Another NYTimes opinion column revisits this question. This time, the author is a "principal researcher at Microsoft" who writes:

"[There are] the very real problems with artificial intelligence today, which may already be exacerbating inequality in the workplace, at home and in our legal and judicial systems. Sexism, racism and other forms of discrimination are being built into the machine-learning algorithms that underlie the technology behind many “intelligent” systems that shape how we are categorized and advertised to."

She cites the very real and practical problem that showed up when Amazon implemented same-day delivery service but did not offer it in predominantly non-white ZIP codes of some major metropolitan areas.

Algorithms are the secret sauce of the Information Age. I have no doubt that some are discriminatory, as the owners or users who control them may have some personal biases.

The recent case with Facebook, in their prioritization and grouping of news articles, is an example of a firm using algorithms to determine what's newsworthy for their subscribers. But, this isn't any different than the TV networks and newspapers deciding what is breaking news or front-page news. I am still free to choose which news service I use, and sometimes can even control the algorithm, applying my own personal biases in the process.

Most people associate algorithms on the Internet with sales and marketing, which I call noisy algorithms. They are obvious, often unwanted, not always accurate regarding your tastes, and usually ignored. I'm more concerned about the noiseless algorithms, which gently guide you in your choices without you knowing it.


Bryan Watson

July 03, 2016 03:16 PM

Mark, I think there's an interesting phrase we often hear and use: "it's no different than...". As we move from people to computers making decisions, we understand that the computer (the algorithm) will reflect people -- at least that's the intent or the hope.

But decision-making is wildly complex and frequently irrational (not ""crazy"" but not strictly mathematical, not strictly logical and not strictly consistent). For an algorithm to reflect people's decision-making, it needs to incorporate these same traits, and that is a challenge (thus far) well beyond most algorithm designers' capabilities. We are calling upon algorithms to make moral choices, and that's something we humans haven't quite figured out how to do reliably.

This is something that automated vehicles struggle with -- when human drivers behave irrationally, what choices should the automated vehicle make? See, for example, this article:
http://www.sciencefriday.com/articles/who-should-your-autonomous-car-save/

Bryan,
I agree completely with your statement about "calling on algorithms to make moral choices". I think that's where my concern is. I am a big fan of Artificial Intelligence, but as long as I am able, I want to make those choices for myself.
Thanks!

I think the "moral" aspect only serves to obfuscate the issue. "I want to make those choices" is really not an option if you want to automate processes. You either use autonomous driving and let the "car" make the decisions, or not. You automate your order process and limit certain risky credit/delivery options to new/unknown customers, or you do a manual process. But really, what would a moral decision be based on other than credit score, maybe history if you're on a platform like eBay, billing/delivery address and address scoring, and similar? How much you like these people's phone voice, or their style of writing in the order note?

It is hard to create an automated process without background criteria to base that process on. So the creation of algorithms that provide decision-based solutions will have to rest on data extrapolated from the area where the decisions are being made. If the algorithm is a law-enforcement algorithm, then its decisions will be based on criminal statistics, which may produce results that seem discriminatory -- but how does one create a process of that magnitude without statistical data? Human input and observation may also tend to inject discriminatory bias. Why? Because we are humans, and humans have biases. Statistical data is cold (e.g., credit reports carry unbiased information but leave out situational factors that affect it). Basically, if a human creates the algorithm using a combination of statistical factors, there might be less chance of emotional or discriminatory input, but the results may still prove to discriminate based on the survey group/information.
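The risk in training on "cold" arrest statistics can be sketched in a few lines. In this toy model (every number is invented), two neighborhoods have identical true crime rates, but one starts with more recorded arrests; a seemingly neutral rule that allocates patrols in proportion to recorded arrests keeps the initial skew alive, because patrols generate the very records that justify them:

```python
# Feedback loop: the algorithm's training data is produced by the
# algorithm's own past decisions, so the historical skew self-confirms.

# Hypothetical starting statistics: equal true crime, unequal past policing.
recorded_arrests = {"neighborhood_A": 30, "neighborhood_B": 10}
true_crime_rate = {"neighborhood_A": 0.1, "neighborhood_B": 0.1}  # identical

for year in range(5):
    total = sum(recorded_arrests.values())
    for hood in list(recorded_arrests):
        # "Neutral" rule: 100 patrols split in proportion to recorded arrests.
        patrols = 100 * recorded_arrests[hood] / total
        # More patrols in a neighborhood means more crime recorded there.
        recorded_arrests[hood] += int(patrols * true_crime_rate[hood])

# Despite identical true crime rates, neighborhood_A's recorded lead persists.
print(recorded_arrests)
```

Nothing in the loop references any group attribute; the bias lives entirely in the historical data the rule consumes.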

Good points, both SSCHURIG and AAJBKAIIT!
Regarding your point, SSCHURIG, about not having choices if you want to automate processes, I agree. But, I wasn't really thinking of a fully automated process. I was thinking more about letting an algorithm lead you along a path which potentially has some built-in biases but may pause for a human decision before it proceeds further. This probably wouldn't apply to an autonomous vehicle in most situations, but could apply to some business decisions.
Regarding your point, AAJBKAIIT, about depending more on statistical data based on facts, I agree somewhat. You know the old adage that you can make data support any view or story you might have. I deal with this daily, where direction from a senior manager is to help them spin their story with facts and data, which may be different from the story or position desired by someone else. It's not falsifying the data, it's more like emphasizing certain points over others and choosing which data to use and how. When you also add Data Visualization, there are many more ways to emphasize certain points and use psychology to help spin your story. It's just being a good Spin Doctor, especially in the political arena today.
I use a lot of BI and Analytics in what I do, and I have two solutions for the situations I described:

A firm called TAMR has developed some unique BI algorithms which have a lot of AI routines built in. But they also incorporate human input at certain critical junctures, which is then leveraged to arrive at the best solution for you. Someone else will potentially make other decisions which are right for them.

I don't like to be put in a situation where I am telling a radically different story with the same factual data. But as long as both parties are dependent on me to help tell their stories, it's unavoidable. My solution here is to put self-service BI tools into the hands of executives, so they can analyze the data themselves and form their own conclusions. Unfortunately, many executives today are of an older generation and are much less likely to apply this technology directly, depending on others and their biases to inform them. But most of our newer MBAs ask for the tools in their first week on the job, so there's hope!