A larger-scale international comparison shows the same pattern of hiring bias and discrimination that Canadian studies have found. Interestingly, Canada's numbers are worse than those of countries such as Germany (whose overall representation of minorities is poor).

Blind CVs may be less effective than longer qualification lists:

Canadian visible minorities are more likely to face discrimination in hiring than their American counterparts, according to a new survey of nine countries that found Canada is near the top for prejudice in hiring.

But the researchers behind the study have a theory that one way to address the problem may be as simple as requiring employers to request more detailed information from applicants at the start of the process.

In a study published in Sociological Science this week, Northwestern University sociologist Lincoln Quillian and colleagues analyzed the results of 97 “field experiments” in hiring, in which fictional job applicants were created to track how they fared in the job interview process.

In all, the researchers looked at more than 200,000 job applications, and broke down the results by race, to see whether minority candidates with similar qualifications to white ones got as many call-backs.
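The arithmetic behind these field experiments is straightforward: send matched applications for the two groups and compare call-back rates. A minimal sketch, using illustrative counts rather than the study's actual data:

```python
def callback_rate(callbacks, applications):
    """Fraction of applications that received a call-back."""
    return callbacks / applications

def discrimination_ratio(majority, minority):
    """Ratio of majority to minority call-back rates.

    Each argument is a (callbacks, applications) pair. A ratio of
    1.0 means equal treatment; 1.5 means the majority group was
    called back 50% more often than equally qualified minority
    applicants.
    """
    return callback_rate(*majority) / callback_rate(*minority)

# Illustrative numbers only (not from the study): 150 of 1,000
# white applicants called back vs. 100 of 1,000 equally qualified
# minority applicants.
ratio = discrimination_ratio((150, 1000), (100, 1000))
print(round(ratio, 2))  # 1.5
```

Cross-country comparisons like the one in the paper then reduce to comparing these ratios, which is why a single aggregate figure ("43 per cent more likely") can summarize dozens of separate experiments.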

To no one’s surprise, they didn’t. The data “shows nearly ubiquitous discrimination against racial and ethnic minority groups,” the researchers concluded in a paper published Monday ― but there are notable differences between results in the nine countries surveyed.

France and Sweden were found to have the highest likelihood of discrimination. A job applicant from a visible minority group in France is 43 per cent more likely to be discriminated against than a similar applicant in the United States. In Sweden, they are 30 per cent more likely to encounter prejudice in hiring.

Canada and the U.K. tied for third place, with minorities there 11 per cent more likely to face discrimination in hiring.

The study found that people of African, Asian and Middle Eastern descent all experience similar levels of discrimination.

“For white immigrants, by contrast, discrimination is lower and is often not statistically significant,” the study stated, adding that “there is no evidence of ‘reverse’ discrimination against white natives” in hiring.

Quillian points to “certain laws and institutional practices” to help explain why some countries experience far higher levels of discrimination. For instance, the U.S.’s laws on racial bias in the workplace likely contributed to its relatively positive score.

“No other countries require monitoring of the racial and ethnic makeup of ranks of employees as is required for large employers in the U.S.,” Quillian said in a statement. “For instance, large employers in the U.S. are required to report race and ethnicity of employees at different ranks to the Equal Employment Opportunity Commission.”

Meanwhile, in France, where discrimination is most common, employers aren’t allowed to inquire about the race of applicants. “The French do not measure race or ethnicity in any official ― or most unofficial ― capacities, which makes knowledge of racial and ethnic inequality in France very limited and makes it difficult to monitor hiring or promotion for discrimination,” Quillian said.

More detailed job applications?

And Quillian suggests that one solution to the problem may be to emulate how hiring is done in Germany, the country with the lowest incidence of discrimination. There, job applicants are typically required to submit very detailed applications that often include high school grades.

The idea is that having a very detailed picture of an applicant leaves less room for hiring managers to “fill in the blanks” with their own preconceptions about that person, which may include racial prejudices.

“We suspect that this is why we find low discrimination in Germany ― that having a lot of information at first application reduces the tendency to view minority applicants as less good or unqualified,” Quillian said.

The racist photo on the medical school yearbook page of Gov. Ralph Northam of Virginia has probably caused many physicians to re-examine their past.

We hope we are better today, but the research is not as encouraging as you might think: There is still a long way to go in how the medical field treats minority patients, especially African-Americans.

A systematic review published in Academic Emergency Medicine gathered all the research on physicians that measured implicit bias with the Implicit Association Test and included some assessment of clinical decision making. Most of the nine studies used vignettes to test what physicians would do in certain situations.

The majority of studies found an implicit preference for white patients, especially among white physicians. Two found a relationship between this bias and clinical decision making. One found that this bias was associated with a greater chance that white patients would be treated for myocardial infarction than African-American patients.

This study was published in 2017.

The Implicit Association Test has its flaws. Although its authors maintain that it measures external influences, it’s not clear how well it predicts individual behavior. Another, bigger systematic review of implicit bias in health care professionals was published in BMC Medical Ethics, also in 2017. The researchers gathered 42 studies, only 15 of which used the Implicit Association Test, and concluded that physicians are just like everyone else. Their biases are consistent with those of the general population.

The researchers also cautioned that these biases are likely to affect diagnosis and care.

A study published three years earlier in the Journal of the American Board of Family Medicine surveyed 543 internal medicine and family physicians who had been presented with vignettes of patients with severe osteoarthritis. The survey asked the doctors about the medical cooperativeness of the patients, and whether they would recommend a total knee replacement.

Even though the descriptions of the cases were identical except for the race of the patients (African-Americans and whites), participants reported that they believed the white patients were being more medically cooperative than the African-American ones. These beliefs did not translate into different treatment recommendations in this study, but they were clearly there.

In 2003, the Institute of Medicine released a landmark report on disparities in health care. The evidence for their existence was enormous. The research available at that time showed that even after controlling for socioeconomic factors, disparities remained.

There’s significant literature documenting that African-American patients are treated differently than white patients when it comes to cardiovascular procedures. There were differences in whether they received optimal care with respect to a cancer diagnosis and treatment. African-Americans were less likely to receive appropriate care when they were infected with H.I.V. They were also more likely to die from these illnesses even after adjusting for age, sex, insurance, education and the severity of the disease.

The report cited some systems-level factors that contributed to this problem. Good care may be unavailable in some poor neighborhoods, and easily obtained in others. Insurance access and coverage can also vary by race.

But the report’s authors spent much more time on issues at the level of care, in which some physicians treated patients differently based on their race.

Physicians sometimes had a harder time making accurate diagnoses because they seemed to be worse at reading the signals from minority patients, perhaps because of cultural or language barriers. Then there were beliefs that physicians already held about the behavior of minorities. You could call these stereotypes, like believing that minority patients wouldn’t comply with recommended changes.

Of course, there’s the issue of mistrust on the patient side. African-American patients have good reason to mistrust the health care system; the infamous Tuskegee Study is just one example.

In its report, the Institute of Medicine recommended strengthening health plans so that minorities were not disproportionately denied access. It urged that more underrepresented minorities be trained as health care professionals, and that more resources be directed toward enforcing civil rights laws.

In practice, it endorsed more evidence-based care across the board. It noted the importance of interpreters, community health workers, patient education programs and cross-cultural education for those who care for patients.

All of this has met with limited success.

In 2017, the Agency for Healthcare Research and Quality issued its 15th yearly report on health care quality and disparities, as called for by the medical institute in 2002. It found that while some disparities had gotten better, many remained. The most recent data available showed that 40 percent of the quality measures were still worse for blacks than whites. Other groups fared poorly as well: 20 percent of measures were worse for Asian-Americans, 30 percent for Native Americans, and one third for Pacific Islanders and Hispanics.

Of the 21 access measures tracked from 2000 to 2016, nine were improving. Nine were unchanged. Three were worsening.

It would be easy to look at a racist photo from the 1980s and conclude that it was a different time and that things have changed. Many things have not. We know that racism, explicit and implicit, was pervasive in medical care back then. Many studies show that it’s still pervasive today. The recommendations from the medical institute in 2003 still hold. Any fair assessment of the evidence suggests much work remains to be done.

Of note: these kinds of studies are important for exposing the bias inherent in some corporate facial recognition systems:

Over the last two years, Amazon has aggressively marketed its facial recognition technology to police departments and federal agencies as a service to help law enforcement identify suspects more quickly. It has done so as another tech giant, Microsoft, has called on Congress to regulate the technology, arguing that it is too risky for companies to oversee on their own.

Now a new study from researchers at the M.I.T. Media Lab has found that Amazon’s system, Rekognition, had much more difficulty in telling the gender of female faces and of darker-skinned faces in photos than similar services from IBM and Microsoft. The results raise questions about potential bias that could hamper Amazon’s drive to popularize the technology.

In the study, published Thursday, Rekognition made no errors in recognizing the gender of lighter-skinned men. But it misclassified women as men 19 percent of the time, the researchers said, and mistook darker-skinned women for men 31 percent of the time. Microsoft’s technology mistook darker-skinned women for men just 1.5 percent of the time.
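An intersectional audit of this kind boils down to computing error rates separately for each demographic subgroup rather than reporting a single aggregate accuracy figure, which can hide a badly failing subgroup. A minimal sketch with made-up records (not the study's data):

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute per-group misclassification rates.

    records: iterable of (group, true_label, predicted_label).
    Returns a dict mapping each group to its error rate.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if pred != truth:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Made-up example: overall accuracy is 84.5%, which sounds fine,
# but one subgroup bears all of the error.
records = (
    [("lighter-skinned men", "M", "M")] * 100
    + [("darker-skinned women", "F", "F")] * 69
    + [("darker-skinned women", "F", "M")] * 31
)
rates = error_rates_by_group(records)
print(rates)  # 0% error for one group, 31% for the other
```

Disaggregating this way is what let the researchers report a 0 percent error rate for lighter-skinned men alongside a 31 percent error rate for darker-skinned women from the same system.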

A study published a year ago found similar problems in the programs built by IBM, Microsoft and Megvii, an artificial intelligence company in China known as Face++. Those results set off an outcry that was amplified when a co-author of the study, Joy Buolamwini, posted YouTube videos showing the technology misclassifying famous African-American women, like Michelle Obama, as men.

The companies in last year’s report all reacted by quickly releasing more accurate technology. For the latest study, Ms. Buolamwini said, she sent a letter with some preliminary results to Amazon seven months ago. But she said that she hadn’t heard back from Amazon, and that when she and a co-author retested the company’s product a couple of months later, it had not improved.

Matt Wood, general manager of artificial intelligence at Amazon Web Services, said the researchers had examined facial analysis — a technology that can spot features such as mustaches or expressions such as smiles — and not facial recognition, a technology that can match faces in photos or video stills to identify individuals. Amazon markets both services.

“It’s not possible to draw a conclusion on the accuracy of facial recognition for any use case — including law enforcement — based on results obtained using facial analysis,” Dr. Wood said in a statement. He added that the researchers had not tested the latest version of Rekognition, which was updated in November.

Amazon said that in recent internal tests using an updated version of its service, the company found no difference in accuracy in classifying gender across all ethnicities.


With advancements in artificial intelligence, facial technologies — services that can be used to identify people in crowds, analyze their emotions, or detect their age and facial characteristics — are proliferating. Now, as companies begin to market these services more aggressively for uses like policing and vetting job candidates, they have emerged as a lightning rod in the debate about whether and how Congress should regulate powerful emerging technologies.

Proponents see facial recognition as an important advance in helping law enforcement agencies catch criminals and find missing children. Some police departments, and the Federal Bureau of Investigation, have tested Amazon’s product.

But civil liberties experts warn that it can also be used to secretly identify people — potentially chilling Americans’ ability to speak freely or simply go about their business anonymously in public.

Over the last year, Amazon has come under intense scrutiny from federal lawmakers, the American Civil Liberties Union, shareholders, employees and academic researchers for marketing Rekognition to law enforcement agencies. That is partly because, unlike Microsoft, IBM and other tech giants, Amazon has been less willing to publicly discuss concerns.

Amazon, citing customer confidentiality, has also declined to answer questions from federal lawmakers about which government agencies are using Rekognition or how they are using it. The company’s responses have further troubled some federal lawmakers.

“Not only do I want to see them address our concerns with the sense of urgency it deserves,” said Representative Jimmy Gomez, a California Democrat who has been investigating Amazon’s facial recognition practices. “But I also want to know if law enforcement is using it in ways that violate civil liberties, and what — if any — protections Amazon has built into the technology to protect the rights of our constituents.”

In a letter last month to Mr. Gomez, Amazon said Rekognition customers must abide by Amazon’s policies, which require them to comply with civil rights and other laws. But the company said that for privacy reasons it did not audit customers, giving it little insight into how its product is being used.

The study published last year reported that Microsoft had a perfect score in identifying the gender of lighter-skinned men in a photo database, but that it misclassified darker-skinned women as men about one in five times. IBM and Face++ had an even higher error rate, each misclassifying the gender of darker-skinned women about one in three times.

Ms. Buolamwini said she had developed her methodology with the idea of harnessing public pressure, and market competition, to push companies to fix biases in their software that could pose serious risks to people.

Ms. Buolamwini, who had done similar tests last year, conducted another round to learn whether industry practices had changed, she said.

“One of the things we were trying to explore with the paper was how to galvanize action,” Ms. Buolamwini said.

Immediately after the study came out last year, IBM published a blog post, “Mitigating Bias in A.I. Models,” citing Ms. Buolamwini’s study. In the post, Ruchir Puri, chief architect at IBM Watson, said IBM had been working for months to reduce bias in its facial recognition system. The company post included test results showing improvements, particularly in classifying the gender of darker-skinned women. Soon after, IBM released a new system that the company said had a tenfold decrease in error rates.

A few months later, Microsoft published its own post, titled “Microsoft improves facial recognition technology to perform well across all skin tones, genders.” In particular, the company said, it had significantly reduced the error rates for female and darker-skinned faces.

Ms. Buolamwini wanted to learn whether the study had changed overall industry practices. So she and a colleague, Deborah Raji, a college student who did an internship at the M.I.T. Media Lab last summer, conducted a new study.

In it, they retested the facial systems of IBM, Microsoft and Face++. They also tested the facial systems of two companies that were not included in the first study: Amazon and Kairos, a start-up in Florida.

The new study found that IBM, Microsoft and Face++ all improved their accuracy in identifying gender.

By contrast, the study reported, Amazon misclassified the gender of darker-skinned females 31 percent of the time, while Kairos had an error rate of 22.5 percent.

Melissa Doval, the chief executive of Kairos, said the company, inspired by Ms. Buolamwini’s work, released a more accurate algorithm in October.

Ms. Buolamwini said the results of her studies raised fundamental questions for society about whether facial technology should not be used in certain situations, such as job interviews, or in products, like drones or police body cameras.

Some federal lawmakers are voicing similar concerns.

“Technology like Amazon’s Rekognition should be used if and only if it is imbued with American values like the right to privacy and equal protection,” said Senator Edward J. Markey, a Massachusetts Democrat who has been investigating Amazon’s facial recognition practices. “I do not think that standard is currently being met.”

Has the self-styled “party of the Charter,” as Prime Minister Justin Trudeau still, curiously, calls the Liberals, actually even read the Charter? Have the Liberals, for that matter, paid much attention to what their own prime minister has been saying?

Canada’s impaired driving laws underwent a major overhaul last month, courtesy of the federal Liberal government. Some of the changes were necessary to recognize the changed reality of legalized cannabis. Others were simply intended to further reduce rates of impaired driving, by drug or alcohol, on our roads. This is a goal everyone shares — impaired driving is the leading criminal cause of death in Canada, way ahead of anything else. It’s a stubborn problem that governments are right to try to address, particularly a government that has recently legalized a whole new category of intoxicant.

But the new laws have given police significant new powers. In a free society, that’s never something to be done lightly. And in this particular case, what is being done is especially bizarre, because the Liberals are now insisting that such powers will not be abused even while insisting, in a slightly different context, that they inevitably will be.

One of the new powers given to police is the right, under certain circumstances, to demand a breath sample from someone who has not provided any sign that they might be impaired. Previously, a police officer needed at least some grounds to insist on such a test — the officer could have observed erratic driving before pulling the car over, for instance, or suspected a whiff of alcohol on a driver’s breath. Under the new law, a driver stopped by police for any lawful reason whatsoever (which is a very low bar) may be subjected to a breath test. Refusing to provide one is itself a criminal offence. Canadians effectively have no choice but to comply.

This is a meaningful expansion of police search powers, and it will absolutely be challenged — hopefully successfully — as a violation of Canadians’ fundamental protections against unreasonable searches. This is also an expansion of police authority that the Liberals were explicitly warned would result in abuses of power, most likely taking the form of racial discrimination. “There will be nothing random with this breath testing,” defence lawyer Michael Spratt told a parliamentary committee reviewing the bill before it became law. “Visible minorities are pulled over by the police more often for no reason. That’s what is going to happen here.” The Canadian Civil Liberties Association sounded a similar warning in its own filing, writing, “Experience has also unfortunately demonstrated that ‘random’ detention and search powers are too often exercised in a non-random manner that disproportionately targets African-Canadian, Indigenous, and other racial minorities.” It continued, “… the reality of racial profiling and the increased invasiveness that attends a mandatory alcohol screening means that the practice will adversely impact those disproportionately targeted by police for vehicular stops, in particular African-Canadian, Indigenous, and other racial minorities.”

The ratcheting-up of systemic racism might normally be an issue you would expect the gloriously woke federal Liberals to be falling all over themselves to fix, or at least to tweet piously about. That’s not the case here. The Liberals have readily acknowledged that they expect that this new law will be challenged in court, but say they will defend it, and are confident it will survive the challenges.

There’s reason enough to be alarmed at the expanded use of police powers, even if they weren’t bound to be targeted disproportionately at racial minorities. Random, groundless searches conducted at the whim of the authorities are manifestly a gross violation of Canadians’ fundamental rights. Now that the law is finally being used, unsettling stories of such mandatory searches are already starting to emerge: Global News reported this week that a Toronto-area man, who was not in the slightest bit impaired, was given a breath test after a police officer observed him returning empty beer bottles to a store for recycling, as if he’d knocked them all back on the way over in his car.

But the thing that makes this so especially strange is how the Liberals, not long ago, were embracing the very same arguments they now say concern them not at all. During the run-up to the legalization of cannabis, no less an authority on right-thinking Liberal values than Justin Trudeau himself explained that it was important that Canada legalize cannabis because of — wait for it — racial factors, that saw police applying marijuana laws with disproportion and discrimination against minorities. The prime minister even shared an anecdote about how his own late brother, Michel, after being arrested for possession of cannabis, was able to have that charge quietly taken care of. It helps to be a powerful white guy, the prime minister confessed, especially one as well-connected as the son of a prime minister. “That’s one of the fundamental unfairnesses of this current system is that it affects different communities in a different way,” he said in 2017, acknowledging that random screenings are rarely truly random, and that discretion is rarely equally applied.

The prime minister was right. So were Mr. Spratt and the CCLA. Beyond the basic offence to everyone’s rights constituted by such random and baseless searches, these expanded police powers will obviously be applied unevenly, and that is fundamentally unfair. Why was that so true for cannabis that the prime minister used it to justify why legalization was necessary, but of no concern whatsoever to the Liberals for impaired driving?

In North America, we adore hearing about the scholarship student becoming a CEO, or the person who immigrates with a few dollars to his name and ends up a mega-success. We are all about income mobility, and are happy to talk about it. What we do not talk about is “class,” maybe because it is so distasteful a topic as to be taboo. And yet class diversity exists, and arguably should be a consideration in building a balanced and effective workplace, and by extension a productive economy.

The issue of “class migrants” was bravely taken on by researchers Joan Williams, Marina Multhaup and Sky Mihaylo in a recent piece in the Harvard Business Review. Making the argument that those who started out in what they call “working class” backgrounds bring unique skills to the workplace, the authors assert that savvy companies should actively seek out those from diverse income backgrounds, or at least stop discriminating against them.

But wait, goes the argument, no one is likely to ditch a résumé from someone who started out with humble beginnings, because no one would know that they did, right? It is not like ignoring every applicant with a female-sounding name, for example. And while it is true that economic discrimination may not be as easy as ditching everyone named Jill in favour of all the Jacks, it tends to happen even if those who are discriminating do not realize it.

In a study done by researchers Lauren Rivera and András Tilcsik, fictitious résumés were sent to 316 offices at law firms across the United States, ostensibly from students looking for summer positions. All listed hobbies, although some were “upper class” (sailing, polo and classical music) while others were “lower class” (pick-up soccer, track and field and country music). The result? Sixteen per cent of the first group got a callback, compared to 1 per cent of the second.
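A gap that large across a few hundred offices is far beyond what chance would produce, which a standard two-proportion z-test makes concrete. The excerpt doesn't give the exact per-group counts, so the split below is illustrative, assuming the 316 offices were divided evenly between the two résumé types:

```python
import math

def two_proportion_z(success1, n1, success2, n2):
    """Two-proportion z-test statistic: is the gap between two
    call-back rates larger than sampling noise would explain?"""
    p1, p2 = success1 / n1, success2 / n2
    pooled = (success1 + success2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Illustrative split (assumed, not from the paper): 158 "upper
# class" resumes with 25 call-backs (~16%) vs. 158 "lower class"
# resumes with 2 call-backs (~1%).
z = two_proportion_z(25, 158, 2, 158)
print(z > 1.96)  # True -> significant at the 5% level
```

Even with these rough counts the statistic comes out above 4, well past the conventional 1.96 threshold, so the callback gap is not plausibly an artifact of small samples.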

The managers at the law firms may not have gone as far as thinking that they did not want to hang with the kind of people who ran track, but more that they felt that the polo players would be a good fit with their firms. In economics, this is known as the “signalling” hypothesis, whereby some characteristics are considered signals of other qualities even if the characteristics themselves are not being sought (unless there are billable hours for polo, which there might be).

From a job-seeker’s perspective, the best advice seems to be to just leave your hobbies off your résumé, or at least to lie about what they are (which is to say, if you like Lady Antebellum, for goodness sake keep it to yourself). Or, if you did not grow up with the right bona fides, do as legions of women have been told and just learn to play golf and talk about it as much as you can if you want to succeed in the corporate sphere.

The thing is, this is not just a job-seeker’s problem, but a company’s problem as well. For firms looking for the best talent, limiting the pool in any way seems kind of foolhardy. Leadership qualities are often correlated with having transcended income levels (one study of those in the U.S. Army, for example, found that class migrants were the most effective leaders). If you are trying to build the best workforce, you want to have access to the best workers, so figuring out how to attract those from a wide swathe of economic backgrounds should arguably be part of any policy on diversity.

In their Harvard Business Review article, the researchers assert that to do this you have to do more than just go to the top schools and look for diverse candidates when bringing in entry-level hires; you have to actually look at schools that might not be considered top-tier: a top student from one of those schools could be a much better hire than an average one from the usual choices. Perhaps more controversially, they also suggest going easy on referral hiring (which many find an effective way to get good candidates), since the friends and relatives of employees are likely to give you more of the same in terms of economic characteristics.

Looking at this from a wider perspective, we are not exactly serving the wider economy by making it difficult for people to be accepted and assimilate into the workforce in a way that makes the best use of their talents. We have long talked about the barriers that stop some from making their way through high school and getting into postsecondary institutions. More recently, there has been a recognition that those who come from a background where acquiring a postsecondary education is unusual are much more likely to drop out without finishing than those where it is the norm. In both cases, there has been a recognition that fully engaging people helps them, but also helps create a better labour force and a stronger economy.

And so perhaps we need to take it one step further and talk about the last thing we want to talk about.

Let’s get this out of the way first: There is no basis for the charge that President Trump leveled against Google this week — that the search engine, for political reasons, favored anti-Trump news outlets in its results. None.

Mr. Trump also claimed that Google advertised President Barack Obama’s State of the Union addresses on its home page but did not highlight his own. That, too, was false, as screenshots show that Google did link to Mr. Trump’s address this year.

But that concludes the “defense of Google” portion of this column. Because whether he knew it or not, Mr. Trump’s false charges crashed into a longstanding set of worries about Google, its biases and its power. When you get beyond the president’s claims, you come upon a set of uncomfortable facts — uncomfortable for Google and for society, because they highlight how in thrall we are to this single company, and how few checks we have against the many unseen ways it is influencing global discourse.

In particular, a raft of research suggests there is another kind of bias to worry about at Google. The naked partisan bias that Mr. Trump alleges is unlikely to occur, but there is a potential problem of hidden, pervasive and often unintended bias ― the sort that led Google to once return links to many pornographic pages for searches for “black girls,” that offered “angry” and “loud” as autocomplete suggestions for the phrase “why are black women so,” or that returned pictures of black people for searches of “gorilla.”

I culled these examples — which Google has apologized for and fixed, but variants of which keep popping up — from “Algorithms of Oppression: How Search Engines Reinforce Racism,” a book by Safiya U. Noble, a professor at the University of Southern California’s Annenberg School of Communication.

Dr. Noble argues that many people have the wrong idea about Google. We think of the search engine as a neutral oracle, as if the company somehow marshals computers and math to objectively sift truth from trash.

But Google is made by humans who have preferences, opinions and blind spots and who work within a corporate structure that has clear financial and political goals. What’s more, because Google’s systems are increasingly created by artificial intelligence tools that learn from real-world data, there’s a growing possibility that it will amplify the many biases found in society, even unbeknown to its creators.

Google says it is aware of the potential for certain kinds of bias in its search results, and that it has instituted efforts to prevent them. “What you have from us is an absolute commitment that we want to continually improve results and continually address these problems in an effective, scalable way,” said Pandu Nayak, who heads Google’s search ranking team. “We have not sat around ignoring these problems.”

For years, Dr. Noble and others who have researched hidden biases — as well as the many corporate critics of Google’s power, like the frequent antagonist Yelp — have tried to start a public discussion about how the search company influences speech and commerce online.

There’s a worry now that Mr. Trump’s incorrect charges could undermine such work. “I think Trump’s complaint undid a lot of good and sophisticated thought that was starting to work its way into public consciousness about these issues,” said Siva Vaidhyanathan, a professor of media studies at the University of Virginia who has studied Google and Facebook’s influence on society.

Dr. Noble suggested a more constructive conversation was the one “about one monopolistic platform controlling the information landscape.”

So, let’s have it.

Google’s most important decisions are secret

In the United States, about eight out of 10 web searches are conducted through Google; across Europe, South America and India, Google’s share is even higher. Google also owns other major communications platforms, among them YouTube and Gmail, and it makes the Android operating system and its app store. It is the world’s dominant internet advertising company, and through that business, it also shapes the market for digital news.

Google’s power alone is not damning. The important question is how it manages that power, and what checks we have on it. That’s where critics say it falls down.

Google’s influence on public discourse happens primarily through algorithms, chief among them the system that determines which results you see in its search engine. These algorithms are secret, which Google says is necessary because search is its golden goose (it does not want Microsoft’s Bing to know what makes Google so great) and because explaining the precise ways the algorithms work would leave them open to being manipulated.

But this initial secrecy creates a troubling opacity. Because search engines take into account the time, place and some personalized factors when you search, the results you get today will not necessarily match the results I get tomorrow. This makes it difficult for outsiders to investigate bias across Google’s results.

A lot of people made fun this week of the paucity of evidence that Mr. Trump put forward to support his claim. But researchers point out that if Google somehow went rogue and decided to throw an election to a favored candidate, it would only have to alter a small fraction of search results to do so. If the public did spot evidence of such an event, it would look thin and inconclusive, too.

“We really have to have a much more sophisticated sense of how to investigate and identify these claims,” said Frank Pasquale, a professor at the University of Maryland’s law school who has studied the role that algorithms play in society.

In a law review article published in 2010, Mr. Pasquale outlined a way for regulatory agencies like the Federal Trade Commission and the Federal Communications Commission to gain access to search data to monitor and investigate claims of bias. No one has taken up that idea. Facebook, which also shapes global discourse through secret algorithms, recently sketched out a plan to give academic researchers access to its data to investigate bias, among other issues.

Google has no similar program, but Dr. Nayak said the company often shares data with outside researchers. He also argued that Google’s results are less “personalized” than people think, suggesting that search biases, when they come up, will be easy to spot.

“All our work is out there in the open — anyone can evaluate it, including our critics,” he said.

Search biases mirror real-world ones

The kind of blanket, intentional bias Mr. Trump is claiming would necessarily involve many workers at Google. And Google is leaky; on hot-button issues — debates over diversity or whether to work with the military — politically minded employees have provided important information to the media. If there were even a rumor that Google’s search team was skewing search for political ends, we would likely see some evidence of such a conspiracy in the media.

That’s why, in the view of researchers who study the issue of algorithmic bias, the more pressing concern is not about Google’s deliberate bias against one or another major political party, but about the potential for bias against those who do not already hold power in society. These people — women, minorities and others who lack economic, social and political clout — fall into the blind spots of companies run by wealthy men in California.

It’s in these blind spots that we find the most problematic biases with Google, like in the way it once suggested a spelling correction for the search “English major who taught herself calculus” — the correct spelling, Google offered, was “English major who taught himself calculus.”

Why did it do that? Google’s explanation was not at all comforting: The phrase “taught himself calculus” is a lot more popular online than “taught herself calculus,” so Google’s computers assumed that it was correct. In other words, a longstanding structural bias in society was replicated on the web, which was reflected in Google’s algorithm, which then hung out live online for who knows how long, unknown to anyone at Google, subtly undermining every female English major who wanted to teach herself calculus.

Eventually, this error was fixed. But how many other such errors are hidden in Google? We have no idea.

Google says it understands these worries, and often addresses them. In 2016, some people noticed that it listed a Holocaust-denial site as a top result for the search “Did the Holocaust happen?” That started a large effort at the company to address hate speech and misinformation online. The effort, Dr. Nayak said, shows that “when we see real-world biases making results worse than they should be, we try to get to the heart of the problem.”

Google has escaped recent scrutiny

Yet it is not just these unintended biases that we should be worried about. Researchers point to other issues: Google’s algorithms favor recency and activity, which is why they are so often vulnerable to being manipulated in favor of misinformation and rumor in the aftermath of major news events. (Google says it is working on addressing misinformation.)

Some of Google’s rivals charge that the company favors its own properties in its search results over those of third-party sites — for instance, how it highlights Google’s local reviews instead of Yelp’s in response to local search queries.

Regulators in Europe have already fined Google for this sort of search bias. In 2012, the F.T.C.’s antitrust investigators found credible evidence of unfair search practices at Google. The F.T.C.’s commissioners, however, voted unanimously against bringing charges. Google denies any wrongdoing.

The danger for Google is that Mr. Trump’s charges, however misinformed, create an opening to discuss these legitimate issues.

On Thursday, Senator Orrin Hatch, Republican of Utah, called for the F.T.C. to reopen its Google investigation. There is likely more to come. For the last few years, Facebook has weathered much of society’s skepticism regarding big tech. Now, it may be Google’s time in the spotlight.

Now the Pew Research Center has released a new study that takes a step back. They wondered: How good are Americans at telling a factual statement from an opinion statement — if they don’t have to acknowledge the factual statement is true?

By factual, Pew meant an assertion that could be proven or disproven by evidence. All the factual statements used in the study were true, to keep the results more consistent, but respondents didn’t know that.

An opinion statement, in contrast, is based on values and beliefs of the speaker, and can’t be either proven or disproven.

Pew didn’t provide people with definitions of those terms — “we didn’t want to fully hold their hands,” Michael Barthel, one of the authors of the study, told NPR. “We did, at the end of the day, want respondents to make their own judgment calls.”

The study asked people to identify a statement as factual, “whether you think it’s accurate or not,” or opinion, “whether you agree with it or not.”

They found that most Americans could correctly identify more than three of the five statements in each category — only slightly better than you’d expect from random guessing.
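To see what “random guessing” would actually produce here, consider a quick back-of-envelope sketch. Purely for illustration, assume a guesser treats each of the five statements as an independent coin flip; the number of correct classifications then follows a binomial distribution:

```python
from math import comb

def prob_at_least(k, n=5, p=0.5):
    """Probability of classifying at least k of n statements correctly
    by pure guessing (each statement an independent coin flip)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(prob_at_least(3))  # 3 or more right by chance alone: 0.5
print(prob_at_least(4))  # 4 or more right: 0.1875
print(prob_at_least(5))  # a perfect score by guessing: 0.03125
```

So a guesser gets three or more right half the time, which is why “most Americans got more than three” is only a modest improvement on chance — while the roughly one-in-four rate of perfect scores Pew reports is far above the 3 per cent that guessing would produce.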

In general they found people were better at correctly identifying a factual statement if it aligned with or supported their political beliefs.

For instance, 89 percent of Democrats identified “President Barack Obama was born in the United States” as a factual statement, while only 63 percent of Republicans did the same.

Republicans, however, were more likely than Democrats to recognize that “Spending on Social Security, Medicare, and Medicaid make up the largest portion of the U.S. budget” is a factual statement — regardless of whether they thought it was accurate.

And opinions? Well, the opposite was true. Respondents who shared an opinion were more likely to call it a factual statement; people who disagreed with the opinion, more likely to accurately call it an opinion.

Pew was able to test that trend more precisely with a followup question: If someone called a statement an opinion, they asked if the respondent agreed or disagreed with that opinion.

If the opinion was actually an opinion, responses varied.

“If it wasn’t an opinion statement — it was a factual statement that they misclassified — they generally disagreed with it,” Barthel says.

Some groups of people were also more successful, in general, than others.

The “digitally savvy” and the politically aware were more likely to correctly identify each statement as opinion or factual. People with a lot of trust in the news media were also significantly more likely to get a perfect score: While just over a quarter of all adults got all five facts right, 39 percent of people who trust news swept that category.

But, interestingly, there was much less of an effect for people who said they were very interested in news. That population was slightly more likely to identify facts as facts — but less savvy than non-news-junkies at calling an opinion an opinion.

The results suggest that confirmation bias is not just a question of people rejecting facts as false — it can involve people rejecting facts as something that could be proven or disproven at all.

But Barthel saw a silver lining: In almost all cases, he said, a majority of people did classify a statement correctly — even with the trends revealing the influence of their beliefs.

“It does make a little bit of difference,” he said. “But normally, it doesn’t cross the line of making a majority of people get this wrong.”

If you’re anything like me, you probably didn’t have to think very hard before the names Albert Einstein and Isaac Newton popped up.

But what if I asked you to think of a female physicist? What about a black, female physicist?

You may have to think a bit harder about that. For years, mainstream accounts of history have largely ignored or forgotten the scientific contributions of women and people of color.

This is where Buffalo — a card game designed by Dartmouth College’s Tiltfactor Lab — comes in. The rules are simple. You start with two decks of cards. One deck contains adjectives like Chinese, tall or enigmatic; the other contains nouns like wizard or dancer.

Draw one card from each deck, and place them face up. And then all the players race to shout out a real person or fictional character who fits the description.

Hmm. If everyone is stumped, or “buffaloed,” you draw another noun and adjective pair and try again. When the decks run out, the player who has made the most matches wins.
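The rules above amount to a simple loop over card draws. Here is a minimal, hypothetical sketch — the tiny decks and the `ask_players` stand-in for the shouting match are invented for illustration, not taken from the actual game:

```python
import random

def play_buffalo(adjectives, nouns, ask_players, seed=None):
    """One game of Buffalo: draw an adjective/noun pair, let players race to
    name someone who fits; the player with the most matches wins.

    ask_players(adj, noun) should return the winning player's name, or None
    if everyone is stumped ("buffaloed") -- in which case the pair is set
    aside and a fresh one is drawn.
    """
    rng = random.Random(seed)
    adjectives, nouns = adjectives[:], nouns[:]  # don't mutate the caller's decks
    rng.shuffle(adjectives)
    rng.shuffle(nouns)
    scores = {}
    while adjectives and nouns:                  # play until either deck runs out
        adj, noun = adjectives.pop(), nouns.pop()
        winner = ask_players(adj, noun)
        if winner is not None:                   # someone made a match
            scores[winner] = scores.get(winner, 0) + 1
    return max(scores, key=scores.get) if scores else None

# Tiny illustrative decks and a stand-in for the shouting match.
adjectives = ["Chinese", "tall", "enigmatic", "female"]
nouns = ["wizard", "dancer", "physicist", "detective"]
always_alice = lambda adj, noun: "Alice"         # Alice wins every pair
print(play_buffalo(adjectives, nouns, always_alice))  # -> Alice
```

The interesting part of the real game, of course, is the human step this stub replaces: the scramble to recall a counter-stereotypical example.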

It’s the sort of game you’d pull out at dinner parties when the conversation lulls. But the game’s creators say it’s good for something else — reducing prejudice. By forcing players to think of people who buck stereotypes, Buffalo subliminally challenges those stereotypes.

“So it starts to work on a conscious level of reminding us that we don’t really know a lot of things we might want to know about the world around us,” explains Mary Flanagan, who leads Dartmouth College’s Tiltfactor Lab, which makes games designed for social change and studies their effects.

Buffalo might nudge us to get better acquainted with the work of female physicists, “but it also unconsciously starts to open up stereotypical patterns in the way we think,” Flanagan says.

In one of many tests she conducted, Flanagan rounded up about 200 college students and assigned half to play Buffalo. After one game, the Buffalo players were slightly more likely than their peers to strongly agree with statements like, “There is potential for good and evil in all of us,” and, “I can see myself fitting into many groups.”

Students who played Buffalo also scored better on a standard psychological test for tolerance. “After 20 minutes of gameplay, you’ve got some kind of measurable transformation with a player — I think that’s pretty incredible,” Flanagan says.

Buffalo isn’t Flanagan’s only bias-busting game. Tiltfactor makes two others called “Awkward Moment” and “Awkward Moment At Work.” They’re designed to reduce gender discrimination at school and in the workplace, respectively.

“I’m really wary of saying things like, ‘Games are going to save the world,’” Flanagan says. But she adds, “it’s a serious question to look at how a little game could try to address a massive, lived social problem that affects so many individuals.”

Maanvi Singh for NPR

Scientists have tried all sorts of quick-fix tactics to train away racism, sexism and homophobia. In one small study, researchers at Oxford University even looked into whether Propranolol, a drug that’s normally used to reduce blood pressure, could ease away racist attitudes. Unsurprisingly, it turns out that there is no panacea capable of curing bigotry.

There are, however, good reasons to get behind the idea that games or any other sort of entertainment can change the way we think.

“People aren’t excited about showing up to diversity trainings or listening to people lecture them. People don’t generally want to be told what to think,” explains Betsy Levy Paluck, a professor of psychology at Princeton University who studies how media can change attitudes and behaviors. “But people like entertainment. So, just on a pragmatic basis, that’s one reason to use it to teach.”

There’s a long history of using literature, music and TV shows to encourage social change. In a 2009 study, Paluck found that a radio soap opera helped bridge the divides in post-genocide Rwanda. “We know that various forms of pop culture and entertainment help reduce prejudice,” Paluck says. “In terms of other types of entertainment — there’s less research. We’re still finding out whether and how something like a game can help.”

Anthony Greenwald, a psychologist at the University of Washington who has dedicated his career to studying people’s deep-seated prejudices, is skeptical. He acknowledges that several well-intentioned researchers, Flanagan among them, have shown that a handful of interventions — including thought exercises, writing assignments and games — can indeed reduce prejudice for a short period of time. But, “these desired effects generally disappear rapidly. Very few studies have looked at the effects even as much as one day later.”

After all, how can 20 minutes of anything dislodge attitudes that society has pounded into our skulls over a lifetime?

Flanagan says her lab is still looking into that question, and hopes to conduct more studies in the future that track long-term effects. “We do know that people play games often. If it really is a good game, people will return to it. They’ll play it over and over again,” Flanagan says. Her philosophy: maybe a game a day can help us keep at least some of our prejudices away.

Black applicants may have a harder time finding an entry level service or retail job in Toronto than white applicants with a criminal record, a new study has found.

For a city that claims to be multicultural, the results were “shocking,” said Janelle Douthwright, the study’s author, who recently graduated with a Master of Arts in Criminology and Socio-Legal Studies from the University of Toronto.

Douthwright read a similar study from Milwaukee, Wis., during her undergraduate courses and she was “floored” by the findings.

“I thought there was no way this would be true here in Toronto,” she said.

She pursued her graduate studies to find out.

Douthwright created four fictional female applicants and submitted their resumes for entry level service and retail positions in Toronto over the summer.

She gave two of the applicants Black-sounding names — Khadija Nzeogwu and Tameeka Okwabi — and gave one a criminal record. The Black applicants also listed participation in a Black or African student association on their resumes.

She gave the two other applicants white-sounding names — Beth Elliot and Katie Foster — and also gave one of them a criminal record. The candidates with criminal records indicated in their cover letters that they had been convicted of summary offences, which are often less serious crimes.

Both Black applicants applied to the same 64 jobs and the white applicants applied to another 64 jobs.

Douthwright didn’t submit all four applications to the same jobs, she explained, because the two resumes with criminal records — and the two without — were nearly identical apart from the elements used to signal race, and submitting all four to the same posting might have aroused employers’ suspicions.

Though the resumes were nearly identical — each applicant had a high school education and experience working as a hostess and retail sales associate — the white applicant who didn’t have a criminal record received the most callbacks by far.

Of the 64 applications, the white applicant with no criminal record received 20 callbacks, a callback rate of 31.3 per cent. The white applicant with a criminal record received 12 callbacks, a callback rate of 18.8 per cent.

The Black applicant with no criminal record, meanwhile, received seven callbacks, a rate of 10.9 per cent. The Black applicant with a criminal record received just one callback out of 64 applications, a rate of 1.6 per cent.
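The percentages quoted above are simple callbacks-over-applications ratios; a few lines suffice to check the arithmetic and the article’s headline comparison:

```python
# Callback counts from Douthwright's audit (64 applications per profile).
APPLICATIONS = 64
callbacks = {
    "white, no record": 20,
    "white, record": 12,
    "Black, no record": 7,
    "Black, record": 1,
}

# Callback rate for each fictional applicant, as a percentage.
rates = {profile: 100 * n / APPLICATIONS for profile, n in callbacks.items()}
for profile, rate in rates.items():
    print(f"{profile}: {rate:.2f}% callback rate")

# The headline finding: a white applicant *with* a criminal record was called
# back more often than a Black applicant with a clean record.
print(rates["white, record"] > rates["Black, no record"])  # True
```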

Lorne Foster, a professor in the Department of Equity Studies at York University, said Douthwright’s study bolsters the thesis that “the workplace is discriminatory on a covert level.”

“We have a number of acts that protect us against discrimination and many people think that because of that strong infrastructure discrimination is gone,” he said.

That’s not the case. “Implicit” or unconscious bias is a persistent issue.

“All of these implicit biases are automatic, they’re ambivalent, they’re ambiguous, and they’re much more dangerous than the old-fashioned prejudices and discrimination that used to exist because they go undetected but they have an equally destructive impact on people’s lives,” Foster said.

“It’s an invisible and tasteless poison and it’s difficult to eliminate.”

Individual employers, he said, should take a proactive approach to ensure their hiring practices are inclusive or at least adhering to the human rights code by testing and challenging their processes to uncover any hidden prejudices.

He pointed to the Windsor Police Service, which shifted its hiring practices after discovering that its existing process was excluding women, as an example.

It was one of the first services to do a demographic scan of who works for it, said Foster, who worked on a human rights review of the service.

Through that process the service realized there was a “dearth” of female officers, and that the original recruitment process — which involved a number of physical tests “where there was all this male testosterone flying around” — was inhibiting women from attending the sessions.

In response they organized a series of targeted recruitment sessions and were able to hire five new women at the end of that process, Foster said.

“We all need to be vigilant about our thoughts about other people, our hidden biases and images of them,” he said.

Hadn’t thought of this aspect of bias in reference checks. When hiring in government, I was always conscious of the selection bias in the references submitted – people generally do not submit negative references! When asked if I was willing to be a reference, I would flag any issues that I would have to include in the reference:

As much as we’d like to think we’ve refined the hiring process over the years to carefully select the best candidate for the job, bias still creeps in.

Candidates who come from privileged backgrounds are more able to source impressive, well-connected referrers and this perpetuates the cycle of privilege. While the referrer’s reputation and personal clout make up one aspect of the recommendation, what they actually say – the content – completes the picture.

Research shows gender bias invades even the content of recommendations. In one study, female applicants for post-doctoral research positions in the field of geoscience were only half as likely as their male counterparts to receive excellent (as opposed to merely good) endorsements from their referees. Since it’s unlikely that, across the 1,200 recommendation letters analysed, the female candidates were genuinely less excellent than the male candidates, something else must be going on.

A result like this may be explained by the gender role conforming adjectives that are used to describe female versus male applicants. Women are more likely to be observed and described as “nurturing” and “helpful”, whereas men are attributed with stronger, more competence-based words like “confident” and “ambitious”. This can, in turn, lead to stronger recommendations for male candidates.

Worryingly, in another study similar patterns emerged in the way black versus white, and female versus male, medical students were described in performance evaluations. These were used as input to select residents.

In both cases the members of minority groups were described using less impressive words (like “competent” versus “exceptional”), a pattern that was observed even after controlling for licensing examination scores, an objective measure of competence.

Recommendations aren’t good predictors of performance

Let’s put the concerns about bias aside for a moment while we examine an even bigger question: are recommendations actually helpful, valid indicators of future job performance or are they based on outdated traditions that we keep enforcing?

Even back in the ’90s, researchers were trying to alert hiring managers to the ineffectiveness of reference letters as a selection tool, noting some major problems.

The first problem is leniency: referees are chosen by the candidate and tend to be overly positive. The second is too little knowledge of the applicant, as referees are unlikely to see all aspects of a prospective employee’s work and personal character.

Reliability is another problem. It turns out there is higher agreement between two letters written by the same referee for different candidates, than there is for two letters (written by two different referees) for the same candidate!

There is evidence that people behave in different ways when they are in different situations at work, which would reasonably lead to different recommendations from various referees. However, the fact that there is more consistency between what referees say about different candidates than between what different referees say about the same candidate remains a problem.

The alternatives to the referee

There are a few initiatives currently being used as alternatives to standard recruitment processes. One example is gamification – where candidates play spatial awareness or other job-relevant games to demonstrate their competence. Deloitte, for example, has teamed up with software developer Arctic Shores for a fresh take on hiring, in an attempt to move away from more traditional methods.

However, gamification is not without its flaws – these methods would certainly favour individuals who are more experienced with certain kinds of video games, and gamers are more likely to be male. So it’s a bit of a catch-22 for recruiters who are introducing bias through a process designed to try to eliminate bias.

If companies are serious about overcoming potential bias in recruitment and selection processes, they should consider addressing gender, racial, economic and other forms of inequality. One way to do this is to broaden the recruitment pool by making sure the language used in position descriptions and job ads is more inclusive. Employers can also indicate that flexible work options are available, and commit to choosing minority candidates when they are as qualified as the other applicants.

Another option is to increase the diversity of the selection committee to add some new perspectives to previously homogeneous committees. Diverse selectors are more likely to speak up about and consider the importance of hiring more diverse candidates.

Job seekers could even try running a letter of reference through software, such as Textio, that reports gender bias in pieces of text and provides gender-neutral alternatives. But just as crucial is the need for human resources departments to start looking for more accurate mechanisms to evaluate candidates’ competencies.
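Textio itself is a commercial product, but the underlying idea — flag gender-coded adjectives like those found in the geoscience and medical-school studies, and suggest neutral swaps — can be sketched naively. The word lists and replacement suggestions below are purely illustrative (real tools use large, empirically derived lexicons, not a handful of hand-picked adjectives):

```python
import re

# Illustrative lexicons based on the gender-coded adjectives discussed above.
FEMALE_CODED = {"nurturing", "helpful", "supportive", "compassionate"}
MALE_CODED = {"confident", "ambitious", "competitive", "assertive"}
NEUTRAL_SWAPS = {"nurturing": "attentive", "confident": "capable"}  # hypothetical

def audit_letter(text):
    """Flag gender-coded adjectives in a reference letter and suggest swaps."""
    words = re.findall(r"[a-z']+", text.lower())
    flagged = {
        "female_coded": sorted({w for w in words if w in FEMALE_CODED}),
        "male_coded": sorted({w for w in words if w in MALE_CODED}),
    }
    flagged["suggestions"] = {
        w: NEUTRAL_SWAPS[w]
        for w in flagged["female_coded"] + flagged["male_coded"]
        if w in NEUTRAL_SWAPS
    }
    return flagged

report = audit_letter("She is a nurturing and helpful colleague.")
print(report["female_coded"])  # ['helpful', 'nurturing']
print(report["suggestions"])   # {'nurturing': 'attentive'}
```

Even a toy version like this makes the studies’ pattern concrete: “nurturing and helpful” trips the female-coded list, while a letter calling the same person “confident and ambitious” would trip the male-coded one.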