Doubt-raising Phrases, Public Chauvinism, and a Cautionary Tale of Garbage In, Garbage Out

5 Ally Actions | Oct 12, 2018

Photo courtesy of Pixabay

Each week, we share five simple actions to create a more inclusive workplace and become a better ally.

1. Give wholehearted recommendations

Recommendations come in many forms: formal letters, social media endorsements, verbal reference checks, and casual back-channel conversations. When giving any kind of recommendation, we should show complete confidence. No hedging (“she might be good”), faint praise (“she’ll do okay”), or other phrases that undermine (“she needs only minimal guidance”).

Take a minute to think about how you would have pushed back if you had been in the audience. Unfortunately, you might have the opportunity to use those lines sometime in the future.

3. If you design software, allow every gender to select every title

Seems obvious to us, but clearly not everyone thinks this way. Last week, we learned that the Royal Jordanian Airlines app prevented a female user from selecting “Dr.” as her title. And the error message was crystal clear: “Title DR. is not valid for this gender. Please change your Title option.”

They’ve since fixed their app, but what the heck were they thinking?
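The fix is straightforward: validate each form field on its own, and never cross-check a title against a gender. A minimal sketch (hypothetical function and field names, illustrative title and gender lists, not the airline's actual code):

```python
# Titles and gender are independent fields; any combination is valid.
TITLES = {"Mr.", "Ms.", "Mrs.", "Mx.", "Dr.", "Prof."}
GENDERS = {"female", "male", "non-binary", "prefer not to say"}

def validate_passenger(title: str, gender: str) -> list[str]:
    """Validate each field on its own; never cross-check title against gender."""
    errors = []
    if title not in TITLES:
        errors.append(f"Unknown title: {title}")
    if gender not in GENDERS:
        errors.append(f"Unknown gender: {gender}")
    return errors

# A female doctor passes validation, as she should:
print(validate_passenger("Dr.", "female"))  # []
```

The design point: keeping the two checks independent means no combination of valid inputs can ever be rejected.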

4. Ask about “pronouns.” But not “preferred pronouns”

We shouldn’t presume to know someone’s pronouns based on their physical appearance. Many non-binary people use “they” and “them.” Trans people decide when others should start addressing them using new pronouns, and what those pronouns should be. And some cisgender people forsake the traditional “he” or “she” pronouns for “they” or something else.

Not sure what pronouns to use? Try introducing yourself with a simple, “I use she and her. What pronouns do you use?” It’s respectful, and their answer provides clear guidance. (Thanks to Dom Brassey for recommending this approach.)

And please don’t ask someone for their preferred pronouns. That makes it sound like using their pronouns is optional, which it’s not.

5. Don’t train an AI with biased input

As Buzzfeed’s Ryan Broderick tweeted, “Amazon built an AI to rate job applications. It analyzed 10 years of (male dominated) hires. Then it started penalizing resumes that included the word ‘women’s,’ downgrading graduates from all women’s colleges, and highly rating aggressive language.” Whoops.
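This failure mode is easy to reproduce in miniature. Here's a toy sketch (invented data and a deliberately naive scoring model, not Amazon's system) showing how a model trained on biased hiring history faithfully learns to penalize the word “women's”:

```python
from collections import Counter

# Toy "historical hiring data": (resume tokens, hired?) pairs.
# The history is biased: resumes mentioning "women's" were never hired.
history = [
    (["chess", "captain", "python"], True),
    (["rugby", "captain", "java"], True),
    (["women's", "chess", "captain", "python"], False),
    (["women's", "rugby", "captain", "java"], False),
]

def train_token_scores(data):
    """Naive per-token hire-rate model: score = P(hired | token appears)."""
    hired, seen = Counter(), Counter()
    for tokens, was_hired in data:
        for t in set(tokens):
            seen[t] += 1
            hired[t] += was_hired
    return {t: hired[t] / seen[t] for t in seen}

scores = train_token_scores(history)
print(scores["captain"])   # 0.5 -- neutral token
print(scores["women's"])   # 0.0 -- penalized purely for the word "women's"
```

Garbage in, garbage out: the model isn't malicious, it just encodes whatever pattern the biased history contains.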