Employers aren’t worried about unethical AI, but maybe they should be

Artificial intelligence can make work easier, more efficient, and more accurate. It can also help companies make better decisions. What’s not to like? Well, for starters, AI can be used unethically.

However, this is not a concern for the majority of respondents in a recent survey by Genesys, which provides omnichannel customer experience and contact center solutions. The survey includes responses from employers and employees in the U.S., Germany, the U.K., Japan, Australia, and New Zealand.

Close to two-thirds of employers said their companies would be using AI or advanced automation by 2022 in capacities such as operations, staffing, performance, and budgeting. Yet 54% of employers expressed no concern that AI could be used unethically by their companies or individual employees, and only 17% of employees voiced that concern.

What are some of the unethical ways AI could be used?

While the majority of employers aren’t troubled, the potential does exist for AI to be used unethically in several ways. “For example, using fake voices or failing to mention that a person is speaking with a bot is unethical,” says Jean-Etienne Goubet, senior associate for product management/operations and artificial intelligence at Genesys.

“Another case could be exacerbating inequalities with routing optimizations based on biases in the data such as race, gender, age, or location,” he explains. Also, since metrics on agents (performance, skills, etc.) are generated on a continuous basis, Goubet says there is the potential for misuse in this area.
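Goubet's routing example can be made concrete. The sketch below is a hypothetical illustration, not Genesys code: it computes, per demographic group, the share of contacts routed to a priority queue, the kind of simple disparity check an auditor might run. The group labels and sample data are invented.

```python
from collections import defaultdict

def routing_disparity(records):
    """Compute the share of contacts routed to the priority queue per group.

    records: iterable of (group, routed_to_priority) pairs.
    Returns a dict mapping each group to its priority-routing rate.
    """
    totals = defaultdict(int)
    priority = defaultdict(int)
    for group, is_priority in records:
        totals[group] += 1
        if is_priority:
            priority[group] += 1
    return {g: priority[g] / totals[g] for g in totals}

# Invented sample data, using location as the grouping variable.
sample = [("urban", True), ("urban", True), ("urban", False),
          ("rural", False), ("rural", False), ("rural", True)]
rates = routing_disparity(sample)
# A large gap between groups would flag the routing model for review.
```

A real audit would control for legitimate routing factors before attributing a gap to bias; this sketch only surfaces the raw disparity.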

Other concerns include how customer data is collected, stored, and used; whether those practices will be transparent; and whether customers will have the option to provide consent.

Why there’s not much concern

So, why aren’t companies and employees more concerned about the potential for misuse? “They might not be fully aware of the capabilities of AI technologies or the associated risks that come along with it,” Goubet explains. “AI technology is not yet mature enough to provide a full picture of the risks.”

Millennials, the generation most comfortable with technology, are also the most likely to favor guardrails, at higher rates than Gen X or baby boomers. Even so, those numbers are low: 21% of millennials versus 12% of Gen X and 6% of baby boomers.

Also, while only a fraction of employees are concerned that AI could be used unethically, far more worry that AI could take their jobs: 52% believe companies should be required to maintain a minimum percentage of human employees versus AI-powered robots and machinery.

AI policies

Only 20% of employees say their organization has a written policy on the ethical use of AI or bots, but twice that many (40%) believe their organization should have one, as do 54% of employers.

What would a written policy on AI entail? Goubet says Genesys has implemented the following guidelines:

Transparency: Customers should be informed when they are conversing with an AI bot.

Fairness: The company should take steps to ensure its AI systems do not introduce bias based on race, ethnicity, gender, nationality, sexual orientation, ability, etc.

Accountability: The company is ultimately responsible for the AI systems it creates and for the systems created by its AI. Businesses are responsible even if their bots build something harmful.

Data Protection: AI must not be used to diminish the data rights or privacy of individuals or communities.

Social Benefit: The company is committed to social benefit through the thoughtful use of AI.

These guidelines are designed to provide direction for any decisions Genesys employees may make regarding product strategy.
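The transparency guideline is the most straightforward to operationalize in code. The sketch below is a hypothetical example, not drawn from Genesys products: it simply prepends a disclosure to a bot's first message so customers know they are conversing with an AI.

```python
def open_conversation(first_reply: str, is_bot: bool = True) -> str:
    """Prefix a conversation's first message with a disclosure when a bot replies."""
    disclosure = "You are chatting with an automated assistant. "
    return (disclosure + first_reply) if is_bot else first_reply

msg = open_conversation("How can I help you today?")
# msg now begins with the bot disclosure.
```

In practice the disclosure would be localized and logged, but even a one-line prefix like this satisfies the basic requirement that customers be informed.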

“We can imagine that a written policy could include more content on the legal aspects of handling data, such as privacy and security, along with rules regarding applications of AI technologies (being forbidden to use specific voices, or rules about the weight of specific variables used in AI models),” Goubet adds.

About the Author

Terri is a journalist/copywriter working with such brands as The Economist, Yahoo, USA Today, Realtor.com, US News & World Report, The Houston Chronicle, and Loyola University Chicago’s Center for Digital Ethics and Policy. You can keep up with her latest adventures @Territoryone.