AI from IBM Knows—Better than Humans—If You Plan to Quit

Members may download one copy of our sample forms and templates for your personal use within your organization. Please note that all such forms and policies should be reviewed by your legal counsel for compliance with applicable law, and should be modified to suit your organization’s culture, industry, and practices. Neither members nor non-members may reproduce such samples in any other way (e.g., to republish in a book or use for a commercial purpose) without SHRM’s permission. To request permission for specific items, click on the “reuse permissions” button on the page where you find the item.

IBM is using artificial intelligence (AI) to predict when employees might leave and ping managers to intervene. That's a good thing—saving money and resources in finding and hiring new employees. But, as a result of that AI innovation, IBM has cut nearly a third of its HR department. The remaining jobs are higher quality, the company says, but there are certainly fewer HR jobs to be had.

Meanwhile, other tech giants and corporations are contemplating the ramifications of using AI in their hiring and decision-making. Some have tried establishing ethics boards to guide the use of AI, but that didn't work out so well for Google.

To delve deeper into the matter, we rounded up articles from SHRM Online, All Things Work and other trusted news sources.

IBM Artificial Intelligence Can Predict with 95% Accuracy Which Workers Are About to Quit Their Jobs

IBM HR holds a patent for its "predictive attrition program," which was developed with Watson to predict employee flight risk and prescribe actions managers can take to engage employees. "It took time to convince company management it was accurate," CEO Ginni Rometty said, but she claimed the AI has so far saved IBM nearly $300 million in retention costs.

The AI retention tool is part of a suite of IBM products designed to upend the traditional approach to human resources management. Rometty described the classic human resources model as needing an overhaul, and said it is one of the professions where humans need AI to improve the work.

The tech giant has reduced the size of its global human resources department by 30 percent. The remaining positions earn higher pay and the people in them are able to perform higher-value work.

At IBM, AI mines for patterns; it searches for employees who've been in a job longer than usual (which could signal flight risk) and can determine whether they need more training to move up.

AI allows managers to "cut through the noise" by crunching through the dozens of data points at a manager's disposal at any given time, including market conditions, talent scarcity, skill forecasts, salary ranges and raises—and it can explain its analysis.
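The tenure-based signal described above can be illustrated with a toy sketch. This is purely a hypothetical example of the general idea, how unusually long time in role can be surfaced as a flight-risk flag; the function name, data fields, and threshold are illustrative assumptions, not IBM's actual model, which draws on many more data points.

```python
# Hypothetical sketch of a tenure-based flight-risk flag.
# All names and thresholds here are illustrative assumptions.
from statistics import median

def flag_flight_risks(employees, ratio=1.5):
    """Flag employees whose months in role exceed `ratio` times
    the median tenure for that role."""
    # Group tenures by role to establish a per-role baseline.
    by_role = {}
    for e in employees:
        by_role.setdefault(e["role"], []).append(e["months_in_role"])
    medians = {role: median(m) for role, m in by_role.items()}
    # Flag anyone well past the typical tenure for their role.
    return [
        e["name"]
        for e in employees
        if e["months_in_role"] > ratio * medians[e["role"]]
    ]

staff = [
    {"name": "Ana", "role": "analyst", "months_in_role": 40},
    {"name": "Ben", "role": "analyst", "months_in_role": 18},
    {"name": "Caro", "role": "analyst", "months_in_role": 20},
]
print(flag_flight_risks(staff))  # Ana's tenure is twice the role median
```

A production system would weigh such a signal alongside the other factors the article mentions, such as market conditions, salary history, and skill forecasts, rather than acting on tenure alone.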

"Ethical AI" has become a new corporate buzz phrase, slapped on internal review committees, fancy job titles, research projects and philanthropic initiatives. The moves are meant to address concerns over racial and gender bias emerging in facial recognition and other AI systems, as well as address anxieties about job losses to the technology and its use by law enforcement and the military.

But how much substance lies behind the ethics campaigns, which are drawing more and more attention? And who gets to decide which technological pursuits do no harm?

Emerging alongside reports of the many ways AI is benefiting HR is a more troubling narrative that algorithms may perpetuate bias in hiring or other talent decisions.

While well-designed AI can help eliminate unconscious human bias in processes like candidate screening, there are also risks that the technology can create adverse impact. The bombshell revelation last year that Amazon created and later shut down a recruiting algorithm that was biased against women applying for software development jobs continues to reverberate in HR. Although it represented just a single case of an internally developed AI tool, the flawed Amazon algorithm captured the attention of the industry and led to a new wariness and some policy changes.

Automation is everywhere, and its penetration and sophistication are increasing. Artificial intelligence is expected to greatly expand the ability of robots and automated systems to learn, combine work functions and think outside the box.

Robotics and cognitive technologies are continuing to supplant a growing number of routine business functions that previously were handled by humans—including knowledge-worker tasks that many assumed would remain the domain of human beings for the foreseeable future.

Google's AI ethics board survived for barely more than one week. Founded to guide "responsible development of AI" at Google, it would have had eight members and met four times over the course of 2019 to consider concerns about Google's AI program. Those concerns included how AI can enable authoritarian states, how AI algorithms produce disparate outcomes, whether to work on military applications of AI, and more. But it ran into problems from the start.