Analytics is a top priority for savvy CIOs. But if implicit biases are hiding within your most trusted data sets, your algorithms could be leading you to make bad decisions.

Human beings are inherently biased. So when companies began using computer algorithms to guide their critical business processes, many people believed the days of discriminatory hiring practices, emotion-fueled performance reviews and partisan product development were coming to an end.

"That was an optimistic, naive hope," says Matt Bencke, CEO of Spare5, a startup that helps companies train algorithms for artificial intelligence systems. "Of course, we would all like to remove bias from certain parts of our lives, but that's incredibly difficult."

Just ask city officials in Boston. As part of an effort to shore up the municipality's aging infrastructure, city hall released a smartphone app that collects GPS data to help detect potholes. However, because people in lower-income groups are less likely to own smartphones, the program excluded data from large segments of city neighborhoods.

Even Pokemon Go isn't immune to big data bias. Recently, the Urban Institute called out the wildly popular game for featuring fewer Pokemon stops in primarily black neighborhoods than it did in white communities. The Washington-based think tank speculates that location-based data for Pokemon Go originally came from an earlier game, Ingress, which was popular among "younger, English-speaking men," many of whom contributed relevant portal locations to the game's database.

But potholes and Pokemon are the least of what makes data bias dangerous. These days, businesses rely on sophisticated computer algorithms to hire new employees, determine product inventory, shape marketing campaigns and predict consumer behavior. If algorithms can't be trusted to provide honest and impartial insights, businesses could make misguided and discriminatory decisions.