Big data is increasingly viewed as a strategic asset that can transform organizations through powerful predictive technologies. But when it comes to systems that help make such decisions, the methods applied may not always seem fair and just, according to a panel of social researchers who study the impact of big data on the public and on society.

The event, organized recently by New York University's Politics Society and Students for Criminal Justice Reform, centered on issues arising from big data's use in machine learning and data mining to drive executive decisions in the public and private sectors. The panel, which included a mix of policy researchers, technologists, and journalists, discussed ways in which big data, while enhancing our ability to make evidence-based decisions, may inadvertently set rules and processes that are inherently biased and discriminatory. The rules, in this case, are algorithms: sets of mathematical procedures coded to achieve a particular goal. Critics argue these algorithms may perpetuate biases and reinforce built-in assumptions.

Government agencies have recently begun scrutinizing the ethical implications of the emerging field. Last week, a White House report cautioned that data collection, if not applied correctly, could undermine civil rights. The report called for a conversation to determine "how best to encourage the potential of these technologies while minimizing risks to privacy, fair treatment, and other core American values."

In his 2014 paper "Big Data's Disparate Impact," Solon Barocas, a panel member and research fellow with the Center for Information Technology Policy at Princeton University, points out that "advocates of algorithmic techniques like data mining argue that they eliminate human biases from the decision-making process. But an algorithm is only as good as the data it works with." Barocas studies emerging applications of machine learning and the ethical and epistemological issues they raise. He added that "data mining can inherit the prejudices of prior decision-makers or reflect the widespread biases that persist in society at large." In other words, machine learning systems that run on data produced by humans use algorithms designed by humans, and that data carries the implicit biases of the people who create it.
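The point about inherited bias can be made concrete with a toy sketch (not from the article; the dataset, group labels, and approval numbers below are entirely hypothetical). A trivial "model" that learns approval rates from biased historical decisions will faithfully reproduce that skew in its predictions:

```python
from collections import defaultdict

# Fabricated historical hiring decisions for two equally qualified groups:
# past decision-makers approved group "A" far more often than group "B".
# Each record is (group, approved), where approved is 1 or 0.
history = [("A", 1)] * 90 + [("A", 0)] * 10 + [("B", 1)] * 40 + [("B", 0)] * 60

def train(records):
    """Learn the empirical approval rate per group.

    The 'algorithm' is neutral arithmetic, but it simply mirrors
    whatever pattern the training data already contains.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
    for group, approved in records:
        counts[group][0] += approved
        counts[group][1] += 1
    return {g: approvals / total for g, (approvals, total) in counts.items()}

model = train(history)
print(model)  # {'A': 0.9, 'B': 0.4} -- the learned scores inherit the historical skew
```

If these learned scores were then used to rank new applicants, group "B" candidates would be systematically disadvantaged even though nothing in the code mentions bias: the prejudice lives entirely in the training data, which is exactly Barocas's point.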
