CIO Insights and Analysis from Deloitte

CONTENT FROM OUR SPONSOR

Please note: The Wall Street Journal News Department was not involved in the creation of the content below.

Managing Algorithmic Risks

Companies can benefit significantly from algorithms based on advanced analytics and machine learning, but it’s often all too easy to overlook the risks that may come with them.

The rise of advanced data analytics and cognitive technologies has led to an explosion in the use of complex algorithms across a wide range of industries and business functions. Whether they’re deployed to process loan applications or develop marketing campaigns, these continually evolving sets of rules for automated or semiautomated decision-making can give companies new ways to achieve business goals, accelerate performance, and differentiate their offerings (Figure 1).

However, there is a downside. Even as many decisions enabled by algorithms have an increasingly profound impact on companies and individuals, growing complexity can turn those algorithms into inscrutable black boxes. Although often enshrouded in an aura of objectivity and infallibility, algorithms can be vulnerable to a wide variety of risks, including accidental or intentional biases, errors, and fraud.

As the leaders of the IT function, which frequently shares responsibility for developing and implementing new algorithms, CIOs have an important role to play in helping organizations to harness this new capability while keeping the accompanying risks at bay.

Balancing Risks and Rewards

From Silicon Valley to the industrial heartland, many organizations are increasingly relying on data-driven insights powered by algorithms. Growth in sensor-generated data and advancements in data analytics and cognitive technologies have been among the biggest drivers of this change, enabling businesses to produce rich insights to guide strategic, operational, and financial decisions.

Business spending on cognitive technologies such as machine learning is growing rapidly, and it’s expected to continue at a five-year compound annual growth rate of 55 percent, reaching nearly $47 billion by 2020. Today, algorithms are often used to help make important decisions, such as detecting crime and assigning punishment, directing millions of dollars in investments, and saving the lives of patients. In the coming years, machine-learning algorithms will also likely power countless new internet of things (IoT) applications.
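As a rough arithmetic check on that projection (assuming, as an interpretation rather than a figure from the forecast, that the ~$47 billion refers to 2020 and the 55 percent rate compounds over the prior five years), the implied starting spend can be computed directly:

```python
# Back-of-the-envelope check of the cited forecast. Assumption (not from the
# source): the ~$47B figure is for 2020 and the 55% CAGR spans the prior
# five years, so the implied base-year spend is 47 / 1.55**5.
final_spend_b = 47.0   # projected 2020 spend, in $ billions
cagr = 0.55            # five-year compound annual growth rate
years = 5

implied_base_b = final_spend_b / (1 + cagr) ** years
print(round(implied_base_b, 2))  # -> 5.25, i.e. roughly $5.3B in the base year
```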

While such change is transformative and impressive, instances of algorithms going wrong have also increased, typically stemming from human biases, technical flaws, usage errors, or security vulnerabilities. For instance:

In the 2016 U.S. elections, social media algorithms were cited for shaping and swaying public opinion by creating opinion echo chambers and failing to clamp down on “fake news.”

In several instances, employees have manipulated algorithms to suppress negative results of product safety and quality testing.

In October 2016, in the aftermath of the Brexit referendum, algorithms were blamed for a flash crash in which the British pound fell 6 percent in two minutes.

Investigations have found that the algorithm used by criminal justice systems across the United States to predict recidivism rates is biased against certain racial groups.

According to a recent study,¹ online ads for high-paying jobs were shown more often to men than to women.

Typically, machine-learning algorithms are first programmed and then trained using existing sample data. Once training is complete, they are used to analyze new data, providing outputs based on what they learned during training and any other data they’ve analyzed since. When it comes to algorithmic risks, three stages of that process can be especially vulnerable:

Input data. Problems can include biases in the data used for training the algorithm as well as incomplete, outdated, or irrelevant input data; insufficiently large and diverse sample sizes; inappropriate data collection techniques; or a mismatch between training data and actual input.
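The two-phase workflow described above, fitting on historical samples and then scoring new data, can be sketched in a few lines. The loan figures and the threshold rule below are invented for illustration:

```python
# Minimal train-then-predict sketch (hypothetical loan-scoring example).
# "Training" here just learns a decision threshold from labeled history;
# real systems fit far richer models, but the two-phase shape is the same.

def train(samples):
    """samples: list of (income, was_approved) pairs from past decisions."""
    approved = [inc for inc, ok in samples if ok]
    denied = [inc for inc, ok in samples if not ok]
    # Place the cutoff midway between the two group means.
    return (sum(approved) / len(approved) + sum(denied) / len(denied)) / 2

def predict(threshold, income):
    """Score a new applicant against the learned cutoff."""
    return income >= threshold

history = [(30, False), (40, False), (70, True), (80, True)]
threshold = train(history)      # phase 1: learn from existing sample data
print(predict(threshold, 65))   # phase 2: apply to new data -> True
```

Note that the learned cutoff is only as good as the historical sample: every input-data problem listed above enters at exactly this point.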

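One of the input-data problems above, a mismatch between training data and actual input, can sometimes be caught with a simple distribution check before scoring. The sketch below (with invented feature values) flags live data whose mean drifts far from the training mean:

```python
import statistics

def drift_alert(train_values, live_values, z_threshold=3.0):
    """Crude training/serving mismatch check: measure how far the live mean
    sits from the training mean, in units of the training standard deviation.
    Production pipelines use richer tests, but the idea is the same."""
    mu = statistics.mean(train_values)
    sigma = statistics.stdev(train_values)
    z = abs(statistics.mean(live_values) - mu) / sigma
    return z > z_threshold

training = [10, 12, 11, 13, 12, 11]         # hypothetical feature values
print(drift_alert(training, [11, 12, 12]))  # similar input -> False
print(drift_alert(training, [45, 50, 48]))  # mismatched input -> True
```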

Many hackers have also begun targeting machine-learning algorithms, for instance by manipulating the data used to train them, leading to erroneous outputs and unintended actions and decisions. A recent report revealed that cybercriminals are making close to $5 million per day by tricking ad-purchasing algorithms with fraudulent ad-click data generated by bots rather than humans.
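A defensive counterpart is to pre-filter suspicious records before they reach the algorithm. The heuristic below is a deliberately simple illustration (the timestamps are invented): it flags a click source whose every inter-click gap is faster than a plausible human reaction:

```python
def looks_automated(click_times, min_gap_s=0.5):
    """Illustrative bot heuristic: flag a source whose every gap between
    consecutive clicks is shorter than a plausible human reaction time.
    Real ad-fraud defenses combine many such signals."""
    gaps = [b - a for a, b in zip(click_times, click_times[1:])]
    return bool(gaps) and all(g < min_gap_s for g in gaps)

human_clicks = [0.0, 2.1, 5.7, 9.3]   # irregular, seconds apart
bot_clicks = [0.0, 0.1, 0.2, 0.3]     # rapid and metronomic

print(looks_automated(human_clicks))  # -> False
print(looks_automated(bot_clicks))    # -> True
```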

Taking the Reins

The immediate fallout from algorithmic risks can include inappropriate or even illegal decisions. And because of the speeds at which algorithms operate, the consequences can quickly get out of hand. Among the potential long-term implications for organizations are reputational, financial, operational, regulatory, technology, and competitive risks.

Meanwhile, many of the checks and balances designed for managing traditional risk aren’t sufficient for these newer types, due to the complexity and proprietary nature of many algorithms as well as the current lack of standards and tools. Effectively managing algorithmic risks may require organizations to modernize their risk management processes with a focus on strategy and governance; design, development, deployment, and use; and monitoring and testing.

For example, organizations can create an algorithmic risk management strategy and governance structure to manage technical and cultural risks. Such a framework will likely include principles, policies, and standards; roles and responsibilities; control processes and procedures; and appropriate personnel selection and training. Providing transparency and processes to handle inquiries can also help organizations use algorithms responsibly. Similar approaches aligned with the governance structure can be implemented to address the algorithm life cycle from data selection to algorithm design, integration, and live use in production, as well as monitoring and testing.

Given their technical savvy and frequent involvement in these initiatives, CIOs can play a leading role, particularly when it comes to algorithm design, development, deployment, use, monitoring, and testing. Educating business users about the risks and potential consequences is one example; helping to establish a collaborative governance council is another. Also important is conducting an inventory and classification of algorithmic risks.

In addition, using their knowledge of technical standards, IT leaders can play an essential part in developing guidelines for algorithm use within the organization. They can also help increase transparency by ensuring there is proper documentation for the algorithms used, including an explicit statement of any underlying assumptions, and help implement training programs, continuous monitoring, and security and operational controls.

*****

Managing algorithmic complexity can be an opportunity to lead, navigate, and disrupt, but it typically requires careful evaluation and planning. Rather than assuming today’s algorithms always spit out the truth, it may be time to open those black boxes and start asking questions.

Related Deloitte Insights

Internal audit functions can perform with greater assurance and confidence while gaining considerable efficiencies over time by bolstering their analytics capabilities; however, the function cannot make headway on its own. By working with IT and other business stakeholders, internal audit can set a strategy for the future state of such an analytics program and develop a road map for how to get there.

Advanced analytical capabilities, a data-first corporate culture, and a strategy for gradually deploying data analytics across all business functions can help companies gain an analytical edge over the competition. According to Tom Davenport, external senior advisor for Deloitte Analytics, and Jeanne Harris, faculty member at Columbia University, businesses that can compete on advanced analytical capabilities will likely be able to predict challenges and develop solutions ahead of others in their industries.

New tools and emerging data sources can provide global manufacturers with increased insight into product safety and quality issues, but only if they can decipher the data. Organizations should assess their talent to ensure they can effectively glean insights from the increasingly vast array of incoming data, according to Greg Swinehart, a partner with Deloitte Risk and Financial Advisory.

Editor’s Choice

CIOs with a bold vision can transform IT operations with emerging technologies and demonstrate to other leaders how to do the same across the enterprise, says Bill Briggs, CTO of Deloitte Consulting LLP. By providing business context that can help their peers understand and evaluate technology’s potential, CIOs can help drive enterprisewide business transformation.

Incoming CIOs may face a raft of decisions about technology projects, business initiatives, and hiring or promoting talent, but the first 100 days of a new CIO’s tenure are a time for learning about and evaluating the business, IT function, talent, and culture. Long- and short-term strategic IT plans built on this solid foundation of knowledge can help new CIOs succeed, according to a recent analysis of data from Deloitte’s CIO Transition Lab.

CIOs transitioning into new IT leadership roles often encounter different opportunities and challenges depending on whether they are internal hires from within the IT team or from outside the IT function, external hires, or leaders guiding a team through an M&A or divestiture.

About Deloitte Insights

Deloitte Insights for CIOs couples broad business insights with deep technical knowledge to help executives drive business and technology strategy, support business transformation, and enhance growth and productivity. Through fact-based research, technology perspectives and analyses, case studies and more, Deloitte Insights for CIOs informs the essential conversations in global, technology-led organizations.