Block That Algorithm

Companies across industries are drawn to algorithms
to find the next business opportunity with consumers. Google (for search),
Amazon (for book recommendations), Facebook (for newsfeeds), and many others
spoil us, as consumers, with the power of the algorithm. Now, we not only expect
but assume that our bank, retailer, utility provider, and almost every company
we do business with will provide curated experiences. Thus,
algorithms have become like a treasure map for companies, but one wrong
interpretation of map instructions, or a small mistake in the map itself, can
quickly make companies lose their direction. The gradual reduction of human oversight
regarding many automated processes poses pressing issues of accountability and
respect for human sensitivities in this new digital-first world. As algorithms
spread to every part of our lives, the consequent ethical challenges will
become more apparent.

For instance, banks use analytical algorithms to
evaluate customers’ profiles and their ability to repay loans. Their highly
trained data scientists are busy searching for patterns among loan defaulters, and then
something unexpected happens—the data scientists find a pattern of defaults
within a certain racial community and accidentally include this variable in the
analytical model. Suddenly, customers from a particular ethnic background or a
particular area are denied loans and financial products based on a machine’s
recommendation. The bank could now face a lawsuit for racial
discrimination, leading to loss of reputation and business.
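One way such bias can be caught before it reaches customers is a simple disparate-impact audit that compares approval rates across groups. The sketch below is purely illustrative—the data, group labels, and the four-fifths threshold are assumptions for the example, not any bank’s actual process:

```python
# Minimal disparate-impact audit: compare approval rates across groups.
# All data and names here are illustrative, not a real bank's model.

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions):
    """Ratio of the lowest group's approval rate to the highest.
    Values below 0.8 (the common 'four-fifths rule') flag the model for review."""
    rates = approval_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical outcomes: group A approved 80% of the time, group B only 50%.
decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 50 + [("B", False)] * 50)

ratio = disparate_impact(decisions)
print(round(ratio, 2))  # 0.5 / 0.8 -> 0.62, below 0.8: flag for human review
```

A check like this does not prove a model is fair, but a ratio well below 0.8 is a strong signal that a proxy for a protected attribute has crept into the features.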

In another example, a crowdsourced traffic app that
provides alternative routes to avoid highway traffic has disturbed the lives
of quiet neighborhoods. And what if your traffic-sensitive GPS makes an error?
To err is human, but when an algorithm makes a mistake, are we likely to trust
it again? Probably not. Although it’s true that algorithms solve many of today’s
problems and can predict things with great accuracy, they often introduce new,
sometimes unanticipated ones.

The age of the algorithm has deified data scientists,
and they are the darlings of recruiters for big businesses. But many of them
fail to consider the ethical implications of their everyday actions, as there
are no ethics guidelines set forth at most companies, and that is the
fundamental issue. According to Gartner, by 2018, half of business ethics
violations will occur through the improper use of big data analytics. The
recent book, Weapons of Math Destruction, highlights how the algorithm has
become a pervasive and destructive force in our society. It further explains
how current algorithm models intensify inequality and endanger democracy, and
how we might rein them in. It’s scary!

There is a thin line between designing a good
algorithm and crossing into unethical practices, and companies are increasingly
struggling to draw that line. It is critical for companies to add a human dimension
to their existing analytical capabilities. A concrete step companies must
take is to develop an ethics framework for their specific industry and add it
as a tool to their current analytics solutions to avoid unwanted outcomes. No
matter how accurate your algorithm is, there will still be times when manual
human intervention is required.
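Such human oversight is often implemented as a confidence gate: decisions the model is unsure about are routed to a person instead of being executed automatically. A minimal sketch, where the threshold value and function name are assumptions for illustration:

```python
# Confidence gate: act automatically only on high-confidence model outputs;
# route everything else to a human reviewer. The threshold is an illustrative choice.

REVIEW_THRESHOLD = 0.9  # below this confidence, a person decides

def route_decision(confidence):
    """confidence: the model's confidence in its own recommendation (0.0-1.0)."""
    if confidence >= REVIEW_THRESHOLD:
        return "auto"         # algorithm acts on its own
    return "human_review"     # flagged for manual oversight

print(route_decision(0.97))  # auto
print(route_decision(0.55))  # human_review
```

The design choice here is deliberate: the default for anything ambiguous is a human, not the machine, which keeps accountability with a person rather than the algorithm.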

Author and journalist Andreas Ekström
explains that unbiased, clean search results are likely to remain a myth.
Behind the development of every algorithm there is always a person with
personal beliefs. That’s where people building algorithms need to identify
their own personal bias and take responsibility for how it influences
consumers. The blend of humanity and technology is what makes a good algorithm.
It’s time for companies to start injecting ethics into the core of their
algorithm creation process. In fact, ethics must become a key performance
indicator for every employee who has a direct or indirect connection with
customer data. The starting point should be a company-wide program
that helps people understand the legal and business consequences of unethical data
practices, shows them ways to avoid or mitigate risk, and reinforces the outcomes of
their ethics framework.

What’s your take on the future of algorithms?

(This Blog originally appeared on LinkedIn. It has been re-posted here with prior permission from Manish Bahl.)
