Predictive systems: when your boss becomes an algorithm

In recent years we have seen an explosion of applications and services built around big data and predictive systems. We are surrounded by algorithm-driven systems able to process large volumes of information, learn and operate autonomously, to the point of making predictions or telling us directly what to do in a given situation.

If we think of simple things, our car already tells us when to check the tire pressure or change the engine oil; if we think of much more complex environments, there are algorithms able to predict crime in a city and, for example, tell the police how to organize the routes of their patrol cars.

Algorithms, mathematical models and big data

The first step in developing a system of this kind is to design a mathematical model that represents, through formulas, the behavior of complex systems in situations that are not always easy to observe in reality.


To build the model we mainly need historical data that let us establish relationships between variables and processes; at this point big data comes into play to process that data, apply the mathematical model and thus "predict the future".
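In its simplest form, "building relationships between variables" from historical data means fitting a model to past observations and extrapolating. A minimal sketch, with entirely invented figures (temperature vs. sales is just an illustrative pairing, not an example from the article):

```python
import numpy as np

# Hypothetical historical data: monthly average temperature (°C) and
# ice-cream sales. Both series are made up for the illustration.
temperature = np.array([10.0, 14.0, 18.0, 22.0, 26.0, 30.0])
sales = np.array([120.0, 160.0, 210.0, 248.0, 300.0, 342.0])

# Fit a straight line sales ≈ a * temperature + b by least squares.
a, b = np.polyfit(temperature, sales, deg=1)

# "Predict the future": estimate sales for a forecast of 28 °C.
predicted = a * 28.0 + b
print(round(predicted, 1))
```

Real predictive systems use far richer models and many more variables, but the pattern is the same: learn the relationship from history, then apply it to new inputs.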

We live in an environment with access to vast amounts of data from multiple sources, data we can process to find patterns that enable all kinds of predictions: detecting "hot spots" where an attack is most likely in a war zone, planning bus routes, anticipating when a tsunami will occur, assessing which book will be a bestseller, or even asking whether a startup is a company worth investing in.

When an algorithm becomes our boss

It is clear that predictive systems are increasingly present in our environment as instruments that can help us decide, or even tell us what we have to do.

Are we already working for algorithms without realizing it? Will the day come when our boss is replaced by a system that tells us what to do? To answer these questions, we talked to some experts in the field.

On the one hand, with Pedro Carrillo, CEO of ec2ce, a startup that develops predictive systems for the agricultural sector, anticipating the spread of pests that affect a crop so that farmers can plan ahead the marketing of their production or the phytosanitary treatments needed to control the risk of pests.

On the other hand, to understand the implications of these systems for decision-making, we talked with Dr. Umberto Leon, professor of Technological Innovation in Clinical Psychology in the Department of Health Sciences at the University of Monterrey (Mexico) and founding partner of a startup that applies algorithms to the prediction of neurological diseases.

Pedro Carrillo, CEO of ec2ce, told us that the technology behind their prediction systems is based on artificial intelligence algorithms and neural networks:

Our algorithms are designed, trained and put into production from data collected on farms (meteorological data, pest conditions, crop production, etc.). In fact, the resulting model can be applied to other agricultural areas with similar characteristics for which there is no data.
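To make the "trained from farm data" idea concrete, here is a deliberately tiny neural-network sketch in plain numpy. Everything is synthetic and assumed for the illustration (the features, the pest rule, the network size); it is not ec2ce's actual model, only the general technique the quote describes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic farm records: (temperature °C, relative humidity %), plus a
# pest-outbreak label. The "warm and humid favors the pest" rule is invented.
X = rng.uniform([10.0, 30.0], [35.0, 95.0], size=(200, 2))
y = ((X[:, 0] > 25.0) & (X[:, 1] > 70.0)).astype(float)

# Normalize features so plain gradient descent behaves well.
X = (X - X.mean(axis=0)) / X.std(axis=0)

# One hidden layer of 8 tanh units, sigmoid output.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=8);      b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    p = sigmoid(h @ W2 + b2)            # predicted outbreak probability
    g2 = (p - y) / len(y)               # gradient of cross-entropy at output
    dh = np.outer(g2, W2) * (1 - h**2)  # backpropagate to hidden layer
    W2 -= h.T @ g2; b2 -= g2.sum()
    W1 -= X.T @ dh; b1 -= dh.sum(axis=0)

accuracy = ((p > 0.5) == (y > 0.5)).mean()
print(round(float(accuracy), 2))
```

The production version of such a system would add many more inputs, proper train/test separation and validation, but the workflow — collect field data, train, then apply the model to new areas — is the one Carrillo describes.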

Thinking of neural networks, and thus of emulating the behavior of our brains, Dr. Umberto Leon suggested that our brain follows principles similar to those of an algorithm; however, the fundamental difference lies in intentionality:

To wonder whether we will work for artificial intelligence algorithms is to grant these algorithms a certain intentionality. So far they are not supposed to have intentionality, although there are certain psycho-biological theories which hold that intention or consciousness is an emergent property of complex multifunctional systems, and who knows whether this technology has not already reached, or is close to reaching, a critical point where intentionality or consciousness appear on their own…

So, can we delegate decision-making to a predictive system? Is an algorithm able to tell us what we should do, or is there some margin of error? Pedro Carrillo of ec2ce told us that he sees two visions: a short-term one, in which these systems serve as decision support, and a medium-to-long-term one, in which they will run automatically:

At ec2ce we offer a predictive system that helps the farmer make decisions. However, we believe that the future of these systems, and where we are directing our strategy, is to generate an automatic decision-making system that also develops recommendations for action and, where possible, applies them automatically.

Nuria Oliver, scientific director, also raised not long ago a situation similar to Pedro Carrillo's: the possibility of delegating decision-making to algorithms when the impact and complexity of the decision make it acceptable:

There will come a point where programs make the decisions, because these are things that can be automated

And what will the future look like? Will we end up working for algorithms, or will we keep the capacity to intervene in the decision-making process? Dr. Umberto Leon considers that the human factor will continue to weigh heavily in decision-making, because a system lacks intuition and the ability to handle uncertainty:

Will these systems be able to have intuitions as we do? Will they be able to quantify chance, and the intuition that relies on micro-moments only the human brain can capture? How many successful decisions have relied on our intuition? What should carry more weight: our intuition, or the data coldly analyzed by a machine? Who would be responsible in case of failure: the machine that could not forecast correctly, or the human who did not follow his intuition? I fear that the answers to these questions can only be found as we walk the path…

The human factor in decision-making

Indeed, the intuition raised by Dr. Umberto Leon is something already being discussed in the field of human resources. Amish Shah, founder of Millennium Search (a specialized staffing firm), told the New York Times that recruitment is closely tied to intuition, "chemistry" or instinct, something like what happened when you met your current partner:

When I interview a candidate, what I look for is passion, and I fear there is no algorithm that can get to the bottom of that

However, companies like GapJumpers or Gild offer automatic tools to screen candidates in selection processes, and there are even studies suggesting that a system can run recruitment processes far more accurately than a human resources specialist would.

In fact, this contrast between intuition (or our experience) and what an algorithm says is something we experience much more often than we might imagine.

Risk management in banking, for example, relies on credit-scoring algorithms, and something similar happens in the risk assessment for car insurance (where the base price of the policy depends largely on the result produced by a system from our age, years of driving experience, claims history or the type of vehicle to be insured).
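At its simplest, this kind of pricing can be thought of as a base premium adjusted by risk factors. The sketch below is purely illustrative: every factor, weight and figure is invented for the example, not taken from any real insurer or scoring model:

```python
# Illustrative rule-based insurance pricing; all numbers are assumptions.
BASE_PREMIUM = 500.0  # hypothetical annual base price

def premium(age: int, years_licensed: int, claims: int, vehicle_group: int) -> float:
    """Adjust the base premium with simple multiplicative risk factors."""
    factor = 1.0
    if age < 25:
        factor *= 1.4                    # young drivers priced as riskier
    if years_licensed < 3:
        factor *= 1.2                    # little driving experience
    factor *= 1.0 + 0.15 * claims        # each past claim raises the price
    factor *= 1.0 + 0.05 * vehicle_group # pricier vehicle groups cost more
    return round(BASE_PREMIUM * factor, 2)

# A 22-year-old with 2 years of licence, 1 past claim and a mid-range car
# pays far more than a 45-year-old with a clean record and the same car:
print(premium(age=22, years_licensed=2, claims=1, vehicle_group=4))
print(premium(age=45, years_licensed=20, claims=0, vehicle_group=4))
```

Real credit-scoring and insurance models are statistical rather than hand-written rules, but the point stands: the price you are quoted is the output of a system, not of a person's judgment.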

Algorithms can be wrong, so where do we draw the line?

If we look at all the real examples reviewed so far, it is easy to find a common thread: prediction, i.e. advance information about what is expected to happen; the "human factor" therefore still carries weight in making the decision.

Cases like Broward County, Florida, where innocent people have been arrested simply because an algorithm said so, are a good example of the importance of having someone weigh the information a system provides and confront it with their own knowledge and experience.

In this regard, Dr. Umberto Leon told us that the human factor is key throughout the process: selecting the inputs and interpreting the outputs so as to produce information that makes sense:

We must not forget that these algorithms are used, for example, for decision-making in areas like the finance department of a large company. The number of inputs these companies work with may be between 15 and 25, but some of those inputs are impossible to predict… A system could not predict that a 25% drop in sales in London was due to a strike on the underground, or that a 75% drop in sales in Paris was due to the latest attacks; such events belong to the chaotic realm of the psyche and human unpredictability.

Security experts are already talking about the need for contingency plans, since predictive systems can also be vulnerable and can therefore negatively influence our decision-making.

The Obama administration has already announced that it will conduct a public consultation to open the debate on artificial intelligence and the need for regulation that defines responsibilities and ensures that these systems are controllable and, above all, safe.

Will our new boss be an algorithm? Today, perhaps, we are still somewhat far from that: these systems remain a decision-support tool, and for now the decisions continue to be taken by humans, based on their experience, desires or ethics.

However, an interesting debate lies ahead: marking the boundaries of what can be automated without human intervention, and what undoubtedly requires a final decision by a person able to weigh the pros and cons of that decision.