Automation is more complex than people think. Here’s why

Oct 21, 2017, 10:05 AM

Automation is a topic on which most people have an opinion. The level of knowledge on the subject varies greatly, as does the amount of fear that people feel towards the technological revolution that is taking place.

I get the impression that even informed writers in this field, including myself, often take convenient short-cuts, for numerous reasons, when writing and talking about automation.

It is therefore time to address the abundance of factors that influence, and will continue to influence, how humanity moves forward with automation. They form an intertwined and complex network that allows for multiple outcomes. This article shines a light on a selection of these parameters.

Automation is not mere labour automation

We tend to mostly focus on the automation of labour, since we are dependent on the wages we earn and are concerned about potential job losses, the rule of robots and a universal basic income.

Even though labour is highly important, automation should be seen more broadly, since automation can and currently does permeate our private lives, how we communicate, where we eat, what we buy, whom we date, which party we choose at an election and much, much more. Thus, automation should be discussed in a more holistic sense, addressing the economic, environmental, political, ethical, cultural, legal and social dimensions. So far, this happens only on the fringe.

The relationship between artificial intelligence and automation

Currently, Artificial Intelligence (AI), the intelligence enabler for robots, is rather narrow (some might argue limited), which is why automation is very nuanced and focused on certain tasks. Yet, in the future, AI is likely to develop more general capabilities, which will translate into multidisciplinary robotic labour and more software solutions with human-like reasoning. This development is accelerated by other fields of research such as sensor technologies, super-computing, neuroscience, 3D-printing and the use of new materials such as graphene.

What is the goal of automation?

The goal is to automate a diverse set of tasks, which will - in theory - benefit us. Currently, the focus is on automating numerically complex and monotonous, repetitive tasks. Yet, if AI allowed it, this spectrum would certainly be extended. Technological possibility and legal boundaries are hence the true limitations.

A corporation’s motivation lies in scalability, higher profits, fewer human errors and less regulation (AI doesn’t form unions, at least for now). Humanity’s motivation lies in avoiding certain arduous tasks (like developing a trading strategy in a volatile stock market), enhancing decision processes (finding the optimal time to harvest a crop) or, in some cases, outsourcing responsibility (autonomous driving).

Why automation is more complex than most people think

It is not just technological feasibility that influences automation. There is a much longer list of parameters, including costs and regulation, that need to be considered.

The cost of automation: As long as the marginal cost of computerized labour is higher than that of human labour, it is not cost-efficient to automate. This is currently the case for heterogeneous, non-repetitive, labour-intensive tasks. Yet, one has to bear in mind that computerized labour becomes more capable and affordable from one development cycle to the next.
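The cost comparison described above can be sketched as a simple break-even calculation. All figures below are invented for illustration; real decisions would, of course, involve far more parameters.

```python
# Break-even sketch: automate only when the amortized cost of computerized
# labour per task drops below the marginal cost of human labour per task.
# All numbers are hypothetical, for illustration only.

def cost_per_task_machine(upfront, maintenance_per_year, tasks_per_year, years):
    """Amortized machine cost per task over the investment horizon."""
    total = upfront + maintenance_per_year * years
    return total / (tasks_per_year * years)

def cost_per_task_human(hourly_wage, tasks_per_hour):
    """Marginal human cost per task."""
    return hourly_wage / tasks_per_hour

machine = cost_per_task_machine(upfront=200_000, maintenance_per_year=10_000,
                                tasks_per_year=50_000, years=5)
human = cost_per_task_human(hourly_wage=25.0, tasks_per_hour=8)

print(f"machine: {machine:.2f} per task, human: {human:.2f} per task")
print("automate" if machine < human else "keep human labour")
```

Note that the machine side is dominated by the upfront investment, which is exactly why falling hardware and software costs shift the break-even point with every development cycle.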

Wage elasticity: Automation might lead to an excess supply of workers willing to work for less than they are currently being paid. Minimum wages could be deregulated in order to keep human labour competitive with computerized labour. Hence, employers facing a relatively high upfront investment in computerized labour might opt for cheaper human labour instead.

Complexity: Tasks requiring interdisciplinary and transfer knowledge, empathy or creativity might be hard to automate in the long run due to their high complexity and insufficient scientific understanding.

Analogue attractiveness: Humans and other highly developed forms of life are social animals. Hence, consumers and entrepreneurs alike are certain to create a counter-movement of analogue, hand-made and non-digital products and services.

Data availability: Some forms of AI, such as supervised machine learning, depend on vast and highly granular data-sets, which might not be available in certain fields. In other instances we have an abundance of data, especially unstructured data, yet these data-sets lie hidden in heterogeneous data-silos and decision-makers simply do not ask the right questions. Furthermore, there is the danger of using biased training data, leading to unintended outputs.
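The danger of biased training data can be made concrete with a toy example: on a heavily imbalanced data-set, a model that simply learns the majority label looks accurate while never recognizing the minority class at all. The labels and counts below are made up for illustration.

```python
from collections import Counter

# Toy illustration of bias from skewed training data: a "classifier" that
# learns only the majority label scores high accuracy on an imbalanced
# test set while completely missing the minority class.

train_labels = ["approve"] * 95 + ["reject"] * 5   # heavily imbalanced
majority = Counter(train_labels).most_common(1)[0][0]

test_labels = ["approve"] * 95 + ["reject"] * 5
predictions = [majority] * len(test_labels)

accuracy = sum(p == t for p, t in zip(predictions, test_labels)) / len(test_labels)
reject_recall = sum(p == t == "reject" for p, t in zip(predictions, test_labels)) / 5

print(f"accuracy: {accuracy:.2f}")                  # 0.95 - looks good
print(f"recall on 'reject': {reject_recall:.2f}")   # 0.00 - minority class never found
```

This is why headline accuracy alone says little about whether a model treats under-represented cases fairly.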

Data security: We live in a society that does not necessarily understand the implications of datafication for privacy and information security. Yet, awareness is growing, and in some cases people might opt out of a datafied system, limiting the availability of data (health-related data is one example).

Regulations: Even though it might be possible to automate certain tasks from a technological and scientific perspective, it might be the case that regulations inhibit these developments. Lags in revisions of certain laws or regulations, unsolved ethical questions or security threats (which are crucial when discussing digital technologies) pose a particular risk.

Adoption speeds: Across industries we see varying adoption rates of technologies, which might be traced back to a lack of awareness, knowledge gaps, lack of talent, or lack of vision. Currently, we see a lack of engineers and talent at the intersection of non-technical and highly-technical domains, which leads to communication issues, false expectations and poor strategic decisions. Adding to this, adoption speeds are influenced by cultural openness and tolerance.

Ethics: Every decision that could lead to job losses is going to pose a huge ethical dilemma. The way to answer it, at least in my opinion, is by evaluating the consequences of not automating a certain task or business unit today and what that means for the long-term employment capacity of a company. Does avoiding automating jobs today lead to greater job losses tomorrow?

Implications for society: Many jobs will be automated, along with many of the decisions that we make on a daily basis. This will change the societies we live in, on a personal as well as on a macro-level. Recommendation systems and intelligent filtering-algorithms might fortify our opinion-bubbles, detaching us from reality.
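How a recommendation system can fortify an opinion bubble is easy to sketch: a naive recommender that only surfaces items matching what the user already consumed narrows exposure with every click. The articles and topics below are invented for illustration, not taken from any real system.

```python
# Minimal sketch of a filter bubble: a naive content-based recommender
# that only recommends unread articles whose topic matches the user's
# click history, so the user keeps seeing more of the same.
# All articles and topics are hypothetical.

articles = {
    "A": "politics-left", "B": "politics-left", "C": "politics-right",
    "D": "science", "E": "politics-left", "F": "science",
}

def recommend(history):
    """Recommend unread articles from topics the user has already clicked."""
    seen_topics = {articles[a] for a in history}
    return sorted(a for a, topic in articles.items()
                  if topic in seen_topics and a not in history)

print(recommend(["A"]))  # only more of the same topic comes back
```

A real system is vastly more sophisticated, but the underlying feedback loop, past consumption shaping future exposure, is the same mechanism.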

Conflicts: With multiple unsolved conflicts, fear of full labour automation could be seized upon by populists. Yet, I think that unless humanity evolves towards greater humanism, future technologies are the least of our problems. We should instead use them to cope with climate change, stop environmental pollution and solve world hunger. Nevertheless, it is realistic that certain technologies could lead to protests and even violence. This would only be the case if society, technology and politics evolve heterogeneously. If society has time to accustom itself, define a benevolent legal framework and adapt to technology, doomsday scenarios are pure fear-mongering.

Politics: Even though politicians often lag behind current technology trends, they can still influence the proliferation of technologies. Nowadays, however, the influence can also run the other way: social bots and micro-targeted ads can influence elections, so we desperately need a solution to this issue.

Liability: Who is responsible if a computer-based decision is wrong? The user, the data-scientist selecting the training data, the machine learning engineer, the company providing the service, or someone else? This has to be clearly regulated, otherwise this might open Pandora’s box to costly liability problems.

Insurability: The aforementioned risks might become insurable in the future, yet there is still a lack of adequate solutions, which might inhibit or postpone decisions related to automation.

In addition to these parameters, there are further variables influencing the future of automation, such as culture, religion, the environment, energy, data storage, and others. Some of them are industry-specific, people-driven, or of a social, economic or environmental nature. Furthermore, we have to recognize that automation, or for that matter any technological development, is not deterministic, but influenced by every human being.

On the one hand, we need a new, unprecedented level of transparency and knowledge-based discussion, addressing the aforementioned points and beyond. On the other hand, it is vital that we acknowledge that not everyone will want, or be able, to participate in a constructive discourse. Despite this, corporations, academia and governments should openly communicate their visions for our future and any technological developments in a relatable manner. Then each individual can decide for themselves how much of an automated life they really want.