Recent developments in academic AI

Recently, one of the world’s most prestigious academic conferences took place in Stockholm.
As Stockholm is merely a couple of hundred kilometers from where I live (and because I managed to land two workshop presentations), I went to take a look.
In this blog post, I present my observations from a (software) product developer’s perspective.

While scientific conferences are informal gatherings in some disciplines, in computer science, and in artificial intelligence in particular, the top conferences have double-blind peer-reviewed proceedings, and getting a paper accepted for presentation provides greater career benefits to a researcher than publishing an article in most journals.

The joint conference event provided a good overview of current trends and highlights in artificial intelligence research.
Below, I list some of the current AI challenges and opportunities, as well as a set of suggestions for industry-relevant action items.

AI challenges

Artificial intelligence lacks a clear definition.

Even scientists can’t agree on a definition of artificial intelligence.
AI was expert systems in the ’80s and chess engines in the ’90s.
Now it’s neural networks and self-driving cars.
The most pragmatic working definition might be this: AI is machines doing things we currently associate with human-level intelligence. This of course implies that our perception of what is and is not AI will keep changing.

There is at best machine learning (ML) hype and at worst an ML bubble.

During the last few years, business leaders like Elon Musk and ex-Cambridge Analytica CEO Alexander Nix have greatly overstated the current capabilities of AI (and in particular of neural networks) to advance their business agendas, without mentioning the following key limitations:

Lack of explainability: we don’t really know why a neural network produces a given output. This makes it risky to apply neural networks in any scenario in which safety and reliability are important. “I don’t know why it works, but it works” is simply not good enough.

Lack of generalizability: as of now, machine learning systems can only solve highly specific tasks. We have no clue how to achieve artificial general intelligence.

We can expect that AI research will fail to deliver on at least some of the promises grandstanders have made, and that the hype will slow down.
In the worst case, funding will dry up despite continuous progress in AI, simply because public expectations are not met.

Non-ML AI suffers from brain drain.

AI is more than just machine learning.
Traditional AI sub-fields like knowledge representation and reasoning have a track record of providing industry-relevant research findings (for example: the basis for business rule engines) and complement recently popular methods like neural networks by providing mathematical rigor and traceability.
However, because of the machine learning hype, traditional AI has a recruitment problem: many young, high-potential academics focus their intellectual energy on machine learning, and often play it safe by publishing tweaks to existing neural network architectures before switching to industry for a bigger paycheck.

AI opportunities

There is continuous progress in machine learning and neural networks.

While the progress on AI applications like self-driving cars falls short of what some business leaders have promised, there are continuous advances in machine learning and neural networks, for example by devising networks that autonomously filter out insignificant features to improve performance.
Even if the AI hype slows down soon because of false industry promises, there are no signs that research progress has hit a wall.

While it is still unclear how these research findings can be transferred to practice, they are at least promising first steps towards solving the key problems of state-of-the-art machine learning techniques.
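The advances mentioned above are specific to neural network research, but the underlying idea of automatically suppressing insignificant features has a classical, easy-to-try analogue: L1 regularization, which drives the weights of uninformative inputs to zero. The following sketch (pure Python, with made-up toy data, not the conference work itself) fits an L1-regularized linear model via proximal gradient descent:

```python
def lasso_fit(X, y, lam=0.1, lr=0.01, epochs=2000):
    """L1-regularized linear regression via proximal gradient descent.
    The L1 penalty drives the weights of uninformative features to
    (exactly) zero, i.e. the model 'filters out' insignificant inputs."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(epochs):
        # gradient of the mean squared error
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            err = sum(wj * xij for wj, xij in zip(w, xi)) - yi
            for j in range(d):
                grad[j] += 2 * err * xi[j] / n
        # gradient step followed by soft-thresholding (proximal step for L1)
        for j in range(d):
            wj = w[j] - lr * grad[j]
            w[j] = max(abs(wj) - lr * lam, 0.0) * (1 if wj > 0 else -1)
    return w

# Hypothetical toy data: the target depends only on the first feature
# (y = 3 * x0); the second feature is pure noise.
X = [[1.0, 0.7], [2.0, -0.3], [3.0, 0.2], [4.0, -0.8], [5.0, 0.1]]
y = [3.0, 6.0, 9.0, 12.0, 15.0]
w = lasso_fit(X, y)
```

After fitting, the weight of the noise feature ends up at (essentially) zero while the informative feature keeps a weight close to 3, slightly shrunk by the penalty. The same principle, learned end-to-end rather than as an explicit penalty, is what the network architectures mentioned above aim for.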

Funding for AI research is abundant.

Both industry and government organizations are (for now) throwing money at anything AI.
While some funding initiatives throw pearls before swine, the abundance of funding allows researchers and entrepreneurs to operate with a large degree of creative freedom, which will hopefully facilitate innovation.

Industry-relevance

Although academic computer science conferences are notorious for math-loaded slides, discussions about pointless details, and esoteric presentation topics, it is worth trying to cut through the academic noise, to anticipate relevant trends, and derive action items that help your organization stay competitive.
Below are some examples of what you can do.

Tap the growing AI talent pool as soon as possible.

A product-oriented software engineer needs to care less and less about low-level programming details such as garbage collection (not at all anymore) or which JavaScript features work in both Chrome and Internet Explorer/Edge (rarely).
In contrast, data analysis literacy is becoming a mandatory skill, not only for creating smart products, but also to drive smart product development decisions.

However, the body of relevant data science and machine learning knowledge grows faster than low-level programming knowledge becomes obsolete.
That is why industry demand is growing for computer science (and in particular machine learning) PhDs, who have demonstrated an eagerness for continuous learning and have learned to stay afloat in a competitive environment.

Of course, finding PhDs who are pragmatic enough to be good software engineers is hard, and the big software companies pick the best candidates straight from the academic conferences.
Instead of competing for machine learning PhDs, you can simply allow your most motivated engineers to spend some of their work time learning about the benefits AI can have for your product (or ask your boss to give you this opportunity).

From all things AI, apply the simple stuff that works.

AI researchers often present findings that are a far cry from what is applicable in practice.
For example, reinforcement learning researchers know how to create a machine that beats the world’s best human player at Go.
But does this really matter for your business?

A good AI researcher or engineer knows that there are by now reinforcement learning algorithms suitable for production deployment, in particular multi-armed bandits, which can power recommender systems or serve as a more efficient alternative to A/B testing. At ICML, Spotify presented a large-scale trial of multi-armed bandits as the basis for explainable recommender systems, with overwhelmingly positive (not yet published) results.
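To make the A/B-testing comparison concrete, here is a minimal epsilon-greedy multi-armed bandit sketch (plain Python, with hypothetical click-through rates; Spotify's actual system is far more sophisticated). Unlike a classic A/B test, which splits traffic evenly for the whole trial, the bandit shifts traffic toward the better variant while the experiment is still running:

```python
import random

def epsilon_greedy_bandit(reward_fns, epsilon=0.1, rounds=10000, seed=42):
    """Epsilon-greedy multi-armed bandit: explore a random arm with
    probability epsilon, otherwise exploit the arm with the best
    observed mean reward so far."""
    rng = random.Random(seed)
    counts = [0] * len(reward_fns)
    means = [0.0] * len(reward_fns)
    total = 0.0
    for _ in range(rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(len(reward_fns))                      # explore
        else:
            arm = max(range(len(reward_fns)), key=lambda a: means[a])  # exploit
        reward = reward_fns[arm](rng)
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]  # incremental mean
        total += reward
    return means, counts, total

# Two hypothetical recommendation variants; variant B converts more often.
variants = [
    lambda rng: 1.0 if rng.random() < 0.05 else 0.0,  # variant A: 5% CTR
    lambda rng: 1.0 if rng.random() < 0.08 else 0.0,  # variant B: 8% CTR
]
means, counts, total = epsilon_greedy_bandit(variants)
```

Because most traffic is routed to the winning variant as soon as it pulls ahead, the bandit pays a much smaller "regret" (lost conversions) than a 50/50 A/B split of the same length.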

Distinguish visionaries from snake oil salesmen.

Nowadays, business leaders often join a race to the bottom when reacting to the overconfident promises of a competitor.
I recently experienced this when I attended a (public) engineering presentation of an automobile OEM on autonomous driving.
The presentation’s last slide showed a fairly bold timeline of when the OEM would provide equipment for fully autonomous cars.
When I questioned the timeline, the presenter answered: “I can’t comment. Marketing told me to include this slide.”

Having good knowledge of the current bleeding edge of AI allows you to tell whether a competitor is bluffing or sincere in their announcements. So even if you join the race anyway, you will at least know that you are lying ;-)