6 Trends and Takeaway Messages from the 2018 AI World Conference

AI World speakers described bringing human traits, ethics, and lots more data to machine learning applications.

BOSTON — Cold weather this week didn’t matter to the crowds at the AI World conference here, as activity around artificial intelligence continues to heat up. Over three days, more than 2,200 attendees learned about the latest advances in machine learning, deep learning, and the industries being affected by AI.

1. Adding humanity to AI

During several plenary keynotes, speakers noted that for AI to advance, more “human traits” need to be added to the algorithms. Andrew Lo, a professor at the MIT Sloan School of Management, noted that a student of his referred to this as “artificial stupidity,” but softened it by saying he prefers the term “artificial humanity.”

Danny Lange from Unity discusses reinforcement learning models at the AI World conference in Boston. Source: AI World

In his AI World session covering algorithmic models of investor behavior, Lo noted that human decision-making relies heavily on emotions such as fear, greed, and anxiety, and that those traits would need to be factored into any AI algorithms.

Citing research on the psychophysiology of professional investors, he noted that the most successful trades occurred when skin conductivity was high, indicating tension on the investors’ part. Lo also noted that professional investors were better able to “move on” from losses than amateur investors were.

Danny Lange, vice president of AI and machine learning at Unity, talked about adding human traits like curiosity to reinforcement learning models to achieve more successful results.

When researchers trained machine learning algorithms toward a specific goal — such as earning rewards for finding something in a maze of rooms — better results occurred only once the system was also programmed to explore.

However, Lange also noted that too much curiosity would be a problem, comparing it to someone watching Netflix on a TV and just continuing to watch show after show. He said that algorithms would need to add traits like impatience and boredom to offset an AI’s curiosity.
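Lange didn’t share an implementation, but the idea of an exploration bonus that fades with familiarity can be sketched with count-based curiosity layered on tabular Q-learning. Everything below — the two-room setup, the bonus formula, and all parameter values — is an illustrative assumption, not Unity’s actual approach:

```python
from collections import defaultdict

def curiosity_bonus(visits, state, scale=0.5):
    # Count-based intrinsic reward: unseen states pay the most, and the
    # bonus shrinks with each revisit -- a crude form of "boredom."
    return scale / (1.0 + visits[state])

def update_q(q, visits, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.9):
    # Standard tabular Q-learning update, with the extrinsic reward
    # augmented by a curiosity bonus for the state just reached.
    visits[next_state] += 1
    total = reward + curiosity_bonus(visits, next_state)
    best_next = max(q[(next_state, a)] for a in actions)
    q[(state, action)] += alpha * (total + gamma * best_next - q[(state, action)])

q = defaultdict(float)
visits = defaultdict(int)
actions = ["left", "right"]
update_q(q, visits, "A", "right", 0.0, "B", actions)
first_bonus = curiosity_bonus(visits, "B")   # after one visit
update_q(q, visits, "A", "right", 0.0, "B", actions)
second_bonus = curiosity_bonus(visits, "B")  # novelty has worn off
print(first_bonus > second_bonus)
```

Because the bonus decays as visit counts grow, the agent naturally tires of familiar states — a built-in counterweight to curiosity of the kind Lange described.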

“There’s a lack of formal rigor in understanding deep neural networks,” observed Nicholas Roy, a professor at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). MIT’s “Quest for Intelligence” combines the efforts of CSAIL students with the expertise of brain scientists, linguists, and social scientists to better understand intelligence itself, he said.

“It’s a core set of people looking at fundamental questions,” added Cynthia Breazeal, director of the personal robotics group at the MIT Media Lab and associate director of the Bridge for Strategic Initiatives in MIT’s Quest for Intelligence.

2. AI models will enhance software, back-office functions

At a session focusing on where investment funds are flowing, AI World speakers identified two specific areas of growth for the next few years. First, machine learning models will be used to enhance existing software services, making them more efficient. With so many companies relying on cloud-based software services, efficiencies will improve as AI is added to the software.

Second, many back-office functions are being automated through AI and machine learning. Routine tasks such as bookkeeping, accounting, and expense management will become automated. One panelist noted that 80% of a bookkeeper’s job consists of routine tasks.

Like many in the robotics space, AI World presenters didn’t say whether AI will replace humans in those jobs. Instead, they claimed that those workers would be freed up to handle tasks that can’t be automated.

“Medicine is likely to see the biggest transformation in the near future,” said CSAIL’s Roy.

“I don’t believe that there will be no doctors in 30 years,” said John Mattison, chief medical information officer at Kaiser Permanente. “Even if 95% of today’s work may be automated, that will liberate humans for empathy … to do the things that got them into medicine in the first place.”

3. AI will need to rely on other AI

As machine learning models move closer to 100% confidence in their decision-making, more and more data is needed to feed those algorithms. Interestingly, one AI World speaker noted that when he asked his engineers how much data would be needed to fix operational errors, their answer was, “We don’t know.”

Nathaniel Gates, CEO of Alegion, said that as models get closer to 100% confidence, humans will no longer be able to supervise the training of the models, and other AI models will be needed to assist the first.

Without sounding the doom-and-gloom alarm often heard when people talk about “the singularity,” he said, machines talking with other machines will help those models approach 100% confidence.

Gates also showed a chart that listed the confidence level needed to deploy specific AI models:

Model / application          Confidence needed to deploy
Advertising sentiment        60%
Customer service chatbot     80%
Diagnostic medicine          90%
AI-augmented 911             95%
Autonomous vehicles          99%
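In practice, thresholds like those in Gates’s chart often amount to a simple gating rule: act on a prediction automatically only when the model’s confidence clears the bar for that application, and otherwise route it to a human. A minimal sketch — the routing function and dictionary keys are illustrative assumptions; only the threshold values come from the chart:

```python
# Confidence thresholds taken from Gates's AI World chart.
DEPLOY_THRESHOLDS = {
    "advertising_sentiment": 0.60,
    "customer_service_chatbot": 0.80,
    "diagnostic_medicine": 0.90,
    "ai_augmented_911": 0.95,
    "autonomous_vehicles": 0.99,
}

def route_prediction(application, confidence):
    # Act automatically only above the application's bar;
    # everything else is escalated to a human reviewer.
    threshold = DEPLOY_THRESHOLDS[application]
    return "automate" if confidence >= threshold else "human_review"

print(route_prediction("customer_service_chatbot", 0.85))  # automate
print(route_prediction("diagnostic_medicine", 0.85))       # human_review
```

The same 85%-confident prediction is good enough for a chatbot but not for diagnostic medicine, which is the point of setting thresholds per application rather than per model.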

“For good decisions, you want to avoid expertise bias and not need billions of images,” said Heather Ames Versace, chief operating officer of Neurala, whose Brain Builder product is designed to accelerate AI development by tagging and annotating simultaneously. “You need the right data in the right way.”

“Humans are still involved more in robotics development than in AI,” said Phil Duffy, vice president of innovation at Brain Corp. “And in usage, Brain designed its robots used by Walmart to include janitors in the operational loop. Keeping humans in the loop helps with adoption.”

4. In the world of IoT, AI means optimization

In a discussion about using deep learning in industrial applications, AI World panelists described using neural networks to help optimize energy usage within “smart facilities.” They also mentioned adding sensors and retrofitting older buildings to take advantage of the latest technologies.

While optimization of a heating or cooling system could mean simply shutting it off at certain times of day, presenters also mentioned the need for “comfort.” This led to discussion of occupancy levels and of where people were located in a building, so that occupants don’t find an office too hot or too cold.
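The simplest version of such occupancy-aware optimization is a setpoint rule that spends energy on comfort only when people are present. The rule and temperatures below are illustrative assumptions, not from any panelist:

```python
def hvac_setpoint(occupancy, comfort=21.0, setback=16.0):
    # Hold the comfort temperature only while the zone is occupied;
    # otherwise drop to an energy-saving setback. A real controller
    # would also pre-heat ahead of predicted arrivals.
    return comfort if occupancy > 0 else setback

print(hvac_setpoint(occupancy=12))  # office hours
print(hvac_setpoint(occupancy=0))   # overnight
```

A neural network enters the picture when occupancy must be predicted rather than measured, so the building can warm up before people arrive instead of after.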

One AI World speaker said deep learning is becoming part of what he called “IoP” – the Internet of People. By giving employees a wearable tracker, employers could track where workers moved during the day and what types of actions they were doing.

Through this analysis, retailers and warehouse companies could determine if shorter employees were trying to reach products located on higher shelves, indicating a need to rearrange operations for better efficiency.

Session participants also mentioned that “digital twin” technology wasn’t just for manufacturing. Simulation software can be used to make a digital twin of an entire building or even a process.

One speaker mentioned that a logistics company was using a digital twin at its innovation center to test the designs of a new distribution center, adding simulations such as what would happen to its processes if products arrived late.
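A digital twin answers that kind of what-if question by re-running a process model under perturbed inputs. A toy sketch of a single unloading dock shows the pattern; the process and all numbers here are invented for illustration, not the logistics company’s model:

```python
def simulate_dock(arrivals, unload_time=2):
    # Single-bay unloading dock: trucks are served in arrival order,
    # and each occupies the bay for unload_time hours. Returns the
    # finish time of every truck.
    free_at = 0
    finish_times = []
    for arrival in sorted(arrivals):
        start = max(arrival, free_at)
        free_at = start + unload_time
        finish_times.append(free_at)
    return finish_times

on_time = simulate_dock([0, 1, 2, 3])  # trucks arrive as planned
delayed = simulate_dock([0, 1, 2, 9])  # last truck arrives late
print(on_time[-1], delayed[-1])
```

Re-running the same model with one delayed arrival immediately shows the knock-on effect on when the dock finally clears, without touching the physical facility.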

5. AI at the edge

Computing at the edge of networks will continue to grow in importance, especially for devices and machines with limited connectivity to cloud-based AI processing. Several companies, including Germany’s Bragi, displayed edge AI products and services at the show.

However, some AI World attendees noted that processing at the edge and the Industrial Internet of Things (IIoT) still have limitations, even with 5G connectivity approaching.

“Businesses assume that big data is all together and ready for analysis, but it’s not static; it’s a living, breathing thing,” said Raj Minhas, vice president and director of the Interactions and Analytics Lab at PARC.

Autonomous mobile robots indoors cannot use GPS for localization and positioning like self-driving cars, Duffy told Robotics Business Review. As a result, they need to map differently, stop instantly, and “can’t always solve edge cases from remote observation,” he said. “Indoor navigation is still a complex problem.”

6. Ethics a consideration at AI World, but standards also important

Several AI World speakers said the “explainability of AI” would be big in the next few years – not just for legal teams, but to make sure that humans understand why certain decisions are being made. In the healthcare space, a few panelists mentioned that the “why” of a decision would matter more to doctors than the “what” of the decision or treatment.

MIT’s Lo mentioned that humans often make decisions based on demographic data points, but those decisions often carry innate biases and rest on very sparse data. “It’s human nature that we are able to make split-second decisions based on so little data,” he said.

The “Morning Coffee” panel on “The Future of AI: Views From the Frontier” also discussed the goal of “democratizing” AI to non-Ph.D.s, as well as concerns about how to build systems that respect privacy, particularly of children, as systems such as Amazon Alexa and Google Home constantly gather user data.

“Informed consent and transparency are at the core of ethical AI,” said MIT CSAIL’s Roy.

“We need to involve all groups — data science and ethics — in interdisciplinary efforts,” said Arif Virani, chief operating officer at DarwinAI, in another panel.

“We need best practices for exposing and sharing flaws,” said Matthew Carroll, CEO at Immuta. “It’s not about government regulations but how to build standards.”

“Regulations should be at the level of the outcome,” said PARC’s Minhas, referring to state and federal rules for autonomous vehicles and AI, which lag behind technology innovations. As an example of surprising learned behavior, he described self-driving cars turning left more often under purple skies, ultimately because they were turning into home driveways at sunset.

“We need to move data science from skunk works to an engineering discipline, with guidelines and best practices,” Minhas said.

Avoiding negative bias is also important as AI is increasingly used in healthcare, insurance, lending, and criminal justice, noted Abby Everett Jaques, a postdoctoral associate in the MIT Department of Linguistics & Philosophy.

“Ethics should not be an add-on at the end; it should be part of the collaborative development process,” she said. “Little projects seem benign, but we should be aware of how they will connect with the larger ecosystem.”

“Instead of trying to understand a deep neural network from Layer 132, we should test AI like a human on a job interview,” said Neurala’s Versace. “You wouldn’t give a job candidate an MRI. Based on the data inputs, what outcomes can it produce?”

“Government has a lot of learning to do, and some vendors have to stop overselling AI,” said Versace. “We’re still an early-stage industry, and we need to work together.”