Intel’s Sentiment Algorithm Needs Less Training Data

George Leopold


Intel’s AI lab has released an open-source version of a sentiment analysis algorithm designed to boost natural language processing (NLP) applications, which currently struggle to scale across different domains such as restaurant or hotel reviews.

Sentiment analysis is used to glean subjective information from text. That task is made easier when labeled training data is available. The chip maker’s Aspect-Based Sentiment Analysis (ABSA) algorithm released in April also addresses the shortfall in annotated training data required for commercial NLP deployments.

ABSA refers to the machine learning task of extracting aspects, or attributes, of a given domain. In a common example such as a restaurant review, the restaurant is the domain and specific aspects would include the menu, the quality of the food and the attentiveness of the wait staff.
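As a toy illustration of aspect extraction (not Intel's implementation), aspect terms can be pulled from a review by matching tokens against a small, hypothetical domain lexicon:

```python
# Toy sketch of aspect extraction via lexicon lookup.
# The lexicon below is hypothetical; a real system would lemmatize
# and handle multi-word aspects such as "beet salad".
RESTAURANT_ASPECTS = {"menu", "food", "service", "staff", "dessert"}

def extract_aspects(review: str) -> list:
    """Return the aspect terms found in a review."""
    tokens = review.lower().replace(",", " ").replace(".", " ").split()
    return [t for t in tokens if t in RESTAURANT_ASPECTS]

print(extract_aspects("The food was great but the service was slow."))
# -> ['food', 'service']
```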

The problem with current sentiment analysis approaches is that aspects within the same domain are semantically close—food, menu, desserts, etc.—while aspects from different domains are semantically different. The Intel researchers note that supervised learning algorithms can handle this domain sensitivity if labeled data is available for training. But labeled data tends to be sparse, and generating it is labor intensive.

Hence, ABSA is being promoted as a “lightly-supervised” alternative to standard sentiment analysis approaches by enabling “a wide variety of users to generate a detailed sentiment report,” the researchers noted in a blog post released on Thursday (May 2).

The goal is to develop a sentiment analysis framework requiring little or no labeled training data, thereby making it faster and cheaper to deploy commercial NLP systems.

ABSA is touted as improving the ability to extract aspect terms and their “sentiment polarity,” as in whether the service was excellent or lousy.

Intel used a standard training and inference approach in developing its sentiment analysis algorithm. Unlabeled text documents were used in the training phase for a particular “target” domain. The outputs were opinion and aspect “lexicons” from a specific domain. “The user can edit the domain-specific lexicons which makes this a lightly-supervised approach,” the researchers said.
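The training step described above can be sketched very roughly: candidate aspect terms are mined from unlabeled reviews by their co-occurrence with opinion words, producing a lexicon the user can then edit. This is an illustrative sketch only—the seed words and the nearest-noun heuristic are assumptions, and Intel's actual algorithm is more sophisticated:

```python
# Hedged sketch of the "training" phase: mine candidate aspect terms
# from unlabeled text using a seed lexicon of opinion words.
# (Illustrative heuristic; not Intel's implementation.)
SEED_OPINIONS = {"great", "terrible", "excellent", "lousy", "tough", "terrific"}
STOPWORDS = {"was", "is", "were", "the", "a", "very"}

def mine_lexicons(reviews):
    """Return candidate aspect terms with co-occurrence counts."""
    aspects = {}
    for review in reviews:
        tokens = review.lower().strip(".").split()
        for i, tok in enumerate(tokens):
            if tok in SEED_OPINIONS:
                # Assume the aspect is the nearest preceding non-stopword
                # ("the X was great" -> X); a real system would use parsing.
                for j in range(i - 1, -1, -1):
                    if tokens[j] not in STOPWORDS:
                        aspects[tokens[j]] = aspects.get(tokens[j], 0) + 1
                        break
    return aspects

unlabeled = ["The beet salad was terrific.", "The steak was tough.",
             "The service was excellent."]
print(mine_lexicons(unlabeled))
# -> {'salad': 1, 'steak': 1, 'service': 1}
```

Because the output is a plain, human-readable lexicon rather than opaque model weights, a user can correct it by hand—which is what makes the overall approach “lightly supervised.”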

During the inference phase, opinions and aspects generated during model training were combined with an “unseen inference data set” of restaurant reviews. Together, they were used to generate a report compiling negative and positive sentiments about a product or service. (“The beet salad was terrific,” “the steak was tough.”)
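The inference step can likewise be sketched: the mined aspect and opinion lexicons are applied to unseen reviews, and the resulting aspect–opinion pairs are rolled up into a polarity report. The lexicons below are hypothetical stand-ins for what the training phase would produce:

```python
# Hedged sketch of the inference phase: pair aspect terms with opinion
# terms from the (hypothetical) lexicons and compile a sentiment report.
ASPECTS = {"salad", "steak", "service"}
OPINIONS = {"terrific": "positive", "excellent": "positive", "tough": "negative"}

def sentiment_report(reviews):
    """Group (aspect, opinion) pairs by polarity across a set of reviews."""
    report = {"positive": [], "negative": []}
    for review in reviews:
        tokens = review.lower().strip(".").split()
        aspect = next((t for t in tokens if t in ASPECTS), None)
        for tok in tokens:
            if tok in OPINIONS and aspect:
                report[OPINIONS[tok]].append((aspect, tok))
    return report

unseen = ["The beet salad was terrific.", "The steak was tough."]
print(sentiment_report(unseen))
# -> {'positive': [('salad', 'terrific')], 'negative': [('steak', 'tough')]}
```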

“The algorithm does not require the training of a new model for each domain and can continuously learn from new data coming in,” Intel said.

Along with reducing the requirement for labeled training data and spanning different domains, the Intel researchers said the ABSA algorithm also represents an NLP approach in which a model can explain how it arrived at a conclusion or a recommendation.

Intel noted that emerging NLP applications illustrate the shift to “large pre-trained models with relatively small amounts of data instead of the traditional approach of training from scratch with large amounts of data per task and then performing inference.”

That bodes well for commercial NLP applications, the researchers added. “We see that the field of computer vision has gone through a set of accelerated adoptions with the rise of transfer learning (e.g. ImageNet), which enabled the productization of the technology. We expect a similar development in the field of NLP.”