Is there anything left unsaid about social media research and marketing?: Part 3

It took a bit longer than anticipated to write Part 3 of this series of posts about the content proliferation around social media research and social media marketing. In the previous two parts, we discussed Enterprise Feedback Management (December 2013) and short, event-driven intercept surveys (February 2014). This post is about sentiment and semantic analysis: two interrelated terms in the “race” to reach the highest sentiment accuracy a social media monitoring tool can achieve. From where we sit, this is a race DigitalMR is running on its own, competing against its own best score.

Stanford University, the leading academic institution in this field, announced a few months ago that it had reached 80% sentiment accuracy; it has since raised that figure to 85%, but only in the English language, and based on comments in a single vertical, namely movies (a rather straightforward case of “I liked the movie” or “I did not like it, and here is why…”). That is not to say there will be no fence-sitters when it comes to movies, but even neutral comments in this vertical carry less ambiguity than those in other product categories or subjects. The DigitalMR team of data scientists has been consistently achieving over 85% sentiment accuracy in multiple languages and multiple product categories since September 2013, when a few brilliant scientists (mainly engineers and psychologists) cracked the code of multilingual sentiment accuracy!

Let’s dive into sentiment and semantics to take a closer look at why these two types of analysis are important and useful for next-generation market research.

Sentiment Analysis

The sentiment accuracy of most automated social media monitoring tools (we know of about 300 of them) is lower than 60%. This means that if you take 100 posts the tool classifies as positive about a brand, only 60 of them will actually be positive; the rest will be neutral, negative or irrelevant. That is little better than the flip of a coin, so why do companies subscribe to SaaS tools with such unacceptable data quality? Does anyone know? The caveat around sentiment accuracy is that the maximum achievable accuracy for any automated method is not 100% but closer to 90%, or even less. This is because when humans are asked to annotate the sentiment of a set of comments, they will disagree at least one time in ten. DigitalMR has achieved 91% in the German language, but that accuracy was established by three specific DigitalMR curators; had three different people curated the same comments, we might have arrived at a different figure. Sarcasm, and ambiguity more generally, is the main reason for this disagreement. Some studies of large numbers of tweets, such as the one described in the paper “Semi-Supervised Recognition of Sarcastic Sentences in Online Product Reviews”, have found that fewer than 5% of the tweets reviewed were sarcastic. The question is: does it make sense to solve the problem of sarcasm in machine learning-based sentiment analysis? We think it does, and we find it exciting that no one else has solved it yet.
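To make the accuracy figures above concrete, here is a minimal sketch of how sentiment accuracy can be measured: compare a tool’s predicted labels against human-curated labels for the same posts and compute the share that agree. The labels below are invented for illustration, not real DigitalMR data.

```python
# Hypothetical example: a tool's predicted sentiment labels versus
# human-curated "ground truth" labels for the same ten posts.
predicted = ["positive", "positive", "neutral", "negative", "positive",
             "neutral", "positive", "negative", "positive", "positive"]
human     = ["positive", "negative", "neutral", "negative", "positive",
             "positive", "positive", "negative", "irrelevant", "positive"]

# Accuracy here is simply the fraction of posts where the two labels agree.
matches = sum(p == h for p, h in zip(predicted, human))
accuracy = matches / len(human)
print(f"Sentiment accuracy: {accuracy:.0%}")  # 7 of 10 labels agree: 70%
```

Note that the human labels themselves are only as reliable as the curators, which is exactly why the practical ceiling sits around 90% rather than 100%.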

Automated sentiment analysis allows us to create structure around large amounts of unstructured data without having to read each document or post one by one. We can analyse sentiment by brand, topic, sub-topic, attribute, topic within brand, and so on; this is where social analytics becomes a very useful source of insights on brand performance. The WWW is the largest focus group in the world, and it is always on. We just need a good way to turn qualitative information into robust, contextualised quantitative information.
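The kind of roll-up described above can be sketched in a few lines: once each post carries a brand, topic, and sentiment label, aggregating them is straightforward. The brands, topics, and posts below are hypothetical stand-ins for whatever a listening pipeline would actually emit.

```python
from collections import Counter, defaultdict

# Hypothetical labelled posts: (brand, topic, sentiment) triples, as an
# automated classifier might produce them from raw social media text.
posts = [
    ("BrandA", "price",   "negative"),
    ("BrandA", "quality", "positive"),
    ("BrandA", "price",   "negative"),
    ("BrandB", "price",   "positive"),
    ("BrandB", "service", "neutral"),
]

# Roll up sentiment counts per (brand, topic) pair, turning unstructured
# posts into a structured table without reading them one by one.
rollup = defaultdict(Counter)
for brand, topic, sentiment in posts:
    rollup[(brand, topic)][sentiment] += 1

for (brand, topic), counts in sorted(rollup.items()):
    print(brand, topic, dict(counts))
```

The same pattern extends to sub-topics and attributes by simply widening the grouping key.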

Semantic Analysis

Some describe semantic analysis as “keyword analysis”, which could also be referred to as “topic analysis”; as described in the previous section, we can even drill down to report on sub-topics and attributes.

Semantics is the study of meaning in language. As researchers, we need to provide context alongside the sentiment, because without the right context the intended meaning can easily be misunderstood. Ambiguity is what makes this type of analytics difficult: when we say “apple”, do we mean the brand or the fruit? When we say “mine”, do we mean the possessive pronoun, the explosive device, or the place from which we extract useful raw materials?
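The “apple” example can be illustrated with a deliberately simple disambiguation sketch: score a post against cue words for each sense and pick the sense with more hits. The cue sets are illustrative assumptions, not a production lexicon, and real systems use far richer context models.

```python
# Toy word-sense disambiguation for the token "apple".
# Cue-word sets are invented for illustration only.
BRAND_CUES = {"iphone", "ipad", "mac", "store", "launch"}
FRUIT_CUES = {"eat", "juice", "tree", "pie", "ripe"}

def disambiguate_apple(text: str) -> str:
    """Guess which sense of 'apple' a post uses by counting cue words."""
    words = set(text.lower().split())
    brand_score = len(words & BRAND_CUES)
    fruit_score = len(words & FRUIT_CUES)
    if brand_score > fruit_score:
        return "brand"
    if fruit_score > brand_score:
        return "fruit"
    return "ambiguous"

print(disambiguate_apple("I queued at the apple store for the iphone launch"))
print(disambiguate_apple("this apple pie juice is ripe"))
```

When neither sense wins, the post stays “ambiguous”, which is precisely the residue that drags automated accuracy below the human ceiling.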

Semantic analysis can help:

extract relevant and useful information from large bodies of unstructured data, i.e. text;

find an answer to a question without having to ask anyone;

discover the meaning of colloquial speech in online posts; and

uncover the specific meanings of words from foreign languages mixed with our own.
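The first point above, extracting useful information from unstructured text, can be sketched as a crude keyword-frequency pass: count the non-trivial words across posts and surface the most common ones as candidate topics. The stopword list and posts are hypothetical, and real topic analysis is considerably more sophisticated.

```python
import re
from collections import Counter

# A tiny illustrative stopword list; real lexicons are much larger.
STOPWORDS = {"the", "a", "and", "is", "of", "to", "in", "it", "i"}

def top_keywords(posts, n=3):
    """Count non-stopword tokens across posts: a crude stand-in for topic extraction."""
    tokens = []
    for post in posts:
        tokens += [t for t in re.findall(r"[a-z']+", post.lower())
                   if t not in STOPWORDS]
    return Counter(tokens).most_common(n)

posts = [
    "the battery life of this phone is great",
    "battery drains fast and the screen is dim",
    "great screen, shame about the battery",
]
print(top_keywords(posts))  # "battery" emerges as the dominant topic
```

Even this naive pass surfaces “battery” as the recurring theme, which is the kind of signal a drill-down into sub-topics and attributes would then refine.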

What does high-accuracy sentiment and semantic analysis of social media listening posts mean for market research? It means that a US$50 billion industry can finally divert some of its spending from asking questions of a sample, using long and boring questionnaires, to listening to the unsolicited opinions of the whole universe (census data) of a product category’s users.

This is big data analytics at its best, and once there is confidence that sentiment and semantics are accurate, the sky is the limit for social analytics. Think about the detection and scoring of specific emotions, not just varying degrees of sentiment; think automated relevance ranking of posts so they can be allocated to vertical reports correctly; think rating purchase intent and thus identifying hot leads. After all, accuracy is arguably the main reason Google beat Yahoo to become the most used search engine in the world.