Text analytics application areas typically fall into one or more of three broad, often overlapping domains:

Understanding the opinions of customers, prospects, or other groups. This can be based on any combination of documents the user organization controls (email, surveys, warranty reports, call center logs, etc.) or public-domain documents such as blogs, forum posts, and tweets. The former case is usually called Voice of the Customer (VotC), while the latter is Voice of the Market (VotM).

Detecting and identifying problems. This can happen across many domains — VotC, VotM, diagnosing equipment malfunctions, identifying bad guys (from terrorists to fraudsters), or even getting early warnings of infectious disease outbreaks.

For several years, I’ve been distressed at the lack of progress in text analytics or, as it used to be called, text mining. Yes, the rise of sentiment analysis has been impressive, and higher volumes of text data are being processed than ever before. But otherwise, there’s been a lot of the same old, same old. Most actual deployed applications of text analytics or text mining go something like this:

A bunch of documents are analyzed to ascertain the ideas expressed in them.

A count is made as to how many times each idea turns up.

The application user notices any surprisingly large numbers and, as a result, pays attention to the corresponding ideas.
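The three steps above can be sketched in a few lines. This is a deliberately minimal illustration, not any vendor's actual pipeline: `extract_ideas` is a toy stand-in (real products do concept extraction, not word splitting), and the spike threshold and minimum count are assumed parameters.

```python
from collections import Counter

def extract_ideas(doc):
    # Toy stand-in for real concept extraction: the set of lowercase tokens.
    return set(doc.lower().split())

def flag_spikes(current_docs, baseline_counts, ratio=3.0, min_count=5):
    """Count idea occurrences across documents and flag surprisingly large ones."""
    counts = Counter()
    for doc in current_docs:
        counts.update(extract_ideas(doc))
    flagged = {}
    for idea, n in counts.items():
        base = baseline_counts.get(idea, 1)  # avoid division by zero for new ideas
        if n >= min_count and n / base >= ratio:
            flagged[idea] = n
    return counts, flagged
```

The "surprisingly large" test here is a crude ratio against a baseline period; real deployments would use something more statistically careful.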

Often, it seems desirable to integrate text analytics with business intelligence and/or predictive analytics tools that operate on tabular data. Even so, such integration is most commonly weak or nonexistent. Apart from the usual reasons for silos of automation, I blame this lack on a mismatch in precision, among other reasons. A 500% increase in mentions of a subject could be simple coincidence, or the result of a single identifiable press article. In comparison, a 5% increase in a conventional business metric might be much more important.
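The precision mismatch is easy to see with a little arithmetic. In this sketch (my own illustration, with made-up numbers and a deliberately rough Poisson-style significance check), a 500% jump in a tiny mention count fails to clear conventional significance, while a 5% move in a large business metric clears it easily.

```python
import math

def pct_change(before, after):
    return 100.0 * (after - before) / before

def rough_z(before, after):
    # Crude check: treat both counts as Poisson, z-score of the difference.
    return (after - before) / math.sqrt(before + after)

# 500% jump on tiny mention counts: 1 -> 6 mentions
mention_z = rough_z(1, 6)          # about 1.89, below the usual 1.96 bar
# 5% move on a large metric: 100,000 -> 105,000 transactions
metric_z = rough_z(100000, 105000)  # about 11, unambiguous
```

The mention spike, despite its dramatic percentage, is indistinguishable from noise; the modest-looking metric change is not.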

But in fairness, the text analytics innovation picture hasn’t been quite as bleak as what I’ve been painting so far. While standalone, passively reported text analytics is indeed the baseline, there are some interesting exceptions. For example:

I once confirmed that SPSS customer Cablecom's statistical models for churn and the like absolutely included text data; Cablecom even assigned different weights to the same apparent level of emotion depending on whether the text was in German, French, or Italian. Vertica recently told me of a Vertica/Hadoop customer doing something similar, except for the multilingual aspect. And the end of a 2008 SAS-based paper makes similar claims.
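To make the language-weighting idea concrete: the sketch below is purely hypothetical (Cablecom's actual weights and model form are not public), but it shows the shape of the trick, namely that the same raw sentiment score contributes differently to a churn feature depending on the language it was expressed in.

```python
# Assumed, illustrative weights -- NOT Cablecom's actual values. The premise
# is that the same apparent level of emotion calibrates differently by language.
LANGUAGE_WEIGHTS = {"de": 1.0, "fr": 1.3, "it": 1.5}

def churn_text_feature(sentiment_score, language):
    """Scale a raw sentiment score into a language-adjusted model feature."""
    return sentiment_score * LANGUAGE_WEIGHTS.get(language, 1.0)
```

The adjusted feature would then sit alongside conventional tabular predictors (tenure, billing history, etc.) in the churn model.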

There long* have been some examples of fact extraction that don’t really fit into my three buckets above. For example, researchers mine collections of articles to try to determine biochemical or biological pathways that would not be apparent from examining single research studies alone.

It also has long* been the case that some bad-guy-finding applications — especially in the anti-terrorism area — used text analytics to populate state-of-the-art graph-oriented data analysis tools.

*When it comes to text analytics, “long” means “at least for the past several years.”

In more recent examples:

Greenplum built a document recommender for law firms that does hard-core statistical analysis to determine which 0.1% of a document set lawyers might actually want to see, and which then learns from users’ feedback after they respond to initial result sets.
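Greenplum hasn't published the details, but the feedback-learning loop described above can be sketched generically. Everything here is my own assumption: a toy term-weight model nudged up or down by relevance judgments, then used to surface the top sliver of a document set.

```python
from collections import defaultdict

class FeedbackRanker:
    """Hypothetical sketch: score documents by learned term weights,
    and nudge those weights based on reviewers' relevance feedback."""

    def __init__(self, lr=0.1):
        self.weights = defaultdict(float)
        self.lr = lr

    def score(self, doc):
        return sum(self.weights[w] for w in set(doc.lower().split()))

    def feedback(self, doc, relevant):
        # Reward terms in documents judged relevant, penalize the rest.
        delta = self.lr if relevant else -self.lr
        for w in set(doc.lower().split()):
            self.weights[w] += delta

    def top_fraction(self, docs, fraction=0.001):
        # Surface the top sliver (e.g., 0.1%) of the set, highest score first.
        k = max(1, int(len(docs) * fraction))
        return sorted(docs, key=self.score, reverse=True)[:k]
```

A real system would use far richer features and a proper learning algorithm, but the loop — rank, collect judgments, re-rank — is the point.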

And in one example that did not actually get into production, a very large social networking company correlated word usage (e.g., choice among different synonyms) against user characteristics such as age and gender.
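The synonym-choice correlation is straightforward to tabulate. This sketch is entirely my own illustration (the synonym set and the group labels are assumptions, not anything the company disclosed): it builds a contingency table of synonym usage by user group, which could then feed a standard association test.

```python
from collections import Counter, defaultdict

# Assumed, illustrative synonym set for one concept.
SYNONYMS = {"awesome", "great", "cool"}

def synonym_usage_by_group(posts):
    """posts: iterable of (group, text) pairs, e.g. ("18-24", "that was awesome").
    Returns {group: Counter of synonym choices}."""
    table = defaultdict(Counter)
    for group, text in posts:
        for word in text.lower().split():
            if word in SYNONYMS:
                table[group][word] += 1
    return table
```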

Finally, there are some applications that, while fitting the standard template, just strike me as getting to unusually sophisticated levels of analysis. For example, Vertica told me of another Vertica/Hadoop case where VotM document analysis is carried out to the level of observing the order in which brand names appear, adjusted for whether the list was simply alphabetical.
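The alphabetical-list adjustment amounts to one guard clause. This is my own minimal reading of the technique, not Vertica's customer's actual logic: treat the first-mentioned brand as a (weak) preference signal, unless the ordering is indistinguishable from alphabetical.

```python
def brand_order_signal(brands_in_order):
    """Return the first-mentioned brand as a possible preference signal,
    or None if the mention order is just alphabetical (likely no signal)."""
    if brands_in_order == sorted(brands_in_order, key=str.lower):
        return None
    return brands_in_order[0]
```

Note the conservative bias: a genuinely preferred brand that happens to sort first alphabetically gets discarded too, which is presumably the acceptable error direction here.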



