The recent Dreamforce conference (i.e., salesforce.com’s extravaganza) focused attention on “the social enterprise” or, more generally, enterprises’ uses of social technology. Salesforce is evidently serious about this push, with development/acquisition investment (e.g., Chatter, Radian6), marketing focus (e.g., much of Dreamforce), and sales effort (Marc Benioff says he got thrown out of a CIO’s office because he wouldn’t stop talking about the “social” subject) all aligned.

It’s a cool story, and worthy of attention. But I’d like to step back and remind us that there are numerous different ways to use social technology in the enterprise, which probably shouldn’t be confused with each other. And then I’d like to discuss one area of social technology that’s relatively new to me: integration between social and operational applications.

Text analytics application areas typically fall into one or more of three broad, often overlapping domains:

Understanding the opinions of customers, prospects, or other groups. This can be based on any combination of documents the user organization controls (email, surveys, warranty reports, call center logs, etc.) or public-domain documents such as blogs, forum posts, and tweets. The former is usually called Voice of the Customer (VotC), while the latter is Voice of the Market (VotM).

Detecting and identifying problems. This can happen across many domains — VotC, VotM, diagnosing equipment malfunctions, identifying bad guys (from terrorists to fraudsters), or even getting early warnings of infectious disease outbreaks.

For several years, I’ve been distressed at the lack of progress in text analytics or, as it used to be called, text mining. Yes, the rise of sentiment analysis has been impressive, and higher volumes of text data are being processed than ever before. But otherwise, there’s been a lot of the same old, same old. Most actual deployed applications of text analytics or text mining go something like this:

A bunch of documents are analyzed to ascertain the ideas expressed in them.

A count is made as to how many times each idea turns up.

The application user notices any surprisingly large numbers and, as a result, pays attention to the corresponding ideas.
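The three steps above can be sketched in a few lines, assuming each document has already been reduced to a list of extracted ideas (the documents and threshold here are purely illustrative):

```python
from collections import Counter

# Each document is represented as the ideas extracted from it
# (in practice these would come from a linguistic-extraction pipeline).
documents = [
    ["battery life", "screen quality"],
    ["battery life", "price"],
    ["battery life"],
    ["price"],
]

# Step 2: count how many documents mention each idea.
idea_counts = Counter()
for doc in documents:
    idea_counts.update(set(doc))  # set() so a document counts once per idea

# Step 3: flag surprisingly large counts for human attention.
threshold = 2
notable = {idea: n for idea, n in idea_counts.items() if n > threshold}
print(notable)  # "battery life" appears in 3 of the 4 documents
```

The whole pattern is little more than counting plus a threshold, which is part of why it has felt like the same old, same old.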

Often, it seems desirable to integrate text analytics with business intelligence and/or predictive analytics tools that operate on tabular data. Even so, such integration is most commonly weak or nonexistent. Apart from the usual reasons for silos of automation, I blame this lack on, among other reasons, a mismatch in precision. A 500% increase in mentions of a subject could be simple coincidence, or the result of a single identifiable press article. In comparison, a 5% increase in a conventional business metric might be much more important.
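The precision mismatch is easy to illustrate with made-up numbers (these figures are purely hypothetical): a huge relative jump in mention counts can rest on a tiny base, while a small move in a business metric rests on a large one.

```python
# Illustrative numbers only.
mentions_last_week, mentions_this_week = 3, 18
revenue_last_week, revenue_this_week = 1_000_000, 1_050_000

mention_change = (mentions_this_week - mentions_last_week) / mentions_last_week
revenue_change = (revenue_this_week - revenue_last_week) / revenue_last_week

print(f"Mentions up {mention_change:.0%} (on a base of {mentions_last_week})")
print(f"Revenue up {revenue_change:.0%} (on a base of {revenue_last_week:,})")
```

Fifteen extra mentions could be one press article echoed a few times; a $50,000 revenue move is much harder to dismiss as noise.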

But in fairness, the text analytics innovation picture hasn’t been quite as bleak as what I’ve been painting so far.

TechTaxi points out that it’s at least theoretically possible to pollute somebody’s web-wide information gathering by polluting the Web itself. (Hat tip to Daniel Tunkelang.) They further assert that this is a relatively near-term threat.

The theory can’t be denied. What’s more, bad actors have other motives to pollute the Web. For example, if they plant favorable automated comments about their own products or unfavorable ones about the competition’s, Voice of the Customer/Market applications will naturally be confused. And if automated reputation-checkers get more prominent, there will be a major incentive to game them, just as there has been for Google’s PageRank. So VotC/VotM market research tools could be polluted as a side effect.

Similarly, if somebody wants to test your e-commerce site by throwing a ton of searches at it, your search logs will lose value.

But disinformation of competitors for the sake of disinformation? Or, as the article suggests, vandalism/extortion? Off the top of my head, I’m not thinking of a serious near-term threat scenario.

I had a brief chat with the Attensity guys at their Teradata Partners Conference booth – mainly CTO David Bean, although he did buck one question to sales chief Jeff Johnson. The business trends story remained the same as it was in June: The sweet spot for new sales remains Voice of the Customer/Voice of the Market, while on-premise/SaaS new-name accounts are split around 50-50 (by number, not revenue).

David’s thoughts as to why the SaaS share isn’t even higher – as it seems to be for Clarabridge* – centered on the point that some customers want to blend internal and external data, and may not want to ship the internal part out to a SaaS provider. Besides, if it’s tabular data, I suspect Attensity isn’t the right place to ship it anyway.

*Speaking of Clarabridge, CEO Sid Banerjee recently posted a thoughtful company update in this comment thread.

When I challenged him on ease of use, David said that Attensity is readying a MicroStrategy-based offering, obviously meant to compete head-on with Clarabridge and any of its perceived advantages.

I emailed a bit with Olivier Jouve last week, and chatted with him at the Text Analytics Summit yesterday. He cited a figure of 2,400 SPSS text mining users (unique user organizations). The majority of these are for a low-cost, desktop-based surveys product. But when I pressed him, he eventually gave a 500-1,000 figure for actual Text Mining for Clementine users.

I was at the Text Analytics Summit yesterday. After the sessions and theoretically* before the drinks, there was a group of subject- or industry-specific “roundtables.” The three best-attended roundtables by far — each with at least 20% of the total roundtable attendees — were on “Voice of the Market”, “Competitive Intelligence”, and “Sentiment Analysis”. (Yes, those are in practice pretty close to being the same thing.) Thus, over half of the show attendees who voted with their feet on a particular subject area of interest picked one in the customer/marketing area.

I chatted a bit with Attensity’s CTO David Bean and sales VP Jeff Johnson yesterday at the Text Analytics Summit. Jeff confirmed what his colleagues had already told me — most of the action is now in Voice of the Customer/Market, he expects a very strong June quarter, etc. But one thing I posted last week wasn’t quite right. Hosted implementations (i.e., SaaS) haven’t yet reached the 50% level at Attensity. However, they are indeed growing fast, and they’re all (or almost all) in the Voice of the Customer/Market area.

Jim D. of UPS asked in the comment thread to the recent Attensity update post how one should decide between Attensity and Clarabridge. I wrote an answer, and then decided to just split it out in a separate post. Here are five ideas about how to pick between Attensity and Clarabridge for the kind of Voice of the Customer/Market application both companies are focusing on.

1. Attensity is an older company than Clarabridge, and is good at more things. Is Clarabridge really good at everything you want them to be?

2. In particular, Attensity has more overall sophistication at linguistic extraction. Do any of the differences matter to you?

3. Both companies are working hard on ease of use, for multiple kinds of user (business user tweaking linguistic rules, IT user, etc.). Whose approach and feature set do you like better?

4. Usually, buying one of these products involves some professional services. Whose organization do you like better?

5. Attensity’s default database schema for its exhaustive extraction is pretty flat and normalized, as befits a happy Teradata partner. Clarabridge’s is more of a star schema, as befits a bunch of ex-MicroStrategy guys. Either can be straightforwardly translated into the other, so you may not care — but do you?
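The point that either layout translates into the other can be sketched with a hypothetical miniature example (the tables and column names below are illustrative, not either vendor’s actual schema). A flat table keeps one wide row per extracted fact; a star schema splits the descriptive attributes into dimension tables keyed from a central fact table.

```python
# Flat layout: one wide row per extracted fact.
flat_rows = [
    {"doc_id": 1, "product": "Widget", "sentiment": "negative"},
    {"doc_id": 2, "product": "Widget", "sentiment": "positive"},
]

# Flat -> star: build dimension tables, then a fact table of surrogate keys.
products = {name: i for i, name in enumerate(sorted({r["product"] for r in flat_rows}))}
sentiments = {name: i for i, name in enumerate(sorted({r["sentiment"] for r in flat_rows}))}
facts = [
    {"doc_id": r["doc_id"],
     "product_key": products[r["product"]],
     "sentiment_key": sentiments[r["sentiment"]]}
    for r in flat_rows
]

# Star -> flat: join the keys back to their dimension values.
product_names = {v: k for k, v in products.items()}
sentiment_names = {v: k for k, v in sentiments.items()}
round_trip = [
    {"doc_id": f["doc_id"],
     "product": product_names[f["product_key"]],
     "sentiment": sentiment_names[f["sentiment_key"]]}
    for f in facts
]
assert round_trip == flat_rows  # the translation loses nothing
```

Since the round trip is lossless, the schema difference is more a matter of tooling and partner fit than of what can be represented.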