Ninety percent or more of the data produced by enterprise-class businesses, and by organizations in the public and not-for-profit sectors, is entirely structured. But content published on web pages is unstructured text data and, therefore, a difficult processing challenge. So should these consumers think differently about how to process this information? According to an article by Kevin Fogarty, published in Computerworld on January 30, 2015, and titled Test shows big data text analysis inconsistent, inaccurate, they should.

The point of Fogarty’s article is to expose the inaccuracy of a key component of most “modern” text analytics tools: a modeling technique called Latent Dirichlet allocation (LDA). Fogarty writes that LDA has recently been shown to be highly inaccurate, at least according to research by Luis Amaral, a physicist at Northwestern University.

Fogarty quotes Amaral admonishing ISVs that offer text data analytics tools to enterprise consumers: come clean about just how useful (or useless) those tools may prove to be before money changes hands, lest the purchasing group end up dissatisfied.

But, at another level, are enterprise consumers already thinking differently about text data? Microsoft SharePoint and SharePoint Online both offer the Managed Metadata Service (MMS), the term store, and taxonomy support. OpenText and Microsoft’s own circle of “Managed Partners” (meaning ISVs who work closely with Microsoft to fill in the blanks on high-value solutions for enterprise consumers) have already come to market with complete solutions to cultivate useful data from content published on SharePoint sites. These platforms are ubiquitous among enterprise consumers: perhaps as much as 80% of Fortune 500 businesses support an instance of one or the other of these solutions.

If the points Fogarty presents in his article prove true, then it should not be much of a stretch for stakeholders in a serious effort to mine high-value business intelligence (BI) from web sites and social media to pursue Microsoft’s solutions as the best available choice.

ISVs looking to challenge Microsoft in this space may want to think seriously about providing a cloud PaaS offering. After all, if these ISVs are already succeeding at this game, why shouldn’t consumers do better by simply “hitching a ride” on these platforms? As Fogarty points out, DIY isn’t cutting it. At least not yet.

An important segment of the productivity theme for computing in the fall of 2014 is composed of big data, business intelligence, and predictive analytics. Early stage ISVs with solutions addressing one of these three high-demand areas will benefit by crafting market messages around the same productivity theme articulated by their more mature ISV siblings.

Here’s why:

Big Data

The information filtering implicit in the productivity theme, as this writer presented in a prior post to this blog, is a mission-critical component of the complete solution. So Big Data methods of collecting, storing, categorizing, and, ultimately, processing information are invaluable to any successful effort to enhance productivity across the entire computing “ecosystem”, from individual users to collections of organizations.

Business Intelligence (BI)

The BI toolset provides the user interface through which the same range of computing users (from individuals to sets of organizations) can gauge the comparative importance of segments of information and, subsequently, assimilate it. The charts and other dashboard elements typical of BI presentations render information in a form users can easily understand. This information, in turn, provides users with a basis for action, as required.

Predictive Analytics (PA)

Machine learning is a popular term, widely used by players in the productivity market. Machine learning can be applied to the PA computing task, though PA can also be expressed manually by users. Across most productivity messaging, the objective of PA is consistently expressed as an effort to heighten the value of computing activity and, ultimately, to increase return on investment.
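As a minimal sketch of what “machine learning applied to the PA computing task” can mean in practice, the example below fits a simple model that estimates a forward-looking probability from past outcomes. The data, feature names, and scenario (predicting whether a trial user converts) are entirely hypothetical.

```python
# Illustrative predictive-analytics sketch: logistic regression on
# hypothetical usage data. Features: [weekly logins, documents shared];
# label: 1 = the trial user converted to a paid subscription.
from sklearn.linear_model import LogisticRegression

X = [[1, 0], [2, 1], [8, 5], [9, 7], [3, 1], [10, 6]]
y = [0, 0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X, y)

# Score a new prospect: the model returns a conversion probability,
# the kind of forward-looking estimate PA messaging describes.
prob = model.predict_proba([[7, 4]])[0][1]
```

The point of the sketch is the shape of the workflow, not the model choice: historical records go in, and an estimate of a future outcome comes out, which a user could otherwise only express manually as a rule of thumb.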

The above points are merely suggestions for how an early stage ISV with a solution in one, or all three, portions of this brand segment might choose to articulate a message. If you would like to hear more about how your business might benefit from building its brand within the context of the productivity theme articulated by each of the major ISVs, please don’t hesitate to contact us. We would be eager to learn more about what you are after. We also pursue opportunities to contribute to the success of this kind of marketing communications effort on a consulting basis.

Is there a mass market for data visualization? Tableau Software thinks so. If Tableau Software’s fiscal Q1 2014 results, reported on Monday, May 5, 2014, are any indicator, the bet may be a winner. The company reported total revenue for its first fiscal quarter of 2014 of $74.6 million, which represents an 86% year-over-year increase.

The concept of process development through workflows is a core component of quite a number of cloud and on-premises software offerings from more mature ISVs. Microsoft® SharePoint® is, apparently, a highly desirable computing platform as a result of its “no code” application development capability. Salesforce.com’s Enterprise subscription includes a workflow feature. SAP and Oracle both offer workflow development options.

These same workflows can be applied to the task of building data visualizations. Microsoft’s SharePoint Online offers a Power BI component. Reporting Services for SharePoint enables communities of SharePoint users to work with SQL Server databases, on the back end, through browser clients, in real time. Part of this work can, and does, include data visualization requirements, including dashboards.

So Tableau Software is not the only ISV in this market for point-and-click data visualization. But judging from Mr. Chabot’s recent remarks (I am referring to his presentation at the Pacific Crest Summit, as well as his answers to questions raised on the fiscal Q1 2014 earnings call), Tableau’s estimate of the size of the market for graphically depicting data is much larger than that of its competitors.

Perhaps Mr. Chabot sees a much larger bottom segment to the market. If he is correct in this assumption, then average consumers of smartphones, mobile devices, cloud SaaS offerings, etc., may represent substantially more lucrative potential than one would otherwise assume.

How, if at all, could Silverpop leverage other components of IBM to deliver a substantial return on this investment?

How big a market are we talking about when we consider Silverpop’s niche?

1) Does Silverpop’s stated business model promise to leverage other IBM components? And does this synergy look like a promising, substantial, net positive contributor to IBM’s bottom line?

The answer to the first question is “yes”:

Silverpop presents what I would read as the core of its market message in a short video available for viewing on its web site. This video (which speaks to the efforts of marketing communications teams within a larger business) contrasts the telltale emblems of a mediocre marketing campaign (without Silverpop) with a personalized campaign, targeted to prospects (presumably with Silverpop’s software). IBM’s Watson could be the perfect complement for Silverpop, promising to provide a much higher level of personalization than could be achieved via other methods.

Cognos and other pieces of IBM’s data analytics portfolio can add more value to Silverpop as clients look for metrics on campaign performance, and more.

Finally, a quick glance at Silverpop’s client list reveals a number of firms where IBM’s consulting teams are likely to be already established, trusted providers.

Should IBM provide Watson and its data analytics tools as a backend to Silverpop, then corporate marketing communications should be able to produce, over time, a number of useful case studies, success stories, etc. illustrating how this backend played an essential role in the effort.

How big is Silverpop’s market?

The answer to this question is, in my opinion, “not big enough”. I point to an article written by Jack Hough and published on the Barron’s web site late last month: Google, Facebook, Twitter: Not Enough Dollars to Go Around. Keep in mind: Silverpop’s niche is a subset of the online advertising market and, necessarily, much smaller. Further, Silverpop has a couple of competitors in its market, Marketo and Oracle’s Eloqua. So, even if one assumes Silverpop emerges as the market leader, the actual contribution to IBM’s broad revenue performance may not be substantial.

Bottom Line

Nevertheless, IBM needs methods of demonstrating the power of Watson and its data analytics tools to the much larger enterprise business market for business intelligence solutions. The Silverpop acquisition promises to give them another showpiece for this effort.

Disclaimer: I have neither a position in IBM, nor any verified statistics to substantiate claims I make in this post.

The top end of the market for “mobile phones” (Gartner’s term) continues to contract, while most of the growth in overall users is to be found at the low end, with smartphones powered by Android a clear leader.

The continued shrinkage in the number of PC users will serve to further define the remaining market. This segment will be “more engaged” (quoting Gartner’s release), which I interpret to mean more inclined to purchase higher-end PCs.

Enterprise software sales will grow the most: 6.9% year-over-year, and 12% from 2013 to 2014.

1) Continued shrinkage in the market for top-end smartphones

Nothing new here, though I would caution interested readers to avoid the pitfall of assuming this necessarily means lower sales growth for Apple’s top-of-the-line smartphones. We need some glimpse into iPhone sales into the China Mobile market before we can reach a defensible conclusion on this one.

2) Buyers acquire fewer, more powerful and full-featured PCs

If the PC sales forecast included in this report proves true, then a lot of the low-end devices PC OEMs have pushed onto the market (as tablet competitors) may end up sitting on shelves. Acer and Asus may feel more of the pain. But HP has made some potentially risky changes to online buying options for the SMB and home markets, which could contribute to some problems for it, as well (I will write a post to this blog shortly with further detail).

3) Tablet and Ultramobile computer sales look to be very robust for the year

The forecast of the number of new units sold is very impressive, but the actual dollar impact on overall IT spending from this segment is comparatively insignificant. Despite explosive unit growth, Gartner sees only 4.4% growth, year over year, from the devices segment. So is there much money to be made in these devices? From these figures, I would say it makes sense to answer this important question with some caution.

4) Enterprise software sales are the fastest-growing segment

The big news here, found in the note at the bottom of the release, is the growing enterprise appetite for databases and analytics. This may very well point to good years for mature ISVs, including Oracle®, Microsoft®, IBM®, EMC, and SAP. Microsoft’s cloud offerings, Office 365 and Azure, may also continue to record very healthy sales figures, while IBM scrambles, in catch-up mode, to increase its cloud real estate.

Technology products should never be developed simply on a hunch about important problems markets need to solve. Business intelligence must be gathered from markets to support allocating any resources to build products.

I think products like Google Glass are an example of how not to develop products. Google appears to have designed this product to address a need for a less obtrusive method of maintaining a constant bi-directional connection to electronic communications than smartphones, tablets, and PCs currently offer. But I don’t think this need actually exists. In fact, reading an article authored by Matt Haber and published on The New York Times website on Sunday, July 7, 2013, A Trip to Camp to Break a Tech Addiction, one gets the sense the market is waking up to the debilitating nature of constant bi-directional electronic communication. Numerous studies point to much lower levels of productivity for people who do not take breaks from work.

My conclusion also applies to a new trend in automotive feature development, which is proceeding in precisely the same direction: the major automobile manufacturers are adding text-to-speech computing systems to vehicles. The purpose of these systems is to permit drivers and passengers to have email and text messages read to them while in transit. Once again, I don’t think the market for mobile data communications has a pressing need for this type of device.

Interestingly enough, in both the case of Google Glass and that of the automotive text-to-speech systems, there are serious questions about whether regulatory agencies will even permit the marketing of the device. Regulatory agencies are seriously concerned about the potential for driver distraction implicit in either product. So whether market participants actively pass on buying these products may be entirely inconsequential. Regulatory agencies may prohibit sales of these products altogether.

Perhaps no other point so vexes me when I read about “Big Data” as the now familiar absence of a clear definition of the term. I just read an article published today, Thursday, September 5, 2013, in the “CIO” blog of the online Wall Street Journal, Financial Services a ‘Real Leader’ in Leveraging Big Data. Michael Hickens, the author of this short post, makes a point about the new proclivity of financial services firms to adopt “big data”: “Banks and other financial services firms are further along than most other industries in making use of predictive analytics, according to a study reviewed by CIO Journal.”

In fact, this statement appears at the very start of Hickens’ article. But no connection is made between “predictive analytics” and “big data”, so I’m left wondering about the point at hand, and where the author of this post would like to lead me. I’m also recollecting the late 1990s, when neural networks were, once again, an area of really strong interest for financial services firms. In fact, these businesses actively pursued the design of neural networks in an effort to advance the accuracy and utility of predictive analysis.

So what’s new this time around? Beyond a mention of Hadoop as the data repository of choice, there is no mention whatsoever of the features I’m following on the topic of “big data”: unstructured data, metadata tagging, etc. Are these financial services firms doing new work in these areas? Are they implementing taxonomies as a way of organizing unstructured data? Are they using metadata tagging techniques?
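To make concrete what the questions above are asking about, here is a minimal sketch of taxonomy-driven metadata tagging: assigning controlled vocabulary terms to unstructured text by keyword matching. The taxonomy terms and keywords are hypothetical, chosen only to illustrate the mechanism; production systems would use far richer matching than substring tests.

```python
# Illustrative sketch: tag unstructured text with terms from a taxonomy.
# The taxonomy below is a made-up example for a financial services firm.
taxonomy = {
    "credit risk": ["default", "exposure", "downgrade"],
    "fraud": ["suspicious", "chargeback", "anomaly"],
}

def tag(document: str) -> list[str]:
    """Return the taxonomy terms whose keywords appear in the text."""
    text = document.lower()
    return [term for term, keywords in taxonomy.items()
            if any(keyword in text for keyword in keywords)]

tags = tag("The agency flagged rising default exposure in the loan book.")
```

Once documents carry tags like these as metadata, they can be organized, filtered, and counted alongside structured data, which is the step the article leaves unexplained.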

Unfortunately, Hickens’ short article does not include detail on these points. I would hope authors, going forward, will try to be more specific about just what they mean by “big data”, so readers like me can derive more benefit from articles on topics of this kind.

The Bloomberg Big Data Conference, held on March 14, 2013, in Washington, DC, included a panel discussion on the impact of big data on the healthcare industry. The panel was moderated by Matt Berry, Director of Healthcare Analysis, Bloomberg Government. Panelists included Mr. Oliver Kharraz, Founder and COO of ZocDoc; Ed Park, EVP and COO of athenahealth; and David Riley, Chief of Informatics, Harris Healthcare Solutions.

Matt Berry kicked off the discussion with a question. Why should doctors connected to medical practices of any size care about big data? The first panelist to respond to the question was Oliver Kharraz.

Mr. Kharraz prefaced his answer with two points:

First, he observed that big data plays a familiar role in the clinical side of the healthcare industry; second, he made reference to Cassandra, the figure from Greek myth who had the gift of predicting the future. Despite her predictions, Kharraz noted, the Trojans failed to act on them, with grim results. We are not as familiar as Kharraz assumes with the role big data (circa 2013) plays in clinical medicine. Nor do we get the point of the Cassandra analogy: if Mr. Kharraz is correct, and someone is not acting on a highly accurate depiction of future reality based on big data, then either that someone is foolish or, perhaps, big data is not as convincing as Mr. Kharraz claims it to be.

He described a highly connected world, where medical services, events, and participants interact within closed-loop systems. He claimed these systems make a “huge impact” by providing us with concrete evidence, rather than something abstract, of the efficacy (or lack thereof) of specific treatments for specific individuals. He answered Berry’s question, in part, with a conclusion: access to the big data collected from these closed-loop systems benefits healthcare professionals. Once they can use the data, they gain a better understanding of “what’s in it for me.” We are less than convinced on this point.

He also described “the social engineering piece of [big data]”. Kharraz summed up this aspect as “just getting the patient to do it”, meaning his or her prescribed treatment, through another application of big data. He noted how his product, ZocDoc, is used to send out “reminders of preventative screenings”. He described how the program can be configured to send a birthday notice to a patient just turning 50, along with a reminder to schedule a first colonoscopy. ZocDoc, he explained, can draw on a big data repository including “millions and millions” of records of messages sent, to select a likely persuasive message for an individual recipient. But, we ask, at what cost must we outfit our practice with this complex social media system merely to send treatment reminders?

Part of the problem, Ed Park of athenahealth observed, is getting healthcare professionals to stop doing things, for a moment, “to gain insight” with a look at big data. Park took a few moments to demystify big data with a reference to an unnamed clinical organization, which paid an outside firm to analyze terabytes of data about its patients. The objective of the effort was to come up with a picture of the top 5% of people using its services. The results didn’t tell management at the organization anything new. Park concluded that if we are going to point out to healthcare professionals the benefits they can capture with big data, we need to start by verifying the value of the data itself. We agree with Park.

In the next post to this blog we will continue a description of this panel discussion.

Michael R. Nelson, Analyst, Technology and Internet Policy, Bloomberg Government, moderated a panel discussion, “Rethinking Risks and Opportunities in Big Data: Energy and Utilities”, last Thursday, March 14, 2013, during the Bloomberg Big Data Conference, held in Washington, DC in the United States.

Amit Narayan, CEO of AutoGrid, Inc., noted that “Big Data is all about breaking down silos . . . ” Clearly, if he is correct, then ISVs have an opportunity to develop many big data tools for the energy sector. Power generation and consumption businesses present a unique combination of Information Technology (IT) and Industrial Control System (ICS) computing. The combination of these two highly dissimilar computing architectures amounts to a barrier to entry that ISVs can depend upon to retard the rate of commoditization in this market.

Mr. Narayan described an ambitious objective, to transform utility meters from their present role as obstacles to business intelligence gathering, into powerful bridges between what he referred to as “the physics of the grid” and the “economics of the grid.” Power producers own “the physics of the grid” including Distribution Architecture (DA) and the Smart Grid, while their counterparts on the business side work with customers and the “economics of the grid”.

Few if any businesses in this category have connected DA, the Smart Grid, and smart meters to provide an end-to-end big data collection method free of silos. Without enterprise-wide access to the data, these businesses can’t implement the efficiencies promised by the advanced technology built into each component of the system. The result is an experience like the response to Hurricane Sandy: unpleasant for energy producers and consumers alike.

Some of the tools required to break down the big data silos in the energy business must deliver to users a better method of managing data privacy. The panel spent time discussing the question of how data collected from smart meters should be handled. The consensus of the participants (including Mr. Narayan; Mr. Paul Rogers, Chief Development Officer, GE Global Software Headquarters; Mr. Robert W. Bechtel, CTO and Senior Policy Advisor, Office of Energy Efficiency and Renewable Energy, U.S. Department of Energy; and Michael A. Farber, SVP, Strategic Technology and Innovation Group, Booz Allen Hamilton) was that the data belongs to the consumer. The energy provider has the right to work with the data, but then needs to return the data to the consumer.

We think the system they described for managing the privacy of data relies too heavily on trust. ISVs can add value by developing tools that transparently provide producers an opportunity to work with the data while nevertheless safeguarding the ownership rights of the consumer. Any tools in this category should provide a capability to render the data anonymous.
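The anonymization capability described above can be sketched very simply: replace the consumer identity in each smart-meter reading with a salted one-way hash, so producers can still analyze usage patterns without holding the identity itself. The field names and salt below are hypothetical; a real tool would manage keys far more carefully.

```python
# Illustrative sketch: pseudonymize smart-meter readings before producers
# analyze them. Field names and the salt value are hypothetical.
import hashlib

SALT = b"utility-secret-salt"  # held by the consumer-facing system only

def anonymize(reading: dict) -> dict:
    """Replace the consumer ID with a salted one-way hash of the ID."""
    token = hashlib.sha256(SALT + reading["consumer_id"].encode()).hexdigest()
    return {"consumer": token, "kwh": reading["kwh"], "hour": reading["hour"]}

out = anonymize({"consumer_id": "C-1001", "kwh": 3.2, "hour": 14})
```

Because the same ID always maps to the same token, usage can still be aggregated per household, but recovering the identity requires the salt, which stays with the consumer-facing system rather than the analytics team.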

We read with interest an article on the Computing.co.uk website, RBS: crisis means organisations are focusing on data architecture. When we read how Colin Gibson, Head of Data Architecture for RBS, characterized the intention of his large enterprise “to take information from disparate systems and provide insight into potential crises,” we had a sense of deja vu. In our opinion, Mr. Gibson’s comment simply adds one more voice to the collective enterprise “cry,” which has echoed over the last 25 years or more, for someone to find the “holy grail”: a method to produce truly accurate indicators of future business performance from data collected across disparate sources, regardless of silos, within enterprise business.

While we are skeptical of the ultimate value for enterprise business of large IT projects in this area of business performance prediction, we have no choice but to affirm that customers are “still out there” searching for these solutions. We think innovative tech businesses can carve out defensible market niches by addressing components of the overall sought-after solution.

For example, we have considerable recent experience with the markets for Microsoft® SharePoint®. We have noted a strong interest on the part of enterprise SharePoint users in making use of taxonomy, the term store, and metadata to enhance the accuracy and usefulness of SharePoint search as a means of exposing as much critical data as possible to scrutiny by management decision-makers. Tech innovators with a solution connecting Business Connectivity Services (BCS) to non-Microsoft databases like Oracle®’s MySQL will likely have a healthy market opportunity in this same space, as many of these enterprises support a disparate group of databases, including the very popular open source MySQL. Of course, it is difficult to build an entire business around a single connector, but expanding the market to include enterprise IT organizations looking to empower SAP, Oracle, and IBM users with the same type of capability would likely make for much more of a business from this type of effort.

If you are tossing around some notions as to how to best position, or re-position your business for enterprise IT markets interested in data analytics, business intelligence gathering and the like, then we would like to hear from you. Please telephone Ira Michael “Mike” Blonder at +1 631-673-2929 to further a discussion. You may also email Mike at imblonder@imbenterprises.com.