Cognitive Computing in 2014 has certainly captured a lot of buzz, thanks in part to IBM’s strong push on Watson, but also largely thanks to consumer trends – both wearables and the rising use and sophistication of consumer agent interfaces on mobile devices such as Google Now, Microsoft Cortana and Siri (not to mention the Lowe’s robot). These systems are not truly cognitive, but in their limitations they illustrate to the consumer the art of the possible, and frequently consumer shifts drive enterprise IT shifts.

Currently, though, the promise of rich experiential data from the Internet of Things (consumer and industrial) is still a largely untapped opportunity for enterprises. Product companies from appliance manufacturers to automakers to aircraft manufacturers have provisioned their products with sometimes tens of thousands of sensors, but they are as yet unable to intelligently tap into the insights these sensors can provide, not just individually but in the aggregate.

The finalization of JSON-LD 1.0, combined with the increasing use of schema.org, has been in my opinion the most important development this year. (That is, until the Linked Data Platform becomes a Recommendation, of course!) JSON-LD gives developers a simple way into the semantic web without having to learn some of the more obscure features of RDF that tend to scare people away. This, along with other RDF serialization formats such as Turtle and N-Quads that also became W3C Recommendations this year, will hopefully make it clear to anyone that RDF is first and foremost a data model and not a data format, and that this data model is actually quite simple and powerful. Many people were turned off by RDF/XML and got the wrong impression of RDF; I encourage anyone to have a look at JSON-LD or Turtle for a new perspective. At the same time, the increasing adoption of schema.org demonstrates the power of semantic markup to the masses.
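To make the model-versus-format point concrete, here is the same simple description of a person, using schema.org terms, expressed in two of the serializations mentioned above (the particular person and URL are purely illustrative):

```json
{
  "@context": "http://schema.org/",
  "@type": "Person",
  "name": "Tim Berners-Lee",
  "url": "https://www.w3.org/People/Berners-Lee/"
}
```

And the same triples in Turtle:

```turtle
@prefix schema: <http://schema.org/> .

[] a schema:Person ;
   schema:name "Tim Berners-Lee" ;
   schema:url <https://www.w3.org/People/Berners-Lee/> .
```

Both snippets describe identical RDF data; only the surface syntax differs.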

We are decidedly biased because we are trying to build a Web 3.0 platform. But in our minds, we are seeing the development of a plug-and-play ecosystem around data that will underpin a true Web 3.0, where contextual data (and data in general) is a first-class citizen and not an afterthought, as it was in Web 1.0 and Web 2.0. By this we mean we are seeing pieces like data extraction and API-ification (Kimono Labs, Import.io), API directory services (Mashape), online machine learning (BigML), and NLP and text analytics (AlchemyAPI) not only being offered as services but also becoming easier to tie together to create rich pages and presentation layers of data.

We love this because at Silk we specifically built our product for a world where data, text and visualizations all blend into a more structured environment. It’s all coming together faster than we had imagined.

The ultra-hype in the media on artificial intelligence, and the ensuing debate. Everything under the sun is being claimed and reported on the web and in traditional media, much of which is false or misleading, but with an occasional grain of truth. Collectively, this represents the most important development in my view, which of course includes the use of semantics and cognitive computing, for both better and worse: better due to raised awareness of the current and potential uses of AI, and worse due to false and misleading reporting on its benefits and threats. (Though dated, this is one of the better sources I’ve found on the topic: Yudkowsky, Eliezer. 2008. “Artificial Intelligence as a Positive and Negative Factor in Global Risk,” one of 22 chapters in the book Global Catastrophic Risks, the intro of which is here.)

Seeing how effective cognitive computing has been for so many companies, especially in healthcare, demonstrates that it’s more than just a trend. It’s a way to better understand how data can positively impact how we all work and live. The definition may go through a few iterations, but the value it delivers can’t be ignored.

If anyone still had questions about the value of background knowledge and the importance of focusing on entities and relationships (which are the basis of any semantic approach) rather than on keywords and statistics alone, Google’s use of its Knowledge Graph and semantic search should put those questions to rest. The same approach will make its way from meeting the demand for more contextual and personalized information to more demanding integration and analysis solutions. Correspondingly, there is an upswing in efforts focused on the automated and faster development of high-quality vocabularies, knowledge bases and ontologies.
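As an illustration of what querying entities and relationships (rather than keywords) looks like, here is a small SPARQL query over schema.org-style data. This is a hypothetical sketch of the general technique, not an interface Google actually exposes for its Knowledge Graph:

```sparql
PREFIX schema: <http://schema.org/>

# Ask for people and the organizations they work for:
# the query traverses typed entities and named relationships,
# rather than matching keywords in text.
SELECT ?personName ?orgName
WHERE {
  ?person a schema:Person ;
          schema:name ?personName ;
          schema:worksFor ?org .
  ?org schema:name ?orgName .
}
```

A keyword engine can only return documents mentioning a name; a query like this returns the entities themselves and the relationship between them.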

Solutions relying on natural language processing (NLP), especially for enterprise and vertical markets, have found (domain-specific and domain-independent) ontologies to be indispensable. And this will spread to many vertical markets.

Last year, I talked about increasing attention to smart data (a term introduced in 2004), as the need to convert big data into contextual, personalized and actionable information grows. With the growth in data, I see a rapid increase in the attention devoted to addressing these key attributes of information: contextual, personalized and actionable.

2014 was the year that “Cognitive Computing” became a buzzword. We also saw the wider release of a number of cognitive computing frameworks for deep learning, AI and advanced data science, such as IBM’s Watson platform. Deep learning also became a hot topic as major incumbents (Google, Facebook) revealed significant deep learning investments. (Nova Spivack also authored a guest column on the recent past and the near future, which you can read here.)

Please feel free to weigh in with your own thoughts on the most significant events, trends or deployments you’ve seen in the last year!

About the author

Jennifer Zaino is a New York-based freelance writer specializing in business and technology journalism. She has been an executive editor at leading technology publications, including InformationWeek, where she spearheaded an award-winning news section, and Network Computing, where she helped develop online content strategies including review exclusives and analyst reports. Her freelance credentials include being a regular contributor of original content to The Semantic Web Blog; acting as a contributing writer to RFID Journal; and serving as executive editor at the Smart Architect Smart Enterprise Exchange group. Her work also has appeared in publications and on web sites including EdTech (K-12 and Higher Ed), Ingram Micro Channel Advisor, The CMO Site, and Federal Computer Week.