2017: Bigger, faster data makes for smarter machines

For me, those predictions set the tone for the year to come and help me focus my attention on the trends that really matter. This year, the topic that has me most inspired is the advent of big, fast data – data with high volume and velocity.

Big, fast data will improve machine intelligence by being a better source of training for machine-learning algorithms. This new machine intelligence will give rise to an unprecedented growth in innovation and enterprise productivity.

Over the next year, we are likely to see:

Leading-edge companies using huge, fast streams of data as fuel to simulate new innovations

New wearables intelligent enough to interpret their users’ intent

Companies improving loyalty and trust with customer experiences based entirely on virtual agents

More companies providing cognitive (human-level) intelligence as an API service

Machine-learning algorithms becoming a ubiquitous and essential part of business operations

Rise of the machines

Machine learning gives computer programs the ability to adapt and grow without being explicitly programmed. These algorithms look for patterns in data and use those patterns to adjust their behavior. For example, spam-filtering algorithms learn to recognize spam simply by reading emails.
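To make the spam-filter example concrete, here is a minimal sketch — a toy naive Bayes classifier, not any production filter — of how an algorithm can “learn” to recognize spam simply by reading labeled emails: it counts word frequencies per class during training, then scores new messages against those counts.

```python
import math
from collections import Counter

def train(messages):
    """messages: list of (text, label) pairs, label in {"spam", "ham"}."""
    counts = {"spam": Counter(), "ham": Counter()}  # word counts per class
    totals = Counter()                              # message counts per class
    for text, label in messages:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Score a message under each class; return the more likely label."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    scores = {}
    for label in ("spam", "ham"):
        # Log prior plus log likelihood, with add-one smoothing so
        # unseen words don't zero out the whole score.
        score = math.log(totals[label] / sum(totals.values()))
        n = sum(counts[label].values())
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / (n + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)
```

The key point the toy model illustrates: nothing about “spam” is hard-coded. The filter’s behavior comes entirely from the training data, which is why better and bigger data makes for a better filter.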

Better data may mean bigger data, i.e., higher volume or variety. And big data is a common enterprise asset now thanks to the wide availability of massively distributed file systems. But better data can also mean faster data or higher velocity. Data-streaming systems capable of massive throughput, also common today, feed this trend.

Last year, Dan predicted that both big data and fast data technologies would make the use of contextual data a common occurrence in the enterprise. As an example, in the past year we saw a single set of raw social media data used for fundraising, for predicting turnout at protest rallies and for guiding policy statements. The same raw data, filtered through different contexts, yielded entirely different applications.

Now we are seeing the growth of massively distributed systems with extremely fast throughput. We are seeing the emergence of bigger, faster data.

Over the next year, this will give rise to smarter algorithms as models receive better training data. In the digital twin example I’ve talked about, we can use huge, fast streams of manufacturing data as fuel to simulate new innovations. Leading-edge manufacturers are starting to use this technology for innovations like digital wind farms and smarter product lifecycle management.

When machines get smart

In 2017, machines will get smarter because their training is better. In fact, it is very likely that, within three years, leading enterprises will devote as many resources to training machines as they do to training their people. Machines will get better at understanding human speech and intent; at intuiting relevance; at recognizing images of all kinds, from human faces to technical design drawings.

We will see the next wave of machine intelligence take such forms as wearables that can interpret the user’s intent. We will see companies improve loyalty and trust with customer experiences based entirely on interactions with virtual agents — or bots.

Bot technology will become easily accessible from smartphones and smartwatches. These bots will revolutionize online sales by allowing users to engage in conversational e-commerce. The technology will also enable business transactions based on bot-to-bot (the new B2B) communication, in which virtual agents are empowered to negotiate, buy and sell.
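As a hedged illustration of that bot-to-bot idea — the protocol, concession rule and numbers below are invented for this sketch, not an existing product — two simple agents might haggle by alternating concessions until their offers cross:

```python
def negotiate(buyer_max, seller_min, step=5.0, max_rounds=50):
    """Toy alternating-offers haggling between a buyer bot and a
    seller bot. Returns the agreed price, or None if their limits
    never overlap."""
    buyer_offer = 0.0
    seller_ask = 2 * seller_min  # seller opens high, buyer opens low
    for _ in range(max_rounds):
        if buyer_offer >= seller_ask:       # offers crossed: deal
            return round((buyer_offer + seller_ask) / 2, 2)
        # Each bot concedes a fixed step per round, but never past
        # its own limit.
        buyer_offer = min(buyer_offer + step, buyer_max)
        seller_ask = max(seller_ask - step, seller_min)
    return None
```

Real negotiating agents would model utility, inventory and counterparty behavior rather than use a fixed step, but the shape of the interaction — autonomous offers converging without a human in the loop — is the point.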

When smarter machines converge with the API economy, we should expect to see more companies providing cognitive intelligence as a service. The practice of plugging into interfaces designed to pipe human-level intelligence into digital apps of all kinds will become widespread.

Industrial Internet platforms are already allowing home devices to cooperate with each other to lower power consumption (and utility costs) on behalf of homeowners. We are likely to see human-level intelligence become the top differentiating feature for new appliances and applications.
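A minimal sketch of that cooperative power-saving behavior — the appliance names and tariff values here are invented for illustration — is a coordinator that schedules each deferrable load into its cheapest permitted hour:

```python
# Hypothetical time-of-use tariff: peak pricing from 17:00 to 21:00.
TARIFF = {h: (0.30 if 17 <= h <= 21 else 0.12) for h in range(24)}  # $/kWh

def schedule(appliances):
    """appliances: list of (name, kwh, allowed_hours) tuples, where
    allowed_hours are the hours the owner permits the device to run.
    Greedily places each device in its cheapest allowed hour and
    returns (plan, total_cost)."""
    plan, cost = {}, 0.0
    for name, kwh, allowed in appliances:
        hour = min(allowed, key=TARIFF.get)  # cheapest permitted hour
        plan[name] = hour
        cost += kwh * TARIFF[hour]
    return plan, round(cost, 2)
```

A real Industrial Internet platform would negotiate across homes and react to live grid signals, but even this greedy version shows how pushing flexible loads off peak lowers the utility bill without the homeowner lifting a finger.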

Most importantly, we expect to see machine-learning algorithms become a ubiquitous and essential part of business operations. Algorithms, trained by faster, smarter data, will play a key role in determining customer demand, increasing customer trust and delivering unprecedented productivity and intelligence into the enterprise.

Figure 2: Most enterprise machine-learning projects will fail. But those companies that take advantage of services like Industrial Machine Learning will find it much easier to build and deploy successful advanced analytics initiatives.

But it won’t all be smooth sailing. Companies that deploy machine learning often find it difficult to make a measurable impact on their business. Gartner has estimated the failure rate of corporate machine-learning projects at nearly 60 percent. Capgemini found that only 27 percent of executives believe their analytics projects succeed, and of those, only 8 percent were considered “very” successful.

Intelligent machines with real-world impact are hard to build. In 2017, look for forward-thinking IT service providers (like DXC) to offer platforms and services that make it much easier for companies to build and deploy the next generation of intelligent machines.

I know it will be at the top of my mind!

ABOUT THE AUTHOR

Jerry Overton is a data scientist and Distinguished Technologist in DXC’s Analytics group. He leads the strategy and development for DXC’s Advanced Analytics, Artificial Intelligence and Internet of Things offerings.

DXC’s distinguished technologists often see problems through a completely different lens and inspire innovation and excellence in everything they do. Take a look at tech through their unique perspectives in this ongoing blog series.