CIOs have been pummeled with requests for real-time analytics because people in the organization think they need it — in marketing, IT, security, fraud prevention, customer support, and other areas — and some of them actually do need it. In the not-so-distant past, very few reasons justified the expense of real-time analytics, but with the cloud, a new generation of solutions, and open source projects like Apache Hadoop and Spark, the economics have changed. As a result, the scope of the use cases is expanding.

Whether to choose real time, near real time, or batch “depends on the use case and how important it is to get an up-to-the-second response. It’s all about the response,” said John Bates, CMO and former CTO for intelligent business operations and big data at Software AG, in an interview. “Reports that used to be available at the end of the month or in a week are now available intraday, and then you’re getting into 5, 10, 15 minutes. That’s fine for people who want dashboards, but if you’re doing high-frequency trading or trying to stop a security or compliance threat before it causes damage, it’s critical to receive the lowest latency response.”

While it’s clear that the time-to-insights window is collapsing, it’s less clear what individuals or companies mean when they talk about real time and near real time, since the definition can vary depending on the need, the industry, and an individual’s point of view. Real time is often defined in microseconds, milliseconds, or seconds, and near real time in seconds, minutes, or hours — although the definitions can vary even more than that. More important than a universal definition of the categories is the business need, viewed in terms of cost and benefits (usually capitalizing on opportunities, minimizing risks, and satisfying customers).
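Since the article stresses that these categories overlap and vary by industry, one way a team might make its own working definition concrete is to pin the tiers to explicit cutoffs. The sketch below is purely illustrative: the function name and the threshold values are assumptions chosen for this example, not industry standards.

```python
# Illustrative sketch only: the cutoffs below are assumptions for one
# hypothetical organization. As the article notes, real-time/near-real-time
# boundaries vary by need, industry, and point of view.

def classify_latency(seconds: float) -> str:
    """Bucket a required response time into a latency tier."""
    if seconds < 1:
        # Sub-second (microseconds to milliseconds): e.g. high-frequency
        # trading or blocking a security threat before it causes damage.
        return "real time"
    elif seconds < 3600:
        # Seconds to minutes: e.g. intraday dashboards refreshed
        # every 5, 10, or 15 minutes.
        return "near real time"
    else:
        # Hours or longer: traditional end-of-week or end-of-month reporting.
        return "batch"

print(classify_latency(0.000001))  # real time
print(classify_latency(300))       # near real time
print(classify_latency(86400))     # batch
```

The point of writing the tiers down this way is not the specific numbers but forcing the cost-benefit conversation the article describes: each tighter tier costs more to deliver, so each use case should justify the tier it requests.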