Oracle Blog

Event Servers, A Disruptive Technology

Event processing has been around for quite some time - program trading, systems management software, and windowing systems are just a few examples of specialized event processing. Virtually every program maintains state and executes logic against changes in data state. So why is there such industry buzz around event processing, and what impact can an event server have on your enterprise?

For the first time, event processing is being performed by off-the-shelf logic components that are converging toward an industry standard; traditionally, it was performed in custom logic. The paradigm shift in event processing is similar to what took place with databases: data was housed in flat files, and proprietary APIs accessed and manipulated that data. In the broadest of strokes, relational data theory in combination with indexes, transactions, and the Structured Query Language revolutionized data storage and retrieval. Just as with databases, it is a combination of technologies that enables event processing’s disruptive effect.

The container allows for the aggregation of multiple data streams into a single memory space. Adapters translate heterogeneous protocols and data-specific formats into plain old Java objects (POJOs) representing events, which then flow through the event server. Stream-based data fusion can now take place with data that previously could not be interrelated. The complex event processor can take these multiple data streams and aggregate, filter, correlate, split, and merge them. This single memory space is a major component of low-latency event processing: eliminating sources of latency in order to obtain results in the shortest time possible.
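The adapter role described above can be sketched in plain Java. The class and method names below are illustrative only (a real event server supplies its own adapter SPI); the point is simply that a protocol-specific payload, here a CSV line from a hypothetical market-data feed, becomes a typed POJO event.

```java
// Illustrative sketch: an adapter turning a raw, protocol-specific payload
// (a CSV record) into a POJO event. All names here are hypothetical.
public class PriceEventAdapter {

    // The POJO event that would flow through the event server.
    public static class PriceEvent {
        public final String symbol;
        public final double price;
        public final long timestampMillis;

        public PriceEvent(String symbol, double price, long timestampMillis) {
            this.symbol = symbol;
            this.price = price;
            this.timestampMillis = timestampMillis;
        }
    }

    // Translate one CSV record, e.g. "ORCL,21.50,1210000000000", into an event.
    public static PriceEvent fromCsv(String line) {
        String[] fields = line.split(",");
        return new PriceEvent(fields[0],
                              Double.parseDouble(fields[1]),
                              Long.parseLong(fields[2]));
    }

    public static void main(String[] args) {
        PriceEvent e = fromCsv("ORCL,21.50,1210000000000");
        System.out.println(e.symbol + " @ " + e.price);
    }
}
```

Once every source is normalized into POJOs like this, downstream nodes can treat events from different feeds uniformly, which is what makes the data fusion described above possible.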

Furthermore, a pluggable container allows for multiple types of event processing, not just the window-based processing of a complex event processor. A rules engine, or custom logic in combination with more sophisticated structures such as Markov chains, Bayesian networks, or machine learning algorithms, can easily be integrated into the container.

Event Processing Network

The event processing network (EPN) enables various types of event processing within the same process space. An EPN is composed of nodes connected by streams; events, represented as POJOs, flow from node to node. Each node can apply a different type of processing (continuous query, rules-based, etc.). This modular approach can result in an application that is more easily debugged and maintained: each node of the network can be separately unit tested and performance tuned. Furthermore, the EPN provides the granularity to bind a different threading model to each node.
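The node-and-stream idea can be modeled in a few lines of Java. This is a conceptual sketch only (a real event server wires nodes declaratively and manages threading for you); it just shows how independent, separately testable processing stages compose into a small network.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;

// Conceptual sketch of an EPN: each node consumes a stream of events and
// produces another stream, and nodes chain together into a network.
public class MiniEpn {

    // A node transforms a stream of input events into a stream of output events.
    public interface Node<I, O> {
        List<O> process(List<I> in);
    }

    // A filter node: passes through only events matching a predicate.
    public static <T> Node<T, T> filter(Predicate<T> p) {
        return in -> {
            List<T> out = new ArrayList<>();
            for (T e : in) if (p.test(e)) out.add(e);
            return out;
        };
    }

    // A map node: converts or enriches each event.
    public static <I, O> Node<I, O> map(Function<I, O> f) {
        return in -> {
            List<O> out = new ArrayList<>();
            for (I e : in) out.add(f.apply(e));
            return out;
        };
    }

    public static void main(String[] args) {
        // A two-node network: drop non-positive readings, then scale by ten.
        List<Integer> readings = List.of(5, -1, 12, 0, 7);
        List<Integer> positives = MiniEpn.<Integer>filter(r -> r > 0).process(readings);
        List<Integer> scaled = MiniEpn.<Integer, Integer>map(r -> r * 10).process(positives);
        System.out.println(scaled); // [50, 120, 70]
    }
}
```

Because each node is a self-contained function over a stream, it can be unit tested in isolation, which is exactly the maintainability benefit the EPN's modularity provides.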

Expression Query Language / Continuous Query Language

EQL and CQL lie at the heart of modern event processing. These languages are similar to the Structured Query Language (SQL); however, instead of operating on tables, they operate on streams of POJOs. If you were to write an event processing application from scratch, your application would have to implement the features (aggregation, filtering, correlation, etc.) found in these languages. EQL and CQL provide the following capabilities at the language level:

o Aggregation
+ As mentioned above, the container aggregates data within a process space; from there, the query engine performs a second level of aggregation in which multiple streams can be fed into a single event processing window. It is within these windows that events of differing origins and types can be filtered, correlated, split, and merged.

o Filtering
+ Just as it is important to determine which events are relevant, removing “noise” from data streams allows for more efficient processing downstream. Just as SQL returns the rows of a table that meet the criteria of the WHERE clause, EQL and CQL output only the events that satisfy the WHERE clause of the statements they execute.

o Correlation
+ Event correlation is the process of establishing a relationship between or among events. Correlation can take place by joining two separate event streams on shared attributes, or by applying pattern matching to windows of events.

o Merging and splitting data
+ Merging two data streams can act as a form of pre-processing, simplifying downstream processing: only related data from the two streams is forwarded on. For example, you may want the most recent readings (events) from two sensors, but that information is not valuable if the readings are more than one second apart. You can “merge” the two streams of sensor data, creating a single output stream of objects that meet your time-based requirement.

o Language-level time-based processing
+ EQL and CQL have temporal qualities that allow them to establish time-based windows, freeing the developer from managing time-based operations in their own code.

o Local and distributed cache
+ Caching services can be used to share data among processing nodes. Or, when directly referenced in CQL/EQL queries, a cache can combine less frequently changing data with continuously streaming data, reducing processing and memory demands. When data is placed into a Coherence distributed cache, it automatically benefits from Coherence’s highly available architecture.
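To make these capabilities concrete, the sensor scenario described under “Merging and splitting data” might be expressed along the following lines. This is a syntax sketch in the CQL style with invented stream and attribute names, not a verbatim query; consult your query language reference for the exact window syntax.

```
-- Join two sensor streams over one-second windows, emitting only pairs of
-- readings that arrive close enough together in time to be comparable.
SELECT a.sensorId, a.reading AS readingA, b.reading AS readingB
FROM SensorStreamA [RANGE 1 SECOND] AS a,
     SensorStreamB [RANGE 1 SECOND] AS b
WHERE a.sensorId = b.sensorId
```

The window clauses bound how long events remain joinable, so only readings within one second of each other can pair up; the filtering, correlation, and time-based processing all happen at the language level rather than in hand-written code.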

Low-latency processing

Many applications require low-latency processing. For example, in algorithmic trading, if a trading decision takes longer to calculate and execute than the market takes to move, the underlying data is stale and the trading decision is no longer relevant.

JRockit Realtime is the anchor point of low-latency processing. JVM pause times due to garbage collection (GC) are minimized by its deterministic GC algorithm. Furthermore, JRockit provides facilities for run-time latency analysis: you can record observations of the application as it runs, examine its execution for latency bottlenecks, and tune your application code accordingly. Lastly, if your architecture includes a distributed cache, Coherence’s cache latency can drop when running on top of JRockit Realtime.
JRockit Realtime enables Oracle's Event Server to meet the most demanding low-latency event processing use cases.
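In practice, deterministic GC is something you opt into at JVM launch. The command below follows the JRockit Realtime flag conventions of the era; treat the exact flag names, the pause target value, and the jar name as illustrative, and verify them against the documentation for your JRockit release.

```
java -Xgcprio:deterministic -XpauseTarget=30ms -jar my-event-app.jar
```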

As a disruptive technology, what existing technologies are event servers displacing, and what new possibilities do they open up?

1. Use cases that previously could not justify the cost of custom event processing software development and maintenance can now achieve a return on investment.
2. Existing event applications can benefit from migration to event servers, freeing resources working on event infrastructure to focus on business event logic - ultimately reducing application costs.
3. Batch processing of data that results in exception reports can be translated into continuous event processing algorithms, with exceptions delivered within milliseconds of their occurrence. For a Six Sigma process, this may be critical in monitoring, identifying, and immediately responding to defect rates.
4. Event processing technology is now affordable at a departmental level. Event processing can be centrally managed or brought to any operational location of an enterprise.
5. As processors of sensor-based data, event servers bring down the overall cost of a sensor-based solution.