FlinkCEP - Complex event processing for Flink

FlinkCEP is the Complex Event Processing (CEP) library implemented on top of Flink. It allows you to detect event patterns in an endless stream of events, giving you the opportunity to get hold of what’s important in your data.

The Pattern API

The pattern API allows you to define complex pattern sequences that you want to extract from your input stream.

Each complex pattern sequence consists of multiple simple patterns, i.e. patterns looking for individual events with the same properties. From now on, we will call these simple patterns patterns, and the final complex pattern sequence we are searching for in the stream, the pattern sequence. You can see a pattern sequence as a graph of such patterns, where transitions from one pattern to the next occur based on user-specified conditions, e.g. event.getName().equals("start"). A match is a sequence of input events which visits all
patterns of the complex pattern graph, through a sequence of valid pattern transitions.

Attention Each pattern must have a unique name, which you use later to identify the matched events.

Individual Patterns

A Pattern can be either a singleton or a looping pattern. Singleton patterns accept a single event, while looping patterns can accept more than one. In pattern-matching symbols, given the pattern "a b+ c? d" (an "a", followed by one or more "b"s, optionally followed by a "c", followed by a "d"), a, c?, and d are singleton patterns, while b+ is a looping one. By default, a pattern is a singleton pattern and you can transform it into a looping one by using Quantifiers. Each pattern can have one or more Conditions based on which it accepts events.

Quantifiers

In FlinkCEP, you can specify looping patterns using these methods: pattern.oneOrMore(), for patterns that expect one or more occurrences of a given event (e.g. the b+ mentioned before); pattern.times(#ofTimes), for patterns that expect a specific number of occurrences of a given type of event, e.g. 4 a’s; and pattern.times(#fromTimes, #toTimes), for patterns that expect a minimum and a maximum number of occurrences of a given type of event, e.g. 2-4 a’s.

You can make looping patterns greedy using the pattern.greedy() method, but you cannot yet make group patterns greedy. You can make all patterns, looping or not, optional using the pattern.optional() method. For a pattern named start, the following are valid quantifiers:
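For instance, for a pattern named start, combinations such as the following would be valid (a sketch against Flink’s Java Pattern API; each call returns the modified pattern):

```java
// expecting 4 occurrences
start.times(4);

// expecting 0 or 4 occurrences
start.times(4).optional();

// expecting 2 to 4 occurrences
start.times(2, 4);

// expecting 1 or more occurrences
start.oneOrMore();

// expecting 0 or more occurrences
start.oneOrMore().optional();
```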

Here, “looping” patterns are patterns that can accept more than one event, e.g. the b+ in a b+ c,
which searches for one or more b’s.

Conditions on Properties

You can specify conditions on the event properties via the pattern.where(), pattern.or() or the pattern.until() method. These can be either IterativeConditions or SimpleConditions.

Iterative Conditions: This is the most general type of condition. It lets you specify a condition that accepts subsequent events based on properties of the previously accepted events, or on a statistic over a subset of them.

Below is the code for an iterative condition that accepts the next event for a pattern named “middle” if its name starts with “foo”, and if the sum of the prices of the previously accepted events for that pattern plus the price of the current event do not exceed the value of 5.0. Iterative conditions can be powerful, especially in combination with looping patterns, e.g. oneOrMore().
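A sketch of such a condition for a looping pattern named middle, assuming a hypothetical Event type with getName() and getPrice() accessors:

```java
middle.oneOrMore().where(new IterativeCondition<Event>() {
    @Override
    public boolean filter(Event value, Context<Event> ctx) throws Exception {
        if (!value.getName().startsWith("foo")) {
            return false;
        }
        // sum the prices of the events already accepted for "middle",
        // plus the price of the current event
        double sum = value.getPrice();
        for (Event event : ctx.getEventsForPattern("middle")) {
            sum += event.getPrice();
        }
        return Double.compare(sum, 5.0) < 0;
    }
});
```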

Attention The call to context.getEventsForPattern(...) finds all the
previously accepted events for a given potential match. The cost of this operation can vary, so when implementing
your condition, try to minimize its use.

Simple Conditions: This type of condition extends the aforementioned IterativeCondition class and decides
whether to accept an event or not, based only on properties of the event itself.

Combining Conditions: As shown above, you can combine a subtype condition (such as a SimpleCondition) with additional conditions; this holds for every condition. You can arbitrarily combine conditions by sequentially calling where(). The final result is the logical AND of the results of the individual conditions. To combine conditions using OR, you can use the or() method, as shown below.

pattern.where(new SimpleCondition<Event>() {
    @Override
    public boolean filter(Event value) {
        return ... // some condition
    }
}).or(new SimpleCondition<Event>() {
    @Override
    public boolean filter(Event value) {
        return ... // or condition
    }
});

For looping patterns (e.g. oneOrMore() and times()) the default is relaxed contiguity. If you want
strict contiguity, you have to explicitly specify it by using the consecutive() call, and if you want
non-deterministic relaxed contiguity you can use the allowCombinations() call.
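As a sketch, reusing the start pattern from above and a hypothetical looping pattern named middle:

```java
// strict contiguity between the events accepted into the looping "middle" pattern
Pattern<Event, ?> strictLoop = start.followedBy("middle").oneOrMore().consecutive();

// non-deterministic relaxed contiguity within the loop
Pattern<Event, ?> combiningLoop = start.followedBy("middle").oneOrMore().allowCombinations();
```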

Attention
In this section we are talking about contiguity within a single looping pattern, and the
consecutive() and allowCombinations() calls need to be understood in that context. Later when looking at
Combining Patterns we’ll discuss other calls, such as next() and followedBy(),
that are used to specify contiguity conditions between patterns.

Pattern Operation

Description

where(condition)

Defines a condition for the current pattern. To match the pattern, an event must satisfy the condition.
Multiple consecutive where() clauses lead to their conditions being ANDed:

pattern.where(new IterativeCondition<Event>() {
    @Override
    public boolean filter(Event value, Context<Event> ctx) throws Exception {
        return ... // some condition
    }
});

or(condition)

Adds a new condition which is ORed with an existing one. An event can match the pattern only if it
passes at least one of the conditions:
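A sketch of such a combination, using SimpleCondition with hypothetical getName() and getPrice() accessors on Event:

```java
pattern.where(new SimpleCondition<Event>() {
    @Override
    public boolean filter(Event value) throws Exception {
        return value.getName().equals("start"); // some condition
    }
}).or(new SimpleCondition<Event>() {
    @Override
    public boolean filter(Event value) throws Exception {
        return value.getPrice() > 5.0; // alternative condition
    }
});
```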

Combining Patterns

Now that you’ve seen what an individual pattern can look like, it is time to see how to combine them into a full pattern sequence.

A pattern sequence has to start with an initial pattern, as shown below:

Pattern<Event, ?> start = Pattern.<Event>begin("start");

val start: Pattern[Event, _] = Pattern.begin("start")

Next, you can append more patterns to your pattern sequence by specifying the desired contiguity conditions between
them. In the previous section we described the different contiguity modes supported by
Flink, namely strict, relaxed, and non-deterministic relaxed, and how to apply them in looping patterns. To apply
them between consecutive patterns, you can use:

next(), for strict,

followedBy(), for relaxed, and

followedByAny(), for non-deterministic relaxed contiguity.

For negative patterns, you can use:

notNext(), if you do not want an event type to directly follow another

notFollowedBy(), if you do not want an event type to be anywhere between two other event types

Relaxed contiguity means that only the first succeeding matching event will be matched, while with non-deterministic relaxed contiguity, multiple matches will be emitted for the same beginning. As an example, a pattern a b, given the event sequence "a", "c", "b1", "b2", will give the following results:

Strict Contiguity between a and b: {} (no match), the "c" after "a" causes "a" to be discarded.

Relaxed Contiguity between a and b: {a b1}, as relaxed contiguity is viewed as “skip non-matching events
till the next matching one”.

Non-Deterministic Relaxed Contiguity between a and b: {a b1}, {a b2}, as this is the most general form.
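As a plain-Java illustration (a toy model of the semantics above, not Flink code), matching the pattern "a b" against the sequence ["a", "c", "b1", "b2"]:

```java
import java.util.ArrayList;
import java.util.List;

public class ContiguityDemo {

    // Strict contiguity: "b" must directly follow "a"; the intervening "c"
    // discards the partial match, so there is no result.
    static List<String> strict(List<String> events) {
        List<String> matches = new ArrayList<>();
        for (int i = 0; i + 1 < events.size(); i++) {
            if (events.get(i).equals("a") && events.get(i + 1).startsWith("b")) {
                matches.add("a " + events.get(i + 1));
            }
        }
        return matches;
    }

    // Relaxed contiguity: skip non-matching events until the FIRST "b".
    static List<String> relaxed(List<String> events) {
        List<String> matches = new ArrayList<>();
        int start = events.indexOf("a");
        for (int i = start + 1; start >= 0 && i < events.size(); i++) {
            if (events.get(i).startsWith("b")) {
                matches.add("a " + events.get(i));
                break; // only the first succeeding matching event counts
            }
        }
        return matches;
    }

    // Non-deterministic relaxed contiguity: EVERY later "b" yields its own
    // match for the same beginning.
    static List<String> nonDeterministicRelaxed(List<String> events) {
        List<String> matches = new ArrayList<>();
        int start = events.indexOf("a");
        for (int i = start + 1; start >= 0 && i < events.size(); i++) {
            if (events.get(i).startsWith("b")) {
                matches.add("a " + events.get(i));
            }
        }
        return matches;
    }

    public static void main(String[] args) {
        List<String> events = List.of("a", "c", "b1", "b2");
        System.out.println(strict(events));                  // []
        System.out.println(relaxed(events));                 // [a b1]
        System.out.println(nonDeterministicRelaxed(events)); // [a b1, a b2]
    }
}
```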

It’s also possible to define a temporal constraint for the pattern to be valid.
For example, you can define that a pattern should occur within 10 seconds via the pattern.within() method.
Temporal patterns are supported for both processing and event time.

Attention A pattern sequence can only have one temporal constraint. If multiple such constraints are defined on different individual patterns, then the smallest is applied.

next.within(Time.seconds(10));

next.within(Time.seconds(10))

Pattern Operation

Description

begin()

Defines a starting pattern:

Pattern<Event, ?> start = Pattern.<Event>begin("start");

next()

Appends a new pattern. A matching event has to directly succeed the previous matching event
(strict contiguity):

Pattern<Event, ?> next = start.next("middle");

followedBy()

Appends a new pattern. Other events can occur between a matching event and the previous
matching event (relaxed contiguity):

Pattern<Event, ?> followedBy = start.followedBy("middle");

followedByAny()

Appends a new pattern. Other events can occur between a matching event and the previous
matching event, and alternative matches will be presented for every alternative matching event
(non-deterministic relaxed contiguity):

Pattern<Event, ?> followedByAny = start.followedByAny("middle");

notNext()

Appends a new negative pattern. A matching (negative) event has to directly succeed the
previous matching event (strict contiguity) for the partial match to be discarded:

Pattern<Event, ?> notNext = start.notNext("not");

notFollowedBy()

Appends a new negative pattern. A partial matching event sequence will be discarded even
if other events occur between the matching (negative) event and the previous matching event
(relaxed contiguity):

Pattern<Event, ?> notFollowedBy = start.notFollowedBy("not");

within(time)

Defines the maximum time interval for an event sequence to match the pattern. If a non-completed event
sequence exceeds this time, it is discarded:

pattern.within(Time.seconds(10));

Pattern Operation

Description

begin()

Defines a starting pattern:

val start = Pattern.begin[Event]("start")

next()

Appends a new pattern. A matching event has to directly succeed the previous matching event
(strict contiguity):

val next = start.next("middle")

followedBy()

Appends a new pattern. Other events can occur between a matching event and the previous
matching event (relaxed contiguity) :

val followedBy = start.followedBy("middle")

followedByAny()

Appends a new pattern. Other events can occur between a matching event and the previous
matching event, and alternative matches will be presented for every alternative matching event
(non-deterministic relaxed contiguity):

val followedByAny = start.followedByAny("middle")

notNext()

Appends a new negative pattern. A matching (negative) event has to directly succeed the
previous matching event (strict contiguity) for the partial match to be discarded:

val notNext = start.notNext("not")

notFollowedBy()

Appends a new negative pattern. A partial matching event sequence will be discarded even
if other events occur between the matching (negative) event and the previous matching event
(relaxed contiguity):

val notFollowedBy = start.notFollowedBy("not")

within(time)

Defines the maximum time interval for an event sequence to match the pattern. If a non-completed event
sequence exceeds this time, it is discarded:

pattern.within(Time.seconds(10))

Detecting Patterns

After specifying the pattern sequence you are looking for, it is time to apply it to your input stream to detect potential matches. To run a stream of events against your pattern sequence, you have to create a PatternStream. Given an input stream input, a pattern pattern, and an optional comparator comparator (used to sort events that have the same timestamp in event time, or that arrived at the same moment), you create the PatternStream by calling:
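A sketch in Java (the elided parts depend on your job; the comparator argument can be omitted):

```java
DataStream<Event> input = ...           // your input stream
Pattern<Event, ?> pattern = ...         // the pattern sequence to look for
EventComparator<Event> comparator = ... // optional

PatternStream<Event> patternStream = CEP.pattern(input, pattern, comparator);
```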

The input stream can be keyed or non-keyed depending on your use-case.

Attention Applying your pattern on a non-keyed stream will result in a job with parallelism equal to 1.

Selecting from Patterns

Once you have obtained a PatternStream you can select from detected event sequences via the select or flatSelect methods.

The select() method requires a PatternSelectFunction implementation.
A PatternSelectFunction has a select method which is called for each matching event sequence.
It receives a match in the form of Map<String, List<IN>> where the key is the name of each pattern in your pattern
sequence and the value is a list of all accepted events for that pattern (IN is the type of your input elements).
The events for a given pattern are ordered by timestamp. The reason for returning a list of accepted events for each pattern is that when using looping patterns (e.g. oneOrMore() and times()), more than one event may be accepted for a given pattern. The selection function returns exactly one result.
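A sketch of a PatternSelectFunction, assuming hypothetical pattern names "start" and "end" and a hypothetical Alert output type:

```java
class MyPatternSelectFunction implements PatternSelectFunction<Event, Alert> {
    @Override
    public Alert select(Map<String, List<Event>> match) throws Exception {
        // exactly one result per detected match
        Event startEvent = match.get("start").get(0);
        Event endEvent = match.get("end").get(0);
        return new Alert(startEvent, endEvent); // Alert is a hypothetical output type
    }
}
```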

A PatternFlatSelectFunction is similar to the PatternSelectFunction, with the only distinction that it can return an
arbitrary number of results. To do this, the select method has an additional Collector parameter which is
used to forward your output elements downstream.
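A corresponding sketch of a PatternFlatSelectFunction, again with hypothetical pattern names and output type:

```java
class MyPatternFlatSelectFunction implements PatternFlatSelectFunction<Event, Alert> {
    @Override
    public void flatSelect(Map<String, List<Event>> match, Collector<Alert> collector) throws Exception {
        // zero or more results per detected match
        for (Event event : match.get("end")) {
            collector.collect(new Alert(event)); // Alert is a hypothetical output type
        }
    }
}
```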

The select() method takes a selection function as argument, which is called for each matching event sequence.
It receives a match in the form of Map[String, Iterable[IN]] where the key is the name of each pattern in your pattern
sequence and the value is an Iterable over all accepted events for that pattern (IN is the type of your input elements).

The events for a given pattern are ordered by timestamp. The reason for returning an iterable of accepted events for each pattern is that when using looping patterns (e.g. oneOrMore() and times()), more than one event may be accepted for a given pattern. The selection function returns exactly one result per call.

The flatSelect method is similar to the select method. Their only difference is that the function passed to the
flatSelect method can return an arbitrary number of results per call. In order to do this, the function for
flatSelect has an additional Collector parameter which is used to forward your output elements downstream.

Handling Timed Out Partial Patterns

Whenever a pattern has a window length attached via the within keyword, it is possible that partial event sequences
are discarded because they exceed the window length. To react to these timed out partial matches the select
and flatSelect API calls allow you to specify a timeout handler. This timeout handler is called for each timed out
partial event sequence. The timeout handler receives all the events that have been matched so far by the pattern, and
the timestamp when the timeout was detected.

To treat partial patterns, the select and flatSelect API calls offer an overloaded version which takes as
the first parameter a PatternTimeoutFunction/PatternFlatTimeoutFunction and as second parameter the known
PatternSelectFunction/PatternFlatSelectFunction. The return type of the timeout function can be different from the
select function. The timeout event and the select event are wrapped in Either.Left and Either.Right respectively
so that the resulting data stream is of type org.apache.flink.types.Either.
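A sketch of the overloaded select call, where TimeoutEvent and ComplexEvent are hypothetical output types:

```java
PatternStream<Event> patternStream = CEP.pattern(input, pattern);

DataStream<Either<TimeoutEvent, ComplexEvent>> result = patternStream.select(
    new PatternTimeoutFunction<Event, TimeoutEvent>() {
        @Override
        public TimeoutEvent timeout(Map<String, List<Event>> partialMatch, long timeoutTimestamp) throws Exception {
            // called for each timed out partial match
            return new TimeoutEvent(partialMatch, timeoutTimestamp); // hypothetical type
        }
    },
    new PatternSelectFunction<Event, ComplexEvent>() {
        @Override
        public ComplexEvent select(Map<String, List<Event>> match) throws Exception {
            // called for each complete match
            return new ComplexEvent(match); // hypothetical type
        }
    }
);
```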

To treat partial patterns, the select API call offers an overloaded version which takes as the first parameter a timeout function and as second parameter a selection function.
The timeout function is called with a map of string-event pairs of the partial match which has timed out and a long indicating when the timeout occurred.
The string is defined by the name of the pattern to which the event has been matched.
The timeout function returns exactly one result per call.
The return type of the timeout function can be different from the select function.
The timeout event and the select event are wrapped in Left and Right respectively so that the resulting data stream is of type Either.

Handling Lateness in Event Time

In CEP the order in which elements are processed matters. To guarantee that elements are processed in the correct order when working in event time, an incoming element is initially put in a buffer where elements are sorted in ascending order based on their timestamp, and when a watermark arrives, all the elements in this buffer with timestamps smaller than that of the watermark are processed. This implies that elements between watermarks are processed in event-time order.

Attention The library assumes correctness of the watermark when working in event time.

To guarantee that elements across watermarks are processed in event-time order, Flink’s CEP library assumes correctness of the watermark and considers as late those elements whose timestamp is smaller than that of the last seen watermark. Late elements are not processed further.

Examples

The following example detects the pattern start, middle(name = "error") -> end(name = "critical") on a keyed data
stream of Events. The events are keyed by their ids and a valid pattern has to occur within 10 seconds.
The whole processing is done with event time.
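A sketch of what this job could look like in Java, assuming an Event type with getId() and getName() accessors, a hypothetical Alert output type, and a hypothetical createAlert() helper:

```java
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

DataStream<Event> input = ... // your source, with timestamps and watermarks assigned

// key the stream by event id
DataStream<Event> partitionedInput = input.keyBy(new KeySelector<Event, Integer>() {
    @Override
    public Integer getKey(Event value) throws Exception {
        return value.getId();
    }
});

// start -> middle(name = "error") -> end(name = "critical"), within 10 seconds
Pattern<Event, ?> pattern = Pattern.<Event>begin("start")
    .next("middle").where(new SimpleCondition<Event>() {
        @Override
        public boolean filter(Event value) throws Exception {
            return value.getName().equals("error");
        }
    })
    .followedBy("end").where(new SimpleCondition<Event>() {
        @Override
        public boolean filter(Event value) throws Exception {
            return value.getName().equals("critical");
        }
    })
    .within(Time.seconds(10));

PatternStream<Event> patternStream = CEP.pattern(partitionedInput, pattern);

DataStream<Alert> alerts = patternStream.select(new PatternSelectFunction<Event, Alert>() {
    @Override
    public Alert select(Map<String, List<Event>> match) throws Exception {
        return createAlert(match); // hypothetical helper turning a match into an Alert
    }
});
```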

Migrating from an older Flink version

The CEP library in Flink-1.3 ships with a number of new features which have led to some changes in the API. Here we
describe the changes that you need to make to your old CEP jobs, in order to be able to run them with Flink-1.3. After
making these changes and recompiling your job, you will be able to resume its execution from a savepoint taken with the
old version of your job, i.e. without having to re-process your past data.

The changes required are:

Change your conditions (the ones in the where(...) clause) to extend the SimpleCondition class instead of
implementing the FilterFunction interface.

Change your functions provided as arguments to the select(...) and flatSelect(...) methods to expect a list of
events associated with each pattern (List in Java, Iterable in Scala). This is because with the addition of
the looping patterns, multiple input events can match a single (looping) pattern.

The followedBy() in Flink 1.1 and 1.2 implied non-deterministic relaxed contiguity (see
here). In Flink 1.3 this has changed and followedBy() implies relaxed contiguity,
while followedByAny() should be used if non-deterministic relaxed contiguity is required.