Extracts structured fields out of a single text field within a document. You choose which field to
extract matched fields from, as well as the grok pattern you expect will match. A grok pattern is like a regular
expression that supports aliased expressions that can be reused.

This tool is perfect for syslog logs, Apache and other web server logs, MySQL logs, and in general, any log format
that is written for humans and not for computer consumption.
This processor comes packaged with many
reusable patterns.

Grok sits on top of regular expressions, so any regular expressions are valid in grok as well.
The regular expression library is Oniguruma, and you can see the full supported regexp syntax
on the Oniguruma site.

Grok works by leveraging this regular expression language to allow naming existing patterns and combining them into more
complex patterns that match your fields.

The syntax for reusing a grok pattern comes in three forms: %{SYNTAX:SEMANTIC}, %{SYNTAX}, %{SYNTAX:SEMANTIC:TYPE}.

The SYNTAX is the name of the pattern that will match your text. For example, 3.44 will be matched by the NUMBER
pattern and 55.3.244.1 will be matched by the IP pattern. The syntax is how you match. NUMBER and IP are both
patterns that are provided within the default patterns set.

The SEMANTIC is the identifier you give to the piece of text being matched. For example, 3.44 could be the
duration of an event, so you could call it simply duration. Further, a string 55.3.244.1 might identify
the client making a request.

The TYPE is the type to which you wish to cast your named field. int and float are currently the only types supported for coercion.
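For example, %{NUMBER:duration:float} matches the text with the NUMBER pattern and coerces the resulting duration field to a float.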

For example, you might want to match the following text:

3.44 55.3.244.1

You may know that the message in the example is a number followed by an IP address. You can match this text by using the following
Grok expression.
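
```
%{NUMBER:duration} %{IP:client}
```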

The Grok processor comes pre-packaged with a base set of patterns. These patterns may not always have
what you are looking for. Patterns have a very basic format: each entry has a name and the pattern itself.

You can add your own patterns to a processor definition under the pattern_definitions option.
Here is an example of a pipeline specifying custom pattern definitions:
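This is a minimal sketch; the FAVORITE_DOG and RGB names and their patterns are illustrative custom definitions, not part of the built-in set:

```json
{
  "description": "a pipeline with custom pattern definitions",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["my %{FAVORITE_DOG:dog} is colored %{RGB:color}"],
        "pattern_definitions": {
          "FAVORITE_DOG": "beagle",
          "RGB": "RED|GREEN|BLUE"
        }
      }
    }
  ]
}
```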

Sometimes one pattern is not enough to capture the potential structure of a field. Let's assume we
want to match all messages that contain a favorite pet breed, either cat or dog. One way to accomplish
this is to provide two distinct patterns that can be matched, instead of one complicated expression capturing
the same either/or behavior.

Here is an example of such a configuration executed against the simulate API:
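In this sketch, the FAVORITE_DOG and FAVORITE_CAT definitions and the sample document are illustrative:

```json
POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "description": "parse multiple patterns",
    "processors": [
      {
        "grok": {
          "field": "message",
          "patterns": ["%{FAVORITE_DOG:pet}", "%{FAVORITE_CAT:pet}"],
          "pattern_definitions": {
            "FAVORITE_DOG": "beagle",
            "FAVORITE_CAT": "burmese"
          }
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "message": "I love burmese cats!"
      }
    }
  ]
}
```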

Both patterns will set the field pet with the appropriate match, but what if we want to trace which of our
patterns matched and populated our fields? We can do this with the trace_match parameter. Here is the output of
that same pipeline, but with "trace_match": true configured:
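With trace_match enabled, the processor records which pattern matched under the _grok_match_index key in the document's ingest metadata. You might see a response along these lines (fields trimmed for brevity; here "1" indicates the second pattern, FAVORITE_CAT, matched):

```json
{
  "docs": [
    {
      "doc": {
        "_source": {
          "message": "I love burmese cats!",
          "pet": "burmese"
        },
        "_ingest": {
          "_grok_match_index": "1"
        }
      }
    }
  ]
}
```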