For questions about the plugin, open a topic in the Discuss forums. For bugs or feature requests, open an issue on GitHub.
For the list of Elastic supported plugins, please consult the Elastic Support Matrix.

The CSV filter takes an event field containing CSV data, parses it,
and stores it as individual fields (you can optionally specify their names).
This filter can also parse data with any separator, not just commas.
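A minimal pipeline snippet might look like the following (the source field name here is illustrative):

```
filter {
  csv {
    # Parse the CSV text held in the "message" field
    # into individual event fields.
    source => "message"
  }
}
```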

Define a list of column names (in the order they appear in the CSV,
as if it were a header line). If columns is not configured, or if
not enough columns are specified, the default column names are
"column1", "column2", etc. If the data contains more columns than
this list specifies, the extra columns are auto-numbered:
e.g. "user_defined_1", "user_defined_2", "column3", "column4", etc.

Define the column separator value. If this is not specified, the default
is a comma (","). To use a tab as the separator, you need to set the
value to the actual tab character, not the escape sequence "\t".
Optional.
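For example, to parse tab-separated data, the configuration must contain a literal tab between the quotes:

```
filter {
  csv {
    # A real tab character, not the two-character sequence \t.
    separator => "	"
  }
}
```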

On success, this filter can add fields to the event. Field names and
values support the %{fieldname} syntax: for example, if the event has
field "somefield" == "hello", a field name of "foo_%{somefield}" would
be added as "foo_hello", and a %{host} reference in the value would be
replaced with the host value from the event. Fields with hardcoded
values can be added the same way.
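A sketch of such a configuration (the names "somefield", "foo_...", and "new_field" are illustrative):

```
filter {
  csv {
    add_field => {
      # If somefield == "hello", this adds "foo_hello", with the
      # %{host} piece replaced by the event's host value.
      "foo_%{somefield}" => "Hello world, from %{host}"
      # A hardcoded field, added unconditionally on success.
      "new_field" => "new_static_value"
    }
  }
}
```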

Add a unique ID to the plugin configuration. If no ID is specified, Logstash will generate one.
It is strongly recommended to set this ID in your configuration. This is particularly useful
when you have two or more plugins of the same type, for example, if you have two csv filters.
Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
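For instance, with two csv filters in one pipeline, distinct IDs keep them apart in the monitoring APIs (the ID values here are made up):

```
filter {
  csv {
    id => "access_log_csv"
  }
  csv {
    id => "audit_log_csv"
  }
}
```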