

To send the necessary information to the remote syslog server, the following changes were made.
When the directive use_record is set to true (false by default) in /etc/fluent/configs.d/dynamic/output-remote-syslog.conf:
1) The hostname in the fluentd record is forwarded to the remote syslog server. Otherwise, the hostname of the fluentd host is forwarded.
2-1) application logs:
"container_name", "namespace_name", and "pod_name" are included in the output content
facility - the value configured in output-remote-syslog.conf is used;
'user' by default
severity - the value configured in output-remote-syslog.conf is used;
'info' by default
2-2) operation logs:
facility - record[systemd][u][SYSLOG_FACILITY] is used as the facility if available
severity - record[level] is used as the severity if available
3) Improvements to the tag_key directive:
. tag_key takes multiple comma-separated values,
e.g., tag_key ident,SYSLOG_IDENTIFIER
. tag_key takes dot-formatted nested keys,
e.g., tag_key systemd.u.SYSLOG_IDENTIFIER
Notes: If tag_key is not set, the fluentd tag ("output_ops_tag" for operation logs;
"output_tag" for container logs) is sent to rsyslog. When a tag_key is specified and
the key is found in the record, the record value is used as the tag. E.g., given
tag_key ident
record['ident'] == "myTag"
then "myTag" is set as the tag in the packet sent to rsyslog.
If multiple tag_key values are configured, the first one that matches is used and the
rest are ignored even if they are also found in the record. E.g., given
tag_key systemd.u.SYSLOG_IDENTIFIER,ident
record['systemd']['u']['SYSLOG_IDENTIFIER'] == "myTag0"
record['ident'] == "myTag1"
then "myTag0" is set as the tag in the packet sent to rsyslog.
If none of the tag_key values matches, it falls back to the default behaviour:
output_ops_tag is sent for operation logs and output_tag for container logs.
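The tag selection behaviour above can be sketched in a few lines of Ruby. This is an illustrative sketch, not the plugin's actual code; the method name resolve_tag is made up for this example.

```ruby
# Illustrative sketch of the tag_key lookup behaviour (not the plugin's
# actual implementation). tag_key may list several keys, comma-separated,
# and each key may be a dot-delimited path into nested record hashes.
def resolve_tag(record, tag_key, default_tag)
  return default_tag if tag_key.nil? || tag_key.empty?
  tag_key.split(',').each do |key|
    # "systemd.u.SYSLOG_IDENTIFIER" digs into nested hashes.
    value = record.dig(*key.split('.'))
    return value if value # first match wins; remaining keys are ignored
  end
  default_tag # no key matched: fall back to output_tag / output_ops_tag
end

record = {
  'systemd' => { 'u' => { 'SYSLOG_IDENTIFIER' => 'myTag0' } },
  'ident'   => 'myTag1',
}
resolve_tag(record, 'systemd.u.SYSLOG_IDENTIFIER,ident', 'output_tag') # => "myTag0"
resolve_tag({}, 'ident', 'output_ops_tag')                             # => "output_ops_tag"
```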
Sample logs in /var/log/messages when use_record is set to true:
Log test message by logger: rsyslogTestMessage-20180215-145124
3,17,Feb 15 14:52:25,ip-172-18-5-234.ec2.internal,rsyslogTestTag:, rsyslogTestMessage-20180215-145124
Log test message by kibana: testKibanaMessage-20180215-145225
6,16,Feb 15 14:52:41,ip-172-18-5-234.ec2.internal,output_tag:, namespace_name=logging, container_name=kibana, pod_name=logging-kibana-1-qr6pl, message=GET /testKibanaMessage-20180215-145225 404 4ms - 9.0B
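The leading numbers in these sample lines appear to be the numeric syslog severity and facility codes: "6,16" matches severity info, facility local0 (the application-log defaults above), and "3,17" matches severity err, facility local1. Assuming that interpretation, a small decoder (names here are illustrative, not part of the plugin):

```ruby
# Illustrative decoder for the numeric "severity,facility" prefix seen in
# the /var/log/messages samples above, using standard syslog numeric codes.
SEVERITIES = %w[emerg alert crit err warning notice info debug]
FACILITIES = %w[kern user mail daemon auth syslog lpr news uucp cron
                authpriv ftp ntp audit alert clock
                local0 local1 local2 local3 local4 local5 local6 local7]

def decode_pri(line)
  sev, fac = line.split(',', 3)[0, 2].map(&:to_i)
  { severity: SEVERITIES[sev], facility: FACILITIES[fac] }
end

decode_pri('6,16,Feb 15 14:52:41,...') # severity info, facility local0
decode_pri('3,17,Feb 15 14:52:25,...') # severity err, facility local1
```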

Hi @Anping,
Could you please attach the fluentd syslog config file /etc/fluent/configs.d/dynamic/output-remote-syslog.conf?
The file should contain the remote_syslog config param, whose value should be the one you passed with openshift_logging_mux_remote_syslog_host.
When I run ansible with the option "-e openshift_logging_mux_remote_syslog_host=10.11.12.13", the value is passed to the config file as expected.
output-remote-syslog.conf:
## This file was generated by generate-syslog-config.rb
<store>
@type syslog_buffered
remote_syslog 10.11.12.13
port 514
hostname logging-mux-2-bqnpx
facility local0
severity debug
</store>
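The file above is produced by generate-syslog-config.rb from REMOTE_SYSLOG_* environment variables. A hedged sketch of that mapping, assuming the directive names seen in the configs in this thread (the real generator may differ in details; render_store and the exact variable-to-directive pairs here are assumptions):

```ruby
# Hedged sketch: render an output-remote-syslog.conf <store> block from
# REMOTE_SYSLOG_* environment variables, as discussed in this thread.
# Directive names come from the sample configs shown here; the actual
# generate-syslog-config.rb may behave differently.
def render_store(env)
  directives = {
    'remote_syslog' => env['REMOTE_SYSLOG_HOST'],
    'port'          => env['REMOTE_SYSLOG_PORT'],
    'hostname'      => env['HOSTNAME'],
    'tag_key'       => env['REMOTE_SYSLOG_TAG_KEY'],
    'facility'      => env['REMOTE_SYSLOG_FACILITY'],
    'severity'      => env['REMOTE_SYSLOG_SEVERITY'],
    'use_record'    => env['REMOTE_SYSLOG_USE_RECORD'],
  }
  body = directives.reject { |_, v| v.nil? }
                   .map { |k, v| "  #{k} #{v}" }
                   .join("\n")
  "<store>\n  @type syslog_buffered\n#{body}\n</store>"
end

puts render_store(
  'REMOTE_SYSLOG_HOST'     => '10.11.12.13',
  'REMOTE_SYSLOG_PORT'     => '514',
  'HOSTNAME'               => 'logging-mux-2-bqnpx',
  'REMOTE_SYSLOG_FACILITY' => 'local0',
  'REMOTE_SYSLOG_SEVERITY' => 'debug',
)
```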
Note: I'm adding this test case to remote-syslog.sh https://github.com/openshift/origin-aggregated-logging/pull/887
And I don't see any difference between mux and the standalone fluentd in this aspect. Do you observe the problem just in mux, not in fluentd?
Thanks!

Hi @Anping, the patch should be in v3.9.22-2 and newer.
Could you post your output-remote-syslog.conf? I'm interested in the use_record and tag_key values.
Regarding the pod name, it is supposed to be logged in the message as 'pod_name=<POD_NAME>'...
FYI, you can find the test case in the upstream CI tests:
origin-aggregated-logging/test/remote-syslog.sh
title="Test 6, use rsyslogd on the node"
This is the config file generated in the CI test.
==> /etc/fluent/configs.d/dynamic/output-remote-syslog.conf <==
## This file was generated by generate-syslog-config.rb
<store>
@type syslog_buffered
@id remote-syslog-input
remote_syslog ip-172-18-12-241.ec2.internal
port 601
hostname logging-mux-7-4dsnh
tag_key ident,systemd.u.SYSLOG_IDENTIFIER,local1.err
facility local0
severity info
use_record true
</store>

Thanks for the config file, Anping.
Could you add "use_record true" to your config and see if it changes the behaviour? It can be done by:
oc set env daemonset/logging-fluentd REMOTE_SYSLOG_USE_RECORD=true  (or dc/logging-mux)
Then, could you add "tag_key ident,systemd.u.SYSLOG_IDENTIFIER" and repeat the test?
oc set env daemonset/logging-fluentd REMOTE_SYSLOG_TAG_KEY='ident,systemd.u.SYSLOG_IDENTIFIER'  (or dc/logging-mux)
Please see also "Doc Text" of this bug. If you could review it and provide your inputs to improve it, I'd greatly appreciate it.

> 1. How can I enable tag_key for the REMOTE_SYSLOG_HOST_BACKUP
Do you mean you want to use the value of REMOTE_SYSLOG_HOST_BACKUP as the tag_key?
Could you try setting hostname to tag_key?
<store>
@type syslog_buffered
remote_syslog 172.31.70.146
port 514
hostname logging-fluentd-czklb
tag_key hostname
use_record true
....
</store>
> 2. No pod_name/uuid when using docker json-file logs. A message is as below. I think the programname should be the pod_name.
Unfortunately, the remote-syslog plugin is not implemented that way; it does not update the programname value. The PR for this bug adds the namespace name, container name, and pod name to the message. This is an example from the upstream CI test remote-syslog.sh.
[2018-07-31T17:57:28.574+0000] 6,16,Jul 31 17:57:28,ip-172-18-7-210.ec2.internal,output_ops_tag:, namespace_name=openshift-logging, container_name=kibana, pod_name=logging-kibana-1-wvpbh, message=GET /deee3211c1c1454288923ec4eafe09f7 404 2ms - 9.0B

1. How can I enable tag_key for the REMOTE_SYSLOG_HOST_BACKUP?
No, I want to add REMOTE_SYSLOG_TAG_KEY for the REMOTE_SYSLOG_HOST_BACKUP (the second rsyslog server). Could I use the environment variable?
2. No pod_name/uuid when using docker json-file logs
It seems "tag_key ident,systemd.u.SYSLOG_IDENTIFIER" only works for journald logs. I couldn't find the pod_name/namespace_name. What tag_key should I use to collect json-file container logs?

(In reply to Anping Li from comment #36)
> 1. How can I enable tag_key for the REMOTE_SYSLOG_HOST_BACKUP
> No, I want to add REMOTE_SYSLOG_TAG_KEY for the
> REMOTE_SYSLOG_HOST_BACKUP(the second rsyslog server). Could I use the
> Environment variable?
Well, now I'm confused... May I ask where REMOTE_SYSLOG_HOST_BACKUP came from? The environment variables starting with REMOTE_SYSLOG_ are defined here:
https://github.com/openshift/origin-aggregated-logging/blob/master/fluentd/generate_syslog_config.rb#L16-L24
There is neither HOST_BACKUP nor PORT_BACKUP...
> 2. No pod_name/uuid when use docker json_file logs
> It seems the "tag_key ident,systemd.u.SYSLOG_IDENTIFIER" only works for
> journald logs.
I don't think so. When I ran this test, the docker log driver was json-file and this log is from an application pod (In this example, it is a kibana pod :).
[2018-07-31T17:57:28.574+0000] 6,16,Jul 31 17:57:28,ip-172-18-7-210.ec2.internal,output_ops_tag:, namespace_name=openshift-logging, container_name=kibana, pod_name=logging-kibana-1-wvpbh, message=GET /deee3211c1c1454288923ec4eafe09f7 404 2ms - 9.0B
> I couldn't find the pod-name/namespace name. What tag_key
> should I use to collect json-file container logs?
When I ran it, this series of values was set as tag_key, I believe:
REMOTE_SYSLOG_TAG_KEY='ident,systemd.u.SYSLOG_IDENTIFIER,local1.err'
If you are interested, please take a look at this part of the CI test.
https://github.com/openshift/origin-aggregated-logging/blob/master/test/remote-syslog.sh#L293-L413

Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2018:2335
