Syslog is plentiful in environments of every size, and Splunking your syslog data is something that inevitably happens, one way or another. And believe it or not, there are tons of ways you can Splunk that data. As a Splunk Professional Services Consultant, I’ve seen many of the ways customers send that data over to Splunk. In this post, I’ll cover some of the do’s and don’ts of Splunking your syslog, and for what it’s worth, every environment is different. You may be limited to a particular method by company policy, and that’s OK.

What Type of Syslog Data are We Talking About?

Several types of devices write to syslog. For the sake of this blog post, we are referring to data from network routers, switches, and firewalls, or from other devices and applications that natively emit syslog but do not write a text log file to disk somewhere.

For Windows/Linux syslog where the log file lives on disk, we will always recommend using a Splunk Universal Forwarder to collect that data. DO NOT send Windows/Linux syslog to a syslog server via third-party software like NXLog.

What NOT to do When Splunking Your Syslog Data

Sending your data to a third party via SNMP traps. I’ve seen it time and time again: network teams configure their routers and switches to send SNMP traps to a third-party application like SolarWinds, then forward that data on to Splunk indexers from there. The issue with this method is that it becomes very difficult to separate the original sources from one another. What ends up happening is that everything coming over from the third-party application gets tagged with the same Splunk metadata (host, index, sourcetype), and separating it back out is difficult and cumbersome. With everything tagged with the same metadata, field extractions become difficult and sometimes impossible, and you cannot make use of Splunkbase apps for most of this data.

Sending syslog data directly to a network port opened on Splunk. Often, a Splunk admin will open a network port specifically for syslog data on a Splunk indexer or forwarder. There are several reasons this is not the recommended approach. First, Splunk servers have to be restarted periodically: any time you add an index, onboard data, install an application, or otherwise update a configuration file, the Splunk service must be restarted. Because syslog over UDP provides no acknowledgment, the origin source simply assumes that Splunk received the data. Any syslog data sent to Splunk during a restart is lost, and it cannot be recovered! The other major reason we do not recommend this approach is that many syslog devices cannot change from the default syslog port of 514. This means Splunk would have to run as root to accept traffic on that privileged port, which is not a best practice and actually violates security policy in many organizations.
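To see why the lack of acknowledgment matters, here is a minimal Python sketch of UDP’s fire-and-forget behavior (the port number and message are made up for illustration): the send call succeeds even when nothing is listening on the other end, so the sender never learns the data was lost.

```python
import socket

# Illustration of why UDP syslog is fire-and-forget: sendto() succeeds
# even though nothing is listening on the destination port, so the
# sender has no way of knowing the message was dropped.
# Port 5514 and the message below are hypothetical examples.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
message = b"<134>Mar  1 12:34:56 web01 app: hello"
sent = sock.sendto(message, ("127.0.0.1", 5514))
print(sent)  # bytes handed to the kernel -- not proof of delivery
sock.close()
```

The return value only tells you the kernel accepted the datagram; nothing tells you whether anything received it, which is exactly the gap a restarting Splunk listener falls into.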

Syslog forwarding log files. As mentioned earlier, there is no sense in configuring syslog forwarding on Linux/Windows servers whose logs are already being written to a file on disk. The problem with this approach is that the formatting of the logs often gets butchered in transit. This matters because there are TAs (technology add-ons) on Splunkbase that are used to extract fields and transform the data; the Linux TA and the Windows TA are common examples. These transformations are done via regex (regular expressions), so if the log format does not line up, your OOTB (out-of-the-box) field extractions will not work.
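To illustrate the failure mode, here is a small Python sketch. The regex and log lines are invented for this example, not taken from any actual TA; they just mimic the kind of pattern-based extraction a TA performs, and show how a relay that re-stamps the event breaks it.

```python
import re

# Hypothetical pattern resembling the kind of regex a TA might use to
# extract fields from a standard Linux syslog line. Pattern and sample
# lines are illustrative only.
pattern = re.compile(
    r"^(?P<timestamp>\w{3}\s+\d+ \d{2}:\d{2}:\d{2}) "
    r"(?P<host>\S+) "
    r"(?P<process>[\w\-/]+)(?:\[(?P<pid>\d+)\])?: "
    r"(?P<message>.*)$"
)

# The log line in its native on-disk format: fields extract cleanly.
native = "Mar  1 12:34:56 web01 sshd[2112]: Accepted publickey for deploy"
m = pattern.match(native)
print(m.group("host"), m.group("process"))

# The same event after a relay has prepended its own header and
# re-stamped the timestamp: the pattern no longer lines up at all.
mangled = ("2024-03-01T12:34:57Z relay01 "
           "Mar  1 12:34:56 web01 sshd[2112]: Accepted publickey for deploy")
print(pattern.match(mangled))  # None -- field extraction silently fails
```

Nothing errors out when this happens; the events still index, but every field the TA was supposed to extract quietly comes back empty.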

So How Should We Handle Our Syslog Data?

Both Splunk and nearly any Professional Services Consultant will recommend funneling your syslog data through a syslog server like syslog-ng, rsyslog, or Kiwi (if you’re a Windows shop). Personally, I find rsyslog a bit limited in features, and I try to avoid Windows where possible when standing up Splunk architecture. Syslog-ng is completely free, easy to download, and preferred by many in the Splunk community.

Syslog-ng, or any syslog server for that matter, can fill several gaps when it comes to getting data over to Splunk. If configured properly, your syslog server will rarely need to be restarted, which alleviates the data-loss issue you get when sending UDP data directly to Splunk. If your syslog server does not go down and the service never (or hardly ever) needs to be restarted, then the fact that UDP requires no acknowledgment no longer impacts you, or the impact is severely reduced. And for what it’s worth, it is almost impossible to guarantee zero data loss with UDP.
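As a rough sketch of what this looks like in practice, a minimal syslog-ng configuration that listens on port 514 and writes each sending host to its own directory might look like the following. The paths, the "splunk" user and group, and the file layout are all examples to adapt, not requirements.

```
# Minimal syslog-ng sketch (paths, owner, and layout are examples).
# Listen on UDP and TCP 514 and write one directory per sending host,
# so each device's logs stay separated and easy to sourcetype in Splunk.
source s_network {
    network(transport("udp") port(514));
    network(transport("tcp") port(514));
};

destination d_by_host {
    file("/var/log/remote/${HOST}/${PROGRAM}.log"
         create-dirs(yes)
         owner("splunk") group("splunk") perm(0640));
};

log { source(s_network); destination(d_by_host); };
```

Splitting by ${HOST} is the key design choice here: it is what lets you later assign different Splunk metadata to different devices instead of lumping everything together, which is exactly the problem with the SNMP-trap relay approach described earlier.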

Once you have your syslog server configured and running, you shouldn’t have to worry about Splunk running as root: your syslog server handles the privileged port for you, and you should make your Splunk user the owner of the log files that the syslog server writes. From there, you deploy a Universal Forwarder (UF) to your syslog server. The UF monitors the log files generated by the syslog server, and it is also where you assign an index and sourcetype to each of your different log sources.
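To sketch that last step, assuming the syslog server writes per-host files under a hypothetical /var/log/remote/<host>/ tree, the UF’s inputs.conf stanzas might look something like this. The index, sourcetype, and path names are examples only; match them to your own environment.

```
# inputs.conf on the Universal Forwarder (illustrative values only).
[monitor:///var/log/remote/*/cisco-*.log]
index = network
sourcetype = cisco:ios
host_segment = 4

[monitor:///var/log/remote/pan-fw*/*.log]
index = firewall
sourcetype = pan:traffic
host_segment = 4
```

The host_segment setting tells Splunk to take the host field from the fourth segment of the file path (the per-host directory), so events carry the name of the original device rather than the syslog server’s own hostname.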

Ultimately, it is up to you and your company policy on how you handle syslog data. One of the biggest things that dictates how a company handles syslog data is a retention policy, usually written based on some sort of compliance that they have to adhere to. Minimizing data loss and meeting retention requirements of syslog data is extremely difficult without the proper use of a syslog server. There are many ways to handle your syslog data, and the recommended approach may not work for everyone. But if you have any questions or concerns, please don’t hesitate to reach out to Aditum’s Professional Services Consultants for advice.