The New CCIE Capture the Flag Competition at Cisco Live

At Cisco Live this year in Las Vegas, a coworker (Paul Giblin, Twitter: @dreamlessod) and I decided to attend a new kind of event hosted by Cisco. The event was titled "Cisco Capture the Flag: A Full-Stack, Team-Based Competition". The competition was a structured game where teams compete to solve challenges related to networking, operating systems, applications, forensic analysis, and security; the winning team gets Intel NUC computers. While CCIE is listed in the title, having a CCIE certification was not a prerequisite to attend and compete (good for me, as I don't have a CCIE certification).

First of all, this was a VERY challenging competition, and I really enjoyed that it covered such a wide breadth of technical domains. While it was network-focused, it required teams to have a wide array of expertise in order to successfully navigate the challenges. With that said, I wanted to share the various challenges and their solutions, both to get the word out and in hopes that Cisco will decide to do it again next year. In case you're wondering, the creators of the competition are fine with me sharing the solutions.

The competition started with a web portal with a map of the world, and each challenge was a country. When you clicked on the country, you would be presented with the scenario and a clue as to what the flag was that you needed to capture. When you found the flag, you would enter it into the web portal and collect points.

The site was a global map on which you were able to click on countries. Each country was a challenge to complete.

Challenge #1

United States (50 Points)

In this challenge you were presented with a router that had an adjacent router configured with IPv4 and IPv6 addressing and running OSPF, EIGRP, IS-IS, and BGP. The challenge was to configure your router to bring up the adjacencies and peerings with the other router; you would then learn a host route via each protocol, and one of those host routes would be the "flag". Once complete, you enter the host IP address into the CTF portal and you get 50 points.

Solution

The solution was pretty standard. Just configure routing for the four protocols and learn the prefixes.
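For reference, the configuration amounted to something like the sketch below; every address, AS number, and the IS-IS NET shown here is a placeholder rather than an actual lab value:

interface GigabitEthernet0/0
 ip address 10.0.12.2 255.255.255.0
 ipv6 address 2001:db8:12::2/64
 ip router isis
!
router ospf 1
 network 10.0.12.0 0.0.0.255 area 0
!
router eigrp 100
 network 10.0.12.0 0.0.0.255
!
router isis
 net 49.0001.0000.0000.0002.00
!
router bgp 65002
 neighbor 10.0.12.1 remote-as 65001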

Challenge #2

Spain (100 Points)

The Spain challenge was similar to the US challenge: you need to bring up a BGP peering with an adjacent router. Here's the kicker: first, it uses IPv6 addressing, and second, you have to peer using a Linux server, not a Cisco router. Once the peering is up, there is a host route that your Linux server will learn, and that is your flag.

Solution

The solution for this challenge was to first configure the Linux server's second NIC with a static IPv6 address, and then to configure a BGP daemon on the server and peer with the router. In this case we used Quagga. With a little trial and error, and some reading of the manual, we were able to get the peering up.
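As a rough sketch (addresses and AS numbers are placeholders, not the lab values), the two pieces were an IPv6 address on the second NIC and a minimal bgpd.conf:

# address the second NIC
ip -6 addr add 2001:db8:12::2/64 dev eth1

! /etc/quagga/bgpd.conf
router bgp 65002
 no bgp default ipv4-unicast
 neighbor 2001:db8:12::1 remote-as 65001
 address-family ipv6
  neighbor 2001:db8:12::1 activate
 exit-address-family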

Challenge #3

Australia (100 Points)

This challenge presented you with a linux server and gave a clue that some other device was transmitting the flag to you. That was the entirety of the clue; pretty vague, right?

Solution

At first everyone struggled with this one, because tcpdump showed zero packets coming into or going out of the NIC on the Linux server. Eventually the proctors made an adjustment to the lab and packets started to flow: a multicast stream of data. We figured the stream must contain the flag, and we just needed to figure out what kind of data it was and read it. So we took a packet capture with tcpdump, SFTP'd it off the box, and loaded up Wireshark. Once in Wireshark, we looked at Statistics > Conversations and saw three distinct streams. Looking at one stream's metadata showed that it was MPEG2-encoded data. From there, we used Decode As in Wireshark and selected RTP, then Telephony > RTP > RTP Streams. We analyzed the streams and exported the stream bytes to a file. Once we had extracted the bytes from the packet capture and saved them to a file, we opened it with VLC media player: it was a video showing an IPv6 address. This was our flag!
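The capture itself was the easy part; something along these lines, with the analysis happening afterwards in Wireshark (interface and file names are illustrative):

tcpdump -i eth1 -n -w /tmp/mcast.pcap udp

# copy the capture off the box for analysis
scp /tmp/mcast.pcap analyst@laptop: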

Challenge #4

China (100 Points)

This challenge's clue said that there was an adjacent router with which you needed to bring up a BGP peering. That was it; no other information was given.

My Attempt

To start, we knew that the other router was already configured for BGP, so we used the Embedded Packet Capture feature on the router to get a PCAP, exported it to the laptop, and opened it in Wireshark. We could see the packets coming in on TCP/179. What we were hoping was that we could see the BGP OPEN message, which would contain the neighbor's AS number. Unfortunately, we did not see that: BGP requires the TCP session to be established first, so we configured BGP on our router even though we knew we didn't have the correct information. Even then we still could not see the BGP OPEN message, because we could not get a TCP handshake established. We could see that option 19 (the TCP MD5 signature) was present in the TCP SYN packet, so we knew we had to crack the password behind that MD5 digest in order to complete the challenge. At this point in the competition, we ran out of time.

Below is the solution that I came up with, but was not able to complete. And at the very end, I will give the other solution that one of the other teams came up with. They actually did complete the China challenge and shared their method after the competition ended.

My Proposed Solution

RFC 2385 defines TCP option 19, which is an MD5 digest of the TCP pseudo-header, TCP header, TCP payload, and the password. I was going to take all of these fields from Wireshark and then brute-force the password, looking for a candidate that produced an MD5 digest matching the one extracted from the packet capture.

An Alternative Solution

One of the other teams brute-forced the BGP MD5 password by looping over the packet capture with the tcpdump -M option. The -M option allows you to specify a checksum password and tcpdump will tell you if it is valid or not. Much easier than attempting to do it my way.
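The general idea is a loop over a wordlist like the one below; the exact string tcpdump prints for a verified digest varies between versions, so treat the grep as a sketch:

while read -r pw; do
  # -M validates the TCP-MD5 (RFC 2385) digests in the capture against $pw
  if tcpdump -r bgp.pcap -vvv -M "$pw" 2>/dev/null | grep -qw valid; then
    echo "BGP MD5 password found: $pw"
    break
  fi
done < wordlist.txt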

But Wait, There's More

Now that the BGP password is known and configured on your router, you can see what AS number the other router is using from its BGP OPEN message. Even once you have configured both the remote-as and the password, you still will not be able to form a peering. This is because the other router only advertises its own ASN in the OPEN message, not the ASN it expects its peer to have. So you must brute-force your own ASN, trying different values on your router until you happen to hit the one the remote router is configured to peer with. Only then will the peering come up, and you can get the BGP-learned prefix, which is the flag for the China challenge.
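Once all three pieces are known, the working configuration boils down to a few lines like these (all values are placeholders):

router bgp 64520
 ! 64520 = local ASN found by brute force
 neighbor 192.0.2.1 remote-as 64510
 ! 64510 = remote ASN read from the BGP OPEN message
 neighbor 192.0.2.1 password s3cr3t
 ! password recovered from the packet capture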

In Closing

I really enjoyed this competition, and it is not quite like any CTF I have seen before. Most are security- or networking-focused, not a mixture of so many different technical domains. I hope Cisco continues this next year. In case you were wondering, my team (Team Presidio) did win the competition. That said, there were so many skilled teams, and it was really fun to discuss with everyone how they arrived at their solutions.

Here are the NUCs! I'm thinking about using them to run some containers with different services for my house (think Plex and other such things).

Reclaiming Public IP Addresses Using Server Name Indication

Have you ever been in a situation where you have been given a block of addresses by your Internet provider and you have exhausted that space by publishing services to the Internet? Yea, me too, and it sucks. The typical solution would be to get a larger block from the provider. That seems easy enough, until they tell you that there are no contiguous blocks to choose from, and that in order to get additional space you would have to re-IP the outside interface of your firewall and change NAT, filtering, and DNS to make the migration work. Put the scotch down, you won't need it. Just use Server Name Indication.

What is Server Name Indication?

Server Name Indication (SNI) is a standard defined in RFC 3546 and subsequently in RFC 6066 that allows clients to signal to a server which hostname they are trying to initiate a secure connection to. This is useful for servers or reverse proxies that host multiple virtual hosts on the same address.

Using this information, it's possible to configure multiple web services using the same listener, thus allowing you to consolidate multiple services on a single IP address. In today's world, 90% of traffic or more runs over HTTP/HTTPS. This means you can consolidate almost all services into a single public IP address allowing you to free up your other IP addresses for other services that are not HTTP/HTTPS. Now break out the scotch and celebrate!

How does SNI work?

During the Client Hello phase of TLS negotiation, the client sends a hostname in the SNI field. In a browser, it is the hostname that is in the browser address bar.

Browser Requesting a TLS Site

TLS Client Hello Showing SNI

But wait, isn't TLS encrypted? How can the server or reverse proxy even see the SNI field?

It is not encrypted because SNI is transmitted from the client to the server in the Client Hello, before the TLS handshake is complete and encryption begins. Take a minute to look at the diagram below, which shows the TLS negotiation process.

TLS Negotiation

Implementing SNI with F5 LTM

This post will outline the process on F5's LTM load balancer, but I'm pretty sure it's possible using other load balancer/reverse proxy solutions.

SNI is supported in the following browsers:

Opera 8.0 and later (the TLS 1.1 protocol must be enabled)

Internet Explorer 7 or later (under Windows Vista and later only, not under Windows XP)

Firefox 2.0 or later

Curl 7.18.1 or later (when compiled against an SSL/TLS toolkit with SNI support)

Chrome 6.0 or later (on all platforms - releases up to 5.0 only on specific OS versions)

Safari 3.0 or later (under OS X 10.5.6 or later and under Windows Vista and later)

SNI on the F5 BIG-IP platform was introduced in the 11.1.0 release. Solution article SOL13452 is the official F5 guide for implementing SNI.

Create a Client SSL Profile for Each FQDN

For each FQDN you will create a client SSL profile as shown below.

Also, you must create a fallback SSL profile to use if a client presents an SNI that does not match any other profile, or if the client does not present an SNI at all. Make sure you select Default SSL Profile for SNI, and if you want to deny all connections that do not support SNI, you can also select Require Peer SNI support.
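If you prefer tmsh over the GUI, the equivalent commands look roughly like this; the profile, certificate, and key names are just examples:

tmsh create ltm profile client-ssl www_operational_io \
    defaults-from clientssl \
    cert www.operational.io.crt key www.operational.io.key \
    server-name www.operational.io

tmsh create ltm profile client-ssl fallback_clientssl \
    defaults-from clientssl \
    cert fallback.crt key fallback.key \
    sni-default true sni-require false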

Apply multiple SSL profiles to an HTTPS VIP

Now just apply multiple client SSL profiles as you would apply a single client SSL profile without SNI.

Create an iRule for Pool Selection

Now that TLS is taken care of, you have to create an iRule that will direct traffic destined for one host to one pool, and traffic destined for another host to another pool. Since the HTTP Host header contains the same value as the SNI field, just use an iRule that reads the HTTP::host value and chooses a pool based on that.

Apply this iRule to the VIP in question, and you will see that traffic destined for www.operational.io will go to the first pool, and traffic destined for blog.operational.io will go to another pool. If it doesn't match anything, you can put the traffic to a pool of sorry servers to give the user a friendly error page.
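A bare-bones version of such an iRule might look like this; the pool names are placeholders for your own pools:

when HTTP_REQUEST {
    switch -glob [string tolower [HTTP::host]] {
        "www.operational.io"  { pool www_pool }
        "blog.operational.io" { pool blog_pool }
        default               { pool sorry_pool }
    }
}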

Introduction

In certain environments it is necessary to get flow data from different places in your network, for compliance or for security in general. Recently I ran across a situation in which the native flow generators within the Cisco Nexus platform were only able to do 1-in-1000 sampling due to hardware limitations when certain features were enabled. This created a real blind spot, and the requirements were such that sampling was not good enough: all flows needed to be captured. As always, budgets were limited, so I had to get creative. I ended up using ntop's nProbe to generate IPFIX flows and feed them to an ELK stack via ZeroMQ, delivering a line-rate flow monitoring solution with no sampling. I should note that this hasn't been pushed to the limit of 10 gig, but so far I have not seen flow drops under considerable load, and according to the nProbe documentation, it should be able to do a full 10 gig.

SPAN to nProbe

In order to get the network traffic off the wire, a SPAN port was provisioned on the switch in question and plugged into a mediocre server that had a 10Gig NIC in it.
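On the Nexus side, the SPAN session is only a few lines along these lines (interface numbers are placeholders):

interface Ethernet1/48
  switchport monitor

monitor session 1
  source interface Ethernet1/1 both
  destination interface Ethernet1/48
  no shut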

nProbe Configuration

The nProbe configuration was done via the nBox GUI for the most part and was pretty straightforward. There was one option I wasn't able to set through the GUI and had to add directly to the nProbe configuration file: the -i eth0 directive. I'm not sure why I couldn't set this via the nBox GUI, and after mucking around with it for a while, I just modified the text config file directly.
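For reference, an nProbe invocation for this kind of setup looks roughly like the following; treat the flags as a sketch and check them against the nProbe documentation for your version:

# -i eth0   capture from the SPAN-facing NIC
# -n none   don't export to a traditional NetFlow collector
# -V 10     emit IPFIX-format records
# --zmq     publish flows over ZeroMQ for the collector/Logstash to consume
nprobe -i eth0 -n none -V 10 --zmq "tcp://127.0.0.1:5556"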

This is an update to my original article about ELK for Network Operations. This is built on the latest version of ELK (Elasticsearch 1.5, Logstash 1.5.0.rc2, and Kibana 4) on Centos 7.

What Is The ELK Stack?

The ELK stack is a powerful set of tools used for log correlation and real-time analytics. This post will discuss the benefits of using it and serve as a guide to getting it up and running in your environment. ELK is an acronym that stands for Elasticsearch, Logstash, Kibana. In recent months I have been seeing a lot of interest in ELK for systems operations monitoring as well as application monitoring. It was really impressive, and I thought of how useful it could be for network operations. Many environments just have the basics covered (up/down alerting and performance monitoring). Some companies go one step further and log syslog to a central server. For a long time this has been acceptable, but things must change. While this guide is solely meant to show how network data can be captured and used, the real goal is to have all infrastructure and applications log to ELK as well.

Below are some screenshots showing real-time dashboards that would be useful in a NOC environment. With ELK stack, building a dashboard this amazing takes minutes. It's dynamic, so you can build a dashboard that is useful for your use case. In the examples below, our NOC was able to see issues before anyone even picked up the phone to report the issue.

Try It Out First

If you want to get a feel for the ELK stack first without having to set up the entire stack, you can with Virtualbox and Vagrant!

There are 9,000 firewall logs for this demo. Make sure you change the date range to 2015-03-19 00:04:10.000 - 2015-03-19 00:05:45.000.

Traffic Types Chart

NOC Dashboard

Interactive Area Charts

What Data is ELK Capturing?

Focusing just on network operations, ELK is great for capturing, parsing, and making searchable syslogs and SNMP traps. ELK is not really meant for up/down alerting or performance metrics like interface utilization. There are some things you can do in that arena, but that is beyond the scope of this post.

Order of Operations

To understand how a syslog goes from text to useful data, you must understand which components of ELK are performing what roles. First, the syslog server is collecting the raw, textual logs. Second, Logstash is filtering and parsing the logs into structured data. Third, Elasticsearch is indexing and storing the structured data for instantaneous search capability. Fourth, Kibana is a means to interact and search through the data stored in Elasticsearch.

For the sake of simplicity, all roles will be installed on a single server. If you need additional performance or need to scale out, then the roles should be separated onto different servers.

Syslog Server - Collect the logs

Logstash - Filter and parse the logs

Elasticsearch - Index and store the data

Kibana - Interact with the data (via web interface)

Collecting the Logs With a Syslog Server

You can actually collect syslogs directly with Logstash, but many places already have a central syslog server running and are comfortable with how it operates. For that reason I will use a standard syslog server for this post. Certain types of compliance standards, like PCI-DSS, require that you keep logs for a certain period of time. Native syslog logs take up less storage than logs processed with Logstash and Elasticsearch. Because of this, I chose to store them in gzipped text files for 90 days, and only have a few weeks indexed and searchable with ELK. In the event that there was an audit or a security incident you could search the old data in the raw syslog files or pull in old data into ELK. If you have more disk space to throw at Elasticsearch, then you could keep much more than a few weeks. You are only limited by the amount of storage available.

Setting Up syslog-ng:

For a central syslog server I chose Centos 6.5 running syslog-ng. Centos ships with rsyslog, but I think the syslog-ng configuration is much easier to understand and configure. On a default installation of Centos 6.5, first we need to install Extra Packages for Enterprise Linux (EPEL).

Sudo to root:

sudo -s

Download and install EPEL and tools needed for ELK stack:

yum install epel-release -y

yum install java rubygems vim -y

Stop and disable rsyslog and install syslog-ng:

service rsyslog stop

chkconfig rsyslog off

yum install syslog-ng-libdbi syslog-ng -y

Configure syslog-ng:

vim /etc/syslog-ng/syslog-ng.conf

Insert the following text into the file by pressing i, then paste the text. To save the file, first press the escape key, and then :wq and the enter key to write the file and quit.
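A minimal configuration along these lines will listen on UDP and TCP 514 and write each device's logs to its own file (the listener port and log directory are just examples):

source s_network {
    udp(ip(0.0.0.0) port(514));
    tcp(ip(0.0.0.0) port(514));
};

destination d_network {
    file("/var/log/network/$HOST.log" create_dirs(yes));
};

log { source(s_network); destination(d_network); };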

In order to be able to receive syslog traffic, it must be permitted in iptables. For the sake of brevity, iptables will be turned off and disabled.

service iptables stop

chkconfig iptables off

SELinux is a security measure that enforces mandatory access control (MAC) on Linux. Sometimes this will not permit processes to function properly if the labels are not set up correctly. By installing syslog-ng with yum, all of the SELinux labels should be correct, but if you have issues you may need to fix them. I would highly suggest not disabling SELinux, instead, learn how to use it and fix whatever issues you may come across. That being said, if you don't want to mess around with it you can set it to permissive by modifying /etc/selinux/config and rebooting the server.

Setting Up Logstash, Elasticsearch, and Kibana

The easiest way to get ELK up and running is to use the Elasticsearch and Logstash repos and install using the yum package manager. Below are the steps to install everything as well as a video showing the installation, step by step.

Install the GPG key for the Elasticsearch repo:

rpm --import http://packages.elasticsearch.org/GPG-KEY-elasticsearch

Install the Elasticsearch repo for yum to use:

vim /etc/yum.repos.d/elasticsearch.repo

Insert the following text into the file by pressing i, then paste the text. To save the file, first press the escape key, and then :wq and the enter key to write the file and quit.
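The repo definition for the 1.5.x packages looked roughly like this (the URLs are historical and may no longer resolve):

[elasticsearch-1.5]
name=Elasticsearch repository for 1.5.x packages
baseurl=http://packages.elasticsearch.org/elasticsearch/1.5/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1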

Insert the following text into the file by pressing i, then paste the text. To save the file, first press the escape key, and then :wq and the enter key to write the file and quit.

# Kibana is served by a back end server. This controls which port to use.
port: 5601
# The host to bind the server to.
host: "localhost"
# The Elasticsearch instance to use for all your queries.
elasticsearch_url: "http://localhost:9200"
...output suppressed...
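As a quick smoke test, a Logstash configuration using the generator input and the Elasticsearch output can be as simple as this (option names reflect the Logstash 1.5-era plugins):

input {
  generator {
    message => "Hello World"
    count   => 10000
  }
}

output {
  elasticsearch {
    host     => "localhost"
    protocol => "http"
  }
}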

The above configuration is just a test to make sure everything is working. The generator plugin will just generate a ton of messages that say "Hello World". The next section will discuss the steps in building a real configuration.

Start Logstash:

systemctl start logstash

Now you should be able to go to your browser and browse to http://localhost/. You will have to set up Kibana initially, which is pretty much just clicking next a couple of times to set up your default index pattern.

Once verified that everything is working and you see logs in Kibana, go ahead and stop Logstash so it doesn't keep dumping test messages into Elasticsearch.

Stop Logstash:

systemctl stop logstash

Custom Log Parsing

Now that the ELK installation is functioning, we need to take it one step further and define an input to pull in the syslog file. Then create filters to parse and process the individual syslog messages, and finally output the data to Elasticsearch. For more detailed usage documents and filter modules available, please visit the Logstash website.

The following sections are excerpts from /etc/logstash/conf.d/logstash.conf and are meant to show what each individual section is doing. The full configuration will be available at the end of this section.

Defining the Inputs

To define the input as the syslog file, the file input is chosen and the appropriate directives are given.

sudo vim /etc/logstash/conf.d/logstash.conf

Insert the following text into the file by pressing i, then paste the text. To save the file, first press the escape key, and then :wq and the enter key to write the file and quit.
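A file input along these lines will tail the syslog-ng output files; the path is an example that matches the earlier syslog-ng sketch:

input {
  file {
    path => "/var/log/network/*.log"
    type => "syslog"
    start_position => "beginning"
  }
}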

Grok and Custom Expressions

Grok is one of the main filters you will use to parse logs. If you receive a message that is not in a structured format like XML or JSON, then Grok is necessary to pull the text apart into different fields. Grok has lots of built-in expressions like "HOST", which matches a hostname, or "IP", which matches an IP address, but there are times when you will have to build your own. It requires writing regular expressions, which is complicated, but if you learn how to do it, it will help you tremendously with a whole host of other tasks in IT operations. This example Logstash configuration parses Palo Alto logs, and for it I did write some custom expressions. In order to install those custom expressions you have to do the following:

You must copy the custom patterns file to /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-0.1.6/patterns/
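As an illustration of the mechanics, you define a pattern in the custom file and then reference it from a grok filter; the pattern and field names below are made up for illustration, not the actual Palo Alto expressions:

# patterns/custom  (illustrative pattern only)
PAN_TYPE (TRAFFIC|THREAT|SYSTEM|CONFIG)

# filter section of logstash.conf
filter {
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{HOST:device} .*,%{PAN_TYPE:log_type},%{GREEDYDATA:rest}" }
  }
}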

All of the above snippets are meant to help explain what is going on in the configuration. Here is the full configuration for reference as well as the custom Grok expressions.

TIP : There is a great tool called Grok Debugger that helps build a Grok parse statement against a raw log file. I highly suggest you use the tool.

Start Logstash back up:

systemctl start logstash

Seeing It All In Kibana

Now that it is running, go to http://localhost and read the landing page. For additional information on using Kibana 4, please visit the Kibana Guide.

Now Go Build It

Hopefully you now have a decent understanding of how to build the ELK stack. I promise, once you build it, others will see the tool and want to use it. They will likely want to put their server and application data in it as well, which will make it even more useful, as you would then be able to correlate events across your entire infrastructure. I hope this post has been helpful. My contact information is listed below if you would like to reach me.

FOSS Solution for Network Configuration Backups

In the networks that I run, I typically try to follow the FCAPS model. The (C)onfiguration part of that is often overlooked. I have used paid and free products, and they all work fine, but I wanted something that was simple, version controlled, and fully open source. This solution uses Cisco's Embedded Event Manager (EEM) to ship configurations to a backup server, Git for version control, and Gitlist for browsing your network configurations.

The End Result

After this is deployed, all of your Cisco IOS devices will automatically back up their configurations after write mem or copy run start is issued from the command line. At midnight, Git will commit all of the changes for that day. You will then be able to browse version-controlled network configurations via the Gitlist web interface. Here are some screenshots of what it will look like.

Set Up The Configuration Backup Server

To set up a server to house all of the network configurations that have been backed up, we will be using Centos 6.5. For the transport medium to ship the configurations to the server we will be using SFTP. Since SFTP is a subsystem of OpenSSH, setup is super simple. Just a vanilla install of Centos will already have SSH running. All we need to do is create a specific user account for the configuration backup process and install and configure a Git repository to store and version the configurations.

Do all of the following as root by using sudo -s or su - if you are not in sudoers.

Create a directory to store everything:

mkdir /var/data

Change permissions on the directory so everyone can read and execute (list the contents):

chmod 755 /var/data

Add the user configbackup and set home directory to /var/data/configbackup:

useradd -d /var/data/configbackup configbackup

Change the permissions on the new user folder. This is needed to permit Gitlist to view the Git repository.

chmod 755 /var/data/configbackup

Create a password for the user account:

passwd configbackup

Change user to configbackup:

su - configbackup

Make a directory for all of your configuration repositories:

mkdir git_repos

Make a directory specific for your network configurations:

mkdir git_repos/config_repo

exit

Set Up Git for Version Control

Install Git version control:

yum install git -y

Change user to configbackup:

su - configbackup

Set our Git username and email address, so when doing commits, it will show the correct user information.

git config --global user.name "configbackup"

git config --global user.email netops@example.com

Initialize the Git repository:

cd git_repos/config_repo

git init

Create a test file:

touch test

Add all files in the directory for tracking by Git:

git add .

Perform the initial Git commit:

git commit -m"initial commit"

Verify the first commit:

git log

Create a cronjob to perform a nightly commit of any changes:

vim /var/data/configbackup/nightly-git-commit.sh

Insert the following text into the file by pressing i, then paste the text. To save the file, first press the escape key, and then :wq and the enter key to write the file and quit.
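A simple script along these lines does the job, paired with a crontab entry (crontab -e as the configbackup user) that fires at midnight; paths and the commit message are examples:

#!/bin/bash
# nightly-git-commit.sh -- commit the day's configuration changes
cd /var/data/configbackup/git_repos/config_repo || exit 1
git add -A
git commit -m "Nightly config commit $(date +%F)"

# crontab entry for the configbackup user:
# 0 0 * * * /var/data/configbackup/nightly-git-commit.sh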

Enable SSL for Apache so we don't send HTTP basic authentication in the clear:

Install the SSL module for Apache:

yum install mod_ssl -y

If you want to install your own certificate, then you need to modify the /etc/httpd/conf.d/ssl.conf file and point Apache to your certificate. I will not go into the details of this. I will just use the self-signed certificate that is installed when mod_ssl is installed.

SELinux is a security measure that enforces mandatory access control (MAC) on Linux. Sometimes this will not permit processes to function properly if the labels are not set up correctly. I would highly suggest not disabling SELinux, instead, learn how to use it and fix whatever issues you may come across. That being said, if you don't want to mess around with it you can set it to permissive by modifying /etc/selinux/config and rebooting the server.

Verify everything is functioning:

browse to https://host.domain.com/gitlist

Configuration Backup Using EEM on Cisco IOS Devices

In order to make your switches and routers ship their configurations to the server, you need to install an EEM script. This script listens for write mem or copy run start and when either of these commands is run at the cli, the script will execute and copy the startup-config to the remote server via secure copy protocol (SCP).

Install on all Cisco IOS devices you wish to back up:

First, we need to silence all of the prompts that IOS presents us when we use the copy command. If this is not done, your EEM script will hang indefinitely. Then install the EEM script.
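In rough form, that means silencing the prompts with file prompt quiet and then an applet along these lines; the hostname, credentials, destination filename, and CLI pattern below are placeholders you would adjust for your environment:

file prompt quiet
!
event manager applet CONFIG-BACKUP
 event cli pattern "^(write mem|copy run)" sync no skip no occurs 1
 action 1.0 cli command "enable"
 action 2.0 cli command "copy startup-config scp://configbackup:password@backupserver/git_repos/config_repo/router1.cfg"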

Over the years I have had to do a lot of repetitive tasks in OpenSSL, and I've always had to hunt down what command I needed to use. So, I finally made a list of the most common use cases and commands, and now it's time to share.

A Word About Certificate Formats and Encoding

There are two main encodings of certificates: DER and PEM.

DER is a binary encoding of a certificate. Typically these use the file extension of .crt or .cer.

PEM is a Base64 encoding of a certificate represented in ASCII, and is therefore readable as a block of text. This is very useful, as you can open it in a text editor and work with the data more easily. The data itself is contained between a prefix of:

-----BEGIN CERTIFICATE-----

and a postfix of:

-----END CERTIFICATE-----

Similarly, RSA keys have a prefix and postfix as well. They are denoted with:

-----BEGIN PRIVATE KEY-----

and

-----END PRIVATE KEY-----

Certificate Signing Requests use:

-----BEGIN CERTIFICATE REQUEST-----

and

-----END CERTIFICATE REQUEST-----

Typically these use the file extension of .pem. RSA private and public keys use the file extension of .key. Certificate Signing Requests (CSRs) use the file extension of .csr.

In the event that you are getting errors when running any OpenSSL commands, you may need to explicitly declare the input format and/or the output format. This can be done by adding the following flags to almost any command:
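The flags in question are -inform and -outform. For example, to convert a DER-encoded certificate to PEM while declaring both formats explicitly (filenames are examples):

openssl x509 -inform DER -in certificate.cer -outform PEM -out certificate.pem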

Showing Contents of Certificate Signing Requests
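Print out the contents of a CSR in human-readable format:

openssl req -in name.csr -noout -text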

Showing Contents of Certificates

Print out the contents of the certificate in human-readable format:

openssl x509 -in name.pem -noout -text

Verifying Association of Private Key to Certificate

To compare whether a private key and certificate match you need to compare the modulus of both. Considering these are very long strings of text and numbers, it's easier to perform an MD5 checksum and compare the hashes.
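One common way to do this, which produces the /dev/fd output described below, is to compare the MD5 hashes of the two moduli with process substitution:

diff -qs <(openssl x509 -in name.pem -noout -modulus | openssl md5) \
         <(openssl rsa  -in name.key -noout -modulus | openssl md5)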

The above command will show Files /dev/fd/63 and /dev/fd/62 are identical if the MD5 hashes match, and will show Files /dev/fd/63 and /dev/fd/62 differ if the MD5 hashes are different.

Combining Root CA and Intermediate CA Certificates into One File

In order to work with certificates that have more than one CA certificate in the issuance path, you have to combine all of the certificates into one single file. Most certificates will be issued by an intermediate authority, and then that intermediate will have been issued by a root authority.

To combine multiple PEM certificates, you just need to put the ASCII data from all of the certificates into one file. Below is an example of this.
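From the shell this is just concatenation (filenames are examples):

cat www.example.com.pem intermediate-ca.pem root-ca.pem > chain.pem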

EDIT: Reddit user zerouid mentioned that the order of the PEM certificates in the file matters for some older versions of Java. So, to be on the safe side make sure to put the key first (when applicable), then the certificate, then the intermediate, and finally the root certificate. Basically work your way up the chain to the root certificate.

Check out the latest version of this guide here. The updated article utilizes the latest version of the ELK stack on Centos 7.

What is ELK?

ELK is a powerful set of tools being used for log correlation and real-time analytics. This post will discuss the benefits of using it, and be a guide on getting it up and running in your environment. ELK is actually an acronym that stands for Elasticsearch, Logstash, Kibana. In recent months I have been seeing a lot of interest in ELK for systems operations monitoring as well as application monitoring. It was really impressive and I thought of how useful it could be for network operations. Many environments just have the basics covered (up/down alerting and performance monitoring). Some companies go one step further and are logging syslog to a central server. For a long time this has been acceptable, but things must change. While this guide is solely meant to show how network data can be captured and used, the real goal is to have all infrastructure and applications log to ELK as well.

Below are some screenshots showing real-time dashboards that would be useful in a NOC environment. With ELK, building a dashboard this amazing takes less than half an hour. It's dynamic, so you can build a dashboard that is useful for your use case. In the examples below, our NOC was able to see issues before anyone even picked up the phone to report the issue.

Real-Time Dashboard

Denial of Service Attack

Attempted DNS DDoS Participation

VOIP provider accidentally routed all voice traffic into our network

What Data is ELK Capturing?

Focusing just on network operations, ELK is great for capturing, parsing, and making searchable syslogs and SNMP traps. ELK is not really meant for up/down alerting or performance metrics like interface utilization. There are some things you can do in that arena, but that is beyond the scope of this post.

Order of Operations

To understand how a syslog goes from text to useful data, you must understand which components of ELK are performing what roles. First, the syslog server is collecting the raw, textual logs. Second, Logstash is filtering and parsing the logs into structured data. Third, Elasticsearch is indexing and storing the structured data for instantaneous search capability. Fourth, Kibana is a means to interact and search through the data stored in Elasticsearch.

For the sake of simplicity, all roles will be installed on a single server. If you need additional performance or need to scale out, then the roles should be separated onto different servers.

Syslog Server - Collect the logs

Logstash - Filter and parse the logs

Elasticsearch - Index and store the data

Kibana - Interact with the data (via web interface)

Collecting the Logs With a Syslog Server

You can actually collect syslogs directly with Logstash, but many places already have a central syslog server running and are comfortable with how it operates. For that reason I will use a standard syslog server for this post. Certain types of compliance standards, like PCI-DSS, require that you keep logs for a certain period of time. Native syslog logs take up less storage than logs processed with Logstash and Elasticsearch. Because of this, I chose to store them in gzipped text files for 90 days, and only have a few weeks indexed and searchable with ELK. In the event that there was an audit or a security incident you could search the old data in the raw syslog files or pull in old data into ELK. If you have more disk space to throw at Elasticsearch, then you could keep much more than a few weeks. You are only limited by the amount of storage available.

Setting Up syslog-ng:

For a central syslog server I chose Centos 6.5 running syslog-ng. Centos ships with rsyslog, but I think the syslog-ng configuration is much easier to understand and configure. On a default installation of Centos 6.5, first we need to install Extra Packages for Enterprise Linux (EPEL).

In order to be able to receive syslog traffic, it must be permitted in iptables. For the sake of brevity, iptables will be turned off and disabled.

sudo service iptables stop

sudo chkconfig iptables off

SELinux is a security measure that enforces mandatory access control (MAC) on Linux. Sometimes this will not permit processes to function properly if the labels are not set up correctly. By installing syslog-ng with yum, all of the SELinux labels should be correct, but if you have issues you may need to fix them. I would highly suggest not disabling SELinux, instead, learn how to use it and fix whatever issues you may come across. That being said, if you don't want to mess around with it you can set it to permissive by modifying /etc/selinux/config and rebooting the server.

Setting Up Logstash, Elasticsearch, and Kibana

The easiest way to get ELK up and running is to use the Elasticsearch and Logstash repos and install using the yum package manager. Below are the steps to install everything as well as a video showing the installation, step by step.

The above configuration is just a test to make sure everything is working. The generator plugin will just generate a ton of messages that say "Hello World". The next section will discuss the steps in building a real configuration.

Start Logstash:

sudo service logstash start

Now you should be able to go to your browser and browse to http://host.domain.com/kibana and see if logs are showing up in the web interface.

Once verified that everything is working and you see logs in Kibana, go ahead and stop Logstash so it doesn't keep dumping test messages into Elasticsearch.

Step-by-Step installation video:

Custom Log Parsing

Now that the ELK installation is functioning, we need to take it one step further and define an input to pull in the syslog file. Then create filters to parse and process the individual syslog messages, and finally output the data to Elasticsearch. For more detailed usage documents and filter modules available, please visit the Logstash website.

The following sections are excerpts from /etc/logstash/conf.d/logstash.conf and are meant to show what each individual section is doing. The full configuration will be available at the end of this section.

Defining the Inputs

To define the input as the syslog file, the file input is chosen and the appropriate directives are given.

sudo vim /etc/logstash/conf.d/logstash.conf

Insert the following text into the file by pressing i, then paste the text. To save the file, first press the escape key, and then :wq and the enter key to write the file and quit.

Grok and Custom Expressions

Grok is one of the main filters you will use to parse logs. If you receive a message that is not in a structured format like XML or JSON, then Grok is necessary to pull the text apart into different fields. Grok has lots of built-in expressions like "HOST", which matches a hostname, or "IP", which matches an IP address, but there are times when you will have to build your own. It requires writing regular expressions, which is complicated, but if you learn how to do it, it will help you tremendously with a whole host of other tasks in IT operations. This example Logstash configuration parses Cisco ASA logs, and for it I did write some custom expressions. In order to install those custom expressions you have to do the following:

sudo vim /opt/logstash/patterns/custom

Insert the following text into the file by pressing i, then paste the text. To save the file, first press the escape key, and then :wq and the enter key to write the file and quit.

All of the above snippets are meant to help explain what is going on in the configuration. Here is the full configuration for reference as well as the custom Grok expressions.

TIP : There is a great tool called Grok Debugger that helps build a Grok parse statement against a raw log file. I highly suggest you use the tool.

Seeing It All In Kibana

Now that it is running, go to http://host.domain.com/kibana and read the landing page. It will give you basic instructions as well as a link to the default Logstash dashboard. For additional information on using Kibana, please visit the Kibana Guide.

Now Go Build It

Hopefully you now have a decent understanding of how to build an ELK instance. I promise, once you build it, others will see the tool and want to use it. They will likely want to put their server and application data in it as well. This will make it even more useful as you would then be able to correlate events across your entire infrastructure. I hope this post has been helpful. My contact information is listed below if you would like to reach me.