Who put a GUI on my script logs?

You know nothing, Jon Sn… er VMware Engineer

One of the great things about automation is the ability to execute a complex or tedious task in a consistent and repeatable way. And it works every time exactly as it should… or rather, we live in the real world, filled with inconsistencies, imperfect humans writing code, and just plain old randomness. Your scripts will produce results, both positive and negative, and you will need a way to track what they are doing or what they accomplished. You can either go find that one log located somewhere and parse through the data to determine what happened, or you can send it to VMware vRealize Log Insight and make it all much simpler.

Log Insight is a tool that acts as a graphical aggregator for logs from a number of sources. It makes it easy to filter through a large volume of logs and can present information in custom dashboards for easy consumption. In this post, I will go over how it can be utilized as a central repository for your script logs.
As enterprises grow and rely increasingly on automation in their environments, keeping track of, reporting on, and reacting to issues becomes an important task. How is an operations engineer able to easily keep track of the many different scripts that execute throughout the day, or quickly notice patterns in script execution? Using Log Insight is one way of making this job a little easier.

Pretty Colors

One of the ways that Log Insight makes it easier to keep an eye on your scripts is with its customizable dashboards.
You start with the basic dashboard that only shows the total number of events ingested by Log Insight over the past hour:

You can then add filters to that dashboard or clone it and create separate customized dashboards based on your needs:

This custom dashboard is built from some very easy-to-define queries. To break it down, you start with a time span and a field to filter on:

In this query, I am looking for event entries from the last hour that contain the EventType field (a custom field created by my scripts, which I will dive into later). The result is that any chart attached to this dashboard will adjust its view based on the dataset defined by the query.
This chart shows the number of script executions over the past hour:

And the following charts are examples of how you can keep a closer eye on your scripts:

This provides a simple pie chart of the types of events that have been ingested by Log Insight. If you had this dashboard scoped to a single script, you could see its success ratio. You can then click on the individual slices to go directly to the Interactive Analytics page and see the events defined by the query. You can also use this page to further refine queries and get more granular information on the fly:


Or if you are trying to track specific error counts you could end up with the following chart:

You can very easily set up a number of high-level dashboards to observe your environment as a whole, and then have other dashboards that are much more granular to provide greater insight into individual scripts or subsets of larger scripts.

The dashboards and analytics of Log Insight provide an interactive GUI that makes sense of a lot of log data. In a short period of time, an engineer can have all of the information they need on hand to keep track of a complex environment.

Nerd Speak…squared

Now that I have sold you on the awesomeness that is sending your script logs to Log Insight, how do we get there?

The scripting required to get there is pretty straightforward: it is just a REST call using a POST method to the Log Insight server. The body of the POST controls the data that is ingested by Log Insight. Essentially, you have a function that you call in your script whenever you want to log an event, and the parameters passed into that function make up the meat of the ingested event.

I have some sample PowerShell and Python scripts (https://github.com/gmadro/vMadBro/tree/master/SendLogInsight) with functions built out to allow a person to send their script logs as events to Log Insight. My language of choice for this is Python, as it requires less code and the REST body is just pure JSON, so it is a bit easier to manipulate; but I do a lot of work with PowerShell, so including a version in that seemed appropriate. I plan to also include a version in Ruby once I start working with Chef more.

Below are examples for the body of the POST method:

# Python

rest_body = {"events": [{
    "fields": [
        {"name": "EventType", "content": event_type},  # The previously mentioned EventType field, used as a filter for my dashboards
        {"name": "EventID", "content": id_event}  # Another field being ingested by this event POST
    ]
}]}
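To show how a body like that fits into a complete call, here is a minimal sketch of a sender function. The hostname, agent UUID, and helper names are placeholders of my own, not from the linked repo; the endpoint and port shown (`/api/v1/events/ingest/<agent_id>` on 9543) follow the Log Insight ingestion API, but verify them against the documentation for your version.

```python
import json
import ssl
import urllib.request

def build_event_body(event_type, event_id, text=""):
    """Assemble the JSON body for a single Log Insight event."""
    event = {
        "fields": [
            {"name": "EventType", "content": event_type},
            {"name": "EventID", "content": event_id},
        ]
    }
    if text:
        event["text"] = text  # free-form message text, up to 16 KB
    return {"events": [event]}

def send_to_loginsight(server, agent_id, body):
    """POST an event body to the Log Insight ingestion endpoint."""
    # 9543 is the SSL ingestion port; agent_id is typically a UUID
    url = "https://{}:9543/api/v1/events/ingest/{}".format(server, agent_id)
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # Skipping certificate checks is for lab appliances only
    ctx = ssl._create_unverified_context()
    with urllib.request.urlopen(req, context=ctx) as resp:
        return resp.status

# Example usage (hostname and UUID are placeholders):
# body = build_event_body("ScriptSuccess", "run-42", "nightly backup completed")
# send_to_loginsight("loginsight.example.com",
#                    "12345678-1234-1234-1234-123456789abc", body)
```

Calling `build_event_body` each time you want to log keeps the event structure consistent, and the function can grow extra fields as your dashboards need them.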

An Insightful View

The beauty of Log Insight ingestion is that there is no set event data that is expected. Each logged event can be as small as just a text field (up to 16 KB) or carry many optional user-defined fields (up to 4 MB per HTTP POST request). This versatility allows multiple streams of data to be consumed by Log Insight (at a rate of up to 15,000 events per second). Couple that with the easily created dashboards and interactive analytics, and you have a tool that lets you make sense of the information overload.
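As an illustration of that flexibility, all of the following are valid shapes for the same ingestion endpoint; the field names and messages here are arbitrary examples, not anything the sample scripts require.

```python
import json

# Smallest useful event: just a text message (limited to 16 KB)
text_only = {"events": [{"text": "backup job finished in 00:04:12"}]}

# Richer event: several user-defined fields and no text at all
fields_only = {"events": [{
    "fields": [
        {"name": "EventType", "content": "ScriptError"},
        {"name": "ScriptName", "content": "nightly_backup.ps1"},
        {"name": "ErrorCount", "content": "3"},
    ]
}]}

# Multiple events can ride in a single HTTP POST, as long as the
# whole request body stays under the 4 MB limit
batch = {"events": text_only["events"] + fields_only["events"]}
size_bytes = len(json.dumps(batch).encode("utf-8"))
```

Batching like this is handy for scripts that log many small events, since one POST covers the whole run.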