Log storage and analysis is one of the most popular use cases for Elasticsearch today: it's easy and it scales. But quickly enough, adding nodes costs too much money, and storing logs from multiple months or years becomes tricky.

A solution I often see when doing Elasticsearch consulting (shameless plug) is to keep detailed logs only for the most consulted time range (e.g. the last 60 days), and to merge older logs into a new index, by day or by month, instead of keeping one document per event. This gives us a huge retention period without the pain of dealing with large indexes and enormous bills from Amazon 💸.

This merging is usually done by hand, with the help of the Bulk, Scan & Scroll, and Delete By Query APIs… But those days are gone, as Elasticsearch 6.3 now ships with a Rollup feature I'm going to cover in this article.

⚠️ This functionality is part of X-Pack Basic and is marked as experimental.

So right now my access_logs index is full of logs from the last 30 days, but I only need fine granularity over the last 10 days. Let's create a Rollup job to merge logs older than 10 days into one-hour buckets.

Building a Rollup Job

This query will create a job called access_logs, merging documents by url and grouping them by hour, only for documents older than 10 days:
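Here is a sketch of what such a job definition could look like on Elasticsearch 6.3. The field names (`timestamp`, `url.keyword`, `bytes`) and the cron schedule are assumptions for illustration; adapt them to your own mapping. The `delay` setting is what keeps the job 10 days behind real time.

```json
PUT _xpack/rollup/job/access_logs
{
  "index_pattern": "access_logs",
  "rollup_index": "access_logs_rollup",
  "cron": "*/30 * * * * ?",
  "page_size": 1000,
  "groups": {
    "date_histogram": {
      "field": "timestamp",
      "interval": "1h",
      "delay": "10d"
    },
    "terms": {
      "fields": ["url.keyword"]
    }
  },
  "metrics": [
    { "field": "bytes", "metrics": ["sum"] }
  ]
}
```

Once created, the job has to be started explicitly with `POST _xpack/rollup/job/access_logs/_start`, after which it will write pre-aggregated documents into `access_logs_rollup`.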

The rolled-up documents are not supposed to be requested via the _search API: as you can see, their field names are not exactly compatible with our raw documents. This is handled by the new _rollup_search endpoint.
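For illustration, a document written by a job like the one above is shaped roughly like this (field names are derived from the grouping and metric definitions; the values here are made up):

```json
{
  "timestamp.date_histogram.timestamp": 1528329600000,
  "timestamp.date_histogram.interval": "1h",
  "timestamp.date_histogram._count": 42,
  "url.terms.value": "/index.html",
  "url.terms._count": 42,
  "bytes.sum.value": 123456,
  "_rollup.id": "access_logs"
}
```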

Looking at the indexes

If we look at our initial access_logs index, we can see that all the raw logs are still here:
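The check can be a plain aggregation on the raw index; a minimal sketch, assuming a numeric `bytes` field like in the job definition:

```json
GET access_logs/_search
{
  "size": 0,
  "aggregations": {
    "total_bytes": {
      "sum": { "field": "bytes" }
    }
  }
}
```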

This query did not read anything from our new Rollup index; it is exactly the same one we built via Kibana earlier. To use the data our Rollup job has created, we have to change the GET access_logs/_search endpoint to a GET access_logs_rollup/_rollup_search endpoint:
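The request body stays the same; only the endpoint changes (again assuming a `bytes` field):

```json
GET access_logs_rollup/_rollup_search
{
  "size": 0,
  "aggregations": {
    "total_bytes": {
      "sum": { "field": "bytes" }
    }
  }
}
```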

As you can see, the answer is the same: 27810099 bytes. Again, that is because the raw data is still here, so the rolled-up documents are not needed yet. Let's remove some documents now!

Removing the old raw data

That's the tricky part, because there is no easy way to be sure a document has been rolled up… But since my job merges all documents older than 10 days, there is a good chance I can safely delete documents older than… 11 days.
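A Delete By Query along those lines could look like this (assuming the same `timestamp` field as in the job definition; careful, this is destructive):

```json
POST access_logs/_delete_by_query
{
  "query": {
    "range": {
      "timestamp": {
        "lt": "now-11d/d"
      }
    }
  }
}
```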

This query will not be run automatically; that's up to you 😕. On most setups, you will simply drop daily indexes instead of running a _delete_by_query, but the same fear applies: are you sure the data has been rolled up?

Some 11 seconds later, you can run the query on /access_logs_rollup/_rollup_search again to see that even when the raw logs are gone, the aggregation still works! 🤘
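Since _rollup_search also accepts a comma-separated list of indexes, you can query the raw and rolled-up data in a single request; a sketch, with the same assumed `bytes` field:

```json
GET access_logs,access_logs_rollup/_rollup_search
{
  "size": 0,
  "aggregations": {
    "total_bytes": {
      "sum": { "field": "bytes" }
    }
  }
}
```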

And you will get both the old rolled-up data and the new raw statistics, merged together.

That’s it:

We have the same results but only half the documents;

All we had to change in our queries was the endpoint;

We can scale, and keep a lot more logs without increasing our costs!

About the limitations of the Rollup API

Of course, there are some limitations to this API (they are documented here); for example, you will not get any _source back in the responses: the API is focused on statistical applications only.

Also, in my example, I could not have used _rollup_search for the second visualisation (Hits per page), because I did not compute that information in my initial job. You will have to think very carefully about what you will need in the merged index.

Regarding cross-index search, be careful about collisions too. In my sample data, I saw some strange results around the exact day boundary I used in the Delete By Query. I suspect that the rolled-up data is not used when the raw data index already has documents in those buckets, because when querying only the rolled-up data, the statistics were correct.

Finally, the main issue I have right now is raw log deletion. I find it hard to leave users to deal with this, as it can be tricky to remove only what has been merged.

To conclude, this feature is very interesting and will simplify a lot of workflows, but right now it's a bit young. I know that the Elastic team is looking for feedback from users, so do not hesitate to talk about it if you have any suggestions!