Restricting Users for Kibana with Filtered Aliases

Update: After a lengthy review of the Nginx configuration, we discovered that it’s not possible to lock down the aliases based on the remote user, due to limitations in Nginx location expressions. The tricky part is that we are unable to ensure that the remote user can only access their own alias. Ideally, we would want to do something like:
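Roughly, the rule we would have liked to write looks like this (a sketch; the backend address is an assumption):

```
# What we wanted: restrict the location match to the authenticated
# user's own aliases. This does NOT work, because Nginx will not
# interpolate $remote_user inside a location regex.
location ~ ^/($remote_user-.+)/_search$ {
  proxy_pass http://127.0.0.1:9200;
}
```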

But the limitation is that Nginx doesn’t interpolate variables in regular expressions. There is no way to ensure that $remote_user matches only that user’s alias.

Where does that leave us? At this time, we DO NOT recommend using the Nginx setup described below. The setup is still possible, but instead of using Nginx to proxy the requests to Elasticsearch, you will need a proxy that lets you write more expressive rules around sending requests to the backend (a proxy written in Node.js comes to mind).
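As a sketch of the more expressive rule such a proxy could enforce (the function name and routing around it are assumptions, not a drop-in proxy), the core check Nginx cannot express is: every index in the requested path must be prefixed with the authenticated user’s name.

```javascript
// Sketch: the per-request check an Nginx location regex cannot express.
// Allow the request only if every index in the comma-separated list
// belongs to the authenticated user.
function isAllowedForUser(user, path) {
  const match = path.match(/^\/([^\/?]+)\/_search/);
  if (!match) return false;
  return match[1]
    .split(',')
    .every((index) => index.startsWith(user + '-'));
}

// Example checks:
console.log(isAllowedForUser('buzz', '/buzz-2014.02.03,buzz-2014.02.04/_search')); // true
console.log(isAllowedForUser('buzz', '/woody-2014.02.03/_search')); // false
```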

For context, here is the original article:

One question we often get with Kibana is, “How do you restrict the data for different users?” Our go-to answer has always been to proxy the requests through Nginx and use filtered aliases to segment the data. The typical response is, “Uh… okay, I’ll look into it.” This blog post takes that advice one step further and gives you a working example of exactly what’s needed to accomplish the task.

For our example, we are going to use web server logs that segment the users based on the host name. The incoming log will look something like this:
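For instance (the field names and values here are illustrative; the important part is the user field, which we will filter on):

```
{
  "@timestamp": "2014-02-03T12:00:00.000Z",
  "host": "buzz.example.com",
  "user": "buzz",
  "request": "GET /index.html HTTP/1.1",
  "response": 200
}
```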

Any request that goes to /buzz-2014.02.03/_search will now include a term filter on the user field for buzz. The one gotcha with this system is that an alias needs to be set up for every user for each daily Logstash index. Elasticsearch does not currently have a feature for setting up dynamic aliases upon index creation, but the good news is that it’s coming. For now, we will need to use a nightly cron job to set up the user aliases.
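For each user and each daily index, the cron job would issue a request along these lines to the _aliases endpoint (the user buzz and the date are illustrative):

```
POST /_aliases
{
  "actions": [
    {
      "add": {
        "index": "logstash-2014.02.03",
        "alias": "buzz-2014.02.03",
        "filter": { "term": { "user": "buzz" } }
      }
    }
  ]
}
```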

The next piece of the puzzle is setting up Nginx to serve the Kibana interface with basic auth and to proxy the logstash-* requests to the user’s aliases. There is a sample Nginx configuration in the Kibana GitHub repo that we will use as a starting point. We need to add basic auth to the top of the configuration and modify some of the rewrite rules to use the filtered aliases and user-specific indexes. You can view the modified file here.
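The relevant additions look roughly like this (a sketch; the document root, password file path, and backend address are assumptions for your environment):

```
# Require basic auth for everything served through this server.
auth_basic "Restricted";
auth_basic_user_file /etc/nginx/conf.d/kibana.htpasswd;

# Serve the Kibana interface itself.
location / {
  root  /usr/share/kibana;
  index index.html;
}

# Proxy requests for user-specific daily indexes straight to Elasticsearch.
location ~ ^/[^/]+-[0-9]{4}\.[0-9]{2}\.[0-9]{2}.*/_search$ {
  proxy_pass http://127.0.0.1:9200;
}
```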

The trickiest part to set up is translating the logstash-* requests to the user’s aliases. Kibana will often send requests like /logstash-2014.02.04,logstash-2014.02.03/_search, which need to be translated to /buzz-2014.02.04,buzz-2014.02.03/_search. Nginx doesn’t have a simple find-and-replace feature, so we need to dust off our hacker skills and set up a recursive rewrite rule to do the translation for us.
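A sketch of that recursive rewrite (the regex is an assumption based on Logstash’s daily index naming): each pass replaces one logstash- prefix with $remote_user-, and the last flag restarts location matching, so a comma-separated index list is rewritten one index at a time until no logstash- prefix remains.

```
location ~ ^/(.*)logstash-([0-9]{4}\.[0-9]{2}\.[0-9]{2})(.*)$ {
  # Replace a single occurrence per pass; "last" re-runs location
  # matching, which sends multi-index requests back through this rule.
  rewrite ^/(.*)logstash-([0-9]{4}\.[0-9]{2}\.[0-9]{2})(.*)$ /$1$remote_user-$2$3 last;
}
```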

There we have it: Kibana locked down using basic authentication and the data segmented by the authenticated user. Before you go to production with this setup, we highly recommend serving the Kibana interface behind SSL (or an SSL proxy) and disabling dynamic scripting. If you want to use the code from this example, you can find it in the Kibana repository on GitHub under samples/filtered-alias-example.
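Disabling dynamic scripting, for example, is a one-line change in elasticsearch.yml (setting name as of the Elasticsearch 1.x series):

```
# elasticsearch.yml
script.disable_dynamic: true
```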

Update: Special thanks to Alex Brasetvik (@alexbrasetvik) for pointing out a few security issues with our Nginx re-write rules.