Logsearch relies on Elasticsearch for data persistence. There are situations in which you want your data replicated outside of your Elasticsearch cluster (for example, when migrating data between clusters). For this, Elasticsearch provides snapshot and restore.

Snapshots are stored in a repository, which can be backed by a shared file system such as NFS, by S3, by HDFS on Hadoop, or by Azure storage. This guide uses an NFS-backed shared file system repository.
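For reference, once the cluster is prepared and a shared file system repository is registered (the rest of this guide shows how to get there), the snapshot and restore cycle looks roughly like this. The repository name logsearch_backup and snapshot name snapshot_1 are illustrative:

# Register a shared file system repository (its location must be listed in path.repo, see below)
curl -XPUT 'http://localhost:9200/_snapshot/logsearch_backup' -d '{
  "type": "fs",
  "settings": { "location": "/var/vcap/nfs/shared" }
}'

# Take a snapshot and wait for it to finish
curl -XPUT 'http://localhost:9200/_snapshot/logsearch_backup/snapshot_1?wait_for_completion=true'

# Restore that snapshot, on the same cluster or on another cluster that sees the same repository
curl -XPOST 'http://localhost:9200/_snapshot/logsearch_backup/snapshot_1/_restore'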

The following two assumptions are made:

You should have a Logsearch deployment in place.

You should also have an NFS share available to mount (if you do not, you can deploy one with the nfs-boshrelease).

Mounting NFS on your Elasticsearch cluster

The nfs-boshrelease provides an nfs_mounter job. This job mounts /var/vcap/store from the NFS server to /var/vcap/nfs on the Elasticsearch nodes. To accomplish this, you need to colocate the nfs_mounter job on the Elasticsearch instance groups of the Logsearch deployment and provide it with the correct properties, as sketched below.
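A minimal sketch of what that colocation could look like in the deployment manifest follows. The exact job and property names (for example nfs_server.address and nfs_server.share) vary between nfs-boshrelease versions, so check the job spec of the release you deploy; the IP address below is purely illustrative:

jobs:
- name: elasticsearch_data
  templates:
  - { release: logsearch, name: elasticsearch }
  - { release: nfs, name: nfs_mounter }   # colocated from the nfs-boshrelease
  properties:
    nfs_server:
      address: 10.0.0.10        # illustrative address of the NFS server node
      share: /var/vcap/store    # exported path on the NFS server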

Prepare Elasticsearch cluster for creating repository

For the shared file system repository to work, all master and data nodes need to know the path where snapshot repositories will be created. This is configured through the path.repo Elasticsearch setting.

Logsearch provides a way to distribute extra Elasticsearch configuration options through the deployment manifest.

See the Logsearch example stub below:

---
...
properties:
  ...
  elasticsearch:
    config_options: |
      <%= 'path.repo: ["/var/vcap/nfs/shared"]'.gsub(/^/, ' ').strip # Doing this with ruby because spiff[0] does not merge strings with ':' or 'foo.bar' format very well. %>
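After updating the manifest and redeploying, you can confirm that the setting reached every master and data node. A quick check, assuming Elasticsearch listens on its default port 9200 on the node you run this from:

# Every node's settings should include the path.repo entry
curl -s 'http://localhost:9200/_nodes/settings?pretty' | grep repo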

Deploy kopf to Elasticsearch cluster

The kopf plugin makes monitoring and administering Elasticsearch easier. Logsearch provides a way to install plugins; the only issue is that it requires internet access to download them. You can use the elasticsearch-plugins-boshrelease to provide plugin sources offline.
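A sketch of enabling kopf through the deployment manifest is shown below. The structure of the plugins property and the offline source path are assumptions based on the Logsearch and elasticsearch-plugins-boshrelease documentation, so verify both against the job specs of the versions you deploy:

properties:
  elasticsearch:
    plugins:
    - kopf: lmenezes/elasticsearch-kopf/v1.5.7   # fetched from the internet
    # With elasticsearch-plugins-boshrelease colocated, the source would instead be a
    # local file provided by that release, e.g. (illustrative path):
    # - kopf: /var/vcap/packages/elasticsearch-plugins/kopf.zip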