System requirements and other deployment considerations for search head clusters

The members of a search head cluster have most of the same system requirements as any non-clustered search head. This topic details requirements specific to a search head cluster.

Summary of key requirements

These are the main issues to note regarding provisioning of cluster members:

Each member must run on its own machine or virtual machine, and all machines must run the same operating system.

All members must run on the same version of Splunk Enterprise.

All members must be connected over a high-speed network.

You must deploy at least as many members as either the replication factor or three, whichever is greater.

In addition to the cluster members, you need a deployer to distribute updates to the members. The deployer must run on a non-member instance. In some cases, it can run on the same instance as a deployment server or an indexer cluster master node.

See the remainder of this topic for details on these and other issues.

Hardware and operating system requirements

Machine requirements for cluster members

Each member must run on its own, separate machine or virtual machine.

The hardware requirements for the machine are essentially the same as for any Splunk Enterprise search head. See Reference hardware in the Capacity Planning Manual. The main difference is the need for increased storage to accommodate a larger dispatch directory. See Storage considerations.

Splunk recommends that you use machines with identical hardware specifications for all cluster members. The reason is that the cluster captain assigns scheduled jobs to members based on their current job loads. When it does this, it does not have insight into the actual processing power of each member's machine. Instead, it assumes that each machine is provisioned equally.

Operating system requirements for cluster members

Search head clustering is available on all operating systems supported for Splunk Enterprise. For a list of supported operating systems, see System requirements in the Installation Manual.

All search head cluster members and the deployer must run on the same operating system.

If the search head cluster is connected to an indexer cluster, then the indexer cluster instances must run on the same operating system as the search head cluster members.

Storage considerations

When determining the storage requirements for your clustered search heads, you need to consider the increased capacity necessary to handle replicated copies of search artifacts.

For the purpose of developing storage estimates, you can observe the size over time of dispatch directories on the search heads in your non-clustered environment, if any, before you migrate to a cluster. Total up the size of dispatch directories across all the non-clustered search heads and then make adjustments to account for the cluster-specific factors.

The most important factor to take into consideration is the replication factor. For example, if you have a replication factor of 3, you will need approximately triple the amount of the total pre-cluster storage, distributed equally among the cluster members.
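The arithmetic behind this estimate can be sketched as follows. The figures, function names, and the 1.5x headroom multiplier are illustrative assumptions, not Splunk-documented values; measure your own pre-cluster dispatch usage and adjust.

```python
# Rough storage estimate for a cluster's dispatch directories.
# All inputs here are hypothetical examples.

def estimated_cluster_storage_gb(pre_cluster_total_gb, replication_factor, headroom=1.5):
    """Total dispatch storage across the cluster: pre-cluster usage times
    the replication factor, plus headroom for fix-up activity and
    uneven distribution of replicated copies (an assumed 1.5x here)."""
    return pre_cluster_total_gb * replication_factor * headroom

def per_member_storage_gb(total_gb, member_count):
    """Storage to provision on each member, assuming roughly equal distribution."""
    return total_gb / member_count

# Example: 100 GB of pre-cluster dispatch data, replication factor 3, 5 members.
total = estimated_cluster_storage_gb(pre_cluster_total_gb=100, replication_factor=3)
print(total)                                          # 450.0 GB across the cluster
print(per_member_storage_gb(total, member_count=5))   # 90.0 GB per member
```

The headroom multiplier reflects the best-practice advice below to equip each member with substantially more storage than the bare estimate.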

Other factors can further increase the cluster storage needs. One key factor is the need to plan for node failure. If a member goes down, causing its set of artifacts (original and replicated) to disappear from the cluster, fix-up activities take place to ensure that each artifact once again has its full complement of copies, matching the replication factor. During fix-up, the copies that were resident on the failed member get replicated among the remaining members, increasing the size of each remaining member's dispatch directory.

Other issues can also increase storage on a per-member basis. For example, the cluster does not guarantee an absolutely equal distribution of replicated copies across the members. In addition, the cluster can hold more than the replication factor number of some search artifacts. See How the cluster handles search artifacts.

As a best practice, equip each member machine with substantially more storage than the estimated need. This allows both for future growth and for temporarily increased need resulting from downed cluster members. The cluster will stop running searches if any of its members runs out of disk space.

Splunk Enterprise instance requirements

Splunk Enterprise version compatibility

You can implement search head clustering on any group of Splunk Enterprise instances, version 6.2 or above.

All cluster members must run the same version of Splunk Enterprise, down to the maintenance level. You must upgrade all members to a new release at the same time. You cannot, for example, run a search head cluster with some members at 6.3.2 and others at 6.3.1.

The deployer must run the same version as the cluster members, down to the minor level. In other words, if the members are running 6.3.2, the deployer must run some version of 6.3.x. It is strongly advised that you upgrade the deployer at the same time that you upgrade the cluster members. See Upgrade a search head cluster.

Note: When upgrading from version 6.4 or later, the cluster can temporarily include both members at the previous version and members at the new version. By the end of the upgrade process, all members must again run the same version. See Upgrade a search head cluster.

7.x search head clusters can run against 5.x, 6.x, or 7.x search peers. The search head cluster members must be at the same or a higher level than the search peers. For details on version compatibility between search heads and search peers, see Version compatibility.

Minimum number of instances

You must deploy at least as many members as the replication factor or three, whichever is greater. For example, if your replication factor is either 2 or 3, you need at least three instances. If your replication factor is 5, you need at least five instances.

You can optionally add more members to boost search and user capacity.

Maximum number of instances

Search head clustering supports up to 100 members in a single cluster.

Search head clusters running across multiple sites

Although there is currently no formal notion of a multisite search head cluster, you can still deploy the cluster members across multiple sites.

When deploying the cluster across multiple sites, put a majority of the cluster members on the site that you consider primary. This ensures that the cluster can continue to elect a captain, and thus continue to function, as long as the primary site is running. See Deploy a search head cluster in a multisite environment.
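The reason a majority matters is that captain election requires a quorum, a strict majority of the configured membership. A minimal sketch of that arithmetic (the function name is illustrative):

```python
def quorum(member_count):
    """Minimum number of live members needed to elect a captain:
    a strict majority of the configured membership."""
    return member_count // 2 + 1

# With 7 members split 4 (primary site) / 3 (secondary site),
# the primary site alone can still elect a captain:
print(quorum(7))  # 4
```

This is why an even split across two sites is risky: with 3 of 6 members on each site, neither site alone reaches the quorum of 4.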

Cluster member cannot be a search peer

You cannot use a search head cluster member as a search peer. That is, a cluster member cannot also function as an indexer that the cluster searches.

Network requirements

Network provisioning

All members must reside on a high-speed network where each member can access every other member.

The members do not necessarily need to be on the same subnet, or even in the same data center, if you have a fast connection between the data centers. You can adjust the various search head clustering timeout settings in server.conf. For help in configuring timeout settings, contact Splunk Professional Services.
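For reference, search head clustering timeouts live in the [shclustering] stanza of server.conf on each member. The setting names and values below are assumptions for illustration; confirm them against the server.conf specification for your version, and involve Splunk Professional Services before changing defaults.

```ini
# server.conf on each cluster member (values in seconds; illustrative)
[shclustering]
cxn_timeout = 60      # low-level connection timeout between members
send_timeout = 60     # low-level send timeout between members
rcv_timeout = 60      # low-level receive timeout between members
```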

Ports that the cluster members use

These ports must be available on each member:

The management port (by default, 8089) must be available to all other members.

The http port (by default, 8000) must be available to any browsers accessing data from the member.

The KV store port (by default, 8191) must be available to all other members. You can use the CLI command splunk show kvstore-port to identify the port number.

Caution: Do not change the management port on any of the members while they are participating in the cluster. If you need to change the management port, you must first remove the member from the cluster.
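These ports correspond to settings in web.conf and server.conf. A sketch with the default values (verify the attribute names against the configuration file specifications for your version):

```ini
# web.conf -- browser-facing and management ports
[settings]
httpport = 8000
mgmtHostPort = 127.0.0.1:8089   # do not change while the member is in the cluster

# server.conf -- KV store port
[kvstore]
port = 8191
```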

Synchronize system clocks across the distributed search environment

It is important that you synchronize the system clocks on all machines, virtual or physical, that are running Splunk Enterprise instances participating in distributed search. Specifically, this means your cluster members and search peers. Otherwise, various issues can arise, such as search failures, premature expiration of search artifacts, or problems with alerts.

The synchronization method you use depends on your specific set of machines. Consult the system documentation for the particular machines and operating systems on which you are running Splunk Enterprise. For most environments, Network Time Protocol (NTP) is the best approach.
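On Linux hosts, for example, a minimal chrony configuration might look like the following. The file path and pool hostname are illustrative; point every Splunk host at the same time source for your environment.

```ini
# /etc/chrony.conf -- keep all Splunk hosts on a common time source
pool pool.ntp.org iburst
makestep 1.0 3     # step the clock on large offsets during the first 3 updates
rtcsync            # keep the hardware clock in sync with the system clock
```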

Deployer requirements

Deployer functionality is only for use with search head clustering, but it is built into all Splunk Enterprise instances running version 6.2 or above. The processing requirements for a deployer are fairly light, so you can usually co-locate deployer functionality on an instance performing some other function. You have several options as to the instance on which you run the deployer:

If you have a deployment server that is servicing only a small number of deployment clients (no more than 50), you can run the deployer on the same instance as the deployment server. The deployer and deployment server functionalities can interfere with each other at larger client counts. See Deployment server provisioning in Updating Splunk Enterprise Instances.

If you are running an indexer cluster, you might be able to run the deployer on the same instance as the indexer cluster's master node. Whether this option is available to you depends on the master's load. See Additional roles for the master node in Managing Indexers and Clusters of Indexers for information on cluster master load limits.
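Each member locates its deployer through the conf_deploy_fetch_url setting in server.conf. A sketch, with a placeholder hostname:

```ini
# server.conf on each cluster member
[shclustering]
conf_deploy_fetch_url = https://deployer.example.com:8089
```

From the deployer itself, you push configuration bundles to the cluster with the splunk apply shcluster-bundle command, targeting any one member's management URI.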
