Install the Splunk OVA for VMware

Use the instructions below to install the Splunk OVA for VMware onto your Splunk platform deployment.

Data Collection Node resource requirements

DCNs communicate with the Collection Configuration page, which runs on the Splunk scheduler, to retrieve performance, inventory, hierarchy, task, and event data from vCenter servers.

Each Data Collection Node (DCN) needs at least one CPU core for every 10 hosts from which the DCN is collecting data.

Splunk recommends that you estimate the number of CPUs needed for your worker processes with the expectation that a CPU in your deployment will eventually fail, and that you provision at least one extra CPU to maintain capacity and availability in your deployment.

Each DCN polls information for up to 70 ESXi hosts and 1,750 virtual machines. With this sizing, a site pulling information from 200 hypervisors and 5,000 VMs needs to create at least 3 DCNs.
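The sizing rule above can be sketched as a quick calculation. This is an illustrative helper, not part of the add-on; the per-DCN limits come from the figures stated above.

```python
import math

DCN_MAX_HOSTS = 70    # per-DCN ESXi host limit, per the sizing guidance above
DCN_MAX_VMS = 1750    # per-DCN virtual machine limit

def dcns_needed(hosts, vms):
    """Return the minimum number of DCNs for a site, per the published limits."""
    return max(math.ceil(hosts / DCN_MAX_HOSTS), math.ceil(vms / DCN_MAX_VMS))

# The example above: 200 hypervisors and 5,000 VMs need at least 3 DCNs.
print(dcns_needed(200, 5000))
```

Whichever resource (hosts or VMs) hits its per-DCN limit first determines the count, so the function takes the larger of the two ceilings.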

DCN virtual appliance sizing is as follows:

8 CPU cores with 2GHz reserved

12 GB Memory with a reservation of 1GB

12 GB storage

In a Search Head Clustering (SHC) deployment, deploy the DCN scheduler on a dedicated search head that is not a member of the SHC. Do not deploy it on any individual search head in the cluster.

The Splunk Add-on for VMware does not support scheduler and Data Collection Node functions on Windows operating systems; Linux or UNIX is required. When you deploy the VMware add-on into a Windows-based Splunk environment, deploy Linux-based virtual appliances from the Splunk-provided OVA image for both the scheduler and data collection node roles.

Install the Splunk OVA for VMware in your virtual environment

1. In the Deploy OVF Template wizard, click Deploy from a file or URL, then click Browse.

2. Browse to the location of your OVA file, splunk_data_collection_node_for_vmware_<version>-<build_number>.ova, then click Next.

Note: You cannot download the file directly from the URL. Splunk Apps requires that you authenticate through a supported web browser before you begin your download.

3. Review the OVF template details, then click Next.

4. On the Name and Location screen, provide a name for the node VM, or keep the default name.

5. Select a data center or folder as the deployment destination for the node VM, then click Next.

6. On the Host / Cluster screen, select the host or cluster where you want to run the node VM, then click Next.

7. On the Datastore screen, choose the datastore where you want the VM and its file system to reside. The datastore needs 4 GB to 10 GB of free space. Click Next.

8. On the Disk Format screen, select either Thin or Thick Provisioning, then click Next. We recommend thick provisioning.

9. On the Network Mapping screen, specify the networks that you want the deployed template to use. Use the Destination Networks menu to map your data collection node .ova template to one of the networks in your inventory. Click Next.

10. Validate your selections in the Ready to complete dialog, then click Finish to begin deployment.

11. When deployment finishes, click Close to exit the wizard.

Right-click on the collection node VM and choose Power > Power On from the menu to start the VM. When you power on the data collection node, Splunk starts automatically even though the VMware data collection mechanism is not configured. By default, the node VM boots and gets its network settings via DHCP. You can keep this default setting or you can set a static IP address. If you use DHCP, check the Summary tab in the vSphere client to get the IP address of the node VM.

To SSH into the data collection node, use the default credentials (username splunk, password changeme). You automatically land in /home/splunk.

Your Splunk platform is installed in /opt.

Navigate to /opt/splunk/etc/apps/SA-Hydra/local and open outputs.conf.
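For reference, a minimal outputs.conf that forwards the collected data to your indexers looks like the following. The stanza names and settings (tcpout, defaultGroup, server) are standard Splunk forwarding configuration; the group name, hostnames, and port are placeholders to replace with your own.

```ini
[tcpout]
defaultGroup = vmware_indexers

[tcpout:vmware_indexers]
# Replace with your indexers and their receiving port
server = indexer1.example.com:9997, indexer2.example.com:9997
```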

Create your own data collection node

You can build a data collection node and configure it specifically for your environment. Create and configure this data collection node on a physical machine or as a VM image to deploy into your environment using vCenter.

Build a data collection node

Whether you are building a physical data collection node or a data collection node VM, follow the steps below. To build a data collection node VM, we recommend that you follow VMware's guidelines to create the virtual machine and deploy it in your environment.

To build a data collection node:

1. Install a CentOS or Red Hat Enterprise Linux version that is compatible with Splunk Enterprise version 6.4.6 or later.

2. Install Splunk Enterprise version 6.4.6 or later, and configure it as a heavy forwarder.

Note: You cannot use a universal forwarder because it lacks the necessary Python libraries.

3. Download Splunk_add-on_for_vmware-<version>-<build_number>.tgz from Splunkbase.

4. Copy the file Splunk_add-on_for_vmware-<version>-<build_number>.tgz from the download package, and move it to $SPLUNK_HOME/etc/apps.

5. Extract the file Splunk_add-on_for_vmware-<version>-<build_number>.tgz in $SPLUNK_HOME/etc/apps.

6. Verify that the firewall ports are open. The DCN communicates with splunkd on port 8089, and with the scheduler node on port 8008. Set up forwarding to the same port that your Splunk indexers listen on.

7. Navigate to $SPLUNK_HOME/etc/apps/SA-Hydra/local and open outputs.conf.
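A quick way to confirm the port requirements above is a small TCP connectivity check. This is an illustrative sketch, not part of the add-on: the hostnames are placeholders for your environment, and 9997 is Splunk's conventional receiving port; substitute whatever port your indexers actually use.

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder hosts; replace with your DCN, scheduler, and indexer addresses.
for host, port in [("dcn.example.com", 8089),        # splunkd management port
                   ("scheduler.example.com", 8008),  # scheduler <-> DCN
                   ("indexer.example.com", 9997)]:   # forwarding to indexers
    print(host, port, "open" if port_open(host, port) else "closed")
```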
