
What config changes are needed to ensure DB inputs don't run on non-captain search head nodes?


Hi All,

We are using Splunk DB Connect 2.3 in a search head clustered environment. When setting up a new database input, we create the input in the DBX2 app on the deployer (not part of the search head cluster), copy the splunk_app_db_connect app from "/apps/splunk/etc/apps/" to "/apps/splunk/etc/shcluster/apps", and then run ./splunk apply shcluster-bundle -target https://searchheadclustermember:8089 to push it to the search heads. As we understand it, DB inputs run on the search head captain. However, in the logs we find "action=modular_input_not_running_on_captain", i.e. the DB inputs aren't always running on the captain. Does Splunk first query each search head peer to find which one is the captain and then run the input query on the captain node? If not, what config changes are needed to ensure inputs don't run on non-captain nodes?
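For context on the log message above: rather than one node querying every peer, each cluster member can check locally whether it currently holds the captain role (for example, `server_roles` in a member's own info includes `shc_captain` on the captain). A minimal sketch of the gate that "modular_input_not_running_on_captain" implies, assuming a parsed member-info payload shaped like the samples below (the field names and helper are illustrative assumptions, not confirmed DB Connect internals):

```python
def is_captain(member_info: dict) -> bool:
    """Return True if this member currently holds the SHC captain role.

    `member_info` is assumed to be the parsed member-info response;
    the "server_roles" field name is an illustrative assumption.
    """
    return "shc_captain" in member_info.get("server_roles", [])


def should_run_input(member_info: dict) -> bool:
    # Mirror the gate implied by "modular_input_not_running_on_captain":
    # only the captain executes the scheduled DB input; other members skip it.
    return is_captain(member_info)


# Hypothetical sample payloads for a captain and a regular member.
captain = {"server_roles": ["search_head", "shc_captain"]}
member = {"server_roles": ["search_head", "shc_member"]}

print(should_run_input(captain))  # True
print(should_run_input(member))   # False
```

Because the captain role can move between members (e.g. after a restart or re-election), any per-member check like this will legitimately log the "not running on captain" skip on the other members, which matches the behavior described above.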

Splunk recommends running reports (saved searches), alerts, and lookups on the search head or search head cluster captain, and running inputs and outputs from a forwarder. This is because disruptive search head restarts and reloads are more common, and scheduled or streaming bulk data movements can impact the performance of user searches. Poor user experience, reduced performance, increased configuration replication, unwanted data duplication, or even data loss can result from running inputs and outputs on search head infrastructure. Running inputs and outputs on a search head captain does not provide extra fault tolerance or enhance availability, and is not recommended.

We do have a plan to move to a heavy forwarder (HWF) along with DBC3, but that will take some time. Since we are currently using DBC2 in the SHC, could you provide some pointers to debug the issue mentioned above?

Also, in the SHC we do not need to maintain the state of DB connections and reads ourselves: even if the captain goes down, the other search heads have the latest state and can continue processing. How do we address failover in a HWF configuration?
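The state in question is essentially the rising-column checkpoint each DB input keeps so that it only reads rows it has not seen before; on a standalone HWF that checkpoint lives locally, which is exactly what a failover design has to protect. A minimal sketch of the pattern, using sqlite3 in place of the real database (table, column, and function names are illustrative, not DB Connect's actual implementation):

```python
import sqlite3


def fetch_new_rows(conn, checkpoint):
    """Fetch rows whose rising column (id) exceeds the saved checkpoint."""
    cur = conn.execute(
        "SELECT id, payload FROM events WHERE id > ? ORDER BY id", (checkpoint,)
    )
    rows = cur.fetchall()
    # Advance the checkpoint to the highest id seen; keep it if nothing new.
    new_checkpoint = rows[-1][0] if rows else checkpoint
    return rows, new_checkpoint


# In-memory database standing in for the external DB.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?)", [(1, "a"), (2, "b"), (3, "c")])

checkpoint = 0
rows, checkpoint = fetch_new_rows(conn, checkpoint)   # first run picks up ids 1-3
conn.execute("INSERT INTO events VALUES (4, 'd')")
rows2, checkpoint = fetch_new_rows(conn, checkpoint)  # next run picks up only id 4

print(len(rows), len(rows2), checkpoint)  # 3 1 4
```

If the node holding `checkpoint` is lost, a replacement HWF that starts from an older (or zero) checkpoint re-reads rows, and one that starts from a newer value skips rows, so HWF failover generally means persisting or replicating this checkpoint outside the single forwarder.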