We run Splunk Enterprise, and our cluster consists of 3 search heads and 9 search peers. After upgrading to version 6.3, the following started happening.

Although the cluster as a whole has enough space, certain peers fill up their disks from time to time and the splunkd process dies, forcing the cluster to re-organize its data. After bringing the dead peer back and waiting for the cluster to be 100% operational (i.e., to meet its search factor and replication factor), many of the searches produce the following errors:

I have no clue how to fix this (I could not find any useful info about it online), and the results are incomplete - our business cannot operate correctly, as we make decisions based on the analysis we run in Splunk.
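
For anyone debugging similar symptoms: a first step is to confirm, from the cluster master, whether the cluster really meets its search and replication factors and which peers are down. A minimal sketch, assuming a 6.x cluster master reachable on localhost and placeholder credentials:

    # On the cluster master: replication/search factor status and
    # per-peer state (Up/Down/Pending).
    splunk show cluster-status

    # The same information over REST, handy for scripting;
    # admin:changeme is a placeholder for real credentials.
    curl -k -u admin:changeme https://localhost:8089/services/cluster/master/info

    # Given the disk-full crashes, also check minFreeSpace in
    # server.conf ([diskUsage] stanza): splunkd pauses indexing
    # when free space drops below this value instead of dying.
    splunk btool server list diskUsage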

Not yet - we have 2 tickets open with Support, and I had to upload a diag to them a week ago. Since then, nothing. I will call our sales rep and ask if they can nudge Support - this is crazy.

2 Answers

Update: the issue was never resolved; however, we don't experience it anymore. We did a DC move in the meantime and took the whole cluster down for a good few hours. After starting it back up, we ended up with a bunch of duplicate buckets that we were able to remove, and since then we haven't seen this issue. Unfortunately, time solved it, but no clue what the root cause was :(
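
For reference, duplicate (excess) bucket copies like those mentioned above can be listed and removed from the cluster master with the standard cluster CLI - a minimal sketch, where 'main' is a placeholder index name (omit it to target all indexes):

    # On the cluster master: list bucket copies that exceed the
    # configured replication/search factor.
    splunk list excess-buckets main

    # Remove those excess copies; the cluster keeps enough copies
    # to still meet its replication and search factors.
    splunk remove excess-buckets main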

Notes:
- The mode verb 'make-searchable' is a synonym for 'repair'.
- The mode 'check-integrity' will verify data integrity for buckets created with the integrity-check feature enabled.
- The mode 'generate-hash-files' will create or update bucket-level hashes for buckets which were generated with the integrity-check feature enabled.
- The mode 'check-rawdata-format' verifies that the journal format is intact for the selected index buckets (the journal is stored in a valid gzip container and has valid journal structure).
- Flag --log-to--splunkd-log is intended for calls from within splunkd.
- If neither --backfill-always nor --backfill-never are given, backfill decisions will be made per indexes.conf 'maxBloomBackfillBucketAge' and 'createBloomfilter' parameters.
- Values of 'homePath' and 'coldPath' will always be read from config; if config is not available, use --one-bucket and --bucket-path but not --index-name.
- All constraints supplied are implicitly ANDed.
- Flag --metadata is only applicable when migrating from a 4.2 release.
- If giving --include-hots, please recall that hot buckets have no bloomfilters.
- Not all argument combinations are valid.
- If --help is found in any argument position, prints this message & quits.
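
Based strictly on the help text above, example invocations might look like the following sketch (the index name 'main' and the bucket path are placeholders; run against an idle or stopped indexer as appropriate):

    # Read-only scan of all buckets in one index.
    splunk fsck scan --all-buckets-one-index --index-name=main

    # Verify the journal (rawdata) gzip container and journal
    # structure for all buckets of all indexes.
    splunk fsck check-rawdata-format --all-buckets-all-indexes

    # Repair a single bucket by path; per the notes, when config is
    # unavailable use --one-bucket and --bucket-path, not --index-name.
    splunk fsck repair --one-bucket --bucket-path=/opt/splunk/var/lib/splunk/main/db/<bucket-dir>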
