

4 Answers

Based on the above error, the search bundle size is 800+MB and as a result, bundles are not getting downloaded to the indexers, causing searches to fail.

On the search head, the knowledge bundles reside under the $SPLUNK_HOME/var/run directory. The bundles have the extension .bundle for full bundles or .delta for delta bundles. They are tar files, so you can run tar tvf against them to see the contents.

The knowledge bundle gets distributed to the $SPLUNK_HOME/var/run/searchpeers directory on each search peer. The search peers use the search head's knowledge bundle to execute queries on its behalf. When executing a distributed search, the peers are ignorant of any local knowledge objects. They have access only to the objects in the search head's knowledge bundle.

Bundles typically contain a subset of files (configuration files and assets) from $SPLUNK_HOME/etc/system, $SPLUNK_HOME/etc/apps, and $SPLUNK_HOME/etc/users.

The process of distributing knowledge bundles means that peers by default receive nearly the entire contents of the search head's apps. If an app contains large binaries or CSV files that do not need to be shared with the peers, you can exclude them from the bundle and thus reduce the bundle size.
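As a hedged sketch, files can be excluded via a replicationBlacklist stanza in distsearch.conf on the search head (the stanza and key/value syntax are standard, but the app name and file paths below are hypothetical placeholders):

    # $SPLUNK_HOME/etc/system/local/distsearch.conf on the search head
    [replicationBlacklist]
    # Each key is an arbitrary name; each value is a wildcard pattern
    # matched against paths relative to $SPLUNK_HOME/etc
    huge_lookup = apps/myapp/lookups/huge_reference.csv
    app_binaries = apps/myapp/bin/*

After editing, restart (or reload) the search head so the next bundle push excludes the listed files.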

Next, we checked the contents of the bundle on the search head:

cd $SPLUNK_HOME/var/run
tar -tvf sh604-1409261525.bundle

We noticed that the bundle contained many lookup files, some as big as 100MB.

Note: The first reporting command (such as stats count) is the point at which map/reduce happens and results are sent to the search head. So what matters is: do I need the lookup before or after the first reporting command? That is the determining factor for whether the indexers need the lookup or not.

In the 2nd example, I use a field produced by the lookup ("datacenter") in my first reporting command. Clearly, my indexers need access to the lookup in order to run that stats command.

In the 1st example, that is not the case. You need local=true if you want the indexers not to attempt to run the lookup. So the 1st example should actually be:
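To make the distinction concrete, here are hedged illustrations (the index, sourcetype, lookup, and field names are hypothetical placeholders; only "datacenter" comes from the discussion above):

    Example 1 - the lookup runs after the first reporting command, so it only needs to exist on the search head:
    index=main sourcetype=disk | stats count by serial | lookup local=true showDiskType serial OUTPUT disk_type

    Example 2 - the first reporting command groups by the lookup output field "datacenter", so the indexers need the lookup:
    index=main sourcetype=web | lookup host_to_dc host OUTPUT datacenter | stats count by datacenter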

Thanks for the detailed analysis. Unfortunately, we use very large lookup files referenced by accelerated data models. By the very nature of accelerated data models, these lookups have to take place on the indexers. Other types of lookups are either too slow or get replicated in the same way. We are just below the point that breaks bundle replication, but would love an intermediate solution.

This question has come up a few times from Splunk users. In a distributed search environment, I was under the impression a lookup only needed to exist on the search heads, not the indexers. However, I created a lookup on a search head and got warning messages when I used it in a search:

[splunk-idx-01] Search process did not exit cleanly, exit_code=255, description="exited with code 255". Please look in search.log for this peer in the Job Inspector for more info.
[splunk-idx-01] Streamed search execute failed because: Error in 'lookup' command: The lookup table 'showDiskType' does not exist.
[splunk-idx-02] Search process did not exit cleanly, exit_code=255, description="exited with code 255". Please look in search.log for this peer in the Job Inspector for more info.
[splunk-idx-02] Streamed search execute failed because: Error in 'lookup' command: The lookup table 'showDiskType' does not exist.
[splunk-idx-03] Search process did not exit cleanly, exit_code=255, description="exited with code 255". Please look in search.log for this peer in the Job Inspector for more info.
[splunk-idx-03] Streamed search execute failed because: Error in 'lookup' command: The lookup table 'showDiskType' does not exist.

To avoid this issue and force the search to run the lookup only on the search head, here are two options:

1) Use local=t in the lookup command:

If you have a distributed environment and your lookup file is big, you can also add local=t.
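A hedged example (showDiskType is the lookup from the error messages above; the field names are placeholders):

    ... | lookup local=t showDiskType serial OUTPUT disk_type

With local=t, the lookup is evaluated only on the search head, so the indexers never need the lookup file; the trade-off is that the lookup runs after results come back from the peers.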

Another issue arises with large lookups in a distributed environment: any lookup over 10MB (the default value of max_memtable_bytes in limits.conf) is indexed on the indexer once the search bundle is downloaded. This adds overhead from lookup indexing and can lead to timeouts.
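If the lookup cannot be shrunk, one mitigation (a sketch; verify the setting name and default for your Splunk version, and the value below is an illustrative choice) is to raise the in-memory threshold so the file is not index-backed on the peers:

    # limits.conf on the indexers (and search head)
    [lookup]
    # Size above which a CSV lookup is index-backed instead of held in memory
    max_memtable_bytes = 209715200

Raising this trades indexing overhead for memory consumption during searches, so size it against your largest lookup file with care.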
