
I realize this will be simple for someone with more experience than I have. We run 2 search heads, 2 indexers, and a management server, all fed from our centralized rsyslog server. One search head and indexer pair is dedicated to Enterprise Security (ES). On the ES search head only, I am getting the following set of "error messages" hourly. A simple search using source=*python_modular_input.log index=_internal configuration_check.py shows:

with the error message showing up at the console. I have chased this through every log file, found the inputs.conf, and tried btool, but nothing leads to what exactly this check is looking for. Which "related" search is not there?

I would appreciate any help in resolving this; however, I would also like the "why and how" so I will be able to chase this down in the future. We do have SoS, but it is on the general search head, which makes the ES search head the only server it cannot see.


Were you able to figure out what was causing that to continually fire? We just noticed it in our environment this morning as well - same message showing up on the hour. We're still in the middle of our implementation and will reach out to our PS resource as well.

3 Answers

Looking at the log message on the initial post, that does not appear to be an error with a search not being enabled. The script is actually erroring out with a non-zero exit code: "exited with code 2". Looking at configuration_check.py, the exit code "2" corresponds to "ERR_REST_EXC", indicating that there was an exception making a REST call as part of the configuration check. This indicates some other issue with the system. In this case the exception should be logged to configuration_check.log and I would begin looking there for answers, specifically for Python tracebacks that correspond in time to the log messages in python_modular_input.log.
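To make the exit-code check above concrete, here is an illustrative sketch. Only the mapping of 2 to "ERR_REST_EXC" is confirmed by this thread; the other entry and the helper function are hypothetical placeholders, not the real table inside configuration_check.py:

```python
# Illustrative sketch only: the thread confirms that exit code 2 maps to
# ERR_REST_EXC (an exception while making a REST call) in
# configuration_check.py. The other entry below is a hypothetical
# placeholder, not the script's real table.
EXIT_CODES = {
    0: "OK",
    2: "ERR_REST_EXC",  # exception raised while making a REST call
}

def describe_exit(code):
    """Translate a modular-input exit code into a readable label."""
    return EXIT_CODES.get(code, "UNKNOWN ({0})".format(code))

print(describe_exit(2))
```

So "exited with code 2" is the modular input reporting a REST failure, not a disabled search.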

Thanks for pointing out a new avenue of searching. I still do not have an answer from Splunk, but with your answer I changed my search and found:

2015-02-15 03:08:03,287 ERROR pid=22083 tid=MainThread file=configuration_check.py:run:168 | status="RESTException when executing configuration check" exc="[HTTP 404] https://127.0.0.1:8089/servicesNS/dhorn/DA-ESS-IdentityManagement/saved/searches/Access%20-%20Interactive%20Logon%20by%20a%20Service%20Account%20-%20Rule; [{'code': None, 'text': "\n In handler 'savedsearch': Could not find object id=Access - Interactive Logon by a Service Account - Rule", 'type': 'ERROR'}]" Traceback (most recent call last):
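For anyone triaging similar lines, a small hypothetical helper (not part of ES or Splunk) can pull the HTTP status and the saved-search name out of a RESTException log line like the one above:

```python
import re
from urllib.parse import unquote

# Hypothetical triage helper (not part of ES): extract the HTTP status and
# the URL-decoded saved-search name from a configuration_check.py
# RESTException log line.
LOG_LINE = (
    'status="RESTException when executing configuration check" '
    'exc="[HTTP 404] https://127.0.0.1:8089/servicesNS/dhorn/'
    'DA-ESS-IdentityManagement/saved/searches/'
    'Access%20-%20Interactive%20Logon%20by%20a%20Service%20Account%20-%20Rule; ..."'
)

def parse_rest_exception(line):
    """Return (http_status, decoded_search_name), or None if no match."""
    m = re.search(r'\[HTTP (\d+)\].*?/saved/searches/([^";\s]+)', line)
    if not m:
        return None
    return int(m.group(1)), unquote(m.group(2))

print(parse_rest_exception(LOG_LINE))
```

Decoding the %20-escaped name makes it easy to grep savedsearches.conf and correlationsearches.conf for the search the check could not find.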

The configuration check is verifying that any correlation searches that use the new "Extreme Search" capabilities packaged with ES are enabled in tandem with their baselining searches. This check should not be erroring out when it encounters a search that it cannot identify. I'll track this as a new bug in the ES project (I am one of the developers on the team).

The real error here is that this search: "Access - Interactive Logon by a Service Account - Rule" likely does not exist or cannot be found. Based on the URL being shown in the log, which contains a username ( https://127.0.0.1:8089/servicesNS/dhorn/DA-ESS-IdentityManagement/saved/searches) I expect that the error here may be that we are using the "owner" field from the correlation search, and that field is returning a user name. In order to retrieve a Correlation Search without incident, it should probably be using the unscoped name "nobody" here.
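The scoping fix described above can be sketched as follows. This is an assumption about the shape of the fix, not the actual ES code: build the saved/searches endpoint with the shared "nobody" namespace instead of a user-scoped one, so the lookup does not depend on the correlation search's owner field.

```python
from urllib.parse import quote

# Sketch under assumptions (not the actual ES code): construct the
# servicesNS saved/searches endpoint using the unscoped "nobody"
# namespace rather than a per-user namespace like /servicesNS/dhorn/.
def saved_search_url(host, app, search_name, owner="nobody"):
    return "https://{0}/servicesNS/{1}/{2}/saved/searches/{3}".format(
        host, owner, app, quote(search_name))

url = saved_search_url(
    "127.0.0.1:8089",
    "DA-ESS-IdentityManagement",
    "Access - Interactive Logon by a Service Account - Rule",
)
print(url)
```

Compare the result with the failing URL in the log: the only difference is /servicesNS/nobody/ in place of /servicesNS/dhorn/.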

I would recommend running these two queries via curl on your system and appending them to the support ticket if you have already created one. Also feel free to send me the Salesforce ticket number directly; my username minus the "_splunk", @splunk.com, will get to me.

Thanks and Kudos to the support people at Splunk--I believe there has been a resolution I probably would have never found.

In this case the app causing the friction was DA-ESS-IdentityManagement, so in the /etc/apps/DA-ESS-IdentityManagement/local directory there were a couple of files, one being correlationsearches.conf, which had an "extra" stanza in it for the non-existent or orphaned search. Removal of that file followed by a reload command.
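To make the fix concrete, here is a hypothetical sketch of what such an orphaned entry looks like. The stanza name matches the search from the log above, but the contents shown are placeholders, not the real settings:

```ini
# /etc/apps/DA-ESS-IdentityManagement/local/correlationsearches.conf
# Orphaned stanza: its savedsearches.conf counterpart no longer exists,
# so the hourly configuration check 404s when it looks the search up.
[Access - Interactive Logon by a Service Account - Rule]
# ... leftover correlation-search settings ...
```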

Just to follow up and confirm, did this solve your issue? If yes, please be sure to accept the answer to resolve this question so other users with the same problem can find this post with a concrete solution. Thanks!

Tried and failed. Then again... I am going to retrace my steps again this afternoon. When the correct solution is found, I will be sure to give credit to whoever found it, because I am pretty sure it will not be me (at this point in time).

Actually, it was me that failed. When the messages returned the following morning at 3am, I thought the process had failed, but not until I did another follow-up check did I realize it was the same message caused by a different correlation search. Rinse and repeat the steps, and now it has cleared everything up.

In our case, we had a correlation search that was created, but something happened, maybe during creation or replication (search head cluster), and it kind of hung. Someone tried to delete the search in Splunk Web, which looked like it worked. But in fact, it was still there on the back end. Once found, we deleted it and restarted Splunk. That resolved it for us.

That's correct and is probably how this problem gets introduced most frequently, although it's probably not the only way it can be introduced. Deletion of Enterprise Security correlation searches via the normal Splunk Manager GUI is not supported because a Correlation Search consists of several configuration file artifacts - stanzas in savedsearches.conf and correlationsearches.conf. The normal Splunk Manager page for managing searches is not aware of the other configuration files. So, deleting a Correlation Search via the unapproved mechanism will leave the system in an inconsistent state. All management of Correlation Searches is expected to occur through the configuration pages in the Enterprise Security app - which do not currently support deletion to my knowledge, only disabling.
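As a simplified illustration of why GUI deletion leaves the system inconsistent (stanza contents here are placeholders): a Correlation Search spans two configuration files, and the Splunk Manager search page only knows about the first.

```ini
# savedsearches.conf -- the part Splunk Manager sees and can delete
[Access - Interactive Logon by a Service Account - Rule]
# search = ... (the actual SPL)

# correlationsearches.conf -- the companion stanza Splunk Manager does
# not know about; deleting only the stanza above orphans this one
[Access - Interactive Logon by a Service Account - Rule]
# ... correlation-search metadata ...
```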

We've rectified the error in our configuration check in the next ES release to better reflect the condition: we now warn that you have an "orphaned" stanza in correlationsearches.conf, possibly as a result of an attempt to manage a Correlation Search via the unapproved mechanism.
