Alert Trigger Question | Don't trigger it for that specific user for x amount of minutes


I made an alert query that looks specifically for Windows failed logins by user, using stats. It works.

Whenever the result count is greater than 0, it fires and displays the results. It works.
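For context, a minimal sketch of that kind of search, assuming Windows Security logs where EventCode 4625 is a failed logon; the index, sourcetype, and user field names here are placeholders, not necessarily what was actually used:

    index=wineventlog sourcetype="WinEventLog:Security" EventCode=4625
    | stats count by user
    | where count > 0

With an alert condition of "number of results > 0", every user with at least one failed login in the search window shows up.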

Now here comes the problem:

A user who is constantly failing over a period of time also causes a massive number of alert notifications. Let's say the alert interval is every 10 minutes: every 10 minutes we'll be notified about the same user failing.

There is this option in Splunk that I am aware of: the throttle setting ("after triggering the alert, don't trigger it again for 20 minutes").

This option works, per se. However, if a different user account now racks up a failed-login count, it will not be alerted on, because the alert won't trigger again for the next 20 minutes.

So here comes the question:

How can I make the alert trigger intelligent enough to treat each user account as unique, but, if a given account has recently triggered, not trigger again for that same account for X amount of hours?

Hopefully that makes sense; if not, I'll try to elaborate on the problem further:

Account1 failed 5 logins at 1:00: triggered
Account2 failed 10 logins at 1:10: no trigger, because of "after triggering the alert, don't trigger it again for 20 minutes…"
Account1 failed 5 logins at 1:20: triggered


1 Answer

If you have a user who is "constantly failing" over a period of time, then that is a training problem.

If your job is running every 10 minutes across a 10-minute timeframe and deciding whether to alert, then you can just change it to run across a 20-minute timeframe, and alert only for those users that deserve an alert in the second 10 minutes but did not deserve (and therefore probably did not receive) one in the first 10 minutes. It won't be the Splunk alert that's suppressing the long-term failures, but the search itself.
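A sketch of that approach in SPL, reusing the hypothetical index, sourcetype, and field names from the question's setup: run the alert every 10 minutes over the last 20 minutes, split the window into two halves, and fire only for users who fail in the current half but had no failures in the prior half:

    index=wineventlog sourcetype="WinEventLog:Security" EventCode=4625 earliest=-20m
    | eval half=if(_time >= relative_time(now(), "-10m"), "current", "prior")
    | stats count(eval(half="current")) as current_fails
            count(eval(half="prior")) as prior_fails
            by user
    | where current_fails > 0 AND prior_fails == 0

Under this sketch, a continuously failing user alerts once, on the first period they appear in, and is then suppressed by the search itself until they go quiet for a full period; a brand-new failing user alerts immediately.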

You could actually go one step further, just in case someone keeps failing long term. Do the calculation across a 30-minute period. If the present period is an alert, suppress it only if the prior period was an alert but two periods ago was NOT an alert. Basically, if the guy has been failing for 30 minutes straight, then there is something really wrong with him and we should send Mongo to go break his legs...
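Extending the same hypothetical sketch to a 30-minute window with three 10-minute buckets (p0 = present, p1 = prior, p2 = two periods ago):

    index=wineventlog sourcetype="WinEventLog:Security" EventCode=4625 earliest=-30m
    | eval period=case(_time >= relative_time(now(), "-10m"), "p0",
                       _time >= relative_time(now(), "-20m"), "p1",
                       true(), "p2")
    | stats count(eval(period="p0")) as p0
            count(eval(period="p1")) as p1
            count(eval(period="p2")) as p2
            by user
    | where p0 > 0 AND NOT (p1 > 0 AND p2 == 0)

A user failing in the present period alerts unless the prior period already alerted while two periods ago was clean, so someone failing for the full 30 minutes straight (p2 > 0) still comes through. That last case is the one for Mongo.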
