Sonarr initiates episode/season search with one or more indexers.
Indexers may return 100+ results for the season or episode.
Dozens of the results meet the download criteria (profile, language, filters, etc.).
Sonarr sends download requests to usenet client.
A download fails, and instead of using the other results already received from the indexer, Sonarr runs the same search again. If multiple downloads fail, Sonarr keeps sending new searches rather than using the results it already has, and those repeated searches return the same results.

With indexers that enforce API limits, this cycle can run up the request counter rather quickly.
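The behavior requested here, cache the filtered results and exhaust them before searching again, can be sketched in a few lines. This is an illustrative mock, not Sonarr's actual code; `EpisodeSearch`, the `indexer.search()` shape, and the `acceptable`/`score` fields are all hypothetical:

```python
from collections import deque

class EpisodeSearch:
    """Caches indexer results so that failed downloads retry from the
    existing candidate list instead of hitting the indexer again."""

    def __init__(self, indexer):
        self.indexer = indexer
        self.candidates = deque()
        self.api_calls = 0

    def _search(self, episode):
        # One indexer query; counts against any API limit.
        self.api_calls += 1
        # Keep only releases that pass profile/language/filter checks.
        results = [r for r in self.indexer.search(episode) if r["acceptable"]]
        # Best candidates first (e.g. by quality/preferred-word score).
        results.sort(key=lambda r: r["score"], reverse=True)
        self.candidates.extend(results)

    def next_release(self, episode):
        # Only query the indexer when the cached candidates are exhausted.
        if not self.candidates:
            self._search(episode)
        return self.candidates.popleft() if self.candidates else None
```

With this shape, three failed grabs consume three cached candidates but cost only one API call; the behavior described above instead issues one search per failure, each returning the same result set.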

That makes sense. Sonarr does behave as if it isn't caching the results. I figured overhead could be reduced and efficiency gained if Sonarr cached the results and exhausted them before searching again.

I use a mix of paid and free indexers, along with NZBHydra, to cover more bases. It was in the Hydra stats and logs that I noticed the multiple back-to-back searches for the same episodes each time a queued download failed.

All in all Sonarr has been an excellent tool for the short time I’ve used it. Keep up the great work!

@markus101
Actually, they are cached, ever since we started queueing grabs for when download clients were unavailable. That includes fallback releases not rejected by other criteria. I've been meaning to look at the failed download handling (FDH) search logic, but for that we need to know whether a grab came from a search vs. RSS vs. a release push. Best would probably be to include the kind of grab in the history event and track it that way.
So yes, it’s possible to almost entirely eliminate FDH searches.
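The tracking idea described above, recording how each grab originated so FDH can tell whether a cached result set even exists, might look roughly like this. All names here (`GrabSource`, `GrabEvent`, `History`) are hypothetical illustrations, not Sonarr's actual event model:

```python
from dataclasses import dataclass, field
from enum import Enum

class GrabSource(Enum):
    SEARCH = "search"        # manual or automatic search
    RSS = "rss"              # RSS sync
    RELEASE_PUSH = "push"    # pushed by an external tool

@dataclass
class GrabEvent:
    release_title: str
    source: GrabSource

@dataclass
class History:
    events: list = field(default_factory=list)

    def record(self, title, source):
        self.events.append(GrabEvent(title, source))

    def needs_new_search(self, title):
        # Only grabs that originated from a search have a cached result
        # set worth exhausting; RSS and push grabs would still need a
        # fresh search when the download fails.
        ev = next(e for e in self.events if e.release_title == title)
        return ev.source is not GrabSource.SEARCH
```

Under that assumption, a failed search-originated grab retries from the cached candidates, which is what makes it possible to almost entirely eliminate FDH searches.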