Logstash now supports persistent queues, which allow it to buffer data on disk if it is unable to send it on to downstream systems. If you configure multiple Logstash instances for redundancy, you should be able to handle outages in Logstash and Elasticsearch without losing data.
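For reference, the persistent queue is enabled in `logstash.yml`; a minimal sketch (the size and path are illustrative values you would tune for your own disk budget):

```yaml
# logstash.yml -- enable disk-backed buffering (values are illustrative)
queue.type: persisted
queue.max_bytes: 4gb                   # disk cap before backpressure is applied upstream
path.queue: /var/lib/logstash/queue    # where the queue pages are written
```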

However, if Filebeat were to crash and not be restarted before the logs are rotated away, preventing them from being read, you could still lose data. I am not sure there is any way around that other than making the rotation less aggressive.
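If making rotation less aggressive is an option, that is usually a logrotate change rather than a Filebeat one; a sketch, assuming logrotate manages the access logs (the path and retention count are examples, not your actual setup):

```
# /etc/logrotate.d/nginx (illustrative) -- keep rotated files around longer
/var/log/nginx/access.log {
    daily
    rotate 7          # keep a week of rotated files instead of deleting them quickly
    delaycompress     # leave the newest rotated file uncompressed so a reader can catch up
    missingok
    notifempty
}
```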

A single day's access log is more than 30 GB. Many issues are raised against our web servers, such as data mining, crawling, etc. If we try to extract the logs for a particular time slot from a single file of that size, it takes a lot of time and system resources.
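Since the log is written in time order, one way to pull out a single time slot without loading the whole file is to stream it through awk. A sketch below; the file name, timestamp format, and window bounds are made-up examples you would adapt:

```shell
# Sketch: extract a time window from a large, time-ordered access log.
# A few sample lines in common log format, just for demonstration:
cat > access.log <<'EOF'
127.0.0.1 - - [10/Oct/2023:13:10:00 +0000] "GET /a HTTP/1.1" 200 1
127.0.0.1 - - [10/Oct/2023:13:20:00 +0000] "GET /b HTTP/1.1" 200 1
127.0.0.1 - - [10/Oct/2023:13:50:00 +0000] "GET /c HTTP/1.1" 200 1
EOF

# Compare the HH:MM:SS part of the bracketed timestamp against the window
# bounds. awk streams the file line by line, so memory stays flat even on
# a 30 GB log.
awk -F'[][]' '{split($2, t, ":"); hms = t[2] ":" t[3] ":" substr(t[4], 1, 2)}
              hms >= "13:15:00" && hms <= "13:45:00"' access.log > slot.log

cat slot.log   # only the 13:20:00 line falls inside the window
```

This still reads the whole file once, but it avoids holding it in memory and can be pointed at a compressed rotated file via `zcat` in a pipe.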

With our current settings, Filebeat seems to work fine. We are looking for a solution for the case where Filebeat suddenly crashes and recovery takes more than 30 minutes. In that scenario, rotated files won't be captured by Filebeat.
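One thing worth checking is whether the rotated file names are covered by the input's glob and age limits, so a Filebeat that comes back after a long outage can still pick them up. A sketch, assuming a recent Filebeat with `filebeat.inputs` (paths and durations are assumptions about your setup, not recommendations):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/nginx/access.log
      - /var/log/nginx/access.log.1   # cover the most recent rotated file too
    ignore_older: 2h      # still read files up to 2h old after a restart
    clean_inactive: 3h    # must be larger than ignore_older + scan_frequency
```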

If Filebeat crashes (an issue I'm not aware of yet), I would assume the system can restart it within seconds. It sounds like this is less about Filebeat and more about how you handle recovery?
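Automatic restart is typically the service manager's job rather than Filebeat's. A systemd override sketch, assuming Filebeat runs as a systemd unit (the file name is a hypothetical drop-in path):

```ini
# /etc/systemd/system/filebeat.service.d/restart.conf (illustrative override)
[Service]
Restart=always    # restart the process whenever it exits, for any reason
RestartSec=5      # wait 5 seconds between restart attempts
```

With something like this in place, the 30-minute recovery window should shrink to seconds unless the crash cause persists.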