If I need 12 GB of RAM to start an empty Kibana instance when using SG, then that is most definitely related to SearchGuard and a problem, even if it is not necessarily the fault of SearchGuard that this happens.

That being said, I already tried setting --max-old-space-size as high as 16000, which is the theoretical limit of the VM, and it still happened.

No, there are no other hungry processes running. There is definitely enough RAM available. One can observe how much RAM the container consumes, and Kibana crashes between 2 and 3 GB, which is nowhere close to what we had set it to. htop also looked OK.
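
For reference, this is roughly how one can watch the container's memory while Kibana starts (assuming a Docker setup; the container name `kibana` is a placeholder):

```bash
# Stream the Kibana container's memory usage while it starts up
# ("kibana" is a placeholder container name; adjust to your setup).
docker stats kibana --format "table {{.Name}}\t{{.MemUsage}}\t{{.MemPerc}}"
```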

@sascha.herzinger
Have you tried adding the --max-old-space-size=4096 option directly in /kibana/bin/kibana? Someone from Elastic suggested this. I am just trying to narrow down the scope of the problem; maybe something happened to the env variable. Can you share your Dockerfile?
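
A minimal sketch of what that edit might look like, assuming the stock bin/kibana launcher script that ends by exec'ing the bundled node binary (the exact line differs between Kibana versions; `${NODE}` and `${DIR}` are defined earlier in that script):

```bash
# Last line of bin/kibana (varies by version): prepend the heap flag to
# NODE_OPTIONS right at the exec, so the limit applies regardless of the
# environment the process was started with.
NODE_OPTIONS="--max-old-space-size=4096 ${NODE_OPTIONS}" exec "${NODE}" "${DIR}/src/cli" ${@}
```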

@sascha.herzinger @Doug_Renze I see Kibana optimizes the plugins on every start, and it forks multiple child processes for the thread-loader. Notice that each process has --max_old_space_size=4096. I presume Kibana expects each process to be able to occupy heap space up to that limit if required. These workers should exit after Kibana finishes the optimization phase.
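
If it helps, a quick way to see how many of these workers are alive and which heap limit each one got (run inside the container; assumes a procps-style ps):

```bash
# List every node process with its parent PID, resident memory (KB) and
# command line, filtered to those started with a max-old-space-size flag.
ps -eo pid,ppid,rss,args | grep "max[-_]old[-_]space[-_]size" | grep -v grep
```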

These two should by all means be the same. env returns NODE_OPTIONS=--max-old-space-size=4096.
Could this be a Node.js issue? It looks as if Node.js handles the env variable differently than a parameter passed on the command line.
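
One way to rule that out is to ask the same node binary what heap limit it actually ends up with in both cases. This is just a sanity check; run it with the node that Kibana bundles (the path /kibana/node/bin/node is a guess, adjust to your install):

```bash
# Effective V8 heap limit (in MB) when the flag comes in via NODE_OPTIONS...
NODE_OPTIONS="--max-old-space-size=4096" /kibana/node/bin/node -e \
  'console.log(require("v8").getHeapStatistics().heap_size_limit / 1024 / 1024)'

# ...and when it is passed directly on the command line.
# Both invocations should print roughly the same value (around 4096).
/kibana/node/bin/node --max-old-space-size=4096 -e \
  'console.log(require("v8").getHeapStatistics().heap_size_limit / 1024 / 1024)'
```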