3.10.0-693.11.6.vz7.40.4 - with the Meltdown fixes - has been fine on most servers. However, one server basically locks up (out of memory). It is on the older kernel for the moment. What's weird is that it's not happening on all of them. Dmesg shows:

Restarting an OpenVZ VPS on this server hangs with only a mount active; after that, nothing works to stop or unmount that VPS. Only rebooting the node fixes the problem.
When I boot with the previous kernel, restart or stop/start works great.

I discovered this problem while running a script to find certain files and change their ownership, and saw that I had memory allocation errors.

I don't have more info to provide here, unfortunately. As a resolution I migrated all VMs off the server to a system without issues. It doesn't appear to be hardware related - all tests pass. I may test further with a BIOS upgrade in the future. The software really matched the other OpenVZ 7 servers.

I may not have been clear. The entire server was crashing, similar to what 'ikbenut' said, not just a single VM. I saw the same VMs running but could not enter them, with mounts still active. Had to do a full restart. The common error seems to be 'SLUB: Unable to allocate memory on node -1'. There was plenty of free memory as well, and after boot no swap space was used in any way before it locked up again.

I have a similar problem. After upgrading 2 nodes to the latest kernel (3.10.0-693.11.6.vz7.40.4), memory problems occurred on both. After a reboot the server runs fine for 30-60 minutes. After that, one or more containers crash, and after a while the whole node is inaccessible (I cannot log in; only a hard reboot is possible). This is from /var/log/messages:

The funny thing is that after rebooting into the older kernel, these errors are still there! But before the update to the latest kernel (and utilities), none of this happened. So I believe it has to be linked to this (it is happening on two physical nodes right after the upgrade).

After rebooting into the older kernel, the SLUB errors seem to reappear, but less often than with the latest kernel. So it does seem to be related to the update; I think some other kernel settings are changed during the latest kernel install and are still active after booting an older kernel.
Before I did the kernel update, everything was fine and there were no errors whatsoever. The scripts that are causing the errors are a quota tally script from DirectAdmin and a homemade script to set ownership of files in the home folders of users.
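The ownership script itself was not shared in the thread; purely as an illustration of the kind of workload that triggered the errors, a hypothetical bulk find/chown pass over home directories might look like this (HOME_BASE and the directory-name-equals-username assumption are mine, not the poster's):

```shell
#!/bin/sh
# Hypothetical sketch, NOT the poster's actual script: reset ownership of
# everything under each user's home directory. Heavy find/chown passes like
# this were what surfaced the SLUB allocation errors on the affected kernel.
HOME_BASE=${HOME_BASE:-/home}   # assumed layout: /home/<username>/...

for USERDIR in "$HOME_BASE"/*/; do
    [ -d "$USERDIR" ] || continue
    OWNER=$(basename "$USERDIR")   # assumes dir name matches the account name
    # -xdev keeps the walk on one filesystem; chown runs in large batches.
    find "$USERDIR" -xdev -exec chown "$OWNER" {} +
done
```

Each `find ... -exec chown {} +` invocation batches many paths per chown call, so a run over large home trees generates a long burst of kernel allocations, which is consistent with the errors appearing during this script.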

Also, when I boot into the latest kernel I get the same issue martv describes. I can restart a VPS, but after about 15 minutes this no longer works: the VPS is mounted as a ploop device, but the VPS does not start.
I thought I'd let the server do its thing, but after 1 hour still nothing had happened and the whole server hung. Only a reboot helps!

After several days without problems, I must say that the solution proposed by khorenko worked. I disabled KMEM limits for all containers and now everything is stable again. I used this simple shell script:
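The poster's actual script is not included in this excerpt; a minimal sketch of what lifting the kernel-memory limit on every container might look like, assuming OpenVZ's vzlist/vzctl tools and run as root on the node, is:

```shell
#!/bin/sh
# Sketch only, not the poster's script: remove the kmemsize limit
# for every container listed by vzlist, persisting the change with --save.
for CT in $(vzlist -a -H -o ctid); do
    vzctl set "$CT" --kmemsize unlimited --save
done
```

`vzlist -a -H -o ctid` prints the bare container IDs (all containers, no header line), and `vzctl set ... --save` writes the new limit into the container's config so it survives restarts; containers may still need a restart for the change to take full effect.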