Tag Info

HA! I finally found the problem myself. It's more related to programming than server admin, but I decided to put the answer here anyway, because searching Google I found I'm not the only one with this kind of problem (and since Apache hangs, the first guess is that there's a problem with the server).
The issue is not with Apache, but with my WordPress. ...

If you use mod_wsgi daemon mode, it doesn't matter which Apache MPM you use, although it is suggested that on UNIX systems the worker MPM be used, unless you are stuck with also having to host PHP applications using mod_php, as some PHP extensions aren't thread-safe.
The suggestion that you have to have worker MPM in order to use mod_wsgi daemon mode is wrong. What is ...
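As a sketch, a minimal daemon-mode setup looks something like this (the process group name, process/thread counts, and paths are placeholders, not from the original answer):

```apache
# Hypothetical daemon-mode configuration; adjust names and paths to your app.
WSGIDaemonProcess example processes=2 threads=15
WSGIProcessGroup example
WSGIScriptAlias / /srv/example/app.wsgi
```

With `WSGIDaemonProcess`/`WSGIProcessGroup` in place, the application runs in its own processes regardless of which MPM Apache itself uses.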

If your VE is number 101, then use the following to set it to 2 CPUs (change the numbers accordingly):
vzctl set 101 --cpus 2 --save
No restart of the VE is required. The --save flag makes the change persist when the VE is rebooted. See vzctl --help for other resources that can be set.
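To verify the change took effect, one option (a sketch, assuming the container is running) is to count the processors visible from inside the VE:

```shell
# Hypothetical check: count the CPUs container 101 can see
vzctl exec 101 grep -c ^processor /proc/cpuinfo
```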

They are pretty dramatically different technologies. Xen provides full virtualization and varying degrees of paravirtualization. OpenVZ, on the other hand, uses a container model, without any hardware or system virtualization.
OpenVZ is more efficient, from a memory usage perspective, than Xen, because the host kernel is shared across all guests. Xen ...

simfs is not an actual filesystem; it's a map to a directory on the host (by default /vz/private/<veid>). To check the filesystem, you have to check the host filesystem from the host, which also means you have to bring down every container on the host. If you believe it's necessary to check the filesystem, schedule a maintenance period and notify all ...
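The maintenance procedure, sketched under the assumption of the default /vz layout (the container IDs and the block device are placeholders):

```shell
# Hypothetical maintenance window -- adjust IDs and device to your host
vzctl stop 101 && vzctl stop 102       # stop every container on the host
umount /vz                             # unmount the filesystem backing /vz
fsck -f /dev/sdb1                      # check the underlying host filesystem
mount /vz && vzctl start 101 && vzctl start 102
```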

With the update of vzctl from 4.6 to 4.7, the nf_conntrack setting was changed to be disabled by default. (https://openvz.org/Download/vzctl/4.7/changes)
Corresponding commit message:
...
Disable conntrack for VE0 by default
IP conntrack functionality has some negative impact on venet performance (up to about 10%), so they better be ...
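If you need conntrack on the host (VE0) again, my understanding from the changelog is that it is controlled by a module parameter; this is an assumption, so verify the parameter name against your kernel before relying on it:

```shell
# Assumed module option based on the vzctl 4.7 changelog -- verify locally
echo 'options nf_conntrack ip_conntrack_disable_ve0=0' \
    > /etc/modprobe.d/openvz.conf
# A reboot (or module reload) is needed for the setting to take effect.
```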

OpenVZ is great at letting you share directories without the need for Samba or NFS overhead.
To see how it works do a bind mount to root (not private) when the container is running:
mount --bind /vz/private/109/common-stuff /vz/root/108/common-stuff
To make the share persistent over container reboots:
Put Script A into /etc/vz/conf/108.mount
Run chmod ...
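The script itself (truncated above) would follow the usual /etc/vz/conf/&lt;veid&gt;.mount convention; this is a sketch of that convention, not the original script, and the common-stuff path is taken from the example above:

```shell
#!/bin/bash
# Sketch of /etc/vz/conf/108.mount -- run by vzctl when container 108 starts.
# vzctl exports VE_CONFFILE; sourcing the configs provides VE_ROOT.
source /etc/vz/vz.conf
source "${VE_CONFFILE}"
mount --bind /vz/private/109/common-stuff "${VE_ROOT}/common-stuff"
```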

That is definitely the error you get when APC runs out of memory. When I (re)build servers, I often forget to increase this value to 128M (suitable for my application), and that is the exact error you see.
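For reference, the APC setting in question is apc.shm_size (the 128M value is from the answer above; tune it to your application, and note that some older APC versions expect a bare number of megabytes rather than a `128M`-style suffix):

```ini
; e.g. /etc/php.d/apc.ini -- raise APC's shared memory segment
apc.shm_size = 128M
```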

One major difference between Xen and OpenVZ is that with Xen, there is no overselling.
When you get a Xen VPS with 512M RAM, you get 512M RAM.
With OpenVZ it's all kinda smoke and mirrors. The host might claim "Guaranteed RAM: 512M" and "Burstable RAM: 1G", but in reality there's no way to guarantee anything with OpenVZ. Depending on what other VPS ...

Burst memory is essentially memory that you can use if the host node has memory available and you have exceeded the memory guaranteed to your container. This is a flawed system because applications do not consult the OpenVZ beancounters: your VPS thinks it has more memory than the host node actually guarantees you.
For example, if ...
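The beancounters the answer refers to live in /proc/user_beancounters inside the container. As a sketch, here is how one might read the held memory against the guaranteed barrier; the sample text is a hypothetical excerpt (the real file has more columns and resources), and the 131072-page barrier corresponds to the 512M guarantee from the example above:

```python
# Parse a (hypothetical) excerpt of /proc/user_beancounters to compare
# memory actually held against the container's guaranteed barrier.
SAMPLE = """\
       uid  resource      held  maxheld   barrier     limit  failcnt
       101: privvmpages  60000    65000    131072    139264        0
            physpages    30000    32000         0 2147483647       0
"""

def parse_beancounters(text):
    """Return {resource: (held, barrier, limit)} from beancounter lines."""
    counters = {}
    for line in text.splitlines()[1:]:          # skip the header row
        fields = line.split()
        if fields and fields[0].endswith(":"):  # first data row carries the uid
            fields = fields[1:]
        name, held, _maxheld, barrier, limit = fields[0], *map(int, fields[1:5])
        counters[name] = (held, barrier, limit)
    return counters

bc = parse_beancounters(SAMPLE)
held, barrier, _limit = bc["privvmpages"]
# Pages are 4 KiB, so 131072 pages = 512 MiB guaranteed.
print(f"held {held * 4 // 1024} MiB of a {barrier * 4 // 1024} MiB guarantee")
```

A real application only sees the burstable figure, which is exactly the mismatch the answer describes.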

You want to install grub. Without it, how are you going to boot Ubuntu after the upgrade? You shouldn't treat a virtual server any differently than a physical one, both need a bootloader to bootstrap the OS at boot.

Rsyslog has a tendency to use 100%+ CPU on OpenVZ. I run the following commands via SSH to fix the problem:
service rsyslog stop
sed -i -e 's/^\$ModLoad imklog/#\$ModLoad imklog/g' /etc/rsyslog.conf
service rsyslog start
Or, see here for a workaround.

OpenVZ isn't really virtualization; it's containerization. Each container sees the system that it's on as its own. To control how much CPU time each VE can get, you have to assign each VE CPU credits. This page goes into how to set the limits on each VE.
Edit: Just found this in the vzctl man page.
--cpulimit num[%] Limit of CPU usage for the VE, in per ...
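For example, capping a container at half of one CPU (the container ID is a placeholder):

```shell
vzctl set 101 --cpulimit 50 --save   # 50% of one CPU, persisted across reboots
```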

Some parts of OpenVZ were merged into the official kernel, but not enough to run OpenVZ. So you do need a separately patched kernel.
The best way to install an OpenVZ is through a package manager. On CentOS / RedHat variants you probably will need to add some external repositories. On Debian stable (6.0) the OpenVZ kernel image is in the official repository and ...
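On Debian 6.0, the install is roughly the following (package names as of squeeze; verify them against your release):

```shell
apt-get install linux-image-openvz-amd64 vzctl vzquota
```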

The best practice is to use whatever kernel comes through your distribution's channels.
But if you're compiling your own, you certainly can use the old .config file as the basis for your new config. The tricky part is all the modules added between 2.6.27 and 2.6.32. The way I see it, you have two options.
Option 1: Do all the research
What's new in each ...
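Either way, the usual starting point is to seed the new source tree with the old config and let the kernel prompt only for symbols that are new (the config filename here is a placeholder):

```shell
# Reuse the 2.6.27 config as the basis for the 2.6.32 build
cp /boot/config-2.6.27-custom .config   # placeholder path/name
make oldconfig                          # prompts only for new options
```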

To the best of my knowledge, at the time of this writing there were still critical issues with /proc filtering. They ought to be addressed in Linux kernel 3.6 or later.
Since I'm facing the same problem as you I've done some investigation and I'm not yet convinced that LXC is an alternative to Linux VServer.
If you decide not to switch to LXC have a look at the ...

Docker is VERY lightweight compared to a VM, and a VM system should function just fine running containers. Each container essentially runs as an isolated system, so it's very good for isolation from a system-stability perspective. Based on your description, it sounds like the ideal use case for Docker. If you do experiment with Docker, make sure you use ...

Basically, there is no such thing as "Debian" virtualization. There is a Linux kernel hypervisor called KVM. A number of projects have arisen around it; one of them, a rather well-supported "install & run" package, is Proxmox VE. It includes web-based management and even supports more sophisticated features like live migration of ...

You really need to get more details from your hosting provider about how the spams were sent. A list of the services running on your machine would be helpful too. A few possibilities:
You might be running an SMTP server without knowing it. To deactivate it, uninstall whatever package is acting as an SMTP server (popular options are postfix, exim, ...
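A quick way to check for that (a sketch; flag availability varies slightly between systems, and the command needs root to show process names) is to look for anything listening on the SMTP port:

```shell
# Show any process listening on port 25
netstat -tlnp | grep ':25 '
```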

I prefer nginx over lighttpd for the following reasons:
Easier and more sane config format.
More and better-documented modules
It is actively maintained and developed (lighttpd's last major release was 2 years ago).
The only pros of lighttpd over nginx:
1. Talking to backends via HTTP/1.1
2. Automatic spawning of FastCGI backends

I'm using OpenVZ on my servers (I used to run Xen before). It's not real virtualization like Xen or KVM; OpenVZ runs multiple isolated instances (containers).
It's much easier to maintain, and performance overhead is near zero.
If you want to use OpenVZ with Ubuntu, use 8.04 LTS, because there is an official OpenVZ kernel image for it.

I am a fan of OpenVZ. I am also a Proxmox user.
OpenVZ is "just" a hardened chroot (with fine-grained control and networking). The kernel is the same in the "containers" and on the host itself.
OpenVZ is lightweight because of its design. It works perfectly fine as long as you need Linux guests only. If your hardware supports hardware virtualization, you can use ...

Can you restart the network service? (on the openvz host.. etc..)
Yes
What will happen?
Best-case scenario: nothing (nothing horribly detrimental, that is).
Worst Case: Loss of networking on the host. Loss of any connectivity to the host. Loss of any connectivity to the VMs. Critical Application failures on the VMs due to network connectivity issues. Corruption of ...