Okay, so I’ve been running a VPS for a little while now, but I recently got a message telling me it had to be restarted due to using too much memory, so I’m looking at ways to trim this down.

Below are the options available to me, but I’m not sure which ones I should try (I don’t want to just change things at random), so I’d appreciate answers to a few questions and/or advice on which ones should work best.

Consolidating User Accounts:
When I transitioned to my VPS I initially only moved one site, which required me to create a new user account. I’ve since moved the rest of my sites across to the VPS, but this means I now have two user accounts.

If I were to make sure that all sites were on the same account, would that save me much RAM? More importantly, would switching user accounts on my busiest and largest site result in significant downtime? It has a large number of files and can generate a lot of traffic.

Disabling mod_php:
The PS Optimisation wiki page states: “If all domains are set to use FCGI or CGI for the PHP mode, you may safely deactivate mod_php in the PS configuration to save a good deal of memory”. However, the mod_php option on the VPS configuration page states: “This will save a significant amount of memory if your site serves a lot of static content. PHP-intensive sites may benefit more from setting all domains to use mod_php in Manage Domains and enabling a PHP cache”. So which is it, exactly?

My busiest domain is basically one large phpBB forum, with most of its guest traffic offloaded onto CloudFlare using a custom cache. This means that most of the traffic that does make it to my server requires dynamic page generation, so do I need mod_php or not? Regarding the VPS configuration page’s note, I don’t even see a mod_php option when I go into Manage Domains, which is nice and confusing.
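Whichever mode ends up being right, it’s probably worth measuring rather than guessing. One rough way to compare is to sum the resident memory of the Apache pool versus the PHP worker pool before and after toggling mod_php. The process names below are assumptions (on Debian-based systems Apache is usually `apache2`, elsewhere `httpd`; CGI/FastCGI workers often show up as `php-cgi` or similar), so adjust to whatever `ps -eo comm` actually shows on your VPS:

```shell
# Sum resident memory (RSS) for every process with a given command name.
# Process names are guesses -- check `ps -eo comm` for the real ones.
total_rss_mb() {
  ps -eo rss=,comm= | awk -v name="$1" \
    '$2 == name { s += $1 } END { printf "%s: %.1f MB resident\n", name, s/1024 }'
}

total_rss_mb apache2   # the web server itself (includes mod_php, if loaded)
total_rss_mb php-cgi   # CGI/FastCGI PHP workers, separate from Apache
```

RSS double-counts pages shared between processes, so treat the numbers as an upper bound; what matters is the before/after trend when you change the PHP mode, not the absolute figures.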

Enabling XCache:
I know that this will increase RAM use per PHP process (or does it use shared memory? that part has confused me before), but is it likely to reduce RAM usage overall by serving up pages faster?

Like I say my main use is a phpBB forum, and average page generation time is around 0.06 to 0.12 seconds; any pages that take longer to generate (such as posting a new message) usually do so because they’re connecting to other sites for spam filtering, so a cache presumably won’t help those.
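For what it’s worth, my understanding is that an opcode cache like XCache keeps the compiled scripts in a shared-memory segment that all the PHP workers map, so the cache itself is paid for once rather than per process; each worker only adds a small bookkeeping overhead. The knob that matters is the cache size. A hedged php.ini sketch (the extension path and the 32M figure are illustrative, not a recommendation for your setup):

```ini
; Load XCache as a Zend extension -- the path varies by install.
zend_extension = /usr/lib/php5/xcache.so

[xcache]
; One shared opcode cache mapped by every PHP worker.
xcache.size  = 32M   ; shared segment, counted once rather than per process
xcache.count = 1     ; number of cache segments
; Variable cache left off here; the opcode cache is what helps phpBB.
xcache.var_size = 0
```

On a phpBB board most of the win is skipping recompilation of its many includes on every request; as you guessed, it won’t help the posts that spend their time waiting on external spam-filter lookups.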

Anyway, I’d appreciate any feedback on which options I should try (and why); while I know what most of them do in theory, I don’t really want to operate by trial and error on a live site.

That said, I’m also confused as to why my VPS was restarted at all; when I go into the Manage Resources page, my actual memory use rarely passes around 110MB, with the rest of my 500MB allowance eaten up by “Cached Memory”. What exactly is cached memory, and do I really need so much of it? My site does serve up a lot of files, but I’ve made an effort to ensure that most of these are served by CloudFlare, so the server should only rarely get requests, when CloudFlare needs a new file or needs to refresh an expired one. Is it possible to tweak how much cached memory there is? For example, on my MySQL VPS, even with cached memory the total usage is around 250MB out of the 300MB allowance, so there’s plenty of breathing room in case of any spikes.
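For context, “cached memory” on a Linux box is normally the kernel’s page cache: file data kept in otherwise-idle RAM so repeat reads skip the disk. The kernel hands it back automatically the moment a process needs the space, which is why it tends to fill whatever allowance you have. Assuming you have shell access to the VPS, you can see the split yourself:

```shell
# MemFree is truly idle RAM; Buffers + Cached is the reclaimable page cache.
# All figures in /proc/meminfo are in kB.
awk '/^MemFree:/ { free  = $2 }
     /^Buffers:/ { buf   = $2 }
     /^Cached:/  { cache = $2 }
     END { printf "free: %d MB, reclaimable cache: %d MB\n",
                  free/1024, (buf + cache)/1024 }' /proc/meminfo
```

DreamHost’s resource graph may count things differently, but if it is ordinary page cache it is normally harmless, which makes a restart triggered purely by cache growth surprising.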

For my particular setup I am using:
PHP 5.3 CGI (not FastCGI)
XCache not enabled
mod_php disabled

But everyone runs different things, so there is not really a secret mix of settings that will work for all situations. Just take it slow, track what different things you try for a few days, try something else for a few days, etc…

What type of site(s) do you run? Like I say, my main traffic hog is a phpBB forum, and so far FastCGI has been nice and fast; any particular reason you’ve gone for regular CGI instead?
Also, you mention you disabled mod_php; why did you decide to do that? Is there anything it’s actually needed for?

PSManager seems interesting, but I’m already right at the limit of what I can really afford, so even if I only scaled up from time to time it’d probably cost too much. Plus, like I say, cached memory fills up my allowance more than anything else, and generally prevents me from reducing resource use without a restart, so even if I did install it I’m not sure I’d actually be able to get resources back down again. That’s why I’m so confused about what cached memory is actually doing, as I never saw anything like it under regular shared hosting.

I run some WordPress sites, an instance of OwnCloud, and a database/web-interface for tracking clients.

The great thing about PSManager is that it allows you to keep your memory limit lower, and it will self-adjust if it needs more (meaning you can save money).

I stopped using FastCGI because some other people on this forum had mentioned that switching to plain CGI had reduced their memory usage for WordPress sites. After switching, I had the same result (less memory).

I did have mod_php enabled, but the control panel said that PHP 5.2 was required to use it. Then I started getting notices from DH that I should upgrade to PHP 5.3, because PHP 5.2 would soon be deprecated. Since switching, I haven’t seen much of a difference in memory usage, so I just left it.

I’m in a similar situation and similarly confused. I am running a number of custom PHP/MySQL sites with fairly slowly-changing dynamic content. My actual memory use rarely gets above 75MB, but the cache steadily climbs until either I hit the limit and get rebooted (and DH, change that message: I find it insulting and unprofessional, and it makes me angry enough that I want to change hosting just to get away from it), or I do a manual reboot so I can control the timing (most of my hits are from North America/Western Europe).

All the sites are under one user, all are PHP 5.3 FastCGI, all have the free CloudFlare (except one I just noticed had it off and have now changed), and all have PageSpeed enabled.

I find it frustrating because the Dreamhost response seems to be “we don’t know why cache memory grows, just buy more memory.”

Okay, so I’m giving a combination of things a try. I disabled mod_php with no visible ill-effects, and the memory it freed up seems to have given me the space to enable XCache, so far this seems to be reducing overall memory load by serving pages faster. I’ll have to keep an eye on my resource usage and Google Analytics to get a more complete picture though, as I do all my changes at off-peak times so it’s a bit hard to get an idea of what’s changed.

I’m also giving PSManager a try, and it seems to be set up okay. One thing, though: should I be concerned that it forced a memory resize (i.e. it triggered a server restart)? It seems to have settled for now and the site is running smoothly at PSManager’s chosen memory limit, but I’m not sure I like the idea that it’ll force a restart every time my memory usage drops. What if memory usage goes up at peak times and then drops later on; does that mean my server could end up restarting daily? Fortunately the next few weeks are usually relatively quiet, so I can probably put up with some minor issues to see how it works.
I did, however, find a similar issue every time I wanted to reduce memory usage on my VPS manually: cached memory just fills up every megabyte of unused space and seemingly refuses to give it up when a resize is requested. This is something that DreamHost really needs to fix somehow, maybe by stopping their memory cache service first, then resizing the VPS and restarting the cache, whatever it even does (I still have no idea).
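On a box you fully control, the usual way to hand the page cache back before a resize is the kernel’s drop_caches knob. It needs root, which a managed DreamHost VPS may not give you, so this is illustrative rather than something I’ve done there:

```shell
# Flush dirty pages to disk first, then ask the kernel to drop clean
# page cache, dentries and inodes. Needs root; harmless otherwise,
# since the cache simply refills as files are read again.
sync
echo 3 > /proc/sys/vm/drop_caches
```

Dropping the cache costs you some disk-read speed until it warms back up, which is presumably why nothing does it automatically before a resize.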

I’m probably going to consolidate my user accounts as well; the second user only uses around 20MB, but if that can be handled by the larger user’s PHP processes then it’s still potentially 20MB saved, so I figure I might as well. It seems likely to result in a longish interruption to my site (since the user I need to change owns the biggest site), so I’ll wait a while and give my users a heads-up first.

It also seems like a lot of effort for what may not be huge gains, as the majority of my overhead is from PHP processes rather than Apache itself, as far as I can tell. Have you found that nginx cuts overhead that significantly?

I run Lighttpd for testing on my main computer, and it’s delightfully lightweight, but when you start simulating the same number of active PHP sessions there doesn’t seem to be much in it anymore.

It’s my understanding that “…nginx does not read .htaccess files!”, according to the DH wiki. Therefore, sites running software like phpBB, ZenCart, and Drupal (see O.P.) may break upon switching to nginx.
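That matches my understanding: nginx ignores .htaccess entirely, so any rewrite rules have to be translated into the server configuration by hand. As a purely hypothetical example (the paths are made up), an Apache rule like `RewriteRule ^forum/(.*)$ /phpBB/$1 [L]` would become something along these lines in nginx:

```nginx
# Hypothetical translation of an .htaccess RewriteRule -- nginx never
# reads .htaccess, so rules must live in the server {} block instead.
location /forum/ {
    rewrite ^/forum/(.*)$ /phpBB/$1 last;
}
```

The practical problem for packages like phpBB, ZenCart, and Drupal is that they ship .htaccess files expecting Apache, so every rule needs translating (and re-translating after upgrades) before they behave correctly under nginx.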