If you are already running lve-stats (for example, if you are using the cPanel LVE plugin), run:

$ yum update lve-stats

The package will also be updated automatically the next time your system runs a system-wide update.

The package installs lvestats-server. You can restart the server by running:

$ service lvestats restart

The package creates an SQLite database, /var/lve/lveinfo.db, that stores historical information about LVE usage. Up to two months of hourly data is stored for each client. Data for the last hour is stored at 5-minute intervals, and data for the past 10 minutes at 1-minute intervals.

LVE Stats updates /var/lve/info every few seconds. That info is used by the LVE Manager plugin.

The package consists of the lveinfo utility to query LVE usage, and lvechart, which allows you to chart usage for an individual LVE.

To query historical LVE info, the lveinfo command is provided. It is located at /usr/sbin/lveinfo:

# /usr/sbin/lveinfo [OPTIONS]
-h, --help : this help screen
-v, --version : version number
-d, --display-username : try to convert LVE id into username when possible
-f, --from= : run report from date and time in YYYY-MM-DD HH:MM format
if not present, the last 10 minutes are assumed
-t, --to= : run report up to date and time in YYYY-MM-DD HH:MM format
if not present, reports results up to now
-o, --order-by= : orders results by one of the following:
cpu_avg : average CPU usage
cpu_max : max CPU usage
mep_avg : average number of entry processes (concurrent connections)
mep_max : max number of entry processes (concurrent connections)
vmem_avg : average virtual memory usage
vmem_max : max virtual memory usage
pmem_avg : average physical memory usage
pmem_max : max physical memory usage
nproc_avg : average number of processes
nproc_max : max number of processes
io_avg : average IO usage
io_max : max IO usage
total_mem_faults : total number of out of virtual memory faults (deprecated since 0.8-6)
total_vmem_faults: total number of out of virtual memory faults (since 0.8-6)
total_pmem_faults: total number of out of physical memory faults (since 0.8-6)
total_mep_faults : total number of entry processes faults (deprecated since 0.8-6)
total_ep_faults : total number of entry processes faults (since 0.8-6)
total_nproc_faults: total number of number-of-processes faults (since 0.8-6)
any_faults : total number of any types of faults (since 0.8-6)
--id= : LVE id -- will display record only for that LVE id
-u, --user= : Use username instead of LVE id, and show only record for that user
-l, --limit= : max number of results to display, 10 by default
-c, --csv : display output in CSV format
-b, --by-usage : show LVEs with usage (averaged or max) within 90 percent of the limit
available values:
cpu_avg : average CPU usage
cpu_max : max CPU usage
mep_avg : average number of entry processes (concurrent connections)
ep_avg : average number of entry processes (since 0.8-6)
mep_max : max number of entry processes (concurrent connections)
ep_max : max number of entry processes (since 0.8-6)
mem_avg : average virtual memory usage
mem_max : max virtual memory usage
vmem_avg : average virtual memory usage
vmem_max : max virtual memory usage
pmem_avg : average physical memory usage
pmem_max : max physical memory usage
nproc_avg : average number of processes
nproc_max : max number of processes
io_avg : average IO usage
io_max : max IO usage
-p, --percentage : defines percentage for --by-usage option
-f, --by-fault : show LVEs which failed on max entry processes limit or memory limit
available values: mem, mep.
since 0.8-6 : vmem, pmem, ep, nproc
-r, --threshold : in combination with --by-fault, shows only LVEs with the number of faults above the specified threshold
--server_id : used in combination with centralized storage, to access info from any server
--show-all : full output (show all limits); by default (since 0.8-6) only columns for enabled limits are shown
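For example, to list the top 10 LVEs by average CPU usage for a one-week period, resolving LVE ids to usernames (the dates below are placeholders; all options used are documented above):

# /usr/sbin/lveinfo --from="2014-01-01 00:00" --to="2014-01-08 00:00" --order-by=cpu_avg --limit=10 --display-username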

OptimumCache is a de-duplicating file cache optimized specifically for shared hosting. A typical shared hosting server runs a number of sites using WordPress, Joomla, and other popular software. This usually means that there are hundreds of duplicate files that are constantly being read into the file cache, wasting precious disk IO operations as well as memory. OptimumCache creates a cache of such duplicated files and de-duplicates the file cache.

With OptimumCache, if a duplicate of an already loaded file is requested, the file gets loaded from the filesystem cache. By doing that, the system bypasses disk IO, significantly improving the speed of reading that file while lowering the load on the hard disk. Because the file is read from disk just once, it is cached by the filesystem cache just once, minimizing the number of duplicates in the filesystem cache and improving overall cache efficiency. This in turn reduces memory usage and decreases the number of disk operations, all while improving website response times.

OptimumCache must be provided with a list of directories in which to expect duplicate files:

# occtl --recursive --mark-dir /home

# occtl --recursive --mark-dir /home2 (for cPanel)

# occtl --recursive --mark-dir /var/www (for Plesk)

OptimumCache is going to index these directories, so the system load during this period (which can last from hours to days) might be up to twice as high. You can check the indexing job status with at -l at any time. See Marking directories.

Allocating Disk Space for OptimumCache:

By default, OptimumCache will attempt to set up a 5GB ploop (high-efficiency loopback disk) to be used for the cache.

The ploop image is located at /var/share/optimumcache/optimumcache.image and is mounted to /var/cache/optimumcache.

Allocating OptimumCache disk space for the ploop on a fast drive (such as an SSD) will provide an additional performance improvement, as more duplicated files will be loaded from the fast disk into memory.

Moving ploop image to another location:

# occtl --move-ploop /path/to/new/image/file [new size[KMGT]]

/path/to/new/image/file must be a file path plus file name, not a directory name.

Example:

# occtl --move-ploop /var/ssh/optimumcache.image

If the new size is not specified, the value from /etc/sysconfig/optimumcache is used. If /etc/sysconfig/optimumcache does not specify a ploop image size, the default of 5GB is used.
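For example, to move the image and set an explicit new size in one step (the path and size below are placeholders; the syntax is as shown above):

# occtl --move-ploop /path/to/new/image/file 10G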

Enabling and disabling ploop:

To turn on ploop:

# occtl --init-ploop

To disable ploop:

# occtl --disable-ploop

If the ploop image was mounted via /etc/fstab with OptimumCache 0.1-21 and earlier, you may consider removing this fstab entry in OptimumCache 0.2+, because since 0.2 the ploop is mounted automatically at service start.

If you prefer to leave that fstab mount point as is, you may see some warnings when you later move the ploop via occtl --move-ploop.

Resizing ploop:

To resize ploop:

# occtl --resize-ploop [new size[KMGT]]

A common reason for resizing the ploop is reacting to an OptimumCache syslog message like “OptimumCache recommends cache storage size to be at least … GB”.
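For example, to grow the cache storage to 10GB (the size below is a placeholder; use at least the value recommended in the syslog message):

# occtl --resize-ploop 10G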

Deleting ploop:

# occtl --delete-ploop

If this action cannot be completed due to the “Unable unmount ploop” issue, there is a workaround in the “Troubleshooting” section.

Q. I created/resized/moved/deleted ploop. Do I need to rerun the initial mark process?

On servers with a kernel prior to lve1.2.55, ploop will not be used (due to ploop-related issues in the kernel). Instead, cached files will be stored in /var/cache/optimumcache.

The cache will be cleaned (shrunk) by 20% once the partition on which OPTIMUMCACHE_MNT resides has only 10% free space left. You can change that by adjusting the PURGEAHEAD parameter in /etc/sysconfig/optimumcache and restarting the optimumcache service.
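For example, a minimal sketch, assuming PURGEAHEAD holds the purge percentage implied by the 20% default described above:

# echo "PURGEAHEAD=30" >> /etc/sysconfig/optimumcache
# service optimumcache restart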

The cache is cleaned by the /etc/cron.d/optimumcache_cron script (optimumcache_purge), which runs every minute.


Ignoring particular files & directories:

OptimumCache tracks files & directories that need to be cached. Once a file is modified, it will no longer be tracked by OptimumCache (as there is very little chance that it will have a duplicate). Yet all new files created in tracked directories are checked for duplicates.

Sometimes you might want to skip such checks for directories where a large number of temporary or new files are created that will not have duplicates, as the checks are expensive. Directories like mail queues and tmp directories should be ignored.

You can set a regexp mask for directories that you would like to ignore using:

$ occtl --add-skip-mask REGEX
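For example, to skip per-user tmp directories (the regex below is illustrative only; adjust it to your directory layout):

$ occtl --add-skip-mask '^/home/[^/]*/tmp'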

To list skip masks:

$ occtl --list-skip-mask

To remove skip mask:

$ occtl --remove-skip-mask ID|Tag

At the very end, for those changes to take effect:

$ occtl --check

occtl --check is as lengthy an operation as marking. Thus its usage has to be sensible, especially for a big /home (>500G).

Installing this package automatically starts system load statistics collection in the background. The cloudlinux-collectl package has no strict dependency on OptimumCache, so the statistics are collected regardless of whether OptimumCache is installed. The aim of having this package pre-installed is to allow comparing system performance before and after installing OptimumCache, and thus to measure OptimumCache's effectiveness.

Next comes the URLSTATTRACKER DETAIL block with URL response times in milliseconds. Negative values may pop up here unexpectedly. Negative numbers are not milliseconds; they signal the HTTP error response code for that specific URL. For instance, -403 signals a Forbidden HTTP error. A -500 value signals not only an Internal Server Error, but can also be displayed when there is a connection problem with the server specified by the URL.

cloudlinux-collectl has the collectl package as a dependency. The init.d script /etc/init.d/cloudlinux-collectl will automatically bring up another instance of collectl named collectl-optimumcache. This daemon instance has a separate config and does not interfere with any other running, pre-configured collectl daemon.

For OptimumCache versions prior to 0.2-11, uninstalling via the rpm package manager does not automatically remove the ploop image, because it is not always possible to unmount it properly due to a kernel dependency. If the ploop cannot be unmounted, the server will have to be rebooted and you will need to remove the ploop files manually.

For OptimumCache version 0.2-11 and later, the ploop image will be removed automatically during uninstall. If a ploop unmount issue prevents this, the ploop image cleanup will be scheduled for after the next server reboot.

If the OptimumCache uninstall process takes too long, please see the solution in the Troubleshooting section of this document.

For your changes to take effect, the server has to be rebooted. After the reboot, you may manually clean up the old ploop image file and the DiskDescriptor.xml file, which resides in the same directory as the old image.

A high IO problem was fixed in the latest version of OptimumCache (version 0.2-6). The fix eliminates superfluous fsync() calls in OptimumCache operations. To activate this fix in an existing installation, the flag NOIMMSYNC=1 has to be set manually in /etc/sysconfig/optimumcache.

To verify that this parameter is enabled in the config, set LOGLEVEL=2 and execute service optimumcache restart. You will see something like this:

If you detect that OptimumCache is overusing CPU, it is useful to check whether the checksum reindexing process is running. While reindexing is running, high CPU usage is OK, as it will drop back down after reindexing finishes.

Uninstalling OptimumCache takes time because of the file unmarking process, which lasts proportionally to the number of files previously marked for caching with occtl --mark-dir .... If the yum remove optimumcache command appears stuck and you have no time to wait for it to finish, or the IO load caused by unmarking files is undesirable, open another console terminal and invoke:

# occtl --cancel-pending-jobs

This command will cancel the unmark operation being run by yum under the hood, so that the yum uninstall transaction will complete very soon.

CPU limits are set by the CPU and NCPU parameters. CPU specifies the percentage of the server's total CPU power available to the LVE. NCPU specifies the number of cores available to the LVE. The smaller of the two is used to define how much CPU power will be accessible to the customer.

Cores Per Server   CPU Limit   NCPU Limit   Real limit
1                  25%         1            25% of 1 core
2                  25%         1            50% of 1 core
2                  25%         2            50% of 1 core
4                  25%         1            100% of 1 core (full core)
4                  25%         2            1 core
4                  50%         1            1 core
4                  50%         2            2 cores
8                  25%         1            1 core
8                  25%         2            2 cores
8                  50%         2            2 cores
8                  50%         3            3 cores
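A compact way to read the table: the CPU limit is first converted into cores (the CPU percentage multiplied by the number of cores on the server), and the smaller of that value and the NCPU limit wins:

Real limit = min(CPU% x Cores Per Server, NCPU)

For example, with 4 cores, CPU=25% and NCPU=2: 25% x 4 = 1 core, and min(1 core, 2 cores) = 1 core.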

When a user hits the CPU limit, the processes within that LVE are slowed down. For example, if you set the CPU limit to 10% and processes inside the LVE want to use more than 10%, they will be throttled (put to sleep) to make sure they don't use more than 10%. In reality, processes simply don't get CPU time above the limit, and throttling happens much more frequently than once per second, but the end result is that processes are slowed down so that their usage never exceeds the CPU limit set.