Thanks GOOGLE for making this page a top 3 search result for "maxperm sybase ase".

Resolving Poor ASE Performance Due to AIX Default Virtual Memory Paging

ASE delivers its best performance when all of its memory resides in physical memory on the machine. If ASE's memory is paged out by the operating system's virtual memory subsystem, then ASE performance can suffer dramatically.

Poor performance due to memory paging on AIX is often reported as problems with ASE's checkpoint and/or housekeeper tasks. These tasks may be slow, appear hung, or encounter timeslice errors. Transaction dumps may also be affected, as ASE cannot properly execute a checkpoint. Paging particularly affects these tasks because they often process long lists of memory. If ASE's memory has been paged out, access to it may require a read from disk, resulting in poor performance. Any other ASE task that must traverse long lists of memory pages may be similarly affected, resulting in timeslice errors. Logical lock contention may also increase as tasks which hold locks take longer to run.
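As a quick check (a suggested diagnostic, not part of the original note), the standard AIX tools vmstat and lsps can confirm whether paging space is actually in use before you start tuning:

```shell
# Watch paging activity for one minute: 5-second intervals, 12 samples.
# Sustained nonzero values in the 'pi' (paged in) and 'po' (paged out)
# columns under the 'page' heading indicate computational pages, such as
# ASE's memory, being moved to and from paging space.
vmstat 5 12

# Show paging-space utilization; a steadily growing %Used column is
# another sign that working storage is being pushed out by file pages.
lsps -a
```

These commands only observe the system, so they are safe to run on a production host.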

While this is a potential problem on all platforms, certain characteristics of AIX make it more likely to occur on that platform. The default configuration of the AIX kernel allows the file system cache to consume up to 80% of the available physical memory pages. Depending on system demand, this may result in ASE memory pages being copied to the swap device. Sybase recommends tuning the AIX virtual memory subsystem to avoid this scenario.

The AIX values minperm and maxperm loosely control the ratio of page frames used for files versus those used for computational processes (such as ASE). Tuning these values requires determining the physical memory load of processes running on the host machine. maxperm should then be set so that file pages do not interfere with process pages.

For example, consider an AIX host that has 8 GB of physical memory and hosts two ASE servers. One ASE server is configured to use 3 GB of max memory and the other is configured to use 2 GB of max memory. Other applications on the host require a total of 1 GB of memory. Therefore the total memory requirement is 6 GB (3 GB + 2 GB + 1 GB). As 75% of this host's physical memory is needed for applications (6 GB out of 8 GB), maxperm should be set no higher than 25%.
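The arithmetic above can be sketched in a few lines of POSIX shell; the sizes are the hypothetical values from the example, not measurements from a real host:

```shell
# Hypothetical memory requirements from the example above, in GB.
phys=8          # physical memory on the host
ase1=3          # max memory of the first ASE server
ase2=2          # max memory of the second ASE server
other=1         # all other applications on the host

needed=$((ase1 + ase2 + other))       # total application demand: 6 GB
app_pct=$((needed * 100 / phys))      # 75% of physical memory
maxperm_pct=$((100 - app_pct))        # leave at most 25% for file pages

echo "applications need ${app_pct}%; set maxperm% no higher than ${maxperm_pct}%"
```

Substituting your own servers' max memory values and physical memory size gives the ceiling for maxperm on your host.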

In addition to configuring maxperm, Sybase highly recommends setting strict_maxperm to 1. When strict_maxperm is set to 0 (the default value), AIX may override the maxperm setting at its discretion. Setting strict_maxperm to 1 informs AIX that this is a hard limit.
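Applied to the worked example, the settings could be made with vmo as sketched below. The 25% figure comes from the example calculation and must be recomputed for your host, and maxclient% is lowered alongside maxperm% because it may not exceed it:

```shell
# Sketch only; run as root on AIX. -p applies the change to the current
# value and writes it to /etc/tunables/nextboot so it survives a reboot.
vmo -p -o maxperm%=25 -o strict_maxperm=1

# maxclient% must be less than or equal to maxperm%, so reduce it too.
vmo -p -o maxclient%=25
```
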

Kernel parameters:

Note: Value for tunable maxperm% must be greater than or equal to the value of tunable maxclient%

========================

maxclient%

Purpose:
Specifies the maximum percentage of RAM that can be used for caching client pages. Similar to maxperm% but cannot be bigger than maxperm%.

Values:
o Default: 80
o Range: 1 to 100%
o Type: Dynamic

Diagnosis:
If J2 file pages or NFS pages are causing working storage pages to get paged out, maxclient can be reduced.

Tuning:
Decrease the value of maxclient if paging out to paging space is occurring due to too many J2 client pages or NFS client pages in memory. Increasing the value can allow more J2 or NFS client pages to be in memory before page replacement starts.

Refer To: Miscellaneous I/O Tuning Parameters

========================

maxperm%

Purpose:
Specifies the point above which the page-stealing algorithm steals only file pages.

Values:
o Default: total number of memory frames * 0.8
o Range: 1 to 100
o Type: Dynamic

Diagnosis:
Monitor disk I/O with iostat n.

Tuning:
This value is expressed as a percentage of the total real-memory page frames in the system. Reducing this value may reduce or eliminate page replacement of working storage pages caused by a high number of file page accesses. Increasing this value may help NFS servers that are mostly read-only. For example, if some files are known to be read repetitively, and I/O rates do not decrease with time from startup, maxperm may be too low.

Refer To: Tuning VMM Page Replacement with the vmtune Command

========================

maxfree

Purpose:
Specifies the number of frames on the free list at which page-stealing is to stop.

Values:
o Default: 128
o Range: 16 to 204800
o Type: Dynamic

Diagnosis:
Observe free-list-size changes with vmstat n.

Tuning:
If vmstat n shows the free-list size frequently driven below minfree by application demands, increase maxfree to reduce calls to replenish the free list. Generally, keep maxfree - minfree equal to or less than 100. Setting the value too high causes page replacement to run for a longer period of time. The value must be at least 8 greater than minfree.

========================

maxpin%

Purpose:
Specifies the maximum percentage of real memory that can be pinned.

Values:
o Default: 80 percent
o Range: 1 to 99
o Type: Dynamic

Diagnosis:
Cannot pin memory, although free memory is available.

Tuning:
If this value is changed, the new value should ensure that at least 4 MB of real memory will be left unpinned for use by the kernel. The maxpin value must be greater than one and less than 100. Change this parameter only in extreme situations, such as maximum-load benchmarking.

========================

minfree

Purpose:
Specifies the minimum number of frames on the free list at which the VMM starts to steal pages to replenish the free list.

Values:
o Default: maxfree - 8
o Range: 8 to 204800
o Type: Dynamic

Diagnosis:
vmstat n

Tuning:
Page replacement occurs when the number of free frames reaches minfree. If processes are being delayed by page stealing, increase minfree to improve response time. The difference between minfree and maxfree should always be equal to or greater than maxpgahead.
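The two constraints quoted above for this pair of tunables (maxfree at least 8 greater than minfree, and the maxfree - minfree gap kept at or under 100) can be checked with a few lines of shell; the candidate values here are hypothetical:

```shell
# Hypothetical candidate values for the two tunables.
minfree=120
maxfree=200

# The gap must be at least 8 and, per the guidance above, at most 100.
gap=$((maxfree - minfree))
if [ "$gap" -ge 8 ] && [ "$gap" -le 100 ]; then
    echo "minfree=${minfree} maxfree=${maxfree}: gap of ${gap} is acceptable"
else
    echo "minfree=${minfree} maxfree=${maxfree}: gap of ${gap} violates the guidance"
fi
```
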

========================

strict_maxperm

Purpose:
If set to 1, the maxperm value will be a hard limit on how much of RAM can be used as a persistent file cache.

Values:
o Default: 0 (off)
o Range: 0 or 1
o Type: Dynamic

Diagnosis:
Excessive page outs to page space caused by too many file pages in RAM.

Tuning:
Set to 1 in order to make the maxperm value a hard limit (use in conjunction with the tuning of the maxperm parameter).

Refer To: Placing a Hard Limit on Persistent File Cache with strict_maxperm

vmo -a ; display
vmo -d ; set to default value
vmo -o ; display or set a tunable to new value

vmo -p ; When used in combination with -o, -d or -D, makes changes apply to both current and reboot values; that is, it turns on the updating of the /etc/tunables/nextboot file in addition to the updating of the current value. These combinations cannot be used on Reboot and Bosboot type parameters because their current value can't be changed.

Parameter types:
S = Static: cannot be changed
D = Dynamic: can be freely changed
B = Bosboot: can only be changed using bosboot and reboot
R = Reboot: can only be changed during reboot
C = Connect: changes are only effective for future socket connections
M = Mount: changes are only effective for future mountings
I = Incremental: can only be incremented