We are facing a performance issue. After checking from the OS side, we found that two disks are 100% busy, and the error log shows that the jfs2log is full; the error is related to this filesystem and VG. For that reason I am planning to increase the jfs2log. Will that resolve the issue? And do we need downtime, or is there any other impact?

The JFS2 log can be a performance bottleneck if the filesystem sees many
i-node changes per second, i.e. many file creations, changes, and deletions.
Normally the JFS2 log logical volume is a single PP in size, which is enough
for most workloads. If the i-node change rate is very high, logging will hurt
I/O performance, and the log LV should be designed specially, e.g. as a
small striped logical volume dedicated to the log. In any case, you can
increase the size of a logical volume dynamically.
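As a rough sketch of what a dedicated log LV looks like, assuming a volume group named datavg, a free disk hdisk2, and a filesystem /test_app (all hypothetical names):

```shell
# Create a 1-PP logical volume of type jfs2log in datavg, placed on hdisk2
mklv -t jfs2log -y fslog01 datavg 1 hdisk2

# Format it as a JFS2 log device (logform asks for confirmation)
echo y | logform /dev/fslog01

# Point the filesystem at the new log; it must be unmounted for the change
umount /test_app
chfs -a log=/dev/fslog01 /test_app
mount /test_app
```

The same idea applies if you want a striped log LV; you would create it with mklv striping options across several disks before running logform.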

Thanks for your response. Right now the apps team confirms that they are still facing a performance issue. When I checked from my side, CPU is 50% used and paging space is 30% used. After checking the error log, I found that the jfs2log is full, and the error points at one filesystem.

And through the nmon analyser I found that the two disks backing that particular filesystem are 100% busy.

Could you please tell me how to resolve this issue? Will it be resolved if I increase the jfs2log, or if I increase the size of the filesystem? At present it is 84% full.

If the error log states that the jfs2log is full, add a couple more PPs to
it; that is harmless.
In normal cases, however, increasing the filesystem size does not help,
unless you add more disks and re-stripe the filesystem, which is not
feasible in most cases. If the disks under a filesystem are 100% busy, the
layout may need to be redesigned: more disk spindles, striping, HBA load
balancing, and so on.

As a first step, increase the jfs2log and let us know the result. Also,
would you please explain what the busy filesystems contain (database files,
many small files, ...)? This is very important.

Actually, there is no direct command to show how much of a jfs2log is in use. Internally it is used in a round-robin fashion. It is formatted when it is first created, and adding partitions later has no effect, other than to waste space: the log has to be reformatted (with logform) before it can use any additional space added to it.

Here is an article that shows the procedure to increase a jfs2log, for both root and user filesystems.
*IF* you are having jfs2log related problems, specific error messages will appear in the error log. See the article.
A simple "filesystem full" error is not related to the jfs2log usage.
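To make the procedure concrete, here is a sketch of growing an existing jfs2log, assuming a log LV named loglv01 and a filesystem /test_app (both hypothetical; check /etc/filesystems for the real names on your system):

```shell
# Find which log device the filesystem uses (the "log =" line in its stanza)
grep -p /test_app /etc/filesystems

# Stop I/O and unmount every filesystem that uses this log
umount /test_app

# Add two more PPs to the log LV, then reformat it so the new space is used
extendlv loglv01 2
echo y | logform /dev/loglv01

mount /test_app
```

Note that every filesystem sharing that log device must be unmounted while logform runs, so a short outage window is needed for those filesystems.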

J2_FS_FULL means one or more JFS2 filesystems are full. If a filesystem is
88% occupied, 12% is free; but if an application needs more than that 12%
for a transaction-like operation, it treats the filesystem as full. Such a
clear message indicates you should increase the size of the filesystem or
free some space in it. There might (or might not) be a relation between
filesystem free space and the performance issue; it really depends on the
application's behavior.

Just increase the FS size, monitor the system again, and keep us posted.
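For reference, on JFS2 this can be done online, with no unmount or downtime; a sketch, assuming the filesystem is /test_app and the VG has free PPs:

```shell
# Check current usage and size
df -g /test_app

# Grow the filesystem by 1 GB while it stays mounted
chfs -a size=+1G /test_app
```

The `+` prefix adds to the current size; without it, chfs interprets the value as the new absolute size.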

The issue is still the same. After some further investigation, I found that a large number of aioserver processes are running on the server. I checked with the command below and I see 401 processes, measured at off-peak hours. Could this be causing the I/O issue, or is there some other reason?
ps -ek | grep aioserver | wc -l
401

We have 2 processors in the server. The AIO tunables are:

autoconfig  available  STATE to be configured at system restart  True
fastpath    enable     State of fast path                        True
kprocprio   39         Server PRIORITY                           True
maxreqs     8192       Maximum number of REQUESTS                True
maxservers  100        MAXIMUM number of servers per cpu         True
minservers  1          MINIMUM number of servers                 True
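For what it's worth, these settings can be inspected and changed with standard commands; a sketch (the maxservers value of 200 below is only illustrative, not a recommendation):

```shell
# Show the current AIO device settings
lsattr -El aio0

# Count the running aioserver kernel processes
ps -ek | grep -c aioserver

# Change maxservers; with -P the change is recorded in the ODM
# and takes effect at the next reboot
chdev -l aio0 -a maxservers=200 -P
```

Note that maxservers is per CPU, so the effective ceiling is maxservers times the number of CPUs; 401 aioservers on a 2-processor box is therefore worth a closer look against your limits.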

Hi Ramesh,
In the previous posts you have mentioned that only two disks are 100% busy.
How many disks are there in the volume group? Looks like the filesystem
layout is not efficient for such a workload.
- What is the disk type? Internal/SAN/...
- Do you have unused disks in the system? The busy filesystem can be spread
on them for better performance.
- Do you use striping?
- Do you use LVM mirroring?
- RAID type?
...

If the disks come from a DS4700, they are called LUNs, meaning there is a
mechanism (RAID) protecting the storage units you see in the operating
system. I believe your storage administrator could help you resolve the
performance issue, e.g. by presenting more disk spindles to your system.
In any case, if there are three disks in the VG, you, as the AIX admin, can
balance the I/O load across all three disks.

The first step in balancing the I/O load of a filesystem while the system is
running is "poor man's striping". It is called the "Inter-Physical Volume
Allocation Policy" in LVM terms, and it helps a lot on many systems.
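A minimal sketch of poor man's striping, assuming an LV named testlv in a VG named testvg (both hypothetical names):

```shell
# Set the inter-physical volume allocation policy to "maximum" (x),
# so the LV's partitions are spread across all disks in the VG
chlv -e x testlv

# Re-distribute the existing partitions according to the new policy;
# this runs while the system is up, but it is I/O intensive
reorgvg testvg testlv
```

Unlike true striping, this spreads whole physical partitions rather than fine-grained stripes, so the gain depends on the access pattern, but it requires no downtime and no filesystem rebuild.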

I monitored the apps after increasing the filesystem, but the issue is still the same. So I am planning to allocate new small LUNs to that volume group, mirror both LVs onto them, and later remove the copies from the old disks. Could you please explain how to mirror an LV in an HACMP 5.4 cluster?

/test_app and /test_db were both created across 2 hdisks at build time, but the application and database data ended up on one single disk. That disk shows 95% busy, sometimes 100% busy. Later (a few months back), 2 more disks were added in order to grow the filesystems, so the volume group now contains 3 disks in total.

Now I am planning to allocate 6 LUNs to that volume group and spread /test_app across 3 of them and /test_db across the remaining 3. But I am not sure how to mirror an LV onto particular disks and then complete the mirroring. How do I remove the copy from the old disks?

Does this activity require downtime? Please help me with this.
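For the general (non-cluster) case, the mirror-then-remove sequence looks roughly like this; the LV and hdisk names below are examples only, and in an HACMP cluster you would normally drive these changes through C-SPOC (smitty cl_admin) so they are propagated to all nodes:

```shell
# Add a second copy of the LV, placed on the new disks
mklvcopy testapplv 2 hdisk4 hdisk5 hdisk6

# Synchronize the new copy; the filesystem can stay mounted,
# but the sync is I/O intensive
syncvg -l testapplv

# Once the copies are in sync, drop the copy on the old disk
rmlvcopy testapplv 1 hdisk0

# Alternatively, migratepv moves partitions disk-to-disk directly:
# migratepv -l testapplv hdisk0 hdisk4
```

Done this way, the data stays online throughout, so no downtime is required for the copy itself; verify your mirror/quorum settings and cluster resource definitions before removing the old copies.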