Replication is working correctly, but we are seeing that TimesTen is steadily accumulating transaction log files on disk.
Can somebody please explain and confirm whether the logs generated are normal, or whether they indicate a problem?

Also, I have another contextual question.
A lot of ttJdbc-*.so files are being created under /tmp on Linux. What are these files, and what could be causing them? They might eat up the file descriptor limit on the node.

I don't know what those ttJdbc-*.so files are, nor why they are being created. Can you provide an 'ls -l' of the files, and also show what the 'file' command says about one of them?

The DSN definition you provided does not look like the full definition; many attributes are missing. Is this really the definition you have? Also, I see that you are storing the database transaction log files within the product installation tree. This is a very bad idea. An uninstall or upgrade of the TimesTen software may well remove those files without warning, rendering your database unusable and unrecoverable (unless you have a backup). NEVER store database files within the product install tree.
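As an illustration of the point above, the datastore and log files can be placed on filesystems entirely outside the install tree. The paths below are hypothetical, not the poster's actual layout:

```
[TT]
Driver=/home/oracle/product/11.2.2.4/TimesTen/tt11224/lib/libtten.so
# Hypothetical dedicated filesystems, outside the product install tree,
# so an uninstall/upgrade of TimesTen cannot touch the database files.
DataStore=/ttdata/datastores/TT
LogDir=/ttlogs/TT
```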

The issue here, I think, is that you have 'over-configured' replication. You have 7 datastores involved in the replication scheme, and you say that you have specified ReplicationParallelism=12 (though the DSN definition does not show this). This means that each machine will be running 144 replication threads (12 senders and 12 receivers for each of its 6 peer stores). That's a lot of threads to support a replicated throughput of 30 TPS. For that level of throughput you don't need any parallelism, and I would suggest reducing the setting to 1 (unfortunately, you will have to drop and re-create the database to change this). Also, a log buffer of 192 MB is quite small, especially for a replicated system. You should try increasing it to 256 MB or even 512 MB. Then test again and see how things behave.
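The thread-count arithmetic above can be sketched as follows (a back-of-the-envelope model, not a TimesTen API; with 7 datastores each node has 6 peers):

```python
# Rough per-node replication thread count: each peer connection gets
# `parallelism` sender threads plus `parallelism` receiver threads.
def repl_threads(peers: int, parallelism: int) -> int:
    return peers * 2 * parallelism

print(repl_threads(peers=6, parallelism=12))  # 144 with the current setting
print(repl_threads(peers=6, parallelism=1))   # 12 with ReplicationParallelism=1
```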

Note that if any of the peer stores becomes unavailable, TimesTen will accumulate logs, as the log data is needed to resync the unavailable peer(s) when they become available again. You can prevent this by specifying the FAILTHRESHOLD option on each STORE clause. In that case, once the threshold is exceeded that peer is marked as 'failed' and must then be recovered/resynced via a full 'duplicate' from an authoritative copy.
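For illustration, a STORE clause with a failure threshold might look like the sketch below. The scheme, store, and host names are hypothetical; the threshold is expressed as a number of transaction log files:

```sql
-- Sketch only: names are made up, adapt to your actual scheme.
-- Once a peer falls more than 100 log files behind, it is marked FAILED
-- (so the master stops holding logs for it) and must then be re-created
-- with a full duplicate (ttRepAdmin -duplicate) from a live copy.
CREATE REPLICATION myscheme
  ELEMENT e1 DATASTORE
    MASTER tt_ds ON "host1"
    SUBSCRIBER tt_ds ON "host2"
  STORE tt_ds ON "host1" FAILTHRESHOLD 100;
```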

[TT]
Driver=/home/oracle/product/11.2.2.4/TimesTen/tt11224/lib/libtten.so
DataStore=/home/oracle/product/11.2.2/TimesTen/tt/datastores/TT
LogDir=/home/oracle/product/11.2.2.4/TimesTen/tt11224/logs
DatabaseCharacterSet=AL32UTF8
#Set to 0, which enables parallel propagation
ReplicationApplyOrdering=0
#Should be no greater than half the value of the LogBufParallelism attribute.
ReplicationParallelism=12
# Should be equal to the number of CPUs or cores present in the machine.
LogBufParallelism=24
# Should be 8 * LogBufParallelism; a larger value helps parallelism.
LogBufMB=192
# Should be >= LogBufMB
LogFileSize=192
#0 - Write data to the transaction log files using the previously used value.
#1 (default) - Write data to transaction log files using buffered writes and use explicit sync operations as needed to sync log data to disk (for example with durable commits).
#2 - Write data to transaction log files using synchronous writes such that explicit sync operations are not needed.
LogFlushMethod=2
#The permanent memory region - this is where your data is stored.
PermSize = 500
#If left unspecified, its value is determined from PermSize as follows: TempSize = 40 MB + ceiling(PermSize / 8 MB)
TempSize=103
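The default TempSize formula quoted in the DSN comment can be checked directly. For this DSN (PermSize = 500 MB) it yields exactly the configured value:

```python
import math

# Default TempSize derivation from the DSN comment:
#   TempSize = 40 MB + ceiling(PermSize / 8 MB)
def default_tempsize_mb(permsize_mb: int) -> int:
    return 40 + math.ceil(permsize_mb / 8)

print(default_tempsize_mb(500))  # 103, matching TempSize=103 above
```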

You mentioned that replication is 'over-configured' for the load that we are currently testing. I think we are fine with the 'over-configured' settings provided they are not wrong.

Our main concern is the transaction log files accumulating and filling the disk space, which you said is due to the unavailability of a peer and which can be addressed using the FAILTHRESHOLD option.

I'm going to try that. Also, I will correct the issue you pointed out regarding the log directory.

I would contend that the over-configured parallel replication is 'wrong', or at least inadvisable. There are not enough cores in the machine to properly service all of these threads, and under heavier load you will just be adding unnecessary contention and synchronisation between the threads. This is overhead that brings no benefit and may actually hurt.

When tuning LogBufMB, the objective is that under normal conditions (all peers available and replication fully operational) the values of SYS.MONITOR.LOG_BUFFER_WAITS and SYS.MONITOR.LOG_FS_READS are zero and stay at zero. It is acceptable to see an occasional small increment under heavy load, but any significant or increasing non-zero values for these metrics indicate a performance issue.
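These counters can be checked from ttIsql, for example:

```sql
-- Both counters should be zero, and stay zero, while all peers are
-- available and replication is keeping up.
SELECT log_buffer_waits, log_fs_reads FROM sys.monitor;
```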