We are currently doing performance testing of TimesTen for 2000 users. The application is a Java project and is deployed on a WebLogic server. We are facing very poor performance with TimesTen: the response time for the same code with the Oracle DB is 0.116 seconds, whereas with TimesTen it is about 9 seconds.

We have tried both the client-server connection and the direct connection.
The SQL query is just a select statement that gets the count of records from the database. Our requirement is read-only and we are not writing anything into TimesTen. We are caching data from the Oracle DB in TimesTen tables and running our query on them.

The details of the environment and the TimesTen database are as follows.

1.) TimesTen is installed on a RHEL 5 64-bit machine. The output of ttVersion for it is

5.) The WebLogic installed is version 10.3.4. There are three managed servers in it and they are in a cluster. The TimesTen data source is created in WebLogic. The application uses the JNDI name to connect to the data source via the JDBC driver.

6.) Cache groups

We have 7 cache groups created for the 7 tables. Indexes have also been created on the tables, matching those in the source Oracle DB from which the data is cached.

2. You really need MemoryLock=4 (better still, huge pages configured and used) for a database of this size. Please check the documentation and try to at least enable MemoryLock=4.
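For reference, MemoryLock is a first-connection attribute set in the DSN definition. A minimal sys.odbc.ini sketch (the Driver and DataStore paths below are illustrative only; keep your existing values and other attributes):

```ini
[TTEAG]
; Illustrative paths -- keep your real Driver/DataStore settings.
Driver=/opt/TimesTen/lib/libtten.so
DataStore=/var/timesten/TTEAG
; Lock the entire database in physical memory so it cannot be paged out.
MemoryLock=4
```

The attribute is read when the database is loaded into memory, so the database needs to be unloaded and reloaded for the change to take effect.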

3. Your timing methodology is invalid. It is invalid for Oracle DB (since it includes database connect/disconnect time), but it is totally bogus for TimesTen. Unless you have changed the default ramPolicy, the default behaviour for TimesTen is to load the database into memory when something connects and unload it when the last connection closes. So it is quite possible that your 'query time' includes a lot of time for loading/unloading the database. To avoid this you can manually load the database into memory before you start your tests as follows:

ttAdmin -ramPolicy manual TTEAG

ttAdmin -ramLoad TTEAG

If you do this then you need to be sure to properly shutdown the database when you have finished with it...

ttAdmin -ramUnload TTEAG

If you want to set the behaviour back to the default:

ttAdmin -ramPolicy inUse TTEAG

Note that even with the database pre-loaded, connect/disconnect is very expensive. If you are running short queries like this then you will lose most of the benefit of TimesTen if you open and close a connection for each query. You should change the application to use long-lived persistent connections, and to prepare each statement just once and then execute it many times using parameterised inputs. Of course this will bring big benefits in Oracle as well as in TimesTen.
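The pattern described above can be sketched in JDBC as follows. The table and column names are invented for illustration, and in the real application the Connection would come from the WebLogic data source rather than DriverManager:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CountQuery {

    // Convert a nanosecond interval to microseconds for reporting.
    static long toMicros(long nanos) {
        return nanos / 1_000;
    }

    // Execute the already-prepared statement with a new parameter value.
    static long countForRegion(PreparedStatement ps, int regionId) throws SQLException {
        ps.setInt(1, regionId);
        try (ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getLong(1);
        }
    }

    public static void main(String[] args) throws SQLException {
        if (args.length < 1) {
            System.out.println("usage: CountQuery <jdbc-url>");
            return;
        }
        // One long-lived connection for the whole run, not one per query,
        // and the statement is prepared once, then executed many times.
        try (Connection conn = DriverManager.getConnection(args[0]);
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT COUNT(*) FROM cached_table WHERE region_id = ?")) {
            for (int region = 1; region <= 10; region++) {
                long t0 = System.nanoTime();
                long count = countForRegion(ps, region);
                System.out.println("region " + region + ": " + count
                        + " rows in " + toMicros(System.nanoTime() - t0) + " us");
            }
        }
    }
}
```

Timing only the execute (not the connect) this way also gives a fair comparison between TimesTen and Oracle.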

Please apply these changes and then report back on how the timings look. It is likely there is still some query plan optimisation to be done, but until you have optimiser stats in place and we can look at the 'real' plans it is hard to say much.

After this setting we have not received any further improvement in the performance.

3.) RamPolicy for the TimesTen database.

Currently we have the ramPolicy set to inUse. However, we have a data source configured in the WebLogic server which maintains a connection pool, so there are always open connections to the TimesTen database. In our app code we do getConnection() from this data source. What we think is that the database is not unloaded from memory as long as the WebLogic server is up.

However, the performance of the Oracle database is still better, at about 0.01 seconds. Please suggest whether further changes are required to improve the performance. We are looking for results at par with or better than Oracle's response time, i.e. 0.01 seconds.

MergeJoins are often not so good for response time. In ttIsql you can use an optimiser hint (trymergejoin 0) to ask the optimiser to use a nested loop join instead. Note that optimiser hints have transaction scope (commit or rollback resets all hints) so you have to disable autocommit (autocommit 0) in order to use hints. If this helps you can also issue the same hints from application code.
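To issue the same hint from application code, TimesTen provides the ttOptSetFlag built-in procedure; 'MergeJoin' is the flag that the ttIsql trymergejoin command maps to (please check the built-in procedures reference for your release to confirm the exact name). A sketch of how this might look from JDBC:

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class MergeJoinHint {

    // Ask the optimiser to avoid merge joins for the current transaction.
    // Hints have transaction scope (commit/rollback resets them), so
    // autocommit must be off for the hint to still be in effect when the
    // query is executed.
    static void disableMergeJoin(Connection conn) throws SQLException {
        conn.setAutoCommit(false);
        try (CallableStatement cs =
                 conn.prepareCall("{ CALL ttOptSetFlag('MergeJoin', 0) }")) {
            cs.execute();
        }
        // ... prepare and execute the query within the same transaction ...
    }

    public static void main(String[] args) throws SQLException {
        if (args.length < 1) {
            System.out.println("usage: MergeJoinHint <jdbc-url>");
            return;
        }
        try (Connection conn = DriverManager.getConnection(args[0])) {
            disableMergeJoin(conn);
            conn.commit(); // the commit resets all hints
        }
    }
}
```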

We have noticed that CACHE HITS is always 0 after several runs of the query, as seen in the output of the monitor command.
Hence we suspect that the query is being propagated to the Oracle database instead of being
run from the TimesTen cache.
Is there any way to determine for sure whether this is the case?
If it is, is there any way we can force the query to run in the TimesTen cache?
And what would the performance improvements be in that case?

Currently we are able to achieve very good performance results when the count returned by the query is small.
But when the number of records grows, the performance degrades badly, even after updating the statistics.

Your cache groups are not dynamic (assuming the schema you posted previously has not changed) so there should not be any dynamic load occurring; the CACHE HITS column is only relevant if you have one or more dynamic cache groups defined. If you do have dynamic cache groups then the SYS.MONITOR columns TPL_FETCHES and TPL_EXECS being non-zero will indicate dynamic load activity against Oracle.

The only other way that queries can get passed to Oracle is via PassThrough, but this will only happen if you have set PassThrough explicitly in your sys.odbc.ini settings, in the application connection string, or programmatically from the application. SYS.MONITOR.PASSTHROUGH_COUNT will show if any passthrough is occurring.
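One simple way to read those counters from the application side is to query SYS.MONITOR directly (the same columns are visible from ttIsql). A sketch using the column names mentioned above:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class PassthroughCheck {
    public static void main(String[] args) throws SQLException {
        if (args.length < 1) {
            System.out.println("usage: PassthroughCheck <jdbc-url>");
            return;
        }
        // Non-zero PASSTHROUGH_COUNT means queries are being passed to
        // Oracle; non-zero TPL_FETCHES/TPL_EXECS means dynamic load activity.
        try (Connection conn = DriverManager.getConnection(args[0]);
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT PASSTHROUGH_COUNT, TPL_FETCHES, TPL_EXECS FROM SYS.MONITOR")) {
            if (rs.next()) {
                System.out.println("PASSTHROUGH_COUNT = " + rs.getLong(1));
                System.out.println("TPL_FETCHES       = " + rs.getLong(2));
                System.out.println("TPL_EXECS         = " + rs.getLong(3));
            }
        }
    }
}
```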

Did you optimize the query as I recommended earlier (changing indexing, eliminating MergeJoin)? If not then you need to do that as otherwise the performance could indeed degrade as the number of rows increases.