Hi folks, since updating to 10.1 and being able to take advantage of the new memory limit of 4 GB rather than 2, I have been having a difficult time finding a setup that does not fill stmm.log with "unable to find donor" errors.

Firstly, I am running a personal database of some 36 GB in size, with four main tablespaces: results, indexes, rawdata and static data. These all share a single bufferpool called 'DATAPOOL'.

My database is NOT OLTP; it is used for data mining, storing and processing association rules, sequential pattern rules, etc. Rules are received as a body of elements plus a rule head. Each body is stored whole, then broken into individual elements which are stored again. This way I can run relational division queries against the rawdata table and get the dates on which the rules occur. The data mining engine only returns counts (confidence, support, etc.), no dates. So it is up to me to find the actual dates each rule occurred on in the past, check when it is active, and store future dates as they occur.

Now, in order to get those dates, the relational division query must gather a significant number of dates on which each of the individual elements occurred, then use 'HAVING COUNT(*) = the number of elements in the rule body' to find the specific dates on which they all happened together.
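For anyone unfamiliar with the pattern, the relational-division step described above could look roughly like the sketch below, run through the DB2 CLP. The table and column names (rawdata, rule_elements, element_id, occur_date) and the rule id are illustrative assumptions, not the actual schema:

```shell
# Hypothetical relational-division query: for one rule, return only
# the dates on which EVERY element of the rule body occurred in rawdata.
db2 "SELECT r.occur_date
     FROM   rawdata r
     JOIN   rule_elements e ON e.element_id = r.element_id
     WHERE  e.rule_id = 42
     GROUP  BY r.occur_date
     HAVING COUNT(DISTINCT r.element_id) =
            (SELECT COUNT(*) FROM rule_elements WHERE rule_id = 42)"
```

COUNT(DISTINCT ...) guards against duplicate occurrence rows; if the data guarantees one row per element per date, a plain COUNT(*) matches the description above.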

My problem is finding the right balance of bufferpool versus database (sort etc.) memory. I have considered relying on STMM, but it reacts too slowly for my workload: the database is only active for processing at the end of the day, with no activity at other times. If I leave the database active, STMM gradually shrinks all the relevant allocations during the idle hours, so by the time processing starts there is only a minimal setup and performance is terrible.

Given 4 GB of available memory, I started very simply with something like 2 GB for bufferpools and 2 GB for the rest, but no matter how I set it up, I always get these stmm.log donor errors.

The system has two (2) physical disks, across which I split the four tablespace containers: logs and results on one; indexes, rawdata and static on the other. It has 8 GB of real memory and runs 64-bit Windows 7 Pro SP1 with DB2 Express-C LUW 10.1 64-bit. The processor is an i5 2400 quad core.

I use ooRexx 4.1 32-bit to drive the whole thing, executing SQL stored procedures etc. As I am the only one connecting to the database, everything usually happens sequentially: my Rexx programs run one at a time with a single connection. Only on very rare occasions is there more than one connection active at a time, and then only for minimal queries.

Anyone who could offer some suggestions as to where best to start would be most welcome. I realize specifics are not practical, but any suggestions and how-tos would be gratefully appreciated.

No takers? Surely someone out there has installed the 64-bit version of 10.1 LUW Express-C and configured it for 4 GB, utilizing the entire 4 GB of memory allowed, without getting the "unable to find a donor" errors?

Just after very rough estimates of how to take full advantage of the 4gb available to Express-C 10.1 64-bit in a WAREHOUSE environment.

As a rough estimate, how much would one assign to the bufferpools and to the other main memory config params, such as SHEAPTHRES_SHR and SORTHEAP, while all other params are set to AUTOMATIC?
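For what it is worth, a rough starting split for a 4 GB limit in a warehouse-style workload might look like the commands below. All page counts are 4K pages and purely illustrative, and MYDB is a placeholder database name. Note also that one common cause of "unable to find donor" messages is that, once the big consumers are fixed, STMM has no AUTOMATIC consumer left to shrink when another one wants to grow:

```shell
# Cap instance memory at roughly 4 GB (1,000,000 x 4K pages).
db2 UPDATE DBM CFG USING INSTANCE_MEMORY 1000000

db2 CONNECT TO MYDB

# ~2 GB for the bufferpool (524,288 x 4K pages).
db2 "ALTER BUFFERPOOL DATAPOOL IMMEDIATE SIZE 524288"

# Generous sort memory for GROUP BY / HAVING-heavy mining queries:
# ~1 GB shared sort ceiling, ~100 MB per sort operator.
db2 UPDATE DB CFG FOR MYDB USING SHEAPTHRES_SHR 262144 SORTHEAP 25600
```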

Presently the stmm.log file is filling up with a mess of "unable to find donor" messages. This is what I am trying to avoid while taking FULL advantage of the entire 4 GB. The PC has 8 GB of memory, so I do not need to worry about 'other' applications. The db2syscs process is only using some 2.3 GB of memory at the moment.

When allocating a bufferpool with a fixed size of 500,000 pages, I experienced the same problem, with STMM reporting no memory left to grab.
I would start by reducing the size of your BP, maybe setting it to AUTOMATIC.

I would start by setting everything AUTOMATIC, including the bufferpools, then continuously perform mining runs for a few hours until the parameter values stop evolving; that gives an idea of the optimal values for the parameters.
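In DB2 command terms, that could be roughly the following (MYDB is a placeholder database name):

```shell
db2 CONNECT TO MYDB

# Let STMM manage total database memory and the main consumers.
db2 UPDATE DB CFG FOR MYDB USING SELF_TUNING_MEM ON
db2 UPDATE DB CFG FOR MYDB USING DATABASE_MEMORY AUTOMATIC
db2 UPDATE DB CFG FOR MYDB USING SORTHEAP AUTOMATIC SHEAPTHRES_SHR AUTOMATIC
db2 UPDATE DB CFG FOR MYDB USING PCKCACHESZ AUTOMATIC LOCKLIST AUTOMATIC MAXLOCKS AUTOMATIC

# Hand the bufferpool over to STMM as well.
db2 "ALTER BUFFERPOOL DATAPOOL SIZE AUTOMATIC"
```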

Then I would update the parameters with these optimal values, without the automatic option.
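One way to capture the values STMM has converged on and then pin them (the specific numbers below are examples only; substitute whatever SHOW DETAIL reports, and MYDB is a placeholder):

```shell
db2 CONNECT TO MYDB

# SHOW DETAIL prints the currently computed value next to each
# AUTOMATIC parameter; note the numbers for SORTHEAP, SHEAPTHRES_SHR etc.
db2 GET DB CFG FOR MYDB SHOW DETAIL

# Then fix the parameters at the observed values, for example:
db2 UPDATE DB CFG FOR MYDB USING SORTHEAP 20000 SHEAPTHRES_SHR 200000
db2 "ALTER BUFFERPOOL DATAPOOL SIZE 450000"
```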

Also, since you typically perform a lot of sequential access to the data, consider using a large pagesize for the bufferpool and the tablespaces.
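A sketch of moving to a 32K page size: a new bufferpool plus a matching tablespace (the names DATAPOOL32K and RAWDATA32K are illustrative, and existing tables would have to be recreated or moved into the new tablespace):

```shell
db2 CONNECT TO MYDB

# A 32K bufferpool and a tablespace that uses it; tables created in
# this tablespace will use 32K pages, so sequential scans move more
# rows per page read.
db2 "CREATE BUFFERPOOL DATAPOOL32K SIZE AUTOMATIC PAGESIZE 32K"
db2 "CREATE TABLESPACE RAWDATA32K PAGESIZE 32K MANAGED BY AUTOMATIC STORAGE BUFFERPOOL DATAPOOL32K"
```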