While benchmarks from industry bodies (SPEC, etc.) and vendors are a good indication of a particular platform's performance and scalability, the context of their testing has to be considered before we make technology decisions. Was a long-running transaction (service demand) considered? Was the right mix of transactions considered, say 40% read and 60% write? Did the test application use a large amount of cache? The questions are endless.

So, if the stakes are high and higher predictability is warranted, we need to understand the specific load profile of the solution being designed. In this blog, I'll share my experience of planning for memory requirements as the system scales up.

First, let us identify the factors that will influence the memory requirements. We have to separate static requirements (those that will not change with load) from dynamic requirements (those that will change with load).

Static memory requirements are essentially influenced by the space your program requires, the constant data, and the static data (sd) being cached. If you're a purist, you may include the requirements of the server software, the OS, etc., in this discussion, but I feel that focusing on the static data being cached alone is sufficient.
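As a rough illustration, the static cache footprint can be estimated from the number of cached entries and their average in-memory size. The entry counts and sizes below are assumptions made up for this sketch, not measurements from any real system:

```java
// Rough sketch of estimating the static data cache footprint (sd).
// All numbers are illustrative assumptions; substitute figures from your own application.
public class StaticCacheEstimate {

    public static void main(String[] args) {
        long referenceRows  = 50_000;  // e.g. reference/code tables cached at startup (assumed)
        long avgRowBytes    = 2_048;   // average in-memory size of one cached row (assumed)
        long overheadPerRow = 64;      // map entry, object header, references, etc. (assumed)

        long sdBytes = referenceRows * (avgRowBytes + overheadPerRow);
        System.out.printf("Estimated static cache (sd): %.1f MB%n",
                sdBytes / (1024.0 * 1024.0));
    }
}
```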

Dynamic memory requirements are essentially influenced by the following factors:

- Concurrent user load (u): Please note this is not the total number of users in the system. Instead, it is the number of users we expect to be logged in during the peak interval. We also need the size of the data (ud) loaded per user session; in a J2EE environment, this is the size of the HTTP session object and/or the stateful session bean object. Memory required = u * ud.

- Concurrent transaction load (t): Please note this is not the same as the concurrent user load. These are the business transactions being processed at any given point in time. Users may be doing multiple tasks (data entry in forms, parallel processing with other applications, etc.), and some user transactions may trigger multiple transactions at the backend. Hence, in most cases, t is not likely to equal u. We need to find the size of the data (td) loaded in memory to process each transaction. Memory required = t * td.

- Dynamic data cache (c): Some applications may need to cache data across a few transactions, so more memory is consumed than what the current transactions alone are processing.

Having gathered the above information, the following gives the memory requirement (a worked sketch in Java follows below):

Mem_Reqd = sd + (u * ud) + (t * td) + c

There may be other factors to consider, such as:

- A threshold (target) on memory usage, before suggesting the right amount of memory to support both the current needs and the needs when the application scales up.

- Variations in the size of data stored for different user profiles, different kinds of transactions, etc.

For each resource that needs to scale with the increase in load, we may have to go to a similar level of detail as above. I'd like to hear your experiences as well, if any, and learn from you all.
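To make the formula concrete, here is a minimal sketch that plugs hypothetical values for sd, u, ud, t, td and c into Mem_Reqd = sd + (u * ud) + (t * td) + c. All input numbers and the 70% utilisation target are placeholders chosen only to show the arithmetic, not recommendations:

```java
// Minimal sketch of the sizing formula: Mem_Reqd = sd + (u * ud) + (t * td) + c.
// Every input value below is hypothetical; replace it with figures from your own load profile.
public class MemorySizing {

    static long estimateBytes(long sd, long u, long ud, long t, long td, long c) {
        return sd + (u * ud) + (t * td) + c;
    }

    public static void main(String[] args) {
        long MB = 1024L * 1024L;

        long sd = 200 * MB;    // static data cached at startup (assumed)
        long u  = 2_000;       // concurrent logged-in users at peak (assumed)
        long ud = 50 * 1024;   // ~50 KB of session state per user (assumed)
        long t  = 400;         // business transactions in flight at any instant (assumed)
        long td = 512 * 1024;  // ~512 KB of working data per transaction (assumed)
        long c  = 100 * MB;    // dynamic data cached across transactions (assumed)

        long total = estimateBytes(sd, u, ud, t, td, c);
        System.out.printf("Estimated memory for application data: %.1f MB%n",
                total / (double) MB);

        // Applying a headroom threshold, e.g. plan to stay under 70% of provisioned memory.
        double threshold = 0.70;
        System.out.printf("Memory to provision at a %.0f%% utilisation target: %.1f MB%n",
                threshold * 100, total / threshold / (double) MB);
    }
}
```

The same calculation can be rerun with the projected u, t and data sizes for future load to see how the requirement grows as the application scales up.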

"I would like to think that this is applicable to all systems that= needs to scale up (either because, users will increase,= transactions will increase, data will increase and so on)=2E In= distributed systems, we may have to fine tune the approach to= the load profile on any single server on the distributed= system=2E Is this addressing your query? "