Arjen Markus (21 August 2002) Simulating discrete events can be useful in a variety of situations, one of them being the analysis of the performance of a server system. Suppose that you have a server system that receives a certain number of requests per minute (on average, of course). Then important parameters could be:

The mean time a request has to wait

The mean length of the request queue

The maximum time a request has to wait

The maximum length of the request queue

For any but the simplest systems (requests that always take the same amount of time to be handled, for instance), a rigorous, mathematically exact analysis will be difficult or impossible. Suppose, for example, that there are two types of requests, one requiring the full attention of the system, so that no other requests can be handled at the same time. An exact analysis would then be difficult, to say the least.

The method presented below in the form of a Tcl script is simple and straightforward, but could be the basis of a more elaborate model of the server system of your choice.

It assumes:

Events are received at a statistically constant rate of k events per unit of time.

The events arrive independently of one another.

All events require the same time T to be handled.

The number of events that can be handled during one unit of time is constant, N, reflecting the number of threads or child processes taking care of the actual job.
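
To make these assumptions concrete, a simulation loop along these lines might look like the following minimal sketch. It is not the author's script: the crude arrival model (a whole number of events per step, plus one extra at random, instead of a true Poisson draw; see the generator sketched at the end of this page) and all variable names are assumptions of the sketch.

   # Minimal sketch of a discrete-event loop for the model above
   set rate     2.5   ;# mean number of new events per unit of time
   set T        5     ;# handling time per event
   set capacity 15    ;# number of events handled simultaneously
   set steps    10000 ;# length of the simulation

   set queue   {}     ;# arrival times of waiting events
   set busy    {}     ;# completion times of events being handled
   set total   0      ;# total number of events received
   set sumlife 0      ;# accumulated life time (waiting + handling)
   set sumq    0      ;# accumulated queue length

   for {set t 0} {$t < $steps} {incr t} {
       # Crude arrival model: int(rate) events always, one extra with
       # probability equal to the fractional part of the rate
       set arrivals [expr {int($rate) + (rand() < $rate - int($rate) ? 1 : 0)}]
       for {set i 0} {$i < $arrivals} {incr i} {
           lappend queue $t
           incr total
       }

       # Remove the events whose handling has finished
       set stillbusy {}
       foreach done $busy {
           if {$done > $t} { lappend stillbusy $done }
       }
       set busy $stillbusy

       # Move waiting events to free handlers
       while {[llength $queue] > 0 && [llength $busy] < $capacity} {
           set arrived [lindex $queue 0]
           set queue   [lrange $queue 1 end]
           lappend busy [expr {$t + $T}]
           set sumlife [expr {$sumlife + ($t - $arrived) + $T}]
       }
       set sumq [expr {$sumq + [llength $queue]}]
   }

   puts "Capacity:                 $capacity"
   puts "Total number of events:   $total"
   puts "Mean life time of events: [expr {$sumlife / $total}]"
   puts "Mean queue length:        [expr {$sumq / $steps}]"

Each time step, new events join the queue, finished events free their handlers, and waiting events are moved to the free handlers, accumulating the statistics along the way.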

Running the model with a mean arrival rate of 2.5 events per unit of time and a handling time of 5 units gives, for decreasing capacity, results like these (life times and queue lengths in the model's units):

   Capacity   Total events   Mean life time   Mean queue length
         20          25629                5                  14
         15          25505                8                  20
         10          25910             1129                2911
          5          25933             3044                7891

At the transition from a capacity of 15 to a capacity of 10, the server system as modelled acquires a totally different behaviour: it is no longer capable of dealing with the flood of events!

Was this predictable? Yes, at least the fact that some transition occurs. We can estimate the breakpoint by considering that:

On average 2.5 events per unit of time are generated

Each event takes 5 units of time to be processed

So on average 2.5*5 = 12.5 events are in the system at any one time. Unless the capacity is at least 12.5, a backlog builds up, and eventually the queue of events will grow without bound!
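
As a quick check, the utilisation (offered work divided by capacity) tells the same story for the four capacities in the table above; values below 1 are stable, values of 1 or more are not. A small illustrative computation:

   # Utilisation rho = arrival rate * handling time / capacity
   # rho < 1: the system keeps up; rho >= 1: the queue grows unbounded
   foreach capacity {20 15 10 5} {
       set rho [expr {2.5 * 5 / double($capacity)}]
       puts [format "Capacity %2d: utilisation %4.2f -> %s" \
           $capacity $rho [expr {$rho < 1.0 ? "stable" : "unstable"}]]
   }

This gives 0.62 and 0.83 for capacities 20 and 15 (stable), but 1.25 and 2.50 for capacities 10 and 5 (unstable), matching the observed break.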

More detailed analyses (especially numerical) are possible:

What happens to the average waiting time as you get closer to the breakpoint?

What is the (expected) maximum queue length? (If not all events can be stored, you will lose some!)

Etc.

Note: Since the presented results are a single realisation of a stochastic process, one should do a fair number of these simulations to get dependable results.
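
For instance, one could wrap the model in a procedure and average over a number of independent runs. A sketch, where runSimulation is a hypothetical wrapper around the loop above (not part of the script), assumed to take a capacity and return the mean queue length of one realisation:

   # Average the mean queue length over several independent runs
   # NOTE: runSimulation is a hypothetical helper procedure
   set runs 50
   set sum  0.0
   for {set run 0} {$run < $runs} {incr run} {
       set sum [expr {$sum + [runSimulation 15]}]   ;# capacity of 15
   }
   puts "Mean queue length over $runs runs: [expr {$sum / $runs}]"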

The site netlib.org [1] has a wealth of numerical software. Among many other libraries, it contains several accurate algorithms for generating random numbers according to the Poisson distribution (and most other well-known distributions as well).
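
For modest rates, even a simple textbook method will do: Knuth's classical algorithm draws a Poisson deviate by multiplying uniform deviates until the product drops below exp(-lambda). A sketch, fine for small lambda (the netlib routines are more accurate and faster for large lambda):

   # Knuth's method for Poisson-distributed random numbers:
   # multiply uniform deviates until the product falls below exp(-lambda);
   # the number of multiplications, minus one, is the deviate
   proc poissonRandom {lambda} {
       set limit   [expr {exp(-$lambda)}]
       set k       -1
       set product 1.0
       while {$product > $limit} {
           set product [expr {$product * rand()}]
           incr k
       }
       return $k
   }

   # Example: five draws of the number of arrivals for a rate of 2.5
   for {set i 0} {$i < 5} {incr i} {
       puts [poissonRandom 2.5]
   }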