FLEXIBLE PRIORITY QUEUES

A classic priority queue guarantees that the top element is the largest element in the container; in other words, the largest element is always the first to leave the queue. But in some cases this rule is not acceptable. Imagine a server program that processes queries. The server has an internal queue that holds all incoming queries, and each query has its own priority. For simplicity, assume there are two priority values: 1 and 2. Now, if the server receives a large number of queries with priority 2, users who issued lower-priority queries have to wait. Sometimes such server behaviour (this interpretation of the priority value) is not acceptable.

One way to make priorities more flexible is to treat the priority value as the "probability" of fetching a query from the queue. Let's look at a very simple way such a queue may be implemented. For each distinct priority value we need a separate FIFO queue: two queues, the first for priority 1 and the second for priority 2. Each time we need to fetch an element from the queue, we generate a random number that takes the value 1 with probability 1/3 and the value 2 with probability 2/3 (i.e., proportionally to the priority values). If we get 1, we fetch a query from the first queue; otherwise, from the second. This figure illustrates the scheme of the queue (for the two distinct priority values 1 and 2) described here.
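The scheme described above can be sketched in a few lines of Python. This is an illustrative implementation, not the article's original code: the class and method names are my own, and it generalizes the two-queue picture to any number of distinct priorities by weighting the random choice with the priority values themselves.

```python
import random
from collections import deque


class ProbabilisticPriorityQueue:
    """One FIFO queue per distinct priority value; pop() selects a
    queue at random with probability proportional to its priority."""

    def __init__(self):
        self._queues = {}  # priority value -> FIFO queue of items

    def push(self, item, priority):
        self._queues.setdefault(priority, deque()).append(item)

    def pop(self):
        # Consider only priorities whose queues are currently non-empty.
        candidates = [(p, q) for p, q in self._queues.items() if q]
        if not candidates:
            raise IndexError("pop from an empty queue")
        priorities = [p for p, _ in candidates]
        # Weighted choice: with priorities 1 and 2 present, queue 1 is
        # picked with probability 1/3 and queue 2 with probability 2/3.
        _, queue = random.choices(candidates, weights=priorities, k=1)[0]
        return queue.popleft()
```

With the two priorities from the example, an element of priority 2 is fetched roughly twice as often as one of priority 1, yet low-priority queries are never starved as long as their queue is non-empty.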
To test our "probabilistic" priority queue I've created a small application. It deals with 5 distinct priority values: 1, 3, 5, 10 and 11. The application creates a priority queue and pushes 100000 elements for each priority value: 100000 elements for priority 1, 100000 for priority 3, and so on. In total, we have 500000 elements in the queue. After that we fetch 50000 elements from the queue. A classical priority queue would return 50000 elements with the largest priority, 11. In our case, the number of fetched elements for each priority should be proportional to the corresponding priority value. So, we should get: