Under (relatively) heavy load, LRUAlgorithm for some reason tries to remove the same node twice. The result is: java.lang.RuntimeException: LRUAlgorithm.evict(): null node map entry for fqn: /test1/node835264.

Since I refactored the eviction policy just last week, I have tried your test case against the latest code in jboss-head. I don't see any error right now. Can you try it yourself to make sure the problem is resolved?

Now it's working much better; it takes some time to get this error. But unfortunately, it's still there. It seems to depend heavily on system load: if I put Thread.sleep(10) in the putting thread, it never happens.
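
For reference, the putting thread in my test unit is basically the loop below (a simplified sketch written from memory; the TreeCache/PropertyConfigurator usage follows the 1.x examples, and the config file name and node names are just what my test uses):

import org.jboss.cache.PropertyConfigurator;
import org.jboss.cache.TreeCache;

// Simplified sketch of the test unit's putting thread (API usage from memory).
public class PutLoad {
    public static void main(String[] args) throws Exception {
        TreeCache cache = new TreeCache();
        new PropertyConfigurator().configure(cache, "local-eviction-service.xml");
        cache.startService();

        while (true) {
            int i = (int) (Math.random() * 10000);        // node0 ... node9999
            cache.put("/test1/node" + i, "key", "value");
            // Thread.sleep(10);  // with this pause the error never appears
        }
    }
}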

As you can see, the eviction task started but for a whole minute didn't manage to do any noticeable cleanup. I think this is a very likely case in a production environment. Don't you think you should raise the eviction thread's priority so it always cleans up faster than garbage appears?
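
Just to illustrate what I mean (plain Java only, not JBoss Cache internals), something like a dedicated higher-priority daemon thread for the cleanup work:

// Illustration only -- not JBoss Cache code. The idea: run the cleanup task
// on a thread with elevated priority so it gets scheduled ahead of the worker
// threads that keep producing new nodes.
public class HighPriorityEvictionThread {
    public static void main(String[] args) {
        Runnable evictionTask = new Runnable() {
            public void run() {
                // scan the region and evict nodes over the maxNodes limit
            }
        };
        Thread t = new Thread(evictionTask, "eviction-thread");
        t.setDaemon(true);
        t.setPriority(Thread.NORM_PRIORITY + 2); // above default, below MAX_PRIORITY
        t.start();
    }
}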

I've just put the latest jboss-cache.jar into JBoss and run my application's test unit. It's about the same one I used above: it puts my cached objects into the cache as fast as possible. Shortly after the start I noticed that the number of cached objects kept rising far above the limit (10000), so I stopped my test case. Then, while there was no activity against the cache at all, I saw in the JBoss stdout that the same exception is thrown on each eviction step. By the way, according to printDetails() called from the JMX console, this particular key [ChartRequest 425, 60, 41, candle] does not exist in the cache while the eviction process keeps trying to remove it.

Can you guys try it one more time by getting the latest jboss-head? It is actually not a serious error, per se. Here is why.

What happened is that TreeCache fires another nodeModified event notification after a parent node (leaf nodes have no problem) is evicted. So, you see, it loops back into the eviction policy again, which then cannot find the already-evicted node.
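
In other words (a simplified sketch, not the actual LRUAlgorithm source), the policy keeps its own node map, and the spurious nodeModified event for an already-evicted fqn ends up here:

import java.util.HashMap;
import java.util.Map;

// Simplified sketch of the failure mode -- not the real LRUAlgorithm code.
// Evicting a node removes its bookkeeping entry from the policy's node map.
// The extra nodeModified notification re-queues the same fqn, so the next
// pass finds no map entry and throws the RuntimeException reported above.
public class EvictionPolicySketch {
    private final Map nodeMap = new HashMap(); // fqn -> bookkeeping entry

    void evict(String fqn) {
        Object entry = nodeMap.remove(fqn);
        if (entry == null) {
            throw new RuntimeException(
                "LRUAlgorithm.evict(): null node map entry for fqn: " + fqn);
        }
        // ... remove the node from the cache ...
    }
}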

Bela and I have found a solution to stop the notification in this case. So the problem should be solved.

Unfortunately, it's still there. I updated from cvs-head and ran the test case which I used at the beginning of this thread. It took an hour before this exception appeared. It seems to somehow depend on system load.

Uwe, could you also try to reproduce this bug, to assure Ben it's a real thing?

Hi,

I can easily reproduce the java.lang.OutOfMemoryError with the test case, but I am not sure if this is really a bug. Could it be that the parameters in local-eviction-service.xml are simply not optimal for the use case and the environment?

-uwe

I don't think it is a bug. I can also reproduce the OOM error myself here. The key point is that the load is simply not realistic if we have multiple threads generating puts and gets all the time and thus driving the CPU to 100%.

A more realistic use case requires a random sleep between the put/get calls so the eviction timer has time to process them.
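
For example (just a sketch, inside the same main(...) throws Exception as the loop posted earlier), something like:

// Same put loop as before, but with a short random pause so the eviction
// timer thread gets CPU time to process the queued events.
while (true) {
    int i = (int) (Math.random() * 10000);
    cache.put("/test1/node" + i, "key", "value");
    Thread.sleep((long) (Math.random() * 50)); // random 0-50 ms pause
}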

From another perspective, to tune it, you can decrease the "wakeUpIntervalSeconds" parameter so the timer wakes up more often. But there is a limit to how much that can help, especially in *local* mode, where put/get is really quick.

In replicated mode, the bottleneck will probably no longer be the CPU.

Of course I do not consider this particular out-of-memory error a bug in itself. The point is that the OutOfMemoryError is CAUSED by the exception which is this thread's topic.

In my last post - take a look - this exception appeared first, and then it repeated with each eviction cycle, effectively stopping eviction. That caused the number of nodes to rise far beyond the maxNodes limit and, AS A RESULT, the OutOfMemoryError.

Again, as you can see in the log extracts in my last post, the cache keeps trying to remove the same node, _which does not exist in the cache at that moment_, causing the exception. THIS is a bug, imho.

Hint: when I change int i = (int) (Math.random() * 10000); // creates node0...node9999 to, say, int i = (int) (Math.random() * 1000000000); the error disappears. Taking into account that maxNodes=5000, it seems the exception appears when I put a node which already exists and is already enlisted for eviction... something like that.
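
To spell out the reasoning (a sketch with rough numbers):

// With a 10,000-name keyspace and maxNodes=5000, roughly half of the puts hit
// an fqn that is already in the cache -- possibly one already queued for
// eviction -- and that is when the exception shows up:
int i = (int) (Math.random() * 10000);        // node0 ... node9999, frequent re-puts

// With a 1,000,000,000-name keyspace a re-put of an existing fqn is extremely
// rare, and the exception no longer appears:
int i2 = (int) (Math.random() * 1000000000);  // collisions practically never happen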

Ben, just to assure you it's not an artificial case: the bug appears in my real application, where the cache is used to cache stock charts which take a relatively long time to produce but are frequently requested from the web (an 8-digit number of requests daily). I reproduce this bug just by running some 10 threads in JMeter making web requests, which is far less than it's going to be in production.