Message-ID: <6e3ae6310809101115y4804a10cq17146b33baf8d5d3@mail.gmail.com>
Date: Wed, 10 Sep 2008 11:15:07 -0700
From: "Chris Lu"
To: java-dev@lucene.apache.org
Subject: Re: ThreadLocal causing memory leak with J2EE applications
References: <6e3ae6310809091157j7a9fe46bxcc31f6e63305fcdc@mail.gmail.com>
<0FB358D2-EC2F-429D-B892-DAF8BFB47345@ix.netcom.com>
<6e3ae6310809100816q248cfef8pa8933a383de76003@mail.gmail.com>
<6e3ae6310809100844he599a49y93fc51859d8a7072@mail.gmail.com>
<54EA8BC1-DEDC-4495-BA0B-BCF90DB29280@ix.netcom.com>
<6e3ae6310809101036p5d495499yc865940080df0fd9@mail.gmail.com>
<8E2FB1FD-CFC2-4D7F-8CE6-C53624593E62@ix.netcom.com>
<6e3ae6310809101048g14bf4b98ib1ed9a34e847d81b@mail.gmail.com>
Actually I am done with it: I simply downgraded and am not using r659602 and
later. The old version is cleaner and more consistent with the API: close()
actually means close, not something complicated and unknown to most users,
which almost feels like a trap. And later on, if nothing has changed in this
file, I will have to upgrade Lucene and manually remove the LUCENE-1195 patch.
--
Chris Lu
-------------------------
Instant Scalable Full-Text Search On Any Database/Application
site: http://www.dbsight.net
demo: http://search.dbsight.com
Lucene Database Search in 3 minutes:
http://wiki.dbsight.com/index.php?title=Create_Lucene_Database_Search_in_3_minutes
DBSight customer, a shopping comparison site, (anonymous per request) got
2.6 Million Euro funding!
On Wed, Sep 10, 2008 at 10:56 AM, robert engels wrote:
> Why not just use reopen() and be done with it???
>
> On Sep 10, 2008, at 12:48 PM, Chris Lu wrote:
>
> Yeah, the timing is different. But it's an unknown, undetermined, and
> uncontrollable time...
> We cannot ask the user to do:
>
> while (memory_is_low) {
>     sleep(1000);
> }
> do_the_real_thing_an_hour_later();
>
>
>
> On Wed, Sep 10, 2008 at 10:39 AM, robert engels wrote:
>
>> Close() does work - it is just that the memory may not be freed until much
>> later...
>> When working with VERY LARGE objects, this can be a problem.
>>
>> On Sep 10, 2008, at 12:36 PM, Chris Lu wrote:
>>
>> Thanks for the analysis, really appreciate it, and I agree with it. But...
>> This is really a normal J2EE use case. The threads seldom die.
>> Doesn't that mean closing the RAMDirectory doesn't work for J2EE
>> applications?
>> And only reopen() works?
>> And close() doesn't release the resources? duh...
>>
>> I can only say this is a problem to be cleaned up.
>>
>>
>>
>> On Wed, Sep 10, 2008 at 9:10 AM, robert engels wrote:
>>
>>> You do not need to create a new RAMDirectory - just write to the existing
>>> one, and then reopen() the IndexReader using it.
>>> This will prevent lots of big objects from being created. This may be the
>>> source of your problem.
>>>
>>> Even if the Segment is closed, the ThreadLocal will no longer be
>>> referenced, but there will still be a reference to the SegmentTermEnum
>>> (which will be cleared when the thread dies, or "most likely" when new
>>> thread locals are created on that thread), so here is a potential problem.
>>>
>>> Thread 1 does a search, creates a thread local that references the RAMDir
>>> (A).
>>> Thread 2 does a search, creates a thread local that references the RAMDir
>>> (A).
>>>
>>> All readers are closed on RAMDir (A).
>>>
>>> A new RAMDir (B) is opened.
>>>
>>> There may still be references in the thread local maps to RAMDir A (since
>>> no new thread locals have been created yet).
>>>
>>> So you may get OOM depending on the size of the RAMDir (since you would
>>> need room for more than 1). If you extend this out with lots of threads
>>> that don't run very often, you can see how you could easily run out of
>>> memory. "I think" that ThreadLocal should use a ReferenceQueue so stale
>>> object slots can be reclaimed as soon as the key is dereferenced - but that
>>> is an issue for SUN.
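The failure mode described in this scenario can be reproduced with nothing but java.util.concurrent; no Lucene classes are involved, and the class and field names below are invented for the sketch. A pooled thread keeps its ThreadLocal value alive after the submitting code has dropped every reference it holds:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class StaleThreadLocalDemo {
    // Stands in for the per-thread SegmentTermEnum cache.
    static final ThreadLocal<byte[]> cache = new ThreadLocal<byte[]>();

    public static boolean valueSurvives() throws Exception {
        // App-server-like: one long-lived worker thread that never dies.
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // A "search" on the pooled thread populates the thread-local cache
        // with a large object (stands in for RAMDir A).
        pool.submit(() -> cache.set(new byte[1024 * 1024])).get();

        // The caller has now "closed" everything it can see and holds no
        // reference to the byte[]. The worker's ThreadLocalMap still does,
        // and there is no way to clear it from this thread.
        byte[] stillThere = pool.submit(() -> cache.get()).get();

        pool.shutdown();
        return stillThere != null; // true: the old value is still reachable
    }

    public static void main(String[] args) throws Exception {
        System.out.println(valueSurvives()); // prints "true"
    }
}
```

Until the worker thread dies or overwrites the entry, the megabyte array cannot be collected; scale the array up to a multi-hundred-megabyte RAMDirectory and this is the delayed-freeing OOM discussed in this thread.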
>>>
>>> This is why you don't want to create new RAMDirs.
>>>
>>> A good rule of thumb - don't keep references to large objects in
>>> ThreadLocal (especially indirectly). If needed, use a "key", and then read
>>> the cache using the "key".
>>> This would be something for the Lucene folks to change.
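The "key" indirection suggested here can be sketched in plain Java (all names are invented for the example; this is not Lucene API). The ThreadLocal holds only a small key, while the large object lives in a shared map that any thread may clear:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class KeyedCache {
    // The big objects live here, reachable from every thread.
    private static final Map<String, byte[]> bigObjects = new ConcurrentHashMap<>();
    // Each thread remembers only a small key, never the big object itself.
    private static final ThreadLocal<String> currentKey = new ThreadLocal<>();

    static void open(String key, byte[] bigObject) {
        bigObjects.put(key, bigObject);
        currentKey.set(key); // per-thread footprint: one short String
    }

    static byte[] lookup() {
        String key = currentKey.get();
        return key == null ? null : bigObjects.get(key);
    }

    // Unlike a ThreadLocal value, this works from ANY thread: once the
    // entry is removed, the big object is collectible everywhere, even
    // though stale (tiny) keys may linger in other threads' maps.
    static void close(String key) {
        bigObjects.remove(key);
    }
}
```

A stale key left behind in some thread's map costs a few dozen bytes instead of a few hundred megabytes, which is the whole point of the indirection.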
>>>
>>> On Sep 10, 2008, at 10:44 AM, Chris Lu wrote:
>>>
>>> I really want to find out where I am going wrong, if that's the case.
>>>
>>> Yes. I have made certain that I closed all Readers/Searchers, and
>>> verified that through memory profiler.
>>> Yes, I am creating a new RAMDirectory. But that's the problem: I need to
>>> update the content. Sure, with no content updates and everything the same,
>>> of course there is no OOM.
>>>
>>> Yes, there is no guarantee of the thread schedule. But that's the problem:
>>> Lucene is using ThreadLocal to cache lots of things with the Thread as the
>>> key, and there is no telling when it will be released. Of course
>>> ThreadLocal itself is not Lucene's problem...
>>>
>>> Chris
>>>
>>> On Wed, Sep 10, 2008 at 8:34 AM, robert engels wrote:
>>>
>>>> It is basic Java. Threads are not guaranteed to run on any sort of
>>>> schedule. If you create lots of large objects in one thread, releasing them
>>>> in another, there is a good chance you will get an OOM (since the releasing
>>>> thread may not run before the OOM occurs)... This is not Lucene specific by
>>>> any means.
>>>> It is a misunderstanding on your part about how GC works.
>>>>
>>>> I assume you must at some point be creating new RAMDirectories -
>>>> otherwise the memory would never really increase, since the
>>>> IndexReader/enums/etc are not very large...
>>>>
>>>> When you create a new RAMDirectory, you need to BE CERTAIN !!! that
>>>> the other IndexReaders/Searchers using the old RAMDirectory are ALL CLOSED,
>>>> otherwise their memory will still be in use, which leads to your OOM...
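One way to make this "BE CERTAIN" discipline mechanical is to count open readers per directory and refuse to swap in a replacement while any remain. This is only the bookkeeping pattern, not Lucene API; all names are invented for the sketch:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SwapGuard {
    private final AtomicInteger openReaders = new AtomicInteger(0);

    public void readerOpened() { openReaders.incrementAndGet(); }

    public void readerClosed() { openReaders.decrementAndGet(); }

    // True only when the old directory may safely be dropped. Swapping
    // earlier keeps two full copies of the data alive at once, which is
    // exactly the doubled-RAMDir OOM described in this thread.
    public boolean safeToSwap() { return openReaders.get() == 0; }
}
```

It turns "I hope everything is closed" into an assertion checkable before replacing the directory; Lucene's IndexReader exposes a similar idea via its incRef()/decRef() reference counting.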
>>>>
>>>>
>>>> On Sep 10, 2008, at 10:16 AM, Chris Lu wrote:
>>>>
>>>> I do not believe I am making any mistake. Actually I just got an email
>>>> from another user, complaining about the same thing. And I am having the
>>>> same usage pattern.
>>>> After the reader is opened, the RAMDirectory is shared by several
>>>> objects.
>>>> There is one instance of RAMDirectory in the memory, and it is holding
>>>> lots of memory, which is expected.
>>>>
>>>> If I close the reader in the same thread that has opened it, the
>>>> RAMDirectory is gone from the memory.
>>>> If I close the reader in other threads, the RAMDirectory is left in the
>>>> memory, referenced along the tree I draw in the first email.
>>>>
>>>> I do not think the usage is wrong. Period.
>>>>
>>>> -------------------------------------
>>>>
>>>> Hi,
>>>>
>>>> I found a forum post from you here [1] where you mention that you
>>>> have a memory leak using the Lucene RAM directory. I'd like to ask
>>>> whether you have already resolved the problem and how you did it, or
>>>> maybe you know where I can read about the solution. We are using
>>>> RAMDirectory too and figured out that over time the memory
>>>> consumption rises and rises until the system breaks down, but only
>>>> when we perform many index updates. If we only create the index and
>>>> do nothing except search it, it works fine.
>>>>
>>>> Maybe you can give me a hint or a link.
>>>> Greetz,
>>>>
>>>> -------------------------------------
>>>>
>>>>
>>>> On Wed, Sep 10, 2008 at 7:12 AM, robert engels wrote:
>>>>
>>>>> Sorry, but I am fairly certain you are mistaken.
>>>>> If you only have a single IndexReader, the RAMDirectory will be shared
>>>>> in all cases.
>>>>>
>>>>> The only memory growth is any buffer space allocated by an IndexInput
>>>>> (used in many places and cached).
>>>>>
>>>>> Normally the IndexInputs created by a RAMDirectory do not have any
>>>>> buffer allocated, since the underlying store is already in memory.
>>>>>
>>>>> You have some other problem in your code...
>>>>>
>>>>> On Sep 10, 2008, at 1:10 AM, Chris Lu wrote:
>>>>>
>>>>> Actually, even if I only use one IndexReader, some resources are cached
>>>>> via the ThreadLocal cache, and cannot be released unless all threads do
>>>>> the close action.
>>>>>
>>>>> SegmentTermEnum itself is small, but it holds RAMDirectory along the
>>>>> path, which is big.
>>>>>
>>>>> On Tue, Sep 9, 2008 at 10:43 PM, robert engels wrote:
>>>>>
>>>>>> You do not need a pool of IndexReaders...
>>>>>> It does not matter what class it is, what matters is the class that
>>>>>> ultimately holds the reference.
>>>>>>
>>>>>> If the IndexReader is never closed, the SegmentReader(s) are never
>>>>>> closed, so the thread local in TermInfosReader is not cleared (because
>>>>>> the thread never dies). So you will get one SegmentTermEnum per thread,
>>>>>> per segment.
>>>>>>
>>>>>> The SegmentTermEnum is not a large object, so even if you had 100
>>>>>> threads and 100 segments, that is only 10k instances; it seems hard to
>>>>>> believe that is the source of your memory issue.
>>>>>>
>>>>>> The SegmentTermEnum is cached per thread since it needs to enumerate
>>>>>> the terms; not having a per-thread cache would lead to lots of random
>>>>>> access when multiple threads read the index - very slow.
>>>>>>
>>>>>> You need to keep in mind: what if every thread were executing a search
>>>>>> simultaneously? You would still have 100x100 SegmentTermEnum instances
>>>>>> anyway! The only way to prevent that would be to create and destroy the
>>>>>> SegmentTermEnum on each call (opening and seeking to the proper spot) -
>>>>>> which would be SLOW SLOW SLOW.
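The trade-off being defended here can be seen in miniature below: a per-thread cursor into a shared sorted term list keeps sequential enumeration cheap, where recreating the cursor would mean a fresh seek on every call. Everything in the sketch is invented for illustration; it is not TermInfosReader's actual code:

```java
import java.util.Arrays;

public class TermCursorDemo {
    private final String[] sortedTerms; // shared, read-only "segment"
    // Per-thread position, so concurrent readers never fight over one cursor.
    private final ThreadLocal<int[]> pos = ThreadLocal.withInitial(() -> new int[1]);

    public TermCursorDemo(String[] terms) { this.sortedTerms = terms; }

    // Cheap sequential scan from the cached per-thread position.
    public String next() {
        int[] p = pos.get();
        return p[0] < sortedTerms.length ? sortedTerms[p[0]++] : null;
    }

    // What every call would pay without the cache: a full seek.
    public int seek(String term) {
        return Arrays.binarySearch(sortedTerms, term);
    }
}
```

Note that 100 threads times 100 segments worth of such cursors exist whenever all threads actually search at once; the ThreadLocal merely keeps them alive between calls.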
>>>>>>
>>>>>> On Sep 10, 2008, at 12:19 AM, Chris Lu wrote:
>>>>>>
>>>>>> I have tried creating an IndexReader pool and dynamically creating
>>>>>> searchers. But the memory leak is the same. It's not related to the
>>>>>> Searcher class specifically, but to the SegmentTermEnum in
>>>>>> TermInfosReader.
>>>>>>
>>>>>>
>>>>>> On Tue, Sep 9, 2008 at 10:14 PM, robert engels wrote:
>>>>>>
>>>>>>> A searcher uses an IndexReader - the IndexReader is slow to open,
>>>>>>> not a Searcher. And searchers can share an IndexReader.
>>>>>>> You want to create a single shared (across all threads/users)
>>>>>>> IndexReader (usually), and create a Searcher as needed and dispose of
>>>>>>> it. It is VERY CHEAP to create the Searcher.
>>>>>>>
>>>>>>> I am fairly certain the javadoc on Searcher is incorrect. The
>>>>>>> warning "For performance reasons it is recommended to open only one
>>>>>>> IndexSearcher and use it for all of your searches" is not true in
>>>>>>> the case where an IndexReader is passed to the ctor.
>>>>>>>
>>>>>>> Any caching should USUALLY be performed at the IndexReader level.
>>>>>>>
>>>>>>> You are most likely using the "path" ctor, and that is the source of
>>>>>>> your problems, as multiple IndexReader instances are being created, and
>>>>>>> thus the memory use grows.
>>>>>>>
>>>>>>>
>>>>>>> On Sep 9, 2008, at 11:44 PM, Chris Lu wrote:
>>>>>>>
>>>>>>> In a J2EE environment, usually there is a searcher pool with several
>>>>>>> searchers open. The cost of opening a large index for every user is
>>>>>>> not acceptable.
>>>>>>>
>>>>>>>
>>>>>>> On Tue, Sep 9, 2008 at 9:03 PM, robert engels wrote:
>>>>>>>
>>>>>>>> You need to close the searcher within the thread that is using it,
>>>>>>>> in order to have it cleaned up quickly... usually right after you display
>>>>>>>> the page of results.
>>>>>>>> If you are keeping multiple searcher refs across multiple threads
>>>>>>>> for paging/whatever, you have not coded it correctly.
>>>>>>>>
>>>>>>>> Imagine 10,000 users - storing a searcher for each one is not going
>>>>>>>> to work...
>>>>>>>>
>>>>>>>> On Sep 9, 2008, at 10:21 PM, Chris Lu wrote:
>>>>>>>>
>>>>>>>> Right, in a sense I cannot release it from another thread. But
>>>>>>>> that's the problem.
>>>>>>>>
>>>>>>>> It's a J2EE environment, all threads are kind of equal. It's simply
>>>>>>>> not possible to iterate through all threads to close the searcher, thus
>>>>>>>> releasing the ThreadLocal cache.
>>>>>>>> Unless Lucene is not recommended for J2EE environments, this has to
>>>>>>>> be fixed.
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, Sep 9, 2008 at 8:14 PM, robert engels <
>>>>>>>> rengels@ix.netcom.com> wrote:
>>>>>>>>
>>>>>>>>> Your code is not correct. You cannot release it on another thread -
>>>>>>>>> the first thread may create hundreds or thousands of instances before
>>>>>>>>> the other thread ever runs...
>>>>>>>>>
>>>>>>>>> On Sep 9, 2008, at 10:10 PM, Chris Lu wrote:
>>>>>>>>>
>>>>>>>>> If I release it on the thread that's creating the searcher, by
>>>>>>>>> setting searcher=null, everything is fine, the memory is released very
>>>>>>>>> cleanly.
>>>>>>>>> My load test was to repeatedly create a searcher on a RAMDirectory
>>>>>>>>> and release it on another thread. The test will quickly go to OOM after
>>>>>>>>> several runs. I set the heap size to be 1024M, and the RAMDirectory is of
>>>>>>>>> size 250M. Using some profiling tool, the used size simply stepped up pretty
>>>>>>>>> obviously by 250M.
>>>>>>>>>
>>>>>>>>> I think we should not rely on something that's a "maybe" behavior,
>>>>>>>>> especially for a general purpose library.
>>>>>>>>>
>>>>>>>>> Since it's a multi-threaded env, the thread that's creating the
>>>>>>>>> entries in the LRU cache may not go away quickly (actually most, if
>>>>>>>>> not all, application servers will try to reuse threads), so the LRU
>>>>>>>>> cache, which uses the thread as the key, cannot be released, and so
>>>>>>>>> the SegmentTermEnum in the same class cannot be released either.
>>>>>>>>>
>>>>>>>>> And yes, I close the RAMDirectory, and the fileMap is released. I
>>>>>>>>> verified that through the profiler by directly checking the values in the
>>>>>>>>> snapshot.
>>>>>>>>>
>>>>>>>>> I am pretty sure the reference tree wasn't like this with the code
>>>>>>>>> before this commit, because after closing the searcher in another
>>>>>>>>> thread, the RAMDirectory totally disappeared from the memory snapshot.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Tue, Sep 9, 2008 at 5:03 PM, Michael McCandless <
>>>>>>>>> lucene@mikemccandless.com> wrote:
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Chris Lu wrote:
>>>>>>>>>>
>>>>>>>>>> The problem should be similar to what's talked about on this
>>>>>>>>>>> discussion.
>>>>>>>>>>> http://lucene.markmail.org/message/keosgz2c2yjc7qre?q=ThreadLocal
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> The "rough" conclusion of that thread is that, technically, this
>>>>>>>>>> isn't a memory leak but rather a "delayed freeing" problem. Ie, it may take
>>>>>>>>>> longer, possibly much longer, than you want for the memory to be freed.
>>>>>>>>>>
>>>>>>>>>>> There is a memory leak for Lucene search from LUCENE-1195 (svn
>>>>>>>>>>> r659602, May 23, 2008).
>>>>>>>>>>>
>>>>>>>>>>> This patch brings in a ThreadLocal cache to TermInfosReader.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> One thing that confuses me: TermInfosReader was already using a
>>>>>>>>>> ThreadLocal to cache the SegmentTermEnum instance. What was added in this
>>>>>>>>>> commit (for LUCENE-1195) was an LRU cache storing Term -> TermInfo
>>>>>>>>>> instances. But it seems like it's the SegmentTermEnum instance that you're
>>>>>>>>>> tracing below.
>>>>>>>>>>
>>>>>>>>>>> It's usually recommended to keep the reader open, and reuse it
>>>>>>>>>>> when possible. In a common J2EE application, the http requests
>>>>>>>>>>> are usually handled by different threads. But since the cache is
>>>>>>>>>>> ThreadLocal, the cache is not really usable by other threads.
>>>>>>>>>>> What's worse, the cache cannot be cleared by another thread!
>>>>>>>>>>>
>>>>>>>>>>> This leak is usually not so obvious. But my case uses a
>>>>>>>>>>> RAMDirectory holding several hundred megabytes, so one unreleased
>>>>>>>>>>> resource is obvious to me.
>>>>>>>>>>>
>>>>>>>>>>> Here is the reference tree:
>>>>>>>>>>> org.apache.lucene.store.RAMDirectory
>>>>>>>>>>>   |- directory of org.apache.lucene.store.RAMFile
>>>>>>>>>>>     |- file of org.apache.lucene.store.RAMInputStream
>>>>>>>>>>>       |- base of org.apache.lucene.index.CompoundFileReader$CSIndexInput
>>>>>>>>>>>         |- input of org.apache.lucene.index.SegmentTermEnum
>>>>>>>>>>>           |- value of java.lang.ThreadLocal$ThreadLocalMap$Entry
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> So you have a RAMDir with several hundred MB stored in it that
>>>>>>>>>> you're done with, yet through this path Lucene is keeping it alive?
>>>>>>>>>>
>>>>>>>>>> Did you close the RAMDir? (which will null its fileMap and should
>>>>>>>>>> also free your memory).
>>>>>>>>>>
>>>>>>>>>> Also, that reference tree doesn't show the ThreadResources class
>>>>>>>>>> that was added in that commit -- are you sure this reference tree
>>>>>>>>>> isn't from before the commit?
>>>>>>>>>>
>>>>>>>>>> Mike
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> ---------------------------------------------------------------------
>>>>>>>>>> To unsubscribe, e-mail: java-dev-unsubscribe@lucene.apache.org
>>>>>>>>>> For additional commands, e-mail: java-dev-help@lucene.apache.org
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>>
>
>
>