Paul,
Yeah, memory usage just grows and grows.
Do you think this might just be a problem with the Windows version of
CouchDB or Erlang?
I couldn't find anything online about a memory leak in this sort of
case for either project.
Alex
On Oct 23, 2009, at 1:20 PM, Paul Davis wrote:
> On Fri, Oct 23, 2009 at 12:55 PM, Alexander Quick
> wrote:
>> Hi,
>>
>> I'm having an issue where CouchDB crashes consistently while bulk
>> saving (all_or_nothing is off). It completes fewer than a million
>> documents of the following form before choking:
>> {
>>   "_id": "an_id",
>>   "user_id": "a_user",
>>   "attribute_one": "<7chars",
>>   "attribute_two": "<7chars",
>>   "attribute_three": "<7chars"
>> }
>>
>> CouchDB 0.10 is running on Windows Server 2008 (might be my first
>> mistake) and inserting in 3k batches, so I don't think individual
>> batch size is the issue (along with the fact that the first 30-40
>> batches succeed).
>>
>> I get the following error:
>> Crash dump was written to: erl_crash.dump
>> eheap_alloc: Cannot allocate 50152200 bytes of memory (of type "heap").
>> Sometimes it fails while trying to allocate some other type of memory,
>> but heap is most common.
>>
>> The machine still has plenty of memory left when this happens, and
>> we're talking about a 50 MB allocation. Even with the VM heap parameter
>> set to 1 GB, it chokes around 700 MB; either way this shouldn't be an
>> issue.
>>
>> The only thing I can think of that separates this from an entirely
>> run-of-the-mill use case is that I'm inserting into approximately 100
>> databases. The inserts aren't concurrent; rather, a batch goes to the
>> first database, then a batch to the second once the first completes.
>> Nonetheless, the VM keeps piling on new processes for some reason.
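
[For reference, the sequential batching described above amounts to roughly
the sketch below. This is not code from the thread; the URL, database
names, and helper names are all hypothetical, and the HTTP call is only
shown for illustration.]

```python
import json
from urllib import request


def batches(docs, size=3000):
    """Yield successive _bulk_docs-sized chunks of a document list."""
    for i in range(0, len(docs), size):
        yield docs[i:i + size]


def bulk_docs_payload(batch):
    """Build the JSON body for POST /<db>/_bulk_docs.

    all_or_nothing is deliberately omitted, matching the report
    (it is off by default)."""
    return json.dumps({"docs": batch}).encode("utf-8")


def post_bulk(db_url, batch):
    """POST one batch to a database, e.g. db_url='http://localhost:5984/db_001'.

    Called sequentially per database: each call returns (and the batch
    completes) before the next database's batch is sent."""
    req = request.Request(
        db_url + "/_bulk_docs",
        data=bulk_docs_payload(batch),
        headers={"Content-Type": "application/json"},
    )
    return request.urlopen(req)
```

Under this scheme only one `_bulk_docs` request is in flight at a time, so
in principle the server should be able to release each request's memory
before the next arrives.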
>>
>> The dump is available here: http://sandbox.stoopdev.com/fail/erl_crash.dump
>> Please let me know if you have any ideas.
>>
>> Thanks in advance,
>> Alex Quick
>>
>
> Alex,
>
> Most odd. Can you see what the memory profile does as you do your
> bulk_docs calls? Does it appear to just continuously grow and not
> release RAM as you make multiple calls? It shouldn't hold onto any
> memory after a _bulk_docs request is completed.
>
> Paul Davis