[ https://issues.apache.org/activemq/browse/AMQ-1503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Rob Davies resolved AMQ-1503.
-----------------------------
Resolution: Fixed
Fix Version/s: 5.0.0
The ability to handle large numbers of messages was the motivation behind the architectural change in ActiveMQ 5.0 - see http://activemq.apache.org/message-cursors.html
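For durable topic subscribers, the message-cursors page describes spooling pending messages to disk with a file-based cursor instead of holding them in memory. A minimal sketch of that configuration in activemq.xml (the topic name is a placeholder; element names follow the linked documentation):

```xml
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- "FOO.>" is a placeholder wildcard; applies a file-based
           pending-message cursor to durable subscribers on matching topics -->
      <policyEntry topic="FOO.>">
        <pendingDurableSubscriberPolicy>
          <fileDurableSubscriberCursor/>
        </pendingDurableSubscriberPolicy>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>
```

With this policy, a slow or offline durable subscriber no longer forces the broker to keep its backlog in heap, which is the failure mode reported below.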
> OutOfMemoryError in ActiveMQ message broker when attempting to publish 2 Million messages - one publisher and four durable subscribers
> --------------------------------------------------------------------------------------------------------------------------------------
>
> Key: AMQ-1503
> URL: https://issues.apache.org/activemq/browse/AMQ-1503
> Project: ActiveMQ
> Issue Type: Bug
> Components: Broker
> Affects Versions: 4.1.1
> Reporter: Noah Zucker
> Assignee: Rob Davies
> Fix For: 5.0.0
>
>
> (reference original posting on Nabble: http://www.nabble.com/forum/ViewPost.jtp?post=12798657)
> We have one topic publisher attempting to publish 2 million messages to 4 durable subscribers. Things go fine until we hit 1.7 million messages - then we get an OutOfMemoryError.
> ActiveMQ is set up to use 5 x 20M log files and Derby JDBC persistence. We use the JVM memory settings -Xmx1024M and
> At the time of the OutOfMemoryError, one of the log files has grown to 415M, and the Derby database is large as well.
> We are using client-acknowledged sessions and a prefetch size of 1 (we need to serialize message consumption).
> Each message is acknowledged using javax.jms.Message.acknowledge().
> We did not find anything on how to change the checkpoint interval for persistence.
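For reference, the consumption pattern the reporter describes (durable subscriber, client acknowledgment, prefetch of 1) might look roughly like the following sketch; the broker URL, client ID, topic name, and subscription name are all hypothetical, and the prefetch is set here via an ActiveMQ connection-URL option rather than whatever mechanism the reporter used:

```java
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;
import org.apache.activemq.ActiveMQConnectionFactory;

public class SerialDurableConsumer {
    public static void main(String[] args) throws Exception {
        // Prefetch of 1 for durable topic subscribers, set on the broker URL
        // (hypothetical localhost broker)
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
            "tcp://localhost:61616?jms.prefetchPolicy.durableTopicPrefetch=1");
        Connection connection = factory.createConnection();
        connection.setClientID("subscriber-1"); // required for durable subscriptions
        connection.start();

        // CLIENT_ACKNOWLEDGE: the application acknowledges each message explicitly
        Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        Topic topic = session.createTopic("TEST.TOPIC");
        MessageConsumer consumer = session.createDurableSubscriber(topic, "sub-1");

        Message message;
        while ((message = consumer.receive(1000)) != null) {
            // process the message, then acknowledge so the broker can discard it
            message.acknowledge();
        }
        connection.close();
    }
}
```

With prefetch 1 and per-message acknowledgment, each subscriber drains its backlog one message at a time, so any pending messages for slow subscribers accumulate on the broker side - the scenario the 5.0 message cursors were introduced to handle.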
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.