No subject

Hi guys,

Following a discussion on the kafka user mailing list in which Alexis Richardson replied, I realized I might have been wrong about what I assumed about message storage in RabbitMQ. It won't change the decision I made to use Kafka, but I'd like to set my own mind right about how RabbitMQ stores messages. I want to use Rabbit in other projects as well, so I want to have the right picture in my mind.

The setup I used was based on what is described in
http://www.rabbitmq.com/tutorials/tutorial-four-python.html.

RabbitMQ
- RabbitMQ 3.0.x (not exactly sure what x was)
- 5 brokers (behind HAProxy)
- Messages posted to multiple topics (call them A, B, C), where the topic is used as the routing key by the exchange
- Queues and messages are persistent (+ replicated)

Consumers
- Consumer 1 subscribes to topic A (near real-time) - Queue 1
- Consumer 2 subscribes to topic B (near real-time) - Queue 2
- Consumer 3 subscribes to topics A, B and C (offline, BI) - Queue 3

Published 10M messages (sizes varying between 500 B and 2 KB).

From what I (believe I) saw and read, I really expected Rabbit to route messages and physically save them to disk for each of the consumers to read (1 queue/consumer, 1 message copy/queue). The reason I expected that is that if messages are ack'ed by consumers 1 and 2, I expected their queues to be free of those messages, while consumer 3 would still need them to be persisted until it had read them. That was possibly a false assumption I should have verified with you guys. It would indeed be more efficient to store a single copy of each message and only keep per-queue metadata. *Is this correct?*
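To make sure I'm describing the model I now have in mind correctly, here is a toy sketch (entirely my own illustration, not RabbitMQ code) of "one reference-counted copy of each message body, with each queue holding only lightweight metadata; the body goes away once every queue that was routed the message has acked it":

```python
class MessageStore:
    """Toy model: one stored body per message, reference-counted across queues."""
    def __init__(self):
        self.bodies = {}    # msg_id -> body (the single stored copy)
        self.refcount = {}  # msg_id -> number of queues still needing it

    def publish(self, msg_id, body, queues):
        # The body is stored once, no matter how many queues it routes to.
        self.bodies[msg_id] = body
        self.refcount[msg_id] = len(queues)
        for q in queues:
            q.index.append(msg_id)  # queues keep only a reference

    def ack(self, queue, msg_id):
        queue.index.remove(msg_id)
        self.refcount[msg_id] -= 1
        if self.refcount[msg_id] == 0:  # no queue needs it any more
            del self.bodies[msg_id]
            del self.refcount[msg_id]

class Queue:
    def __init__(self, name):
        self.name = name
        self.index = []  # per-queue metadata only, not message bodies

store = MessageStore()
q1, q3 = Queue("queue1"), Queue("queue3")

# A message on topic A is routed to queue1 and queue3: one stored body.
store.publish("m1", b"payload", [q1, q3])
assert len(store.bodies) == 1

# Consumer 1 acks; consumer 3 is offline, so the single copy must stay.
store.ack(q1, "m1")
assert "m1" in store.bodies

# Once consumer 3 finally acks too, the copy can be deleted.
store.ack(q3, "m1")
assert "m1" not in store.bodies
```

If that toy model matches what Rabbit actually does, then my "1 copy per queue" picture was wrong and only the per-queue index grows with fan-out.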
Also, I ran a test where I continuously sent (a lot of) messages while my big consumer 3 was offline. The observation I made then was that, *obviously*:
- data was accumulating on disk
- memory usage on all nodes went up, to the point where it eventually choked the brokers

I assumed then that the increasing memory usage was because the queue and message metadata was growing in Mnesia. This was a decisive factor for me because I didn't want to be in a situation where my real-time consumers would be affected by a slow-moving elephant. Is there a way I could have avoided that situation that I didn't see?
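One thing I'm wondering, after the fact, is whether I could have bounded the offline consumer's queue with per-queue arguments so it couldn't accumulate without limit. This is my own sketch, not something I verified against the docs at the time: x-message-ttl and dead-letter exchanges existed around the 3.0 era as far as I know, while a hard length cap (x-max-length) only appeared in releases newer than the 3.0.x I was running.

```python
# Illustrative queue arguments (my own after-the-fact sketch) that bound
# how much a slow or offline consumer's queue can accumulate.
# x-message-ttl expires old messages; a dead-letter exchange ("overflow"
# here is a name I made up) can catch them instead of silently dropping them.
BOUNDED_QUEUE_ARGS = {
    "x-message-ttl": 24 * 60 * 60 * 1000,  # expire messages after 24 h (ms)
    "x-dead-letter-exchange": "overflow",  # route expired messages here
}

def declare_bounded_queue(channel, name):
    """Declare a durable queue with the limits above (pika-style channel)."""
    channel.queue_declare(queue=name, durable=True,
                          arguments=BOUNDED_QUEUE_ARGS)
```

Whether that would have actually saved the brokers depends on where the memory was really going (per-queue index vs. Mnesia), which is exactly what I'd like you to confirm.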
Thanks!