I'm currently investigating a memory problem in my broker network. According to JConsole, the ActiveMQ.Advisory.TempQueue topic takes up 99% of the configured memory by the time the broker starts to block messages.
A few details about the config
Default config for the most part. One open stomp+nio connector and one open openwire connector. All brokers form a hypercube: one one-way connection from every broker to every other broker (easier to auto-generate). No flow control. A rough sketch of one node is shown below.
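For illustration only, here is a minimal sketch of one node in such a setup, assuming the brokers are embedded and configured in Java rather than via activemq.xml; the broker name, peer hosts, and ports are placeholders I made up, not values from the original setup.

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;
import org.apache.activemq.network.NetworkConnector;

public class MeshNode {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("node-a"); // placeholder name

        // The two open connectors: STOMP over NIO and OpenWire (ports are placeholders).
        broker.addConnector("stomp+nio://0.0.0.0:61612");
        broker.addConnector("tcp://0.0.0.0:61616");

        // "No flow control": disable producer flow control for all destinations.
        PolicyEntry defaults = new PolicyEntry();
        defaults.setProducerFlowControl(false);
        PolicyMap policy = new PolicyMap();
        policy.setDefaultEntry(defaults);
        broker.setDestinationPolicy(policy);

        // One one-way (non-duplex) bridge from this node to every other node.
        for (String peer : new String[] {"node-b", "node-c"}) { // placeholder peers
            NetworkConnector nc = broker.addNetworkConnector("static:(tcp://" + peer + ":61616)");
            nc.setDuplex(false);
        }

        broker.start();
        broker.waitUntilStopped();
    }
}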
Problem details
The web console shows something like 1974234 enqueued and 45345 dequeued messages with 30 consumers (6 brokers, one consumer, and the rest are clients that use the Java connector). As far as I know, for a topic the dequeue count should not be much smaller than enqueued * consumers (roughly 59 million here), so in my case a large chunk of advisories is never consumed and starts to fill my temp message space. (I currently have several GB configured as temp space.)
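For reference, a sketch of how those counters could be read over JMX instead of through the web console, assuming JMX is enabled on the broker; the service URL, broker name, and ObjectName layout are placeholders, and the naming scheme changed around ActiveMQ 5.8, so adjust them for your version.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class AdvisoryCounters {
    public static void main(String[] args) throws Exception {
        // Placeholder JMX URL; adjust host/port to your broker.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://srv007210:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();

            // ObjectName layout for ActiveMQ 5.8+; older releases use
            // BrokerName=...,Type=Topic,Destination=... instead.
            ObjectName advisoryTopic = new ObjectName(
                    "org.apache.activemq:type=Broker,brokerName=localhost,"
                    + "destinationType=Topic,destinationName=ActiveMQ.Advisory.TempQueue");

            long enqueued = (Long) mbs.getAttribute(advisoryTopic, "EnqueueCount");
            long dequeued = (Long) mbs.getAttribute(advisoryTopic, "DequeueCount");
            long consumers = (Long) mbs.getAttribute(advisoryTopic, "ConsumerCount");

            // For a topic every consumer dequeues its own copy, so the expected
            // dequeue count is roughly enqueued * consumers.
            System.out.printf("enqueued=%d dequeued=%d expected~=%d%n",
                    enqueued, dequeued, enqueued * consumers);
        }
    }
}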
Since no client actively uses temp queues, I find this very strange. After taking a look at the temp queue I'm even more confused. Most of the messages look like this (output of msg.toString()):
ActiveMQMessage {commandId = 0, responseRequired = false, messageId = ID:srv007210-36808-1318839718378-1:1:0:0:203650, originalDestination = null, originalTransactionId = null, producerId = ID:srv007210-36808-1318839718378-1:1:0:0, destination = topic://ActiveMQ.Advisory.TempQueue, transactionId = null, expiration = 0, timestamp = 0, arrival = 0, brokerInTime = 1318840153501, brokerOutTime = 1318840153501, correlationId = null, replyTo = null, persistent = false, type = Advisory, priority = 0, groupID = null, groupSequence = 0, targetConsumerId = null, compressed = false, userID = null, content = null, marshalledProperties = org.apache.activemq.util.ByteSequence@45290155, dataStructure = DestinationInfo {commandId = 0, responseRequired = false, connectionId = ID:srv007210-36808-1318839718378-2:2, destination = temp-queue://ID:srv007211-47019-1318835590753-11:9:1, operationType = 1, timeout = 0, brokerPath = null}, redeliveryCounter = 0, size = 0, properties = {originBrokerName=broker.coremq-behaviortracking-675-mq-01-master, originBrokerId=ID:srv007210-36808-1318839718378-0:1, originBrokerURL=stomp://srv007210:61612}, readOnlyProperties = true, readOnlyBody = true, droppable = false}
After seeing these messages I have several questions:
Currently I have sort of postponed the problem by deactivating the bridgeTempDestinations property on the network connectors. This way the messages are not spread across the network and they fill the temp space much more slowly. If I cannot fix the source of these messages I would at least like to stop them from filling the store.
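For completeness, a hedged sketch of that workaround applied to the network-connector code above; with a plain activemq.xml the equivalent is the bridgeTempDestinations attribute on the networkConnector element. The peer URI and broker name are placeholders.

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.network.NetworkConnector;

public class MeshNodeNoTempBridge {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("node-a"); // placeholder name

        NetworkConnector nc = broker.addNetworkConnector("static:(tcp://node-b:61616)"); // placeholder peer
        nc.setDuplex(false);
        // Do not forward temp-destination advisories across this bridge.
        nc.setBridgeTempDestinations(false);

        broker.start();
        broker.waitUntilStopped();
    }
}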
UPDATE: I monitored my cluster some more and found out that the messages are consumed. They are enqueued and dispatched, but the consumers (the other cluster nodes as well as the Java consumers that use the ActiveMQ lib) fail to acknowledge the messages, so they stay in the dispatched-messages queue and this queue grows and grows.
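For what it's worth, a sketch of how that unacknowledged backlog could be inspected per subscription over JMX; the host, broker name, and ObjectName pattern are placeholders assuming a newer (5.8+) naming scheme, so treat the exact keys as an assumption.

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class StuckSubscriptions {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://srv007210:1099/jmxrmi"); // placeholder host/port
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();

            // Query the consumer subscriptions on the advisory topic
            // (key names differ on older ActiveMQ releases).
            Set<ObjectName> subs = mbs.queryNames(new ObjectName(
                    "org.apache.activemq:type=Broker,brokerName=localhost,"
                    + "destinationType=Topic,destinationName=ActiveMQ.Advisory.TempQueue,"
                    + "endpoint=Consumer,*"), null);

            for (ObjectName sub : subs) {
                // Messages dispatched to the consumer but not yet acknowledged.
                int dispatchedUnacked = (Integer) mbs.getAttribute(sub, "DispatchedQueueSize");
                long acknowledged = (Long) mbs.getAttribute(sub, "DequeueCounter");
                System.out.printf("%s dispatched-unacked=%d acked=%d%n",
                        sub, dispatchedUnacked, acknowledged);
            }
        }
    }
}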
This is an old thread, but in case somebody runs into the same problem, you might want to check out this post: http://forum.spring.io/forum/spring-projects/integration/111989-jms-outbound-gateway-temporary-queues-never-deleted
The problem in that link sounds similar, i.e. temp queues producing a large volume of advisory messages. In my case, we were using temp queues to implement synchronous request/response messaging, but the volume of advisory messages caused ActiveMQ to spend most of its time in GC and eventually throw a GC Overhead Limit Exceeded exception. This was on v5.11.1. Even though we closed the connection, session, producer, and consumer, the temp queue would not be GC'd and would continue receiving advisory messages.
The solution was to explicitly delete the temp queues when cleaning up the other resources (see https://docs.oracle.com/javaee/7/api/javax/jms/TemporaryQueue.html).
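For example, a minimal JMS request/response sketch with that explicit cleanup, assuming an ActiveMQ connection factory; the broker URL and request queue name are placeholders.

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TemporaryQueue;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class RequestResponseClient {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616"); // placeholder URL
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        Queue requestQueue = session.createQueue("service.requests"); // placeholder name
        TemporaryQueue replyQueue = session.createTemporaryQueue();
        MessageProducer producer = session.createProducer(requestQueue);
        MessageConsumer consumer = session.createConsumer(replyQueue);
        try {
            TextMessage request = session.createTextMessage("ping");
            request.setJMSReplyTo(replyQueue);
            producer.send(request);
            Message reply = consumer.receive(5000);
            System.out.println("reply: " + reply);
        } finally {
            // Close the resources AND explicitly delete the temp queue, otherwise
            // the broker keeps producing advisories for it (see the linked Javadoc).
            consumer.close();
            producer.close();
            replyQueue.delete();
            session.close();
            connection.close();
        }
    }
}

Note that the consumer on the temporary queue is closed before delete() is called, since the JMS spec makes delete() fail while the queue still has active consumers.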