It is related to my previous question.
I set -Xms to 512M and -Xmx to 6G for one Java process, and I have three such processes.
My total RAM is 32 GB, of which 2 GB is always occupied.
I ran the free command to confirm that at least 27 GB was free, and my jobs required at most 18 GB at any time.
Everything was running fine. Each job reserved around 4 to 5 GB and actually used around 3 to 4 GB. I understand that -Xmx doesn't mean the process must always occupy 6 GB.
When another process was started on the same server by another user, it occupied 14 GB, and then one of my processes failed.
I understand that I need to add RAM or keep the colliding jobs from running at the same time.
The question is: how can I force my job to always use 6 GB, and why does it throw a GC limit reached error in this case?
I monitored the processes with VisualVM and also with jstat.
Any advice is welcome.
Simple answer: -Xmx is not a hard limit on the JVM process. It only limits the heap available to Java code inside the JVM. Lower your -Xmx and you may stabilize total process memory at a size that suits you.
Long answer: the JVM is a complex machine. Think of it as an OS for your Java code. The virtual machine needs extra memory for its own housekeeping (e.g. GC metadata), memory occupied by thread stacks, "off-heap" memory (e.g. memory allocated by native code through JNI, direct buffers), etc.
-Xmx only limits the heap size for objects: the memory that's dealt with directly in your Java code. Everything else is not accounted for by this setting.
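You can see the gap between the heap ceiling and the whole process by querying the limit from inside the JVM; a minimal sketch (the class name is illustrative):

```java
// Run with e.g.: java -Xmx6g HeapCeiling
public class HeapCeiling {
    public static void main(String[] args) {
        // maxMemory() reports (approximately) the -Xmx heap ceiling
        long maxHeapMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap (-Xmx): ~" + maxHeapMb + " MB");
        // The process RSS you see in top/ps will exceed this figure:
        // thread stacks, metaspace, the code cache, GC bookkeeping and
        // native/direct buffers all live outside the -Xmx limit.
    }
}
```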
There's a newer JVM setting, -XX:MaxRAM (1, 2), that tries to keep the entire process memory within that limit.
From your other question:
It is multithreaded: 100 reader and 100 writer threads, each with its own connection to the database.
Keep in mind that the OS' I/O buffers also need memory for their own function.
If you have over 200 threads, you also pay the price: N*(stack size), plus approximately N*(TLAB size) reserved in the young generation for each thread (dynamically resizable):
java -Xss1024k -XX:+PrintFlagsFinal 2> /dev/null | grep -i tlab
size_t MinTLABSize = 2048
intx ThreadStackSize = 1024
Approximately half a gigabyte just for this (and probably more)!
Thread Stack Size (in Kbytes). (0 means use default stack size) [Sparc: 512; Solaris x86: 320 (was 256 prior in 5.0 and earlier); Sparc 64 bit: 1024; Linux amd64: 1024 (was 0 in 5.0 and earlier); all others 0.] - Java HotSpot VM Options; Linux x86 JDK source
In short: -Xss (stack size) defaults depend on the VM and OS environment.
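The stack part of that bill is simple arithmetic; a sketch for the 200-thread case above (the class name is illustrative; values assume the Linux amd64 default of 1024 KB per stack):

```java
public class ThreadOverhead {
    public static void main(String[] args) {
        int threads = 200;        // 100 readers + 100 writers
        long stackKb = 1024;      // -Xss default on Linux amd64
        long stacksMb = threads * stackKb / 1024;
        // Prints: Reserved for thread stacks alone: ~200 MB
        System.out.println("Reserved for thread stacks alone: ~" + stacksMb + " MB");
        // TLABs come on top of this; they live in the young generation
        // and are resized dynamically, so the real figure is higher.
    }
}
```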
Thread Local Allocation Buffers are more intricate: they reduce allocation contention/resource locking. Explanation of the setting here; for their function, see TLAB allocation and TLABs and Heap Parsability.
Further reading: "Native Memory Tracking" and Q: "Java using much more memory than heap size"
why does it throw GC limit reached error in this case.
The full message is "GC overhead limit exceeded". In short: each GC cycle reclaimed too little memory, and the JVM's ergonomics decided to abort. Your process needs more memory.
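That criterion can be observed from inside the process with the standard GarbageCollectorMXBean; a minimal sketch (the class name is illustrative; the 98%/2% thresholds are HotSpot defaults, tunable via -XX:GCTimeLimit and -XX:GCHeapFreeLimit):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcTime {
    public static void main(String[] args) {
        long totalGcMs = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            // getCollectionTime() may return -1 if unsupported
            totalGcMs += Math.max(0, gc.getCollectionTime());
        }
        long uptimeMs = ManagementFactory.getRuntimeMXBean().getUptime();
        System.out.printf("GC time: %d ms of %d ms uptime%n", totalGcMs, uptimeMs);
        // HotSpot throws "GC overhead limit exceeded" when ~98% of time
        // goes to GC while reclaiming less than ~2% of the heap.
    }
}
```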
When another process was started on the same server by another user, it occupied 14 GB, and then one of my processes failed.
One more point about running multiple large-memory processes back-to-back; consider this:
java -Xms28g -Xmx28g <...>;
# above process finishes
java -Xms28g -Xmx28g <...>; # crashes, can't allocate enough memory
When the first process finishes, the OS needs some time to zero out the memory deallocated by the exiting process before it can hand those physical memory regions to the second process. Until then you cannot start another "big" process that immediately asks for the full 28 GB of heap (observed on WinNT 6.1). This can be worked around with:
- a lower -Xms, so the big allocation happens later in the 2nd process's lifetime
- a lower -Xmx heap