2022-05-04

Java Memory Usage Optimization

So there is this little-known memory usage optimization that changes how glibc allocates thread-specific memory.

Someone wrote the best roundup I've found on the net so far: Major Bug in glibc is Killing Applications With a Memory Limit. I strongly suggest reading it.

For now, let me just quote the important part:

Long story short, this is due to a bug in malloc(). Well, it's not a bug, it's a feature.

malloc() preallocates large chunks of memory, per thread. This is meant as a performance optimization, to reduce memory contention in highly threaded applications.

In a 32-bit runtime, it can preallocate up to 2 * 64 MB * cores.

In a 64-bit runtime, it can preallocate up to 8 * 64 MB * cores.

So the math is like: _NPROCESSORS_ONLN * $MALLOC_ARENA_MAX * Arena Size

Bonus content: getconf _NPROCESSORS_ONLN returns almost the same thing as nproc (nproc actually reports sysconf(_SC_NPROCESSORS_CONF)). So if you are using a container engine like Kubernetes, this equation uses the node's core count, not the CPU shares the cgroup grants to the pod.

Where do those numbers come from? Check here: https://www.gnu.org/software/libc/manual/html_node/Memory-Allocation-Tunables.html

Arena Size is usually 64 MB. Why is this a problem?

The first malloc in each thread triggers a 128MB mmap which typically is the initialization of thread-local storage.

-- https://bugs.openjdk.java.net/browse/JDK-8193521

For every thread created, a new arena is allocated. But even if you don't create any threads, the preallocation happens according to the equation above. Huge memory waste.

If creating more arenas is denied, the thread instead falls back to the "main" arena or the native program heap, which is unbounded.

(The main arena can grow via brk()/sbrk().)

So the most useful fix is to set the environment variable MALLOC_ARENA_MAX to a small value, such as 4.