Hello, I was trying to reproduce some out-of-memory problems and I think I was able to reproduce them with a very simple setup. I have a Spring Boot project with WebFlux (v3.5.3) where I enable the following actuator endpoints:
```yaml
management:
  endpoints:
    web:
      exposure:
        include: prometheus,health
```
I configured a WebClient whose only customization is a ServerOAuth2AuthorizedClientExchangeFilterFunction for OAuth2 (a rough sketch of that configuration follows the snippet below). I then defined the following:
```kotlin
private val counter = AtomicLong()

suspend fun <T> get(fqdn: String, typeReference: ParameterizedTypeReference<T>, clientRegistrationId: String = "OAuth2") {
    webClient.get()
        .uri(fqdn)
        .attributes(ServerOAuth2AuthorizedClientExchangeFilterFunction.clientRegistrationId(clientRegistrationId))
        .retrieve()
        .bodyToMono(typeReference)
        .awaitSingle()
        .let {
            if (counter.incrementAndGet().mod(45L) == 0L) {
                log.info("GET {} succeeded.", fqdn)
            }
        }
}
```
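For context, the WebClient is built roughly like this. The bean wiring below is only a sketch on my part (the configuration class name and the repository-based constructor are assumptions); the point is simply that the OAuth2 exchange filter is the single customization:

```kotlin
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.security.oauth2.client.registration.ReactiveClientRegistrationRepository
import org.springframework.security.oauth2.client.web.reactive.function.client.ServerOAuth2AuthorizedClientExchangeFilterFunction
import org.springframework.security.oauth2.client.web.server.ServerOAuth2AuthorizedClientRepository
import org.springframework.web.reactive.function.client.WebClient

@Configuration
class WebClientConfig {

    // Sketch: the only customization is the OAuth2 exchange filter function.
    @Bean
    fun webClient(
        clientRegistrations: ReactiveClientRegistrationRepository,
        authorizedClients: ServerOAuth2AuthorizedClientRepository
    ): WebClient {
        val oauth2 = ServerOAuth2AuthorizedClientExchangeFilterFunction(clientRegistrations, authorizedClients)
        return WebClient.builder()
            .filter(oauth2)
            .build()
    }
}
```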
I defined three @Scheduled functions that call this get every second (a rough sketch of those callers is below), and I run the application with these flags: -XX:MaxDirectMemorySize=100k -Dio.netty.maxDirectMemory=0. That's pretty much it.
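The callers look roughly like this; class names and target URLs are placeholders (ApiClient stands for the class containing the get function above), and scheduling is enabled with @EnableScheduling on the application class:

```kotlin
import kotlinx.coroutines.runBlocking
import org.springframework.core.ParameterizedTypeReference
import org.springframework.scheduling.annotation.Scheduled
import org.springframework.stereotype.Component

// Placeholder component; the application is started with:
// -XX:MaxDirectMemorySize=100k -Dio.netty.maxDirectMemory=0
@Component
class ScheduledCallers(private val client: ApiClient) {

    private val type = object : ParameterizedTypeReference<Map<String, Any>>() {}

    // Each caller fires once per second; runBlocking just bridges into the suspend fun for this repro.
    @Scheduled(fixedRate = 1000)
    fun callServiceA() = runBlocking { client.get("https://service-a.internal/api", type) }

    @Scheduled(fixedRate = 1000)
    fun callServiceB() = runBlocking { client.get("https://service-b.internal/api", type) }

    @Scheduled(fixedRate = 1000)
    fun callServiceC() = runBlocking { client.get("https://service-c.internal/api", type) }
}
```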
For 30 minutes or so, this ran without issues. I then performed a GET against /actuator/prometheus to look at jvm_buffer_memory_used_bytes. Shortly after that, many exceptions like this one were logged:
```
2025-08-19 09:56:34,897 WARN reactor.core.Exceptions:304 [reactor-http-nio-11] - throwIfFatal detected a jvm fatal exception, which is thrown and logged below:
java.lang.OutOfMemoryError: Cannot reserve 32768 bytes of direct buffer memory (allocated: 102295, limit: 102400)
    at java.base/java.nio.Bits.reserveMemory(Bits.java:178)
    at java.base/java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:111)
    at java.base/java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:360)
    at io.netty.buffer.UnpooledDirectByteBuf.allocateDirect(UnpooledDirectByteBuf.java:104)
    at io.netty.buffer.UnpooledDirectByteBuf.<init>(UnpooledDirectByteBuf.java:64)
    at io.netty.buffer.UnpooledUnsafeDirectByteBuf.<init>(UnpooledUnsafeDirectByteBuf.java:41)
    at io.netty.buffer.UnsafeByteBufUtil.newUnsafeDirectByteBuf(UnsafeByteBufUtil.java:687)
    at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:406)
    at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:188)
    at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:179)
    at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:140)
    at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:120)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:150)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:796)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:732)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:658)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:998)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.base/java.lang.Thread.run(Thread.java:1583)
```
Somehow, this also completely stopped the scheduled functions from running.
Is it possible the Prometheus machinery is keeping references to buffers that then cannot be freed?
Comment From: bclozel
Sorry for the delayed feedback, but this issue isn't really actionable at this point.
This report mixes Spring Framework, Spring Boot, Spring Security, the Prometheus support, and a very low direct memory configuration; the behavior could be caused by many different factors.
From what I understand, this is not really a memory leak but most probably something similar to https://github.com/spring-projects/spring-framework/issues/35006. If both Netty and the JDK compete for a very small amount of direct memory while the application actively tries to use more, I don't think this qualifies as a leak or a bug.
I'll close this issue for now, but we can reopen it if you can provide a minimal sample application that drastically reduces the number of dependencies and simplifies the scenario.
Comment From: asardaes
Well, for completeness, this also happens in our services when we allocate more direct memory; it just takes weeks instead of minutes.
Comment From: bclozel
@asardaes that's good to know. If you can come up with a minimal sample and remove as much as possible from the picture, I can have a look.