I'm seeing some weird behavior when attempting to combine the 3 annotations from the subject, for example:

@Async
@ConcurrencyLimit(1)
@Transactional
void doWork() throws Exception {
    logger.info("Starting work");
    Thread.sleep(Duration.ofSeconds(10));
    logger.info("Finished work");
}

On the first two invocations of such a method, @ConcurrencyLimit doesn't appear to work, but from there on it starts working OK. Here's the log output I'm seeing after invoking the service 3 times after startup, with the invocations about a second apart:

2025-11-10T13:15:44.224+01:00  INFO 906182 --- [         task-1] com.example.SampleService                : Starting work
2025-11-10T13:15:45.766+01:00  INFO 906182 --- [         task-2] com.example.SampleService                : Starting work
2025-11-10T13:15:54.225+01:00  INFO 906182 --- [         task-1] com.example.SampleService                : Finished work
2025-11-10T13:15:55.767+01:00  INFO 906182 --- [         task-2] com.example.SampleService                : Finished work
2025-11-10T13:15:55.768+01:00  INFO 906182 --- [         task-3] com.example.SampleService                : Starting work
2025-11-10T13:16:05.769+01:00  INFO 906182 --- [         task-3] com.example.SampleService                : Finished work

Removing @Transactional from the mix makes everything work as expected. I'll attach a reproducer shortly.

mre-spring-framework-35788.zip

Comment From: jhoeller

For a start, this works fine for me in a local Spring Framework integration test. I'll see what difference the Boot setup in the provided repro makes.

Comment From: mdeinum

Additionally, I ran a quick test: making the method public in the service makes it work as well.

Comment From: jhoeller

Hmm, in my local integration test, it works with and without public modifiers on the service class and method. Diving into the Boot setup in a moment here.

Comment From: jhoeller

I cannot reproduce this in the provided Boot setup either. @vpavic could you try to double-check what exactly is happening for you in a local run there?

Comment From: vpavic

Adding logging.level.org.springframework.aop.interceptor.ConcurrencyThrottleInterceptor=debug to application.properties, starting the sample, and then issuing 3 GET / requests about a second apart yields this:

2025-11-10T15:26:05.349+01:00 DEBUG 919406 --- [         task-1] o.s.a.i.ConcurrencyThrottleInterceptor   : Entering throttle at concurrency count 0
2025-11-10T15:26:05.354+01:00  INFO 919406 --- [         task-1] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Starting...
2025-11-10T15:26:05.559+01:00  INFO 919406 --- [         task-1] com.zaxxer.hikari.pool.HikariPool        : HikariPool-1 - Added connection conn0: url=jdbc:h2:mem:67c55b76-1ffd-45e9-af37-0ff9e8a3a4b2 user=SA
2025-11-10T15:26:05.561+01:00  INFO 919406 --- [         task-1] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Start completed.
2025-11-10T15:26:05.569+01:00  INFO 919406 --- [         task-1] com.example.SampleService                : Starting work
2025-11-10T15:26:06.915+01:00 DEBUG 919406 --- [         task-2] o.s.a.i.ConcurrencyThrottleInterceptor   : Entering throttle at concurrency count 0
2025-11-10T15:26:06.915+01:00  INFO 919406 --- [         task-2] com.example.SampleService                : Starting work
2025-11-10T15:26:08.628+01:00 DEBUG 919406 --- [         task-3] o.s.a.i.ConcurrencyThrottleInterceptor   : Concurrency count 1 has reached limit 1 - blocking
2025-11-10T15:26:15.570+01:00  INFO 919406 --- [         task-1] com.example.SampleService                : Finished work
2025-11-10T15:26:15.579+01:00 DEBUG 919406 --- [         task-1] o.s.a.i.ConcurrencyThrottleInterceptor   : Returning from throttle at concurrency count 0
2025-11-10T15:26:16.916+01:00  INFO 919406 --- [         task-2] com.example.SampleService                : Finished work
2025-11-10T15:26:16.917+01:00 DEBUG 919406 --- [         task-2] o.s.a.i.ConcurrencyThrottleInterceptor   : Returning from throttle at concurrency count 0
2025-11-10T15:26:16.917+01:00 DEBUG 919406 --- [         task-3] o.s.a.i.ConcurrencyThrottleInterceptor   : Entering throttle at concurrency count 0
2025-11-10T15:26:16.918+01:00  INFO 919406 --- [         task-3] com.example.SampleService                : Starting work
2025-11-10T15:26:26.918+01:00  INFO 919406 --- [         task-3] com.example.SampleService                : Finished work
2025-11-10T15:26:26.919+01:00 DEBUG 919406 --- [         task-3] o.s.a.i.ConcurrencyThrottleInterceptor   : Returning from throttle at concurrency count 0

What I missed originally is that data source initialization happens in the task-1 thread on the first invocation of the service. Do your attempts to reproduce this also include such a sequence?

Comment From: jhoeller

I see the same sequence, but instead of task-2 entering the throttle again at concurrency count 0, I see two "Concurrency count 1 has reached limit 1 - blocking" lines, from task-2 as well as task-3. So while it's similar to your log, I still cannot reproduce the oddity.

My impression is that this could be a race condition in the initialization of the per-method ConcurrencyThrottleInterceptor instance that holds the current count, initially using two such ConcurrencyThrottleInterceptor instances for the same method but then continuing to use the same cached instance from the third call onwards. This instance management happens in ConcurrencyLimitBeanPostProcessor.ConcurrencyLimitInterceptor, so I suppose we'll have to review the nuances there.
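
Schematically, the suspected failure mode would be a plain check-then-create race on that per-method instance (a simplified, hypothetical sketch, not the actual Spring source; ThrottleRegistry and the int[] counter are stand-ins):

import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of a check-then-create race: two threads can
// both observe the missing entry and each end up with their own counter
// instance for the same method.
class ThrottleRegistry {

    private final Map<String, int[]> countersByMethod = new HashMap<>();

    int[] counterFor(String methodKey) {
        int[] counter = countersByMethod.get(methodKey);
        if (counter == null) {             // both threads may see null here...
            counter = new int[1];          // ...and both create a counter
            countersByMethod.put(methodKey, counter);
        }
        return counter;                    // distinct instances -> limit not enforced
    }
}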

Comment From: jhoeller

At the moment, I fail to see an obvious race condition in ConcurrencyLimitBeanPostProcessor.ConcurrencyLimitInterceptor. Any hints or educated guesses welcome, also toward other potential causes for the behavior observed by @vpavic...

Comment From: vpavic

If I increase the sleep duration to 30 seconds and try with 10+ seconds between the first two invocations, I still see the same result:

2025-11-10T17:09:06.677+01:00 DEBUG 940137 --- [         task-1] o.s.a.i.ConcurrencyThrottleInterceptor   : Entering throttle at concurrency count 0
2025-11-10T17:09:06.682+01:00  INFO 940137 --- [         task-1] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Starting...
2025-11-10T17:09:06.890+01:00  INFO 940137 --- [         task-1] com.zaxxer.hikari.pool.HikariPool        : HikariPool-1 - Added connection conn0: url=jdbc:h2:mem:3c1b9793-075e-41e9-8011-5363852a6471 user=SA
2025-11-10T17:09:06.893+01:00  INFO 940137 --- [         task-1] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Start completed.
2025-11-10T17:09:06.902+01:00  INFO 940137 --- [         task-1] com.example.SampleService                : Starting work
2025-11-10T17:09:18.264+01:00 DEBUG 940137 --- [         task-2] o.s.a.i.ConcurrencyThrottleInterceptor   : Entering throttle at concurrency count 0
2025-11-10T17:09:18.265+01:00  INFO 940137 --- [         task-2] com.example.SampleService                : Starting work
2025-11-10T17:09:36.904+01:00  INFO 940137 --- [         task-1] com.example.SampleService                : Finished work
2025-11-10T17:09:36.914+01:00 DEBUG 940137 --- [         task-1] o.s.a.i.ConcurrencyThrottleInterceptor   : Returning from throttle at concurrency count 0
2025-11-10T17:09:48.265+01:00  INFO 940137 --- [         task-2] com.example.SampleService                : Finished work
2025-11-10T17:09:48.266+01:00 DEBUG 940137 --- [         task-2] o.s.a.i.ConcurrencyThrottleInterceptor   : Returning from throttle at concurrency count 0

I tried placing a breakpoint at ConcurrencyLimitBeanPostProcessor:114 (the ConcurrencyThrottleInterceptor instantiation) and debugging. In that case it's much harder to reproduce, as it works no matter how quickly I resume after hitting the breakpoint on the first request. However, I've managed a few times to run into a scenario where the second request hits the same breakpoint as well.

I'm running Zulu 21.0.9 JDK on Ubuntu 25.10 (kernel 6.17.0). I'll try on another machine a bit later.

Comment From: jhoeller

Thanks for your efforts there, @vpavic! So it looks even more like a race condition in that instance management now.

My current suspicion is that it's actually in ConcurrentReferenceHashMap, which we use for the initial ConcurrencyThrottleCache instance: ConcurrentReferenceHashMap does not provide a specific implementation of computeIfAbsent, so it inherits the default implementation from the ConcurrentMap interface, with unsynchronized get and putIfAbsent calls. If that goes wrong and returns two distinct ConcurrencyThrottleCache instances when run concurrently, I'd expect exactly the behavior that you are seeing.

That said, while it may temporarily create more than one ConcurrencyThrottleCache instance when run concurrently, even the default computeIfAbsent implementation should only ever return the instance that first made it through the putIfAbsent call, so I'm still not quite seeing the actual root cause of the race condition yet.
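
For reference, that default implementation is essentially the following (paraphrased from the JDK sources):

// Paraphrased from java.util.concurrent.ConcurrentMap: the default
// computeIfAbsent composes individually-atomic get and putIfAbsent calls.
// The mapping function may run in several threads at once, but only the
// value that wins the putIfAbsent race is ever returned.
default V computeIfAbsent(K key, Function<? super K, ? extends V> mappingFunction) {
    Objects.requireNonNull(mappingFunction);
    V v, newValue;
    return ((v = get(key)) == null
            && (newValue = mappingFunction.apply(key)) != null
            && (v = putIfAbsent(key, newValue)) == null) ? newValue : v;
}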

If the general suspicion is right, this should be noticeable in the debugger through different ConcurrencyThrottleCache instances showing up in ConcurrencyLimitInterceptor, despite only one @ConcurrencyLimit method.

Comment From: vpavic

Everything behaves the same on my other machine (Ubuntu 24.04, kernel 6.8.0).

If the general suspicion is right, this should be noticeable in the debugger through different ConcurrencyThrottleCache instances showing up in ConcurrencyLimitInterceptor, despite only one @ConcurrencyLimit method.

This is correct. When I reproduce while debugging, the ConcurrencyThrottleCache instance in the first invocation is different from the one in the second invocation (which then stays the same in further invocations).

Comment From: vpavic

It's definitely the data source initialization that somehow causes a different cache instance to be returned. I've modified the sample service to this:

@Component
class SampleService {

    static final Logger logger = LogManager.getLogger();

    @Autowired
    DataSource dataSource;

    @Async
    @ConcurrencyLimit(1)
    void doWork() throws Exception {
        logger.info("Starting work");
        if ("task-3".equals(Thread.currentThread().getName())) {
            this.dataSource.getConnection();
        }
        Thread.sleep(Duration.ofSeconds(10));
        logger.info("Finished work");
    }

}

After starting the sample, make 3 GET / requests. Throttling will work fine, and when the 3rd invocation executes, data source initialization will be logged from task-3. At that point, make another GET / request; it will result in a different cache instance being returned, causing concurrency count 0 to be observed in task-4:

2025-11-11T08:06:59.898+01:00 DEBUG 1000367 --- [         task-1] o.s.a.i.ConcurrencyThrottleInterceptor   : Entering throttle at concurrency count 0
2025-11-11T08:06:59.899+01:00  INFO 1000367 --- [         task-1] com.example.SampleService                : Starting work
2025-11-11T08:07:01.263+01:00 DEBUG 1000367 --- [         task-2] o.s.a.i.ConcurrencyThrottleInterceptor   : Concurrency count 1 has reached limit 1 - blocking
2025-11-11T08:07:02.405+01:00 DEBUG 1000367 --- [         task-3] o.s.a.i.ConcurrencyThrottleInterceptor   : Concurrency count 1 has reached limit 1 - blocking
2025-11-11T08:07:09.900+01:00  INFO 1000367 --- [         task-1] com.example.SampleService                : Finished work
2025-11-11T08:07:09.901+01:00 DEBUG 1000367 --- [         task-1] o.s.a.i.ConcurrencyThrottleInterceptor   : Returning from throttle at concurrency count 0
2025-11-11T08:07:09.902+01:00 DEBUG 1000367 --- [         task-2] o.s.a.i.ConcurrencyThrottleInterceptor   : Entering throttle at concurrency count 0
2025-11-11T08:07:09.902+01:00  INFO 1000367 --- [         task-2] com.example.SampleService                : Starting work
2025-11-11T08:07:19.902+01:00  INFO 1000367 --- [         task-2] com.example.SampleService                : Finished work
2025-11-11T08:07:19.903+01:00 DEBUG 1000367 --- [         task-2] o.s.a.i.ConcurrencyThrottleInterceptor   : Returning from throttle at concurrency count 0
2025-11-11T08:07:19.904+01:00 DEBUG 1000367 --- [         task-3] o.s.a.i.ConcurrencyThrottleInterceptor   : Entering throttle at concurrency count 0
2025-11-11T08:07:19.904+01:00  INFO 1000367 --- [         task-3] com.example.SampleService                : Starting work
2025-11-11T08:07:19.904+01:00  INFO 1000367 --- [         task-3] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Starting...
2025-11-11T08:07:20.107+01:00  INFO 1000367 --- [         task-3] com.zaxxer.hikari.pool.HikariPool        : HikariPool-1 - Added connection conn0: url=jdbc:h2:mem:87b29653-b3d3-45fa-980f-2f68bb8c9330 user=SA
2025-11-11T08:07:20.110+01:00  INFO 1000367 --- [         task-3] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Start completed.
2025-11-11T08:07:21.989+01:00 DEBUG 1000367 --- [         task-4] o.s.a.i.ConcurrencyThrottleInterceptor   : Entering throttle at concurrency count 0
2025-11-11T08:07:21.989+01:00  INFO 1000367 --- [         task-4] com.example.SampleService                : Starting work
2025-11-11T08:07:30.113+01:00  INFO 1000367 --- [         task-3] com.example.SampleService                : Finished work
2025-11-11T08:07:30.113+01:00 DEBUG 1000367 --- [         task-3] o.s.a.i.ConcurrencyThrottleInterceptor   : Returning from throttle at concurrency count 0
2025-11-11T08:07:31.990+01:00  INFO 1000367 --- [         task-4] com.example.SampleService                : Finished work
2025-11-11T08:07:31.991+01:00 DEBUG 1000367 --- [         task-4] o.s.a.i.ConcurrencyThrottleInterceptor   : Returning from throttle at concurrency count 0

Comment From: jhoeller

In the meantime, I can reproduce this locally with a certain not-too-quick, not-too-slow pattern of sending GET requests against the provided Boot app. That is, even without DataSource initialization involved: I can reproduce this against your latest code above, but between the first and the second request rather than the third!

As far as I could isolate it, it's indeed ConcurrentReferenceHashMap#computeIfAbsent returning two distinct ConcurrencyThrottleCache instances for the same target object - which should never happen according to the contract. That said, I have not been able to reproduce this in a local ConcurrentReferenceHashMap unit test yet.
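
The kind of stress test I'm attempting looks roughly like this (a hypothetical sketch, not the actual test; ConcurrentHashMap is just a runnable stand-in for the map under test): all racing threads must observe the very same value instance.

import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: N threads race on computeIfAbsent for the same key;
// per the Map contract, they must all see the exact same value instance.
class ComputeIfAbsentStressTest {

    public static void main(String[] args) throws Exception {
        // Stand-in map; swap in ConcurrentReferenceHashMap to probe the suspect.
        Map<String, Object> map = new ConcurrentHashMap<>();
        int threads = 16;
        Set<Object> seen = ConcurrentHashMap.newKeySet();
        CyclicBarrier barrier = new CyclicBarrier(threads);
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < threads; i++) {
            pool.execute(() -> {
                try {
                    barrier.await(); // line up all threads to maximize contention
                    seen.add(map.computeIfAbsent("key", k -> new Object()));
                }
                catch (Exception ex) {
                    throw new IllegalStateException(ex);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("distinct values observed: " + seen.size()); // must be 1
    }
}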

Comment From: jhoeller

Running against a revised ConcurrentReferenceHashMap with a dedicated computeIfAbsent implementation (as well as similar computeIfPresent, compute and merge implementations), I cannot reproduce the issue anymore.
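
The general shape of such a dedicated implementation is to perform the absence check and the insertion under the same lock, so a freshly created value can never be silently discarded (a simplified sketch with a single map-wide lock; the actual revision works on ConcurrentReferenceHashMap's internal segment structure):

import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Simplified, hypothetical sketch of an atomic computeIfAbsent: unlike the
// ConcurrentMap default, the get and the put happen under one lock.
class LockedMap<K, V> {

    private final Map<K, V> map = new HashMap<>();

    public synchronized V computeIfAbsent(K key, Function<? super K, ? extends V> mappingFunction) {
        V value = map.get(key);
        if (value == null) {
            value = mappingFunction.apply(key);
            if (value != null) {
                map.put(key, value);
            }
        }
        return value;
    }

    public synchronized V get(K key) {
        return map.get(key);
    }
}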

I'll add the revised ConcurrentReferenceHashMap in a separate issue for 6.2.13, along with corresponding unit tests. Let's see whether our problem here is actually just dependent on that then.

Comment From: jhoeller

I've pushed the 7.0/JSpecify-aligned ConcurrentReferenceHashMap revision to main for a start. The 6.2.13 backport is still in progress under #35794.

@vpavic please give the latest 7.0.0-SNAPSHOT a try and see whether you can reproduce any @ConcurrencyLimit race condition against it!

Comment From: vpavic

Unfortunately, after pulling in the latest snapshots I can still reproduce the issue the same way as I did before.

Interesting that you were able to reproduce it even without the data source initialization; I was never able to achieve that.

Comment From: jhoeller

Hmm, double-checking a few things: Can you still see more than one ConcurrencyThrottleCache instance in use there, against the very same target object as a key? Actually executing against ConcurrentReferenceHashMap with the new dedicated computeIfAbsent implementation (that is, not landing in the ConcurrentMap.computeIfAbsent default method in the debugger)?

Comment From: jhoeller

Trying a few variants locally, I can occasionally reproduce my previously observed issue on the first requests by commenting out the computeIfAbsent implementation in ConcurrentReferenceHashMap (in a local copy in the repro sources, easy to play with). Putting computeIfAbsent back into place, the problem is no longer reproducible for me. Manually added log output of the target and cache instances shows that they are the exact same instances after the fix, with the cache instances diverging before the fix.

Comment From: jhoeller

Voila - now I can reproduce it even with the ConcurrentReferenceHashMap fix in place, just under tighter conditions. Diving into it again, maybe there is a problem with visibility inside ConcurrentReferenceHashMap (synchronized/volatile).

Comment From: vpavic

Can you still see more than one ConcurrencyThrottleCache instance in use there, against the very same target object as a key? Actually executing against ConcurrentReferenceHashMap with the new dedicated computeIfAbsent implementation (that is, not landing in the ConcurrentMap.computeIfAbsent default method in the debugger)?

Yes, the behavior is still the same on my end - task-4 gets a different ConcurrencyThrottleCache instance. I've also verified that the new computeIfAbsent implementation is indeed invoked.

Comment From: jhoeller

After some back and forth, I ended up replacing ConcurrentReferenceHashMap with a synchronized IdentityHashMap in ConcurrencyLimitBeanPostProcessor. This is more correctly scoped since it won't reuse the same concurrency limit for multiple target beans that happen to be equal according to equals, and it also scopes the interceptor per proxy instance rather than per target instance now, in case of a non-singleton TargetSource (which is more in line with traditional proxy composition through ProxyFactory). Last but not least, it addresses our problem here by using strong references instead of weak references.

Ultimately, ConcurrentReferenceHashMap has some fundamental trade-offs that do not allow strict assumptions about unique target values. It is designed for a cache, not for a strong mapping, after all. While the refinements in https://github.com/spring-projects/spring-framework/issues/35794 strengthen some guarantees (and provide correct behavior for null values), ConcurrentReferenceHashMap's internal access to the references array unfortunately implies a remaining risk of lost updates - which may be acceptable for a cache but not for our strong mapping here with @ConcurrencyLimit.

To reflect that cache-versus-holder trade-off, I've renamed ConcurrencyLimitBeanPostProcessor's ConcurrencyThrottleCache to ConcurrencyThrottleHolder. That's where the real mismatch was: that data structure is not a cache to begin with; it requires strong, unique references.
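
In sketch form, the new arrangement looks roughly like this (hypothetical code approximated from the description above, not the committed change; HolderRegistry is a stand-in):

import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.Map;

// Hypothetical sketch: a synchronized IdentityHashMap compares keys by ==
// and holds strong references, so each proxy instance maps to exactly one
// holder and entries are never garbage-collected away.
class HolderRegistry {

    static class ConcurrencyThrottleHolder {
        // would hold the per-method ConcurrencyThrottleInterceptor instances
    }

    private final Map<Object, ConcurrencyThrottleHolder> holders =
            Collections.synchronizedMap(new IdentityHashMap<>());

    ConcurrencyThrottleHolder holderFor(Object proxy) {
        // Collections.synchronizedMap runs computeIfAbsent under its single
        // mutex, so this check-then-create is atomic.
        return holders.computeIfAbsent(proxy, p -> new ConcurrencyThrottleHolder());
    }
}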

Comment From: jhoeller

@vpavic one last time, hopefully: Please give the latest snapshot a try. With a synchronized IdentityHashMap in place, I don't see any remaining potential for a race condition in that specific spot. If any problems remain, they must be rooted elsewhere.