The problem/use-case that the feature addresses
We run a large number of Redis instances in Docker containers on shared hosts. During periods of low QPS, each Redis instance with multi-threaded I/O enabled still maintains multiple active I/O threads. Even with minimal traffic, these threads continuously call `epoll_wait` and use `eventfd` for inter-thread notifications, leading to frequent CPU wakeups and context switches.
This results in non-negligible CPU overhead across the host, especially when container density is high. While multi-threaded I/O improves throughput under high load, it becomes inefficient in low-load scenarios, where the overhead outweighs the benefits.
Description of the feature
We propose adding dynamic I/O thread scaling support to Redis — the ability to automatically scale the number of active I/O threads up during high traffic and scale them down (or even to zero, leaving only the main thread) during periods of low or idle traffic.
This would allow Redis to:
- Reduce CPU consumption and system wakeups during low-QPS/idle periods.
- Maintain high throughput during traffic spikes by scaling up I/O parallelism as needed.
- Better adapt to variable workloads in containerized or cloud environments.
The scaling logic could be based on metrics such as client traffic volume, pending I/O operations, or QPS thresholds, and should be configurable (e.g., via `redis.conf` or startup flags).
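As a purely illustrative sketch of the configuration surface, it might look like this in `redis.conf` (every directive below except `io-threads` is hypothetical and does not exist in Redis today):

```
io-threads 8                 # existing option, reused as the upper bound
io-threads-auto yes          # hypothetical: enable dynamic scaling
io-threads-min 0             # hypothetical: may park all I/O threads when idle
io-threads-scale-interval 5  # hypothetical: seconds between scaling decisions
```

The exact names and semantics would of course be up to the maintainers.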
Alternatives you've considered
- Static thread count tuning: We have experimented with setting `io-threads` to 1 (the default) for low-QPS services, but this limits peak throughput and requires manual instance categorization.
- External orchestration: Using external tools to restart Redis with different `io-threads` settings is not feasible due to downtime and operational complexity.
- Process-level autoscaling: Running multiple single-threaded Redis instances doesn't solve the underlying inefficiency, as each still incurs per-thread `epoll_wait` and `eventfd` overhead.
We reviewed existing issues and found related discussions on I/O thread performance, but no proposals for dynamic scaling. The current `io-threads` setting is immutable (`IMMUTABLE_CONFIG`), preventing runtime adjustments.
Additional information
- Redis version: 8.2 (tested on latest stable)
- Workload: Mixed read/write, bursty traffic patterns
- Environment: Docker containers on Linux (kernel 5.15), high instance density per host
- Observed overhead: Profiling shows significant time in `epoll_wait` and `eventfd` syscalls across idle I/O threads
- Goal: Improve CPU efficiency and reduce "noisy neighbor" effects on multi-tenant hosts without sacrificing peak performance
We are open to contributing this feature if the Redis team is receptive to the idea and can provide guidance on design and integration.
Test scenario
- QPS: ~30,000
- Clients: 160

| io-threads | CPU usage |
| --- | --- |
| 1 | 24% |
| 4 | 38.5% |