Context:
We are running Spring Cloud Gateway (SCG) as a stateless service inside Kubernetes across multiple pods. The plan is for the SCG pods to load dynamic routing configuration via Spring Cloud Config Server, which pulls YAML route definitions from GitLab repositories. Currently, route changes require triggering /actuator/refresh individually on each SCG pod to reload the updated route definitions.

Design Problem Statement:
We are exploring options to automate and scale this refresh mechanism while considering:
- Minimized operational complexity.
- No additional messaging infrastructure unless necessary.
- Refresh scope limited only to SCG pods, avoiding refresh on other microservices running within the same namespace/cluster.
- Future-proofing for a potential increase in route configurations (large YAML files).
Approaches Evaluated:

Option 1: Spring Cloud Bus (Kafka-based)
- Use /actuator/busrefresh to propagate refresh events across pods.
- Requires introducing and managing a Kafka (or RabbitMQ) cluster inside the Kubernetes environment.
- Adds infra operational overhead and a non-trivial maintenance burden.
Option 2: Kubernetes-native refresh via Job/script
- Run a Kubernetes Job post-deployment (or via the GitLab CI/CD pipeline) that:
  - Selects SCG pods via label selectors.
  - Executes /actuator/refresh per pod via kubectl exec or the pod IP/Service.
- A sketch of such a job's refresh logic follows the risks below.

Risks:
- Label-based targeting requires strict enforcement to avoid unintentionally refreshing non-Gateway pods.
- This becomes harder to maintain if pod labeling consistency is not strictly enforced across all deployments.
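For illustration, here is a minimal sketch of what such a job's refresh logic could look like in Java, using the Fabric8 Kubernetes client to select Gateway pods by label and POST to each pod's actuator endpoint. The namespace `gateway`, the label `app=scg-gateway`, and the management port 8080 are assumptions for the example, not values from our setup.

```java
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GatewayRefreshJob {

    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();

        try (KubernetesClient k8s = new KubernetesClientBuilder().build()) {
            // Select only SCG pods; the namespace and label are assumed conventions.
            for (Pod pod : k8s.pods()
                    .inNamespace("gateway")
                    .withLabel("app", "scg-gateway")
                    .list().getItems()) {

                String podIp = pod.getStatus().getPodIP();
                if (podIp == null) {
                    continue; // pod not ready yet, skip it
                }

                // POST to the actuator refresh endpoint on the assumed management port.
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create("http://" + podIp + ":8080/actuator/refresh"))
                        .POST(HttpRequest.BodyPublishers.noBody())
                        .build();

                HttpResponse<String> response =
                        http.send(request, HttpResponse.BodyHandlers.ofString());
                System.out.printf("Refreshed %s (%s): HTTP %d%n",
                        pod.getMetadata().getName(), podIp, response.statusCode());
            }
        }
    }
}
```

Running this as a Job under a tightly scoped ServiceAccount (list pods only, in the gateway namespace) keeps the blast radius small, but it does nothing to relax the risk above: the label convention still has to be enforced for the selector to target the right pods.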
Option 3: ConfigMap-based route storage with Spring Cloud Kubernetes Config Watcher
- Migrate route definitions to Kubernetes ConfigMaps.
- Use Spring Cloud Kubernetes' native config reload capabilities.
- Benefit: automatic refresh without explicit actuator calls.
- Limitation: ConfigMaps are capped at 1 MiB total, which may become a bottleneck as routes grow across products/environments.
Open Questions for Review:
- Is there a widely adopted pattern or reference architecture for scoping config refreshes strictly to a subset of pods (i.e., SCG only) within Kubernetes?
- Are there any additional Kubernetes-native event-driven solutions we can leverage (e.g., K8s informers, CRDs, operators)?
- Would a hybrid approach (Config Server + GitLab + a limited Bus implementation dedicated only to the SCG namespace) make sense?
- Has anyone successfully implemented per-pod scoped refresh orchestration via Kubernetes controllers/operators?
Constraints:
- We are intentionally avoiding full platform-wide Spring Cloud Bus adoption to minimize broker management overhead.
- Multi-tenant Kubernetes cluster hosting multiple microservices alongside SCG.
- A fully GitLab-based GitOps workflow is already in place for deployment pipelines.
Looking for inputs / experiences / suggestions on:
- Standard patterns
- Long-term maintainability
- Scalability at higher route volumes
- Simplification vs. added infra complexity trade-offs
Comment From: ryanjbaxter
Thinking about your constraints makes me think there is an opportunity to add a feature to Spring Cloud Config that takes inspiration from the config watcher in Spring Cloud Kubernetes.
In the config watcher we have the ability to notify application instances of a configuration change by calling the instance's actuator /refresh or /shutdown endpoint. There is no message broker involved, but it does require the use of service discovery (which is baked into the platform, so no big deal).
It seems like it would be possible to do something similar in Spring Cloud Config, where we use a webhook to get a notification from the Git repo that configuration changed and then use the discovery client to call the instances' actuator /refresh or /shutdown endpoints.
This type of solution would give you the best of both worlds: you are not relying on ConfigMaps, and it does not require you to deploy a messaging broker.
I suspect you could implement something like this on top of Spring Cloud Config today. While that is not ideal, since it's custom code, it would probably get you to a solution based on your current constraints.
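To make the idea concrete, here is a rough sketch of what that custom code could look like: a small Spring controller that accepts a GitLab push webhook and fans out to /actuator/refresh on every registered gateway instance via the DiscoveryClient. The service ID scg-gateway and the webhook path are illustrative assumptions, not an existing Spring Cloud feature.

```java
import java.util.List;

import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestClientException;
import org.springframework.web.client.RestTemplate;

@RestController
public class GatewayRefreshController {

    // Assumed service ID under which the SCG pods register themselves.
    private static final String GATEWAY_SERVICE_ID = "scg-gateway";

    private final DiscoveryClient discoveryClient;
    private final RestTemplate rest = new RestTemplate();

    public GatewayRefreshController(DiscoveryClient discoveryClient) {
        this.discoveryClient = discoveryClient;
    }

    // Hypothetical endpoint wired up as a GitLab push webhook target.
    @PostMapping("/hooks/route-config-changed")
    public String onRouteConfigChanged() {
        List<ServiceInstance> instances = discoveryClient.getInstances(GATEWAY_SERVICE_ID);
        int refreshed = 0;
        for (ServiceInstance instance : instances) {
            // Call each gateway instance's actuator refresh endpoint.
            String url = instance.getUri() + "/actuator/refresh";
            try {
                rest.postForEntity(url, null, Void.class);
                refreshed++;
            } catch (RestClientException ex) {
                // Keep going so one unreachable pod does not block the rest.
            }
        }
        return "Refreshed " + refreshed + " of " + instances.size() + " gateway instances";
    }
}
```

One nice property of this shape: because the fan-out is driven by the service ID in the discovery registry rather than by pod labels, only gateway instances are ever touched, which addresses the "SCG pods only" scoping concern without relying on label-selector discipline.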
Comment From: GautamSumit99
Thanks a lot for your detailed response — really appreciate you taking the time to think through our constraints. Your suggestion around a webhook + discovery client based refresh orchestrator on top of Spring Cloud Config sounds quite promising. It strikes a balance between avoiding full-scale broker infrastructure and still achieving automated refresh — definitely something we’ll prototype.
I had one follow-up question:
In large-scale production systems (especially with stateless services like Spring Cloud Gateway running in Kubernetes), is a Spring Cloud Config Server strictly necessary, or do rolling restarts via Kubernetes (which pull fresh config on pod start) suffice for managing config/route changes reliably at scale?
Would love to hear your perspective or any trade-offs you’ve seen in practice between these two approaches.
Thanks again!
Comment From: ryanjbaxter
I don't think Spring Cloud Config is necessary; you could do all of this with ConfigMaps/Secrets and rolling restarts via K8s.
However, I do think Spring Cloud Config (backed by something like Git) adds some nice functionality that might be harder to achieve with ConfigMaps/Secrets.
First, the configuration change history provided by a source control repo can offer useful insight into who changed what configuration, and when.
Second, you get configuration refresh functionality in the apps as part of using the Config Server plus Spring Cloud Config Monitor.
The downside of this approach is having to keep a source control repo and a Config Server deployed to achieve it.
There is no right or wrong answer; each one has trade-offs.