I am getting duplicate Transfer-Encoding: chunked headers, which causes my Nginx Ingress Controller to return 502. The weird thing is that it starts happening seemingly at random after a Spring Boot instance has been running for some time. When the affected pod is restarted, it works fine again.
Full error message from Nginx Ingress controller:
2023/10/01 19:00:12 [error] 1275#1275: *631585 upstream sent duplicate header line: "Transfer-Encoding: chunked", previous value: "Transfer-Encoding: chunked" while reading response header from upstream, client: xxx, server: xxx, request: "GET /v2/projects/export?languages=de&format=JSON&zip=false HTTP/1.1", upstream: "http://10.224.1.48:8080/v2/projects/export?languages=de&format=JSON&zip=false", host: "app.tolgee.io"
194.74.103.210 - - [01/Oct/2023:18:59:47 +0000] "GET /v2/projects/xxx/translations/en HTTP/1.1" 502 559 "http://localhost:5173/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36" "-"
The code of the suspect handler method 1. The code of the suspect handler method 2.
I am not sure whether it only happens on these two endpoints; they are hit very frequently and return larger responses.
Spring Boot version: 2.7.13. Nginx.com Ingress version: 0.18.1.
(I don't think it's related to the linked Framework issue, since I don't work with response entities returned from an HTTP client.)
Comment From: wilkinsona
Thanks for the report. Unfortunately, it's not clear to me why you believe this is a Spring (Boot) issue.
As explained in the Framework issue to which you have already linked, the Transfer-Encoding header is set by the underlying web server and, as far as we know, Spring (Boot and Framework) never sets it. Another possibility is that the header is being set by application code. You've linked to a couple of handler methods that look fine, but perhaps there is a filter or some other component in your application that is introducing the problem?
If you believe that Spring is the cause of the problem that you're seeing and you would like us to spend some more time investigating, please spend some time providing a complete yet minimal sample that reproduces the problem. You can share it with us by pushing it to a separate repository on GitHub or by zipping it up and attaching it to this issue.
Comment From: JanCizmar
I understand. I just hope someone can point me in the right direction. I don't think we are touching the headers anywhere else in the app code, and further investigation is not trivial since the errors start occurring for an unknown reason, and only in the production environment.
Any tips on how to debug this or what could be causing it are appreciated.
Comment From: wilkinsona
Perhaps you could add a Filter, positioned first in the chain, that checks the response headers after calling doFilter. If there are no Transfer-Encoding headers in the response at this point, it would suggest that the web server is setting the header twice. If there are one or more Transfer-Encoding headers, that would suggest it's Spring or application code that is setting the header.
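For illustration, a minimal sketch of such a filter, assuming a servlet-based app on Spring Boot 3 (on Boot 2.7 the imports come from javax.servlet instead of jakarta.servlet); the class and logger names are only examples, not code from this thread:

// Illustrative sketch only; class and logger names are examples.
import java.io.IOException;
import java.util.Collection;

import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.core.Ordered;
import org.springframework.core.annotation.Order;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

// Positioned first so that it wraps everything else in the filter chain.
@Component
@Order(Ordered.HIGHEST_PRECEDENCE)
public class TransferEncodingLoggingFilter extends OncePerRequestFilter {

    private static final Logger log = LoggerFactory.getLogger(TransferEncodingLoggingFilter.class);

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
            FilterChain filterChain) throws ServletException, IOException {
        filterChain.doFilter(request, response);
        // Whatever is visible here was set by Spring or application code;
        // headers added later by the servlet container itself won't show up.
        Collection<String> transferEncodings = response.getHeaders("Transfer-Encoding");
        if (!transferEncodings.isEmpty()) {
            log.warn("Transfer-Encoding already set for {}: {}",
                    request.getRequestURI(), transferEncodings);
        }
    }
}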
Comment From: JanCizmar
So I've added such a debug logging filter, and I can confirm we are not adding the header in the application code. So it has to be added after the filter chain is done.
Comment From: wilkinsona
Thanks for trying the filter. That points towards the web server being the most likely cause of the problem. I suspect you're using Tomcat so I think your next step should be to raise this with Tomcat's maintainers. I'd start with the users mailing list, asking for some further advice on how to narrow down the problem.
Comment From: itsayopapi
@JanCizmar can you please elaborate for me on what needs to be fixed?
Comment From: JanCizmar
@itsayopapi Well, I think I elaborated well. 😆 Can you describe what's unclear?
Comment From: itsayopapi
@JanCizmar I can't seem to clone the repo, and I also can't find the source code that needs to be fixed.
Comment From: JanCizmar
Hey! Maybe you can join our Slack at https://tolg.ee/slack so we don't spam this thread? Thanks for your willingness to contribute btw 🚀
Comment From: khong07
I'm facing the same problem: random 502s, and have been for a while. Using Spring Boot 2.7 and k8s nginx/1.21.6 :( Did you find anything @JanCizmar?
I found this answer, will try it https://stackoverflow.com/questions/77042701/nginx-upstream-sent-duplicate-header-line-transfer-encoding-chunked-previo
Comment From: JanCizmar
I've tried the solution from Stack Overflow but it doesn't help; it seems unrelated to this. I've checked the nginx source and there's no way to bypass the check for duplicated headers. Currently, I am trying to update to Spring Boot 3, which ships the latest Tomcat. If that doesn't help, I will ask in the Tomcat community.
Comment From: sky-philipalmeida
Hi @JanCizmar, were you able to fix the issue after updating to Spring Boot 3, please?
Comment From: JanCizmar
Hi @sky-philipalmeida, unfortunately it still happens, but not that often. Before, we encountered this and had to restart the affected node around 10x a week. After upgrading to Spring Boot 3 it has happened just once in ~2 months.
Comment From: khong07
In my case, I already fixed this bug. It's not actually about a duplicate "Transfer-Encoding: chunked" header, even though that's what the error message prints.
It's caused by my server forwarding the "Connection" header from another upstream server to the downstream application. Removing this header forwarding resolved the problem.
Comment From: jakub-blazej-telekom
It's caused by my server forwarding the "Connection" header from another upstream server to the downstream application. Removing this header forwarding resolved the problem.
I have the same issue. The upstream server produces one header, and when I forward it, it gets duplicated by Spring. When I filter it out, there is no header at all.
Comment From: wilkinsona
@jakub-blazej-telekom Please see my comment above. If you have evidence that Spring is duplicating the header and can provide a sample that reproduces this behavior then please share it with us and we'll be happy to re-open the issue and investigate. Without such evidence, we still believe that the problem lies in the underlying web server and that it will have to be addressed there.
Comment From: jakub-blazej-telekom
Sorry, I missed that. Here you go: header-demo.zip. Commenting the header line out removes "both" headers. Tried this on Spring Boot 3.2.5 and 3.3.0.
@GetMapping("/")
@ResponseBody
public ResponseEntity<String> getMain() {
HttpHeaders headers = new HttpHeaders();
headers.set(HttpHeaders.TRANSFER_ENCODING, "chunked");
return ResponseEntity.ok()
.headers(headers)
.body("hello");
}
Comment From: bclozel
@jakub-blazej-telekom I believe this issue comment is relevant. Your web endpoint should never add this header in the first place, as only the web server is in charge of such headers. If your application is proxying calls to another REST endpoint, it should clean up the response headers before forwarding them (that's what proxies do). This applies not only to Transfer-Encoding but also to security headers.
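For illustration, a rough sketch of that clean-up, assuming the application forwards a response it obtained via RestTemplate; the endpoint path, upstream URL, and the exact list of hop-by-hop headers are just examples:

// Illustrative sketch only: the path, upstream URL and header list are examples.
import java.util.List;
import java.util.Set;

import org.springframework.http.HttpHeaders;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@RestController
public class ProxyController {

    // Hop-by-hop headers that should never be copied from the upstream response;
    // the local web server manages these itself.
    private static final Set<String> HOP_BY_HOP =
            Set.of(HttpHeaders.TRANSFER_ENCODING, HttpHeaders.CONNECTION, "Keep-Alive");

    private final RestTemplate restTemplate = new RestTemplate();

    @GetMapping("/proxy")
    public ResponseEntity<String> proxy() {
        ResponseEntity<String> upstream =
                restTemplate.getForEntity("http://upstream.example/api", String.class);

        // Copy only the headers that are safe to forward.
        HttpHeaders forwarded = new HttpHeaders();
        upstream.getHeaders().forEach((String name, List<String> values) -> {
            if (HOP_BY_HOP.stream().noneMatch(h -> h.equalsIgnoreCase(name))) {
                forwarded.put(name, values);
            }
        });

        return ResponseEntity.status(upstream.getStatusCode())
                .headers(forwarded)
                .body(upstream.getBody());
    }
}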
Comment From: snowymike
Posting this in a couple of Spring issues on this topic in the hope it helps others. The best write-up of this problem I've encountered is https://knowledge.broadcom.com/external/article/298108/too-many-transfer-encodings-502-chunked.html. Trigger number 3 in that article was our problem:
In Spring-based microservice applications, when using RestTemplate from a Controller-annotated class and returning the result of RestTemplate.exchange in a method that returns a ResponseEntity. Similar to situation 1, this will cause a problem if the microservice to which you are talking returns a transfer-encoding header, because RestTemplate.exchange will copy all of the response headers from the client response into the ResponseEntity it creates, including the transfer-encoding header.
@Hollerweger's comment in https://github.com/spring-projects/spring-framework/issues/21523#issuecomment-1233264593 provides an alternative that avoided this problem for us:
For most use cases, creating a new ResponseEntity should solve the issue (when the original headers are not required at all):
return ResponseEntity.status(responseEntity.getStatusCode()).body(responseEntity.getBody())
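Spelled out as a hypothetical controller method (the upstream URL and class/method names are illustrative, not from the article or this thread), the pattern looks roughly like this:

// Illustrative sketch only: the upstream URL and class/method names are examples.
import org.springframework.http.HttpMethod;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@RestController
public class ExportProxyController {

    private final RestTemplate restTemplate = new RestTemplate();

    @GetMapping("/export")
    public ResponseEntity<String> export() {
        ResponseEntity<String> upstream = restTemplate.exchange(
                "http://backing-service.example/export", HttpMethod.GET, null, String.class);

        // Returning `upstream` directly would copy all of its headers, including
        // transfer-encoding, into this controller's response. Rebuilding the
        // ResponseEntity keeps only the status code and body.
        return ResponseEntity.status(upstream.getStatusCode())
                .body(upstream.getBody());
    }
}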
Comment From: nameqxq
I'm also running into this issue; my web server is Tomcat. While debugging, I found that when Tomcat prepares an HTTP/1.1 response, a piece of code adds the chunked transfer encoding without checking whether chunked is already present in the current headers; this results in duplicate chunked header lines.
The specific code location is:
org.apache.coyote.http11.Http11Processor#prepareResponse
Hope it helps!