Go version

go version go1.24.5 linux/amd64

Output of go env in your module/workspace:

AR='ar'
CC='gcc'
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_ENABLED='1'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
CXX='g++'
GCCGO='gccgo'
GO111MODULE=''
GOAMD64='v1'
GOARCH='amd64'
GOAUTH='netrc'
GOBIN=''
GOCACHE='/home/hoskeri/.cache/go-build'
GOCACHEPROG=''
GODEBUG=''
GOENV='/home/hoskeri/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFIPS140='off'
GOFLAGS=''
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build4140638120=/tmp/go-build -gno-record-gcc-switches'
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMOD='/dev/null'
GOMODCACHE='/home/hoskeri/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/home/hoskeri/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/home/hoskeri/.go'
GOSUMDB='sum.golang.org'
GOTELEMETRY='local'
GOTELEMETRYDIR='/home/hoskeri/.config/go/telemetry'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/home/hoskeri/.go/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.24.5'
GOWORK=''
PKG_CONFIG='pkg-config'

What did you do?

http.Transport has a MaxResponseHeaderBytes field that lets HTTP clients limit the number of bytes consumed when parsing response headers.
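
For reference, the limit is set on the client's transport; a minimal sketch (the 1 MiB value is only illustrative):

    tr := &http.Transport{
        // Applies to the response headers read for each request.
        MaxResponseHeaderBytes: 1 << 20,
    }
    client := &http.Client{Transport: tr}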

This limit appears to be honored as intended when no proxy is configured (e.g. a plain http.Get(...)).

However, a malicious proxy server can respond to a CONNECT request in a way that causes the client process to read an unbounded amount of data while parsing the proxy's response. MaxResponseHeaderBytes does not apply to this read, so it cannot limit resource usage or prevent memory exhaustion.

Explanation

Consider an http client process that is configured to proxy https requests via http CONNECT-based tunnels.

Before making the final request, the client requests a tunnel from the proxy server by sending a CONNECT request. A well-behaved proxy responds with a 200 OK status and headers, leaving the TCP connection open for the tunnel.
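
For illustration, a well-behaved exchange looks roughly like this on the wire (the host name is illustrative, and headers are trimmed):

    CONNECT proxied.example.com:443 HTTP/1.1
    Host: proxied.example.com:443

    HTTP/1.1 200 OK

The connection is then left open and carries the tunneled TLS handshake and request.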

If the proxy server instead responds with an endless stream of alphabetic ([A-Za-z]+) bytes and never sends a line terminator, ReadResponse keeps accumulating data through br until memory is exhausted or the context is cancelled.

This is because ReadResponse ultimately calls net/textproto's readLineSlice with a limit of -1. The -1 disables the line-length limit, so readLineSlice reads without bound: there is no terminating sequence to be found anywhere in the response stream.

https://cs.opensource.google/go/go/+/master:src/net/http/transport.go;l=1825

        // Write the CONNECT request & read the response.
        go func() {
            defer close(didReadResponse)
            err = connectReq.Write(conn)
            if err != nil {
                return
            }
            // Okay to use and discard buffered reader here, because
            // TLS server will not speak until spoken to.
            br := bufio.NewReader(conn)
            resp, err = ReadResponse(br, connectReq)
        }()
        select {
        case <-connectCtx.Done():
            conn.Close()
            <-didReadResponse
            return nil, connectCtx.Err()
        case <-didReadResponse:
            // resp or err now set
        }

A proof of concept is attached to this report. The program runs a malicious proxy server and a client that attempts to use that server as a proxy. When executed, the program runs until memory is exhausted or one minute has elapsed.

badconnectproxy.go.txt
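
The attachment is not reproduced here, but a minimal sketch of the same idea looks roughly like the following (the loopback address, one-minute timeout, and target URL are illustrative; the proxy simply streams letters and never terminates the CONNECT status line):

    package main

    import (
        "context"
        "log"
        "net"
        "net/http"
        "net/url"
        "time"
    )

    func main() {
        // Malicious proxy: accepts connections and streams letters forever,
        // never sending the CRLF that would terminate the CONNECT status line.
        ln, err := net.Listen("tcp", "127.0.0.1:0")
        if err != nil {
            log.Fatal(err)
        }
        go func() {
            for {
                c, err := ln.Accept()
                if err != nil {
                    return
                }
                go func(c net.Conn) {
                    defer c.Close()
                    junk := make([]byte, 64<<10)
                    for i := range junk {
                        junk[i] = 'A'
                    }
                    for {
                        if _, err := c.Write(junk); err != nil {
                            return
                        }
                    }
                }(c)
            }
        }()

        // Client tunnels an https request through the proxy. The header
        // limit below has no effect on the CONNECT response read.
        proxyURL := &url.URL{Scheme: "http", Host: ln.Addr().String()}
        client := &http.Client{Transport: &http.Transport{
            Proxy:                  http.ProxyURL(proxyURL),
            MaxResponseHeaderBytes: 1 << 20,
        }}

        ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
        defer cancel()
        req, err := http.NewRequestWithContext(ctx, "GET", "https://example.com/", nil)
        if err != nil {
            log.Fatal(err)
        }
        _, err = client.Do(req)
        log.Println(err) // memory grows until the deadline fires (or the process is killed)
    }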

Suggested fix

As documented at https://pkg.go.dev/net/textproto#NewReader, we should limit the size of br to some reasonable value. The documentation already warns:

"To avoid denial of service attacks, the provided bufio.Reader should be reading from an io.LimitReader or similar Reader to bound the size of responses."
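
A minimal sketch of that change at the CONNECT site shown above (using MaxResponseHeaderBytes as the limit and a 1 MiB fallback are assumptions, not a final design):

    // Bound how much of the proxy's CONNECT response is buffered.
    limit := t.MaxResponseHeaderBytes
    if limit <= 0 {
        limit = 1 << 20 // illustrative fallback cap
    }
    // The limited reader only covers the CONNECT response; as in the
    // current code, the buffered reader is discarded afterwards.
    br := bufio.NewReader(io.LimitReader(conn, limit))
    resp, err = ReadResponse(br, connectReq)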

What did you see happen?

N.A.

What did you expect to see?

N.A.

Comment From: neild

We should definitely set some reasonable limit on the size of the CONNECT response.

(This was considered as a vulnerability, but we decided to classify it as a bug since the impact is limited to memory exhaustion in the HTTP client, and requires a malicious or misbehaving HTTP proxy. A malicious proxy has many ways to cause problems for an HTTP client.)