As pointed out in https://github.com/grpc/grpc-go/issues/1587#issuecomment-340015543 (and in https://github.com/cockroachdb/cockroach/issues/17370), re-using MetaHeadersFrame memory similarly to DataFrames has the potential to increase performance significantly (~50% throughput increase), even when the header frame data is quite small (tens of bytes). The http2 library should ideally support this.
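For context, a minimal sketch (not from the linked issues) of the current asymmetry in a read loop that drives a Framer directly; `conn` is assumed to be an established net.Conn carrying an HTTP/2 session:

```go
package readloop

import (
	"net"

	"golang.org/x/net/http2"
	"golang.org/x/net/http2/hpack"
)

func readFrames(conn net.Conn) error {
	fr := http2.NewFramer(conn, conn)
	fr.SetReuseFrames() // DataFrame memory is recycled between ReadFrame calls
	fr.ReadMetaHeaders = hpack.NewDecoder(4096, nil)
	for {
		f, err := fr.ReadFrame()
		if err != nil {
			return err
		}
		switch f := f.(type) {
		case *http2.DataFrame:
			_ = f.Data() // valid only until the next ReadFrame when reuse is on
		case *http2.MetaHeadersFrame:
			_ = f.Fields // allocated anew for every HEADERS frame today
		}
	}
}
```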
Comment From: odeke-em
/cc @bradfitz @tombergan @rs
Comment From: tombergan
@dfawley Do you have a benchmark you can share? Ideally something we could copy into x/net/http2.
Also: it's not clear from the grpc issue if this was a microbenchmark or a more realistic benchmark.
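Something along these lines could serve as a starting point (a sketch, not the benchmark referenced in the gRPC issue; the header values are made up). It serializes one small HEADERS frame and measures allocations when repeatedly parsing it into a MetaHeadersFrame:

```go
package http2reuse

import (
	"bytes"
	"testing"

	"golang.org/x/net/http2"
	"golang.org/x/net/http2/hpack"
)

// BenchmarkReadMetaHeaders parses the same small HEADERS frame over and over
// and reports the allocations attributable to meta-frame decoding.
func BenchmarkReadMetaHeaders(b *testing.B) {
	// Encode a small header block once (values are invented for the benchmark).
	var hbuf bytes.Buffer
	enc := hpack.NewEncoder(&hbuf)
	for _, f := range []hpack.HeaderField{
		{Name: ":method", Value: "POST"},
		{Name: ":path", Value: "/svc.Service/Method"},
		{Name: ":scheme", Value: "https"},
		{Name: "content-type", Value: "application/grpc"},
		{Name: "user-agent", Value: "bench/0.1"},
	} {
		if err := enc.WriteField(f); err != nil {
			b.Fatal(err)
		}
	}

	// Serialize one HEADERS frame carrying that block.
	var fbuf bytes.Buffer
	wf := http2.NewFramer(&fbuf, nil)
	if err := wf.WriteHeaders(http2.HeadersFrameParam{
		StreamID:      1,
		BlockFragment: hbuf.Bytes(),
		EndHeaders:    true,
	}); err != nil {
		b.Fatal(err)
	}
	raw := fbuf.Bytes()

	r := bytes.NewReader(raw)
	fr := http2.NewFramer(nil, r)
	fr.ReadMetaHeaders = hpack.NewDecoder(4096, nil)

	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		r.Reset(raw) // replay the same frame bytes each iteration
		if _, err := fr.ReadFrame(); err != nil {
			b.Fatal(err)
		}
	}
}
```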
Comment From: Zeymo
+1
The http2 library should ideally support this
We build a gateway on top of x/net/http2. When we reuse MetaHeadersFrame (especially mh.Fields) and HeadersFrame, we gain ~10% QPS and reduce GC pause time by ~3%. A rough sketch of that kind of reuse follows below.
PS: we have 7~10 header fields per stream.
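Purely for illustration, this is roughly the shape of pooling the Fields backing array; none of the helpers below exist in x/net/http2 and the names are invented:

```go
package headerreuse

import (
	"sync"

	"golang.org/x/net/http2/hpack"
)

// fieldsPool recycles the backing arrays used for MetaHeadersFrame.Fields.
var fieldsPool = sync.Pool{
	New: func() interface{} {
		s := make([]hpack.HeaderField, 0, 16) // roughly 7~10 fields per stream here
		return &s
	},
}

// borrowFields returns an empty slice with pooled capacity.
func borrowFields() []hpack.HeaderField {
	return (*fieldsPool.Get().(*[]hpack.HeaderField))[:0]
}

// releaseFields hands the backing array back; the caller must be finished with
// the fields, since the slots will be overwritten by the next frame.
func releaseFields(s []hpack.HeaderField) {
	s = s[:0]
	fieldsPool.Put(&s)
}
```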
Comment From: y3llowcake
I have a relatively simple production go-grpc server workload that does near-zero heap allocation in the request handler. As QPS increases, p99 client response times start increasing, and they line up with an increase in GC durations on the server.
The alloc_space profile shows ~30% of allocated space under the meta frame parsing call paths. See the attached image. Happy to provide more debug info.
Comment From: detailyang
It looks like the current http2 design makes it hard to reuse memory :(
❯ go tool pprof -alloc_space mem.out
Type: alloc_space
Time: Mar 26, 2020 at 10:38am (CST)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top10
Showing nodes accounting for 464.16MB, 67.00% of 692.83MB total
Dropped 47 nodes (cum <= 3.46MB)
Showing top 10 nodes out of 92
flat flat% sum% cum cum%
110.02MB 15.88% 15.88% 110.02MB 15.88% golang.org/x/net/http2.(*Framer).readMetaFrame.func1
70.57MB 10.19% 26.07% 70.57MB 10.19% golang.org/x/net/http2.glob..func1
47.01MB 6.79% 32.85% 47.01MB 6.79% golang.org/x/net/http2.cloneHeader
40.01MB 5.78% 38.63% 40.01MB 5.78% net/textproto.MIMEHeader.Add (inline)
37.51MB 5.41% 44.04% 37.51MB 5.41% net/textproto.MIMEHeader.Set
37.51MB 5.41% 49.45% 37.51MB 5.41% golang.org/x/net/http2.(*clientConnReadLoop).handleResponse
34.01MB 4.91% 54.36% 93.02MB 13.43% golang.org/x/net/http2.(*serverConn).newWriterAndRequestNoBody
33.51MB 4.84% 59.20% 33.51MB 4.84% golang.org/x/net/http2.(*ClientConn).newStream
27.01MB 3.90% 63.10% 36.51MB 5.27% net/http.(*Request).WithContext (inline)
27MB 3.90% 67.00% 160.04MB 23.10% golang.org/x/net/http2.(*serverConn).newWriterAndRequest
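For context, a simplified paraphrase (not the exact x/net/http2 source) of where the top entry allocates: readMetaFrame builds a fresh MetaHeadersFrame and grows a fresh Fields slice inside the hpack emit callback, so nothing survives to be reused by the next HEADERS frame. `decodeMetaFrame` below is a made-up stand-in for that logic:

```go
package headeralloc

import (
	"golang.org/x/net/http2"
	"golang.org/x/net/http2/hpack"
)

// decodeMetaFrame sketches the per-frame allocation pattern: a new struct and
// a new backing array for every HEADERS frame.
func decodeMetaFrame(dec *hpack.Decoder, hf *http2.HeadersFrame, block []byte) (*http2.MetaHeadersFrame, error) {
	mh := &http2.MetaHeadersFrame{HeadersFrame: hf} // fresh struct per frame
	dec.SetEmitFunc(func(f hpack.HeaderField) {
		mh.Fields = append(mh.Fields, f) // fresh backing array, grown field by field
	})
	defer dec.SetEmitFunc(func(hpack.HeaderField) {}) // detach the closure
	if _, err := dec.Write(block); err != nil {
		return nil, err
	}
	return mh, dec.Close()
}
```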
Comment From: easwars
https://github.com/grpc/grpc-go/issues/3305 is blocked on this issue. Could you please share an update on this? Thank you.
Comment From: easwars
@bradfitz @tombergan Is there any update here that you can share? Thanks.
Comment From: zasweq
Friendly ping on this issue. Is there any update on its status? This still blocks an issue in the gRPC-Go repository: https://github.com/grpc/grpc-go/issues/3305.
Comment From: harshit-gangal
Hi, is any work planned for this issue?