Discovered a 10.36% sec/op regression for benchmark ParseBigBytes-16 between c594762 and b3e3636.
This is around the time the math/big changes landed, but unfortunately we don't have benchmark data in between.
CC @rsc
Comment From: mknyszek
This is a microbenchmark regression on our dashboard. It may not be actionable, since it is just a microbenchmark.
Comment From: prattmic
This benchmark is github.com/dustin/go-humanize@v1.0.0, BenchmarkParseBigBytes.
https://go-mod-viewer.appspot.com/github.com/dustin/go-humanize@v1.0.0/bigbytes_test.go#L210
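For reference, a benchmark of roughly this shape (a sketch, not the upstream code at the link above; the input string here is made up) exercises the workload in question:

```go
package sketch_test

import (
	"testing"

	"github.com/dustin/go-humanize"
)

// BenchmarkParseBigBytesSketch parses a short human-readable size into a
// *big.Int each iteration; the resulting values are only a couple of
// machine words long.
func BenchmarkParseBigBytesSketch(b *testing.B) {
	for i := 0; i < b.N; i++ {
		if _, err := humanize.ParseBigBytes("16 GB"); err != nil {
			b.Fatal(err)
		}
	}
}
```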
Comment From: randall77
Bisected to https://go-review.googlesource.com/c/go/+/650638
Comment From: randall77
Looks like a case where malloc is faster than a sync.Pool. The bigints in this case are typically 2 words long, so doing a make([]uint, 2) is faster than reusing such a beast via a sync.Pool.
At least, that's my current theory. Profiling shows fewer allocations but more runtime, and a bunch of that extra runtime is in sync.Pool functions.
It is a very noisy benchmark though, so it is hard to be sure.
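A throwaway microbenchmark (mine, not part of the report above) illustrating the effect being described: for a two-word slice, a plain make can be cheaper than round-tripping through a sync.Pool.

```go
package poolcost_test

import (
	"sync"
	"testing"
)

// pool hands out pointers to small []uint buffers, loosely mimicking a
// cached two-word nat.
var pool = sync.Pool{
	New: func() any {
		s := make([]uint, 2)
		return &s
	},
}

// sink forces the slices to escape so the allocation is not optimized away.
var sink []uint

// BenchmarkMake allocates a fresh two-word slice each iteration.
func BenchmarkMake(b *testing.B) {
	for i := 0; i < b.N; i++ {
		sink = make([]uint, 2)
	}
}

// BenchmarkPool reuses a two-word slice via sync.Pool, paying the
// Get/Put overhead instead of the allocator.
func BenchmarkPool(b *testing.B) {
	for i := 0; i < b.N; i++ {
		p := pool.Get().(*[]uint)
		sink = *p
		pool.Put(p)
	}
}
```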
Comment From: gopherbot
Change https://go.dev/cl/679475 mentions this issue: math/big: only use pool for large allocations
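The CL title suggests the general pattern of gating pool use on a size threshold. A hedged sketch of that pattern (not the actual math/big change; the cutoff and helper names here are invented):

```go
package bufcache

import "sync"

// smallCap is an assumed cutoff; the real CL chooses its own threshold.
const smallCap = 64

// pool holds *[]uint buffers that are worth reusing.
var pool sync.Pool

// getBuf returns a []uint of length n. Small requests go straight to the
// allocator, which is cheap; only larger buffers take the sync.Pool
// round trip.
func getBuf(n int) *[]uint {
	if n <= smallCap {
		s := make([]uint, n) // small: plain make beats Get/Put overhead
		return &s
	}
	if p, ok := pool.Get().(*[]uint); ok && cap(*p) >= n {
		*p = (*p)[:n]
		return p
	}
	s := make([]uint, n)
	return &s
}

// putBuf returns large buffers to the pool; small ones are left to the GC.
func putBuf(p *[]uint) {
	if cap(*p) > smallCap {
		pool.Put(p)
	}
}
```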