We should support corpus minimization: the ability to remove items from the on-disk corpus which either 1) don't have the types supported by the fuzz test (e.g. they are left over from a previous version of the test which took different params), or 2) don't add any coverage beyond what other entries in the corpus already provide.
(1) should be pretty straightforward. We can just unmarshal the contents of the file and see if it matches the fuzz test's parameter types. If it doesn't, delete it.
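For (1), here is a minimal sketch of what that type check could look like, assuming the "go test fuzz v1" corpus file encoding (a header line followed by one Go-syntax value per line). typeCheckCorpusFile and its compare-the-type-token approach are illustrative only, not an existing API:

```go
package corpuscheck

import (
	"os"
	"strings"
)

// typeCheckCorpusFile reports whether the corpus file at path encodes
// values whose types match wantTypes, e.g. []string{"[]byte", "int"}.
// Sketch only: it compares the type token before the first '(' on each
// value line; it does not fully parse the Go literals.
func typeCheckCorpusFile(path string, wantTypes []string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	lines := strings.Split(strings.TrimSpace(string(data)), "\n")
	if len(lines) == 0 || !strings.HasPrefix(lines[0], "go test fuzz v1") {
		return false, nil // unrecognized encoding; treat as non-matching
	}
	values := lines[1:]
	if len(values) != len(wantTypes) {
		return false, nil
	}
	for i, line := range values {
		typ, _, ok := strings.Cut(strings.TrimSpace(line), "(")
		if !ok || typ != wantTypes[i] {
			return false, nil
		}
	}
	return true, nil
}
```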
(2) will be a bit more involved, but not necessarily that complicated. We should take a look at how libFuzzer implements this. One potential solution would be to maintain a coverage map and run each corpus item against the fuzz test in turn, updating the map as it runs. If an entry doesn't expand coverage, delete it. A potential pitfall: if we have 20 corpus entries that each expand 1 line, and 1 corpus entry that covers all 20 of those lines at once, which do we choose?
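A rough, hedged sketch of that greedy approach; coverageOf is a hypothetical hook into the engine's coverage counters, not an existing function:

```go
package fuzzmerge

// minimizeCorpus keeps only the entries that add coverage not already
// seen, given a caller-supplied coverageOf that runs one entry against
// the instrumented fuzz target and returns the IDs of the edges it hit.
// The visiting order decides the pitfall above: if the one big entry is
// processed first, the 20 single-line entries all look redundant and
// get dropped, and vice versa. Sorting first (e.g. by file size) at
// least makes that choice deterministic and biases toward small inputs.
func minimizeCorpus(entries []string, coverageOf func(entry string) map[uint64]bool) []string {
	seen := make(map[uint64]bool)
	var kept []string
	for _, e := range entries {
		added := false
		for edge := range coverageOf(e) {
			if !seen[edge] {
				seen[edge] = true
				added = true
			}
		}
		if !added {
			continue // expands no new coverage; a real tool would delete the file here
		}
		kept = append(kept, e)
	}
	return kept
}
```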
At least (2) will be needed for OSS-Fuzz integration, if OSS-Fuzz ends up adding native support for Go fuzzing.
Comment From: bcmills
A thought regarding (2): if we have an effective minimization algorithm, is it actually necessary to prune the corpus on disk? It probably is worthwhile to prune the inputs stored in the Go build cache, but arguably the former-crashers in the testdata/fuzz directory should be retained as regression tests.
Perhaps we could instead start each fuzzing run by spending some fraction of the time budget re-minimizing the corpus. That would also avoid losing coverage if (for example) the code is modified in a way that significantly changes coverage before refactoring back to something closer to a previous approach (for which the pruned-out inputs might actually become relevant again).
Comment From: katiehockman
It probably is worthwhile to prune the inputs stored in the Go build cache, but arguably the former-crashers in the testdata/fuzz directory should be retained as regression tests.
Yes agreed. When I talk about "corpus minimization" here, I'm only referring to the corpus in the build cache. We shouldn't touch the seed corpus.
Perhaps we could instead start each fuzzing run by spending some fraction of the time budget re-minimizing the corpus.
We could do this, but I would argue it should be opt-in, e.g. go test -fuzz=Fuzz -fuzzminimizecorpus
Otherwise, I'm imagining a scenario where someone has a fuzz target that accepts []byte. In playing around with their test, they change the target to accept []byte, int. If we prune the corpus by default, then when they run go test -fuzz Fuzz, all of the cached corpus entries that only take []byte are going to be deleted before the user realizes what happened.
Comment From: katiehockman
@dgryski pointed me to https://arxiv.org/pdf/1905.13055.pdf. Commenting here so we can find it later.
Comment From: ianlancetaylor
Moving to Backlog.
Comment From: morehouse
Bump.
Because corpus minimization/merging features are missing, it is difficult for major Go projects to maintain public corpora. PRs adding new inputs to the corpus can't be easily evaluated to determine which seeds actually increase coverage, and we essentially have to accept all new inputs as a package based on coverprofiles before/after the PR. Over time, this can lead to major bloat of the corpora.
Those serious about fuzzing Go end up resorting to libFuzzer build mode and doing hacky things to measure code coverage.
It would be so much nicer if we could just use the native fuzzing tools.
Comment From: gijswijs
Let me bump this as well, to see if this can get some traction.
Comment From: morehouse
I ended up writing a hacky script to do corpus minimization, since this issue hasn't gotten any traction.
The script sets GODEBUG="fuzzdebug=1" and then reads the debug output during initial corpus processing to extract the number of coverage bits. The script is quite slow and inefficient (O(n^2)), since coverage bits must be extracted out-of-process, and so every time a new input is selected to be included in the minimal corpus, the full updated corpus must be processed again.
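To make the cost concrete, the shape of that loop is roughly the following. This is not the actual script; coverBits is a placeholder for the out-of-process measurement (copy the inputs into a clean cache directory, run go test under GODEBUG="fuzzdebug=1", and parse the reported bit count), whose details are omitted:

```go
package corpusmerge

// quadraticMerge illustrates why out-of-process merging is O(n^2):
// coverage bits can only be measured for a whole corpus at a time, so
// every candidate is evaluated by re-processing everything kept so far
// plus the candidate. coverBits stands in for the external measurement.
func quadraticMerge(candidates []string, coverBits func(inputs []string) int) []string {
	var kept []string
	best := coverBits(kept) // baseline with no inputs
	for _, c := range candidates {
		if n := coverBits(append(kept, c)); n > best {
			kept = append(kept, c)
			best = n
		}
	}
	return kept
}
```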
A native minimization tool for Go would be more reliable long term and much faster (O(n)), since coverage bits could be measured in-process and accumulated in a bitmap to determine which inputs to merge. See libFuzzer's merge logic for inspiration.
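For reference, the core accumulation step of such an in-process merge could be as small as the following sketch. The per-input coverage bitmaps would have to come from the engine's own instrumentation; this is not an existing API:

```go
package bitmapmerge

import "math/bits"

// mergeNewBits ORs one input's coverage bitmap into the accumulated
// bitmap and reports how many previously unset bits it contributed;
// an input is worth keeping only if that count is nonzero. Both slices
// are assumed to be the same length (one bit per instrumented edge),
// and visiting inputs smallest-first biases the kept set toward small
// files.
func mergeNewBits(acc, cov []byte) int {
	newBits := 0
	for i := range cov {
		added := cov[i] &^ acc[i] // bits set in cov but not yet in acc
		newBits += bits.OnesCount8(added)
		acc[i] |= cov[i]
	}
	return newBits
}
```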