2020-05-12T15:01:56-a0698a6/linux-amd64-longtest
--- FAIL: TestGdbPanic (0.85s)
runtime-gdb_test.go:71: gdb version 7.12
runtime-gdb_test.go:616: could not find '(#.* .* in )?main\.crash' in backtrace
runtime-gdb_test.go:617: gdb output:
Loading Go Runtime support.
--- FAIL: TestGdbPython (0.98s)
runtime-gdb_test.go:71: gdb version 7.12
runtime-gdb_test.go:249: gdb output: Loading Go Runtime support.
Loaded Script
Yes /workdir/go/src/runtime/runtime-gdb.py
Breakpoint 1 at 0x498865: file /workdir/tmp/go-build240230015/main.go, line 16.
warning: linux_test_for_tracefork: unexpected result from waitpid (52539, status 0x1c7f)
hi
runtime-gdb_test.go:278: info goroutines failed:
FAIL
FAIL runtime 157.848s
This is the only occurrence I've seen so far. CC @ianlancetaylor in case it's somehow related to CL 232862, which went in one CL before this failure.
Comment From: danscales
I also got the same problem here:
https://storage.googleapis.com/go-build-log/3d4facca/linux-amd64-staticlockranking_9d45d3a8.log
It may be more likely when static lock ranking is enabled (because it is a timing issue?). I am also getting it on some local staticlockranking runs.
Comment From: bcmills
2020-05-12T19:15:34-cb11c98/linux-mips64le-mengzhuo 2020-05-12T19:15:34-cb11c98/linux-s390x-ibm
This is looking like a recent regression.
Comment From: bcmills
That one on linux-s390x-ibm looks related to #39025.
Comment From: bcmills
There is also an info goroutines failed: failure mode in TestGdbPythonCgo that is likely related:
2020-05-12T19:15:34-cb11c98/linux-amd64-sid
2020-05-12T19:15:34-cb11c98/linux-mips64le-mengzhuo
--- FAIL: TestGdbPythonCgo (1.58s)
runtime-gdb_test.go:71: gdb version 8.3
runtime-gdb_test.go:249: gdb output: Loading Go Runtime support.
Loading Go Runtime support.
Breakpoint 1 at 0x49a365: file /workdir/tmp/go-build125523894/main.go, line 19.
runtime-gdb_test.go:278: info goroutines failed:
FAIL
FAIL runtime 32.606s
Comment From: bcmills
Attempting to bisect now, but I'm not hopeful because of the existing flakiness of the test (#24616).
Comment From: bcmills
I'm unable to reproduce the failures at all on my local workstation using GDB 8.3.1 and go test -count=1024 '-run=TestGdb.*' -timeout=2h runtime.
~/go-review$ uname -srv
Linux 5.2.17-1rodete3-amd64 #1 SMP Debian 5.2.17-1rodete3 (2019-10-21 > 2018)
~/go-review$ gdb --version
GNU gdb (Debian 8.3.1-1) 8.3.1
~/go-review$ python --version
Python 2.7.18rc1
I'm not sure how to proceed from here.
Comment From: toothrot
/cc @ianlancetaylor
Comment From: bcmills
I've gotten exactly one local repro so far. I'm not sure what conditions are required to trigger it — system load seems to be a factor.
However, the only repro I have gotten was without a test filter, which is even more difficult to use for bisection due to the presence of other flaky runtime tests (such as #37201, which occurred on the next attempt).
~/go-review$ go test -short -failfast -count=256 -timeout=2h runtime
--- FAIL: TestGdbConst (0.55s)
runtime-gdb_test.go:71: gdb version 8.3
runtime-gdb_test.go:549: output "Loading Go Runtime support.\nBreakpoint 1 at 0x4667e0: file /tmp/go-build364032258/main.go, line 8.\n"
runtime-gdb_test.go:552: output mismatch
FAIL
FAIL runtime 10.528s
FAIL
Comment From: josharian
If this is due to EINTR, might pummeling the process with signals help?
Comment From: bcmills
An interesting idea, but it doesn't seem to help.
Comment From: toothrot
@bcmills Just a friendly ping to check on the status of this as we work through our beta blocking issues.
Comment From: bcmills
I don't know how to fix this and the tests are so nondeterministic that I can't figure out how to even bisect it.
I plan to file a proposal (later today?) to relocate these tests so that they are not run as part of go test runtime and all.bash, unless someone is going to step in to fix these otherwise-apparently-unmaintained tests.
Comment From: mundaym
CL 232862 does seem like a likely candidate based on the CLs that landed around the time this issue started. Perhaps we should temporarily revert it and see if the problem disappears given that bisecting appears fruitless?
Comment From: gopherbot
Change https://golang.org/cl/235282 mentions this issue: runtime: check gdb exit status and log output
Comment From: prattmic
After ~10 runs of go test runtime, I managed to repro this with golang.org/cl/235282 on an s390 gomote:
$ gomote run user-mpratt-linux-s390x-ibm-0 go/bin/go test -count=1 runtime
--- FAIL: TestGdbBacktrace (2.29s)
runtime-gdb_test.go:71: gdb version 8.2
runtime-gdb_test.go:395: gdb output:
Loading Go Runtime support.
Breakpoint 1 at 0x6e404: file <autogenerated>, line 1.
runtime-gdb_test.go:397: gdb exited with error: signal: segmentation fault (core dumped)
FAIL
FAIL runtime 226.306s
FAIL
Error running run: exit status 1
So GDB is segfaulting... I'll see if I can manage to find the core file.
Comment From: prattmic
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x000003ffb9c009f8 in fileno_unlocked () from /lib64/libc.so.6
Thread 1 (Thread 0x3ffbade6bb0 (LWP 1048)):
#0 0x000003ffb9c009f8 in fileno_unlocked () from /lib64/libc.so.6
#1 0x000003ffbacb71be in rl_resize_terminal () from /lib64/libreadline.so.7
#2 0x000002aa08763246 in tui_async_resize_screen(void*) ()
#3 0x000002aa088dca18 in invoke_async_signal_handlers() ()
#4 0x000002aa088dd57a in gdb_do_one_event() ()
#5 0x000002aa08a23e04 in wait_sync_command_done() ()
#6 0x000002aa08a23ffc in execute_command(char const*, int) ()
#7 0x000002aa08947f26 in catch_command_errors(void (*)(char const*, int), char const*, int) ()
#8 0x000002aa0894935e in gdb_main(captured_main_args*) ()
#9 0x000002aa08686f06 in main ()
It seems like this must be some kind of GDB bug, though perhaps one we are tickling somehow.
Comment From: cherrymui
Looking at GDB source code, linux_test_for_tracefork: unexpected result from waitpid is GDB internal. It is part of GDB checking ptrace features, and the child is also part of GDB itself, not the user program. It doesn't look like anything related to Go.
Just to be sure, has the GDB binary on the builder changed recently, e.g., been upgraded from one version to another?
Comment From: cherrymui
GDB 7.12 was released in 2016. Should we try some newer version of GDB on the builders?
Comment From: prattmic
FWIW, the crashes on the s390 builder are on GDB 8.2, which is a bit newer (2018).
Digging into the https://github.com/golang/go/issues/39021#issuecomment-634303519 crash a bit more:
- rl_resize_terminal will crash if rl_initialize (i.e., readline initialization) is never called.
- gdb -batch -ex 'p 1' never calls rl_initialize (at least with HEAD gdb).
- It does call tui_interp::init, which installs a SIGWINCH signal handler for resizing.
Thus it seems it should be possible to trigger this crash by sending a SIGWINCH to a batch-mode GDB, though I haven't managed to do so.
Comment From: bcmills
@chlunde noted the same SIGWINCH bug in an external conversation. It's not clear to me whether that bug is related to this test failure.
Comment From: prattmic
I've dumped all of the core files on the s390 builder, and they all have the same stack trace as https://github.com/golang/go/issues/39021#issuecomment-634303519
$ gomote run user-mpratt-linux-s390x-ibm-0 /usr/bin/coredumpctl list gdb | tr -s ' ' | cut -f 5 -d ' ' | xargs -n 1 gomote run user-mpratt-linux-s390x-ibm-0 /usr/bin/coredumpctl info | c++filt > /tmp/stacks.txt
Though it remains to be seen if all of the GDB failures are SEGVs with core files. Now that https://golang.org/cl/235282 is in we should be getting more information on new failures.
Comment From: prattmic
Two new segfaults:
2020-05-27T19:54:12-0d20a49/linux-amd64-staticlockranking 2020-05-27T19:54:12-0d20a49/linux-mips64le-mengzhuo
This is trivially reproducible with my system GDB with:
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	cmd := exec.Command("gdb", "-nx", "-batch", "-ex", "run", "--args", "sleep", "60")
	go func() {
		// XXX: This isn't safe!
		time.Sleep(1 * time.Second)
		fmt.Println("Sending SIGWINCH...")
		syscall.Kill(cmd.Process.Pid, syscall.SIGWINCH)
	}()
	got, err := cmd.CombinedOutput()
	fmt.Printf("output: %q, err: %v\n", got, err)
}
$ ./gdb
Sending SIGWINCH...
output: "", err: signal: segmentation fault
I'll see if I can report this upstream.
My guess is that something has changed recently that is causing SIGWINCH to get sent to processes on the builders. I'm not sure what that would be though; AFAIK it is usually just desktop environments.
EDIT: CombinedOutput sets stdin to /dev/null and stdout to a pipe, so the bash equivalent of this is: gdb -nx -batch -ex 'run' --args sleep 60 </dev/null 2>&1 | cat. This also dies when signaled, though at HEAD it dies with SIGABRT because they've added a SIGSEGV handler that calls abort.
Comment From: prattmic
Upstream bug: https://sourceware.org/bugzilla/show_bug.cgi?id=26056
Comment From: prattmic
Thus far, the only true workaround I've found for this issue is to set GDB's stdin to /dev/tty. However, that's pretty ugly w.r.t. possible interactions with actual terminals. Not to mention it is probably annoying to write such that it works on all platforms (no idea if Darwin, Windows, etc have /dev/tty).
On the other hand, why are we getting SIGWINCH in the first place? This test sends SIGWINCH to the entire process group, so if it runs in parallel to the GDB tests and ends up in the same process group (not sure about that?), then it could be signaling GDB. It is a pretty old test though, and this seems to be a new issue.
Comment From: prattmic
Aha! I should read kill(2) more closely: "If pid equals 0, then sig is sent to every process in the process group of the calling process."
The original blamed http://golang.org/cl/232862 has a new test with:
// Send ourselves SIGWINCH regularly.
go func() {
	for range time.Tick(100 * time.Microsecond) {
		syscall.Kill(0, syscall.SIGWINCH)
	}
}()
So that is presumably the source of the signals. I'll send a workaround.
Comment From: ianlancetaylor
Argh, sorry about that, don't know what I was thinking.
Comment From: gopherbot
Change https://golang.org/cl/235557 mentions this issue: runtime: start GDB in a new process group
Comment From: cherrymui
Could we run tests that send signals to the process group not in parallel with other tests?
Comment From: prattmic
For reference, the upstream bug https://sourceware.org/bugzilla/show_bug.cgi?id=26056 was fixed in GDB 12.1 (https://salsa.debian.org/gdb-team/gdb/-/commit/72994b6028360eccb5d25b39d2e18b386d091426).
Comment From: prattmic
For reference, we saw this issue again in #74355. @cagedmantis managed to reproduce once and collect the core file in https://github.com/golang/go/issues/74355#issuecomment-3013543541.
To analyze the core file, I used the builder docker image to get the exact same gdb binary, and used Debian's debug server to get the symbols for GDB. (For some reason, this step did not work for @mknyszek):
$ docker run --rm -it gcr.io/symbolic-datum-552/linux-x86-bullseye:latest /bin/bash
root@58ee5162f587:/# export DEBUGINFOD_URLS="https://debuginfod.debian.net"
root@58ee5162f587:/# curl -L https://github.com/user-attachments/files/20951883/core.tar.gz > core.tar.gz
root@58ee5162f587:/# tar xf core.tar.gz
root@58ee5162f587:/# gdb $(which gdb) core
Program terminated with signal SIGABRT, Aborted.
#0 0x00007d1090545d61 in raise () from /lib/x86_64-linux-gnu/libc.so.6
[Current thread is 1 (Thread 0x7d108d680180 (LWP 627812))]
(gdb) bt
#0 0x00007d1090545d61 in raise () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007d109052f537 in abort () from /lib/x86_64-linux-gnu/libc.so.6
#2 0x000059a0ce917a4c in handle_sigsegv (sig=11) at /build/gdb-Nav6Es/gdb-10.1/gdb/event-top.c:885
#3 <signal handler called>
#4 0x00007d1090585630 in fileno_unlocked () from /lib/x86_64-linux-gnu/libc.so.6
#5 0x00007d10914fb873 in rl_resize_terminal () at ./terminal.c:385
#6 0x000059a0cec56d2b in tui_async_resize_screen (arg=<optimized out>) at /build/gdb-Nav6Es/gdb-10.1/gdb/tui/tui-win.c:541
#7 0x000059a0ce964c18 in invoke_async_signal_handlers () at /build/gdb-Nav6Es/gdb-10.1/gdb/async-event.c:206
#8 0x000059a0ced89c2e in gdb_do_one_event () at /build/gdb-Nav6Es/gdb-10.1/gdbsupport/event-loop.cc:172
#9 0x000059a0cec311a4 in wait_sync_command_done () at /build/gdb-Nav6Es/gdb-10.1/gdb/top.c:526
#10 0x000059a0cec333d4 in execute_command (p=<optimized out>, p@entry=<error reading variable: value has been optimized out>, from_tty=0, from_tty@entry=<error reading variable: value has been optimized out>) at /build/gdb-Nav6Es/gdb-10.1/gdb/top.c:670
#11 0x000059a0ceb0b833 in catch_command_errors (command=<optimized out>, arg=<optimized out>, from_tty=<optimized out>) at /build/gdb-Nav6Es/gdb-10.1/gdb/main.c:457
#12 0x000059a0ceb0f147 in captured_main_1 (context=<optimized out>) at /build/gdb-Nav6Es/gdb-10.1/gdb/main.c:1218
#13 0x000059a0ceb0f57b in captured_main (data=data@entry=0x7ffc82ee30e0) at /build/gdb-Nav6Es/gdb-10.1/gdb/main.c:1243
#14 gdb_main (args=args@entry=0x7ffc82ee3110) at /build/gdb-Nav6Es/gdb-10.1/gdb/main.c:1268
#15 0x000059a0ce92990c in main (argc=<optimized out>, argv=<optimized out>) at /build/gdb-Nav6Es/gdb-10.1/gdb/gdb.c:32