I am tracing various kernel functions and system calls, and establishing patterns between them that can be used for performance profiling.
One thing I noticed is that sometimes, even in my simple test application, which spins up a few threads that play with some mutexes, I wouldn't get any calls to kretprobe__sys_futex, but I would get plenty to kprobe__sys_futex.
I assumed that this was because e.g. a thread was calling into sys_futex and going to sleep or perhaps terminating, but I actually see the same processes calling sys_futex multiple times consecutively without the return probe ever noticing anything.
I then assumed the issue was in how I was filtering calls to kprobe__sys_futex, so I made a minimal example using BCC/eBPF to test this:
#!/usr/bin/env python
from bcc import BPF

b = BPF(text="""
BPF_HASH(call_count, int, int);

int kprobe__sys_futex() {
    // Make sure the counter exists, then bump it on every entry.
    int zero = 0;
    call_count.lookup_or_init(&zero, &zero);
    bpf_trace_printk("futex start\\n");
    call_count.increment(zero);
    return 0;
}

int kretprobe__sys_futex() {
    // Decrement the counter and report how many entries are still unmatched.
    int zero = 0;
    int *val_p = call_count.lookup(&zero);
    if (val_p != NULL) {
        int val = *val_p;
        val--;
        call_count.update(&zero, &val);
        bpf_trace_printk("futex calls with no return: %d\\n", val);
    } else {
        bpf_trace_printk("unexpected futex return\\n");
    }
    return 0;
}
""")
b.trace_print()
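(The script needs to run as root; trace_print() just tails /sys/kernel/debug/tracing/trace_pipe.)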
I notice that in all sorts of applications (a good example is mysql-server, which does regular futex operations even when idle - at least on my machine), many (often 10+) "futex start"s are printed before a message from the return probe.
Here's an example trace of the above program which I left running for a couple of minutes while I wrote this post:
... hundreds of lines of much the same as below
gdbus-612 [001] .... 211229.997665: 0x00000001: futex start
NetworkManager-541 [001] .... 211229.997667: 0x00000001: futex start
gdbus-612 [001] .... 211229.997670: 0x00000001: futex start
mysqld-697 [001] .... 211230.789205: 0x00000001: futex start
mysqld-697 [001] .... 211230.789227: 0x00000001: futex start
mysqld-703 [001] .... 211230.789251: 0x00000001: futex start
mysqld-703 [001] .... 211230.789253: 0x00000001: futex start
mysqld-704 [001] d... 211230.789258: 0x00000001: futex calls with no return: 3994
mysqld-704 [001] .... 211230.789259: 0x00000001: futex start
mysqld-704 [001] d... 211230.789260: 0x00000001: futex calls with no return: 3994
mysqld-704 [001] .... 211230.789272: 0x00000001: futex start
mysqld-713 [000] .... 211231.037016: 0x00000001: futex start
mysqld-713 [000] .... 211231.037036: 0x00000001: futex start
vmstats-895 [000] .... 211231.464867: 0x00000001: futex start
mysqld-697 [001] .... 211231.790738: 0x00000001: futex start
mysqld-697 [001] .... 211231.790784: 0x00000001: futex start
mysqld-703 [001] .... 211231.790796: 0x00000001: futex start
mysqld-703 [001] .... 211231.790799: 0x00000001: futex start
mysqld-704 [001] d... 211231.790809: 0x00000001: futex calls with no return: 4001
mysqld-704 [001] .... 211231.790812: 0x00000001: futex start
mysqld-704 [001] d... 211231.790814: 0x00000001: futex calls with no return: 4001
As you can see, pid 697, for example, seems to have called into sys_futex four times without returning in just this small trace.
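Incidentally, a more direct way to check this per process than eyeballing the trace would be to key the hash by the task's pid. A hypothetical sketch of what the BPF text in the script above could be replaced with (same Python wrapper and escaping; bpf_get_current_pid_tgid() is the standard helper, and its lower 32 bits are the per-thread pid that shows up in the trace):

BPF_HASH(imbalance, u32, int);

int kprobe__sys_futex() {
    // Lower half of the helper's return value is the per-thread pid shown by trace_pipe.
    u32 tid = (u32)bpf_get_current_pid_tgid();
    int zero = 0;
    imbalance.lookup_or_init(&tid, &zero);
    imbalance.increment(tid);
    return 0;
}

int kretprobe__sys_futex() {
    u32 tid = (u32)bpf_get_current_pid_tgid();
    int *val_p = imbalance.lookup(&tid);
    if (val_p != NULL) {
        int val = *val_p - 1;
        imbalance.update(&tid, &val);
        bpf_trace_printk("pid %d: entries without return: %d\\n", tid, val);
    }
    return 0;
}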
I don't think this is a race condition in the eBPF code, because if you mute the print statements and only print periodically, the count usually stays within a few of zero for sys_write, which happens orders of magnitude more often than sys_futex (at least under my system's workload), so I would expect any race condition to be exacerbated, not solved.
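For reference, the sys_write comparison was along these lines - a rough sketch, not the exact script I ran: the same entry/return counting, but the counter is read from userspace once a second instead of printing on every event.

#!/usr/bin/env python
from time import sleep
from bcc import BPF

b = BPF(text="""
BPF_HASH(call_count, int, int);

int kprobe__sys_write() {
    int zero = 0;
    call_count.lookup_or_init(&zero, &zero);
    call_count.increment(zero);
    return 0;
}

int kretprobe__sys_write() {
    int zero = 0;
    int *val_p = call_count.lookup(&zero);
    if (val_p != NULL) {
        int val = *val_p - 1;
        call_count.update(&zero, &val);
    }
    return 0;
}
""")

while True:
    sleep(1)
    # Entries minus returns; stays near zero if kretprobes aren't being missed.
    total = sum(v.value for _, v in b["call_count"].items())
    print("sys_write entries minus returns: %d" % total)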
I'm running kernel 4.15.0-43-generic on Ubuntu 18.04 LTS, inside VirtualBox.
Happy to provide any more context that might be useful!
There's a thread on the IOVisor mailing list which is relevant: https://lists.iovisor.org/g/iovisor-dev/topic/29702757