The glibc test suite contains a test that verifies that sched_getcpu
returns the expected CPU number for a thread that is pinned (via
sched_setaffinity) to a specific CPU. There are other threads running
which attempt to de-schedule the pinned thread from its CPU. I believe
the test is correctly doing what it is expected to do; it is invalid
only if one believes that it is okay for the kernel to disregard the
affinity mask for scheduling decisions.
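
For reference, the core of the check boils down to something like this
(a condensed sketch, not the actual test source; the CPU number and
iteration count are placeholders):

  #define _GNU_SOURCE
  #include <pthread.h>
  #include <sched.h>
  #include <stdio.h>

  static void *
  pinned_thread (void *arg)
  {
    int cpu = (int) (long) arg;
    cpu_set_t set;
    CPU_ZERO (&set);
    CPU_SET (cpu, &set);
    /* Pin this thread to a single CPU ...  */
    if (pthread_setaffinity_np (pthread_self (), sizeof (set), &set) != 0)
      return NULL;
    /* ... and verify that sched_getcpu keeps reporting that CPU while
       other, unpinned threads create scheduling pressure.  */
    for (int i = 0; i < 1000000; ++i)
      {
        int seen = sched_getcpu ();
        if (seen != cpu)
          printf ("error: Pinned thread ran on impossible cpu %d\n", seen);
      }
    return NULL;
  }
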
These days, we use the rseq cpu_id field as the return value of
sched_getcpu if the kernel has rseq support (which it does in these
cases).
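
Schematically, that fast path amounts to the following (a sketch, not
glibc's actual implementation; it assumes GCC's
__builtin_thread_pointer and the __rseq_offset symbol that glibc has
exported since 2.35):

  #define _GNU_SOURCE
  #include <sys/rseq.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  static int
  sched_getcpu_sketch (void)
  {
    /* The per-thread rseq area lives at a fixed offset from the thread
       pointer; the kernel updates its cpu_id field on every migration.  */
    struct rseq *rs
      = (struct rseq *) ((char *) __builtin_thread_pointer ()
                         + __rseq_offset);
    int cpu = (int) rs->cpu_id;
    if (cpu >= 0)
      return cpu;
    /* rseq registration failed; fall back to the getcpu syscall.  */
    unsigned int c;
    if (syscall (SYS_getcpu, &c, NULL, NULL) != 0)
      return -1;
    return (int) c;
  }
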
This test has started failing sporadically for us, some time around
kernel 6.0. I see failures occasionally on a Fedora builder running:
Linux buildvm-x86-26.iad2.fedoraproject.org 6.0.15-300.fc37.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Dec 21 18:33:23 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
I think I've seen it on the x86-64 builder only, but that might just be
an accident.
The failing tests log this output:
=====FAIL: nptl/tst-thread-affinity-pthread.out=====
info: Detected CPU set size (in bits): 64
info: Maximum test CPU: 5
error: Pinned thread 1 ran on impossible cpu 0
error: Pinned thread 0 ran on impossible cpu 0
info: Main thread ran on 4 CPU(s) of 6 available CPU(s)
info: Other threads ran on 6 CPU(s)
=====FAIL: nptl/tst-thread-affinity-pthread2.out=====
info: Detected CPU set size (in bits): 64
info: Maximum test CPU: 5
error: Pinned thread 1 ran on impossible cpu 1
error: Pinned thread 2 ran on impossible cpu 0
error: Pinned thread 3 ran on impossible cpu 3
info: Main thread ran on 5 CPU(s) of 6 available CPU(s)
info: Other threads ran on 6 CPU(s)
I also encountered one local failure, but it is rare; maybe it's
load-related. There shouldn't be any CPU unplug or anything like that
involved here.
I am not entirely sure whether something is changing CPU affinities
from outside the process (which would be quite wrong, but not a kernel
bug). But in the past, our glibc test has detected real rseq cpu_id
brokenness, so I'm leaning towards that as the cause this time, too.
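
If it helps to narrow this down, something along these lines could
separate the two explanations at the point of failure (a hypothetical
helper, not part of the test today): if the getcpu syscall disagrees
with the rseq-based value, the rseq cpu_id is stale; if the affinity
mask no longer contains the expected CPU, something outside the
process changed it:

  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdio.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  /* Hypothetical diagnostic: call this at the point where the pinned
     thread observes the wrong CPU.  */
  static void
  diagnose_mismatch (int expected_cpu)
  {
    unsigned int syscall_cpu;
    if (syscall (SYS_getcpu, &syscall_cpu, NULL, NULL) == 0)
      printf ("rseq-based sched_getcpu: %d, getcpu syscall: %u\n",
              sched_getcpu (), syscall_cpu);

    cpu_set_t set;
    CPU_ZERO (&set);
    if (sched_getaffinity (0, sizeof (set), &set) == 0)
      printf ("expected cpu %d still in affinity mask: %d\n",
              expected_cpu, CPU_ISSET (expected_cpu, &set) != 0);
  }
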
Thanks,
Florian