Re: Question about sched_setaffinity()

From: Paul E. McKenney
Date: Mon May 13 2019 - 11:55:31 EST


On Mon, May 13, 2019 at 11:37:14AM -0400, Joel Fernandes wrote:
> On Mon, May 13, 2019 at 05:20:43AM -0700, Paul E. McKenney wrote:
> > On Sun, May 12, 2019 at 03:05:39AM +0200, Andrea Parri wrote:
> > > > > > The fix is straightforward. I just added "rcutorture.shuffle_interval=0"
> > > > > > to the TRIVIAL.boot file, which stops rcutorture from shuffling its
> > > > > > kthreads around.
> > > > >
> > > > > I added the option to the file and I didn't reproduce the issue.
> > > >
> > > > Thank you! May I add your Tested-by?
> > >
> > > Please feel free to do so. But it may be worth squashing "the commits"
> > > (and adjusting the changelogs accordingly). And you might want to remove
> > > some of those debug checks/prints?
> >
> > Revert/remove a number of the commits, but yes. ;-)
> >
> > And remove the extra loop, but leave the single WARN_ON() complaining
> > about being on the wrong CPU.
>
> The other "toy" implementation I noticed is based on reader/writer locking.
>
> Would you see value in having that as an additional rcu torture type?

Interesting question!

My kneejerk reaction is "no" because the fact that reader-writer locking
primitives pass locktorture implies that they have the needed semantics
to be a toy RCU implementation. (Things like NMI handlers prevent them
from operating correctly within the Linux kernel, and even things like
interrupt handlers would require disabling interrupts for Linux-kernel
use, but from a toy/textbook perspective, they qualify.)
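For reference, a minimal user-space sketch of such an rwlock-based toy
(along the lines of perfbook's toy-implementation appendix; pthread
primitives, names illustrative):

#include <pthread.h>

/* One global lock covers all RCU read-side critical sections. */
static pthread_rwlock_t rcu_gp_lock = PTHREAD_RWLOCK_INITIALIZER;

static void rcu_read_lock(void)
{
	pthread_rwlock_rdlock(&rcu_gp_lock);
}

static void rcu_read_unlock(void)
{
	pthread_rwlock_unlock(&rcu_gp_lock);
}

static void synchronize_rcu(void)
{
	/* Write-acquiring and releasing the lock waits for all
	 * pre-existing readers, which is exactly a grace period --
	 * at the cost of readers and updaters blocking each other. */
	pthread_rwlock_wrlock(&rcu_gp_lock);
	pthread_rwlock_unlock(&rcu_gp_lock);
}

And the fact that readers can block the updater indefinitely is part of
why this qualifies only as a toy.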

We do have a large number of toy RCU implementations in perfbook, though,
and I believe reader-writer locking is one of them.

But the current "trivial" version would actually work in the Linux
kernel as it is, give or take more esoteric things like CPU hotplug
and respecting user-level uses of sched_setaffinity(), which could be
"fixed", but at the expense of making it quite a bit less trivial.
(See early-2000s LKML traffic for some proposals along these lines.)
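Roughly, the grace-period side of that trivial version amounts to
something like the following (a hedged sketch only, not the actual
rcutorture code): migrate to each online CPU in turn and rely on
readers never blocking.

#include <linux/bug.h>
#include <linux/cpumask.h>
#include <linux/sched.h>
#include <linux/smp.h>

static void synchronize_rcu_trivial(void)
{
	int cpu;

	for_each_online_cpu(cpu) {
		/* Clobbers the caller's affinity mask, hence the caveat
		 * about user-level sched_setaffinity() above. */
		set_cpus_allowed_ptr(current, cpumask_of(cpu));
		/* If something (e.g. the rcutorture shuffler) moved us,
		 * we are not where we asked to be. */
		WARN_ON_ONCE(raw_smp_processor_id() != cpu);
	}
}

The CPU-hotplug and affinity caveats above are exactly where the extra
complexity would come in.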

Thanx, Paul