Re: [RFC] in-kernel rseq
From: Peter Zijlstra
Date: Mon Feb 23 2026 - 16:56:20 EST
On Mon, Feb 23, 2026 at 01:22:18PM -0500, Mathieu Desnoyers wrote:
> > I think it would be better as the address of the instruction after
> > the 'store'.
>
> That's indeed what we do for userspace rseq.
Either works I suppose. The only thing to be careful about is that you
must not restart once the store has happened.
> > You probably don't need separate 'begin' and 'restart' addresses.
>
> It's not needed as long as the abort behavior is only restart. It
> becomes useful if another behavior is wanted on abort. But since
> this is kernel code and not ABI, it can change if the need arises.
Right, didn't want to limit to restart, although that is what is used
here.
> > It might be enough to save the 'restart' address and a byte length
> > directly in 'current', much simpler code.
>
> That would make it two stores to the task struct. Those would not be
> single-instruction, so we'd have to deal with preemption coming between
> those two stores. Also this would be more code: two stores compared
> to a single pointer store to the task struct to begin the critical
> section. AFAIU Peter's proposed approach is more efficient.
Must indeed be a single store. Either we have it set in full, or we
don't.
> We could turn the end address into a length if we want, this would
> make it more alike the userspace rseq ABI counterpart.
I find 3 distinct addresses easier to fill out, but again it doesn't
matter.
> > How much it helps is another matter.
> > I'm sure I remember something about per-cpu data being used for something
> > because it was faster then using 'current' - not sure of the context.
>
> The problem with per-cpu data for this is how to handle migration?
> The whole point of this is to replace preempt disable.
This; it cannot be a per-cpu address, if you need it to implement
per-cpu ops.
> > The real problem with rseq is they don't scale.
>
> Not sure what you mean. They don't scale with respect to what ?
He might be talking about forward progress instead of scaling. There are
indeed forward progress concerns with rseq -- as there are with trivial
LL/SC. But given the length of a slice vs the length of a rseq section,
this should be a non-issue.
Doing the restart on interrupt would be a bigger issue. Although even
there I think that since the operations we're talking about are but a
few instructions, it should all just work well enough.
And if not, you can always craft a restart path that does the actual
local_irq_disable().
Eg.
this_cpu_add(pcp, i)
{
	static const struct sched_seq _R = {
		.begin   = &&__rseq_begin,
		.commit  = &&__rseq_commit,
		.restart = &&__rseq_restart,
	};

	WRITE_ONCE(current->sched_rseq, &_R);
__rseq_begin:
	barrier();
	addr = raw_cpu_ptr(pcp);
	v = READ_ONCE(*addr);
	v += i;
	WRITE_ONCE(*addr, v);
	barrier();
__rseq_commit:
	WRITE_ONCE(current->sched_rseq, NULL);
	return;

__rseq_restart:
	guard(irqsave)();
	addr = raw_cpu_ptr(pcp);	/* re-read: we may have migrated */
	v = READ_ONCE(*addr);
	v += i;
	WRITE_ONCE(*addr, v);
	return;
}
That way you get fast most of the time, except when you do get an
interrupt in between.
> > I think that is just unlocked RMW of a per-cpu/thread variable.
That's missing the point entirely. He might be stuck in x86_64 or
something.