Re: [RFC PATCH v2 1/4] rseq: Add sched_state field to struct rseq

From: Dmitry Vyukov
Date: Thu Sep 28 2023 - 10:44:41 EST


On Thu, 28 Sept 2023 at 01:52, Florian Weimer <fweimer@xxxxxxxxxx> wrote:
>
> * Dmitry Vyukov:
>
> > On Tue, 26 Sept 2023 at 21:51, Florian Weimer <fweimer@xxxxxxxxxx> wrote:
> >>
> >> * Dmitry Vyukov:
> >>
> >> > In reality it's a bit more involved since the field is actually 8
> >> > bytes and only partially overlaps with rseq.cpu_id_start (it's an
> >> > 8-byte pointer whose high 4 bytes overlap rseq.cpu_id_start):
> >> >
> >> > https://github.com/google/tcmalloc/blob/229908285e216cca8b844c1781bf16b838128d1b/tcmalloc/internal/percpu.h#L101-L165
> >>
> >> This does not compose with other rseq users, as noted in the sources:
> >>
> >> // Note: this makes __rseq_abi.cpu_id_start unusable for its original purpose.
> >>
> >> For a core library such as a malloc replacement, that is a very bad trap.
> >
> > I agree. I wouldn't do this if there were other options. That's why I
> > am looking for official kernel support for this.
> > If we had a separate 8 bytes that were overwritten with 0 when a
> > thread is descheduled, that would be perfect.
>
> That only solves part of the problem because these fields would still
> have to be locked to tcmalloc. I think you'd need a rescheduling
> counter; then every library could keep its reference values in
> library-private thread-local storage.

This unfortunately won't work for tcmalloc.
This data is accessed on the very hot path of malloc/free. We need a
ready-to-use pointer in TLS, which is reset by the kernel to 0 (or
some user-space-specified value). Doing two separate loads for counters
in different cache lines would be too expensive.
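
To make the intent concrete, here is a minimal sketch of the fast path
we have in mind. struct rseq_ext, desched_ptr and the helper functions
are made up purely for illustration; this is not an existing ABI:

#include <stddef.h>
#include <stdint.h>
#include <linux/rseq.h>

/* Hypothetical layout: an extra 8-byte slot next to struct rseq that
 * the kernel would reset to 0 (or a user-specified value) whenever the
 * thread is descheduled. */
struct rseq_ext {
	struct rseq rseq;	/* existing ABI */
	uint64_t desched_ptr;	/* cleared by the kernel on reschedule */
};

extern __thread struct rseq_ext __rseq_ext;

/* Stand-ins for the real allocator fast/slow paths. */
void *alloc_from_cached_slab(void *slab, size_t size);
void *malloc_slow_path(size_t size);

static inline void *malloc_fast_path(size_t size)
{
	/* Single TLS load on the hot path: if the pointer is still set,
	 * the thread has not been descheduled since it was cached. */
	void *slab = (void *)(uintptr_t)__rseq_ext.desched_ptr;
	if (slab)
		return alloc_from_cached_slab(slab, size);
	/* Reset by the kernel: recompute the per-CPU slab pointer,
	 * store it back into the TLS slot, and retry. */
	return malloc_slow_path(size);
}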

It may be possible to make several libraries use this feature with an
array of notifications (see rseq_desched_notif_t in my previous
email).
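
For illustration only, one possible shape of such an array (this is not
the actual rseq_desched_notif_t definition from the earlier mail; the
slot count and field names are made up here):

#include <stdint.h>

/* One slot per library, each holding a pointer-sized value that the
 * kernel would overwrite with a user-chosen reset value when the
 * thread is descheduled. */
struct desched_notif_slot {
	uint64_t value;		/* hot-path load; kernel writes reset_value here */
	uint64_t reset_value;	/* user-specified, e.g. 0 */
};

#define DESCHED_NOTIF_SLOTS 8	/* arbitrary number for the sketch */

/* Each library registers one slot; on descheduling the kernel walks
 * the array and resets every value field. */
extern __thread struct desched_notif_slot desched_notifs[DESCHED_NOTIF_SLOTS];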