Re: [PATCH v12 2/3] sched: Move task_mm_cid_work to mm work_struct
From: Peter Zijlstra
Date: Wed Apr 09 2025 - 15:09:21 EST
On Wed, Apr 09, 2025 at 11:53:05AM -0400, Mathieu Desnoyers wrote:
> On 2025-04-09 11:20, Peter Zijlstra wrote:
> > On Wed, Apr 09, 2025 at 10:15:42AM -0400, Mathieu Desnoyers wrote:
> > > On 2025-04-09 10:03, Peter Zijlstra wrote:
> > > > On Tue, Mar 11, 2025 at 07:28:45AM +0100, Gabriele Monaco wrote:
> > > > > +static inline void rseq_preempt_from_tick(struct task_struct *t)
> > > > > +{
> > > > > +	u64 rtime = t->se.sum_exec_runtime - t->se.prev_sum_exec_runtime;
> > > > > +
> > > > > +	if (rtime > RSEQ_UNPREEMPTED_THRESHOLD)
> > > > > +		rseq_preempt(t);
> > > > > +}
> > > >
> > > > This confused me.
> > > >
> > > > The goal seems to be to tickle __rseq_handle_notify_resume() so it'll
> > > > end up queueing that work thing. But why do we want to set PREEMPT_BIT
> > > > here?
> > >
> > > In that scenario, we flag (from the tick) that we may recompact the
> > > mm_cid, and thus need to update the rseq mm_cid field before returning to
> > > userspace.
> > >
> > > Changing the value of the mm_cid field while userspace is within a rseq
> > > critical section should abort the critical section, because the rseq
> > > critical section should be able to expect the mm_cid to be invariant
> > > for the whole critical section.
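
A minimal userspace sketch of why that invariance matters (conceptual
only: a real rseq critical section wraps the sequence in the rseq asm
block so the kernel can abort it; struct rseq and its mm_cid field are
the rseq UAPI, the slot array and its size are made up here):

#include <stdint.h>
#include <linux/rseq.h>		/* UAPI struct rseq, including the mm_cid field */

#define NR_SLOTS	512	/* assumed upper bound on concurrency in this mm */

static uint64_t per_cid_count[NR_SLOTS];

static void bump_per_cid_count(volatile struct rseq *rs)
{
	/* Start of the (would-be) critical section: read mm_cid once. */
	uint32_t cid = rs->mm_cid;

	/*
	 * The commit below relies on cid still designating a slot owned
	 * exclusively by this thread.  If the kernel changed rs->mm_cid
	 * in between without aborting the critical section, two threads
	 * could end up committing to the same slot.
	 */
	per_cid_count[cid]++;
}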
> >
> > But, if we run that compaction in a worker, what guarantees the
> > compaction is done and mm_cid is stable, by the time this task returns
> > to userspace again?
>
> So let's say we have a task which is running and not preempted by any
> other task on a cpu for a long time.
>
> The idea is to have the tick do two things:
>
> A) trigger the mm_cid recompaction,
>
> B) trigger an update of the task's rseq->mm_cid field at some point
> after recompaction, so it can get a mm_cid value closer to 0.
>
> So in its current form this patch will indeed trigger rseq_preempt()
> for *every tick* after the task has run for more than 100ms, which
> I don't think is intended. This should be fixed.
>
> Also, doing just an rseq_preempt() is not the correct approach, as
> AFAIU it won't force the long-running task to release the currently
> held mm_cid value.
>
> I think we need something that looks like the following based on the
> current patch:
>
> - rename rseq_preempt_from_tick() to rseq_tick(),
>
> - modify rseq_tick() to ensure it calls rseq_set_notify_resume(t)
> rather than rseq_preempt().
>
> - modify rseq_tick() to ensure it only calls rseq_set_notify_resume(t)
> once every RSEQ_UNPREEMPTED_THRESHOLD, rather than on every tick after
> RSEQ_UNPREEMPTED_THRESHOLD.
>
> - modify rseq_tick() so that, at some point after the work has
> compacted mm_cids, we do the same thing as switch_mm_cid()
> does, namely release the currently held cid and get a likely
> smaller one (closer to 0). If the value changes, then we should
> trigger rseq_preempt() so the task updates the mm_cid field before
> returning to userspace and restarts any ongoing rseq critical section.
>
> Thoughts ?
Yes, that seems better. Also be sure there's a comment around there
somewhere that explains this. Because I'm sure I'll have forgotten all
about this in a few months' time :-)
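
To make the intended flow concrete, a rough sketch of such an rseq_tick()
following the four points above; the rseq_tick_last_rtime field, the
mm_cid_work_done() test and the mm_cid_update_current() helper are
placeholders that do not exist in the posted series, so this is an
illustration of the flow, not a proposal:

static inline void rseq_tick(struct task_struct *t)
{
	u64 rtime = t->se.sum_exec_runtime - t->se.prev_sum_exec_runtime;

	if (rtime < RSEQ_UNPREEMPTED_THRESHOLD)
		return;

	/*
	 * Act once per RSEQ_UNPREEMPTED_THRESHOLD of unpreempted runtime,
	 * not on every tick past the threshold (placeholder per-task
	 * timestamp of the last action).
	 */
	if (rtime - t->rseq_tick_last_rtime < RSEQ_UNPREEMPTED_THRESHOLD)
		return;
	t->rseq_tick_last_rtime = rtime;

	if (!mm_cid_work_done(t->mm)) {
		/*
		 * First pass: only make sure we go through
		 * __rseq_handle_notify_resume() so the compaction work gets
		 * queued.  No preempt event is needed for that, hence
		 * rseq_set_notify_resume() rather than rseq_preempt().
		 */
		rseq_set_notify_resume(t);
		return;
	}

	/*
	 * Compaction has run: drop the currently held cid and pick up a
	 * likely smaller one, as switch_mm_cid() does.  If the value
	 * changed, userspace must see the new mm_cid and abort any ongoing
	 * critical section, which is what rseq_preempt() ensures.
	 */
	if (mm_cid_update_current(t))
		rseq_preempt(t);
}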