Re: [PATCH v4 1/2] x86/resctrl: Update task closid/rmid with task_call_func()

From: Peter Newman
Date: Mon Dec 12 2022 - 12:37:53 EST


Hi Reinette,

On Sat, Dec 10, 2022 at 12:54 AM Reinette Chatre <reinette.chatre@xxxxxxxxx> wrote:
> On 12/8/2022 2:30 PM, Peter Newman wrote:
> > Based on this, I'll just sketch out the first scenario below and drop
> > (2) from the changelog. This also implies that the group update cases
>
> ok, thank you for doing that analysis.
>
> > can use a single smp_mb() to provide all the necessary ordering, because
> > there's a full barrier on context switch for it to pair with, so I don't
> > need to broadcast IPI anymore.  I don't know whether task_call_func() is
>
> This is not clear to me because rdt_move_group_tasks() seems to have the
> same code as shown below as vulnerable to re-ordering. Only difference
> is that it uses the "//false" checks to set a bit in the cpumask for a
> later IPI instead of an immediate IPI.

An smp_mb() between writing the new task_struct::{closid,rmid} and
calling task_curr() would prevent the reordering I described, but I
was worried about the cost of executing a full barrier for every
matching task.

I tried something like this:

for_each_process_thread(p, t) {
        if (!from || is_closid_match(t, from) ||
            is_rmid_match(t, from)) {
                WRITE_ONCE(t->closid, to->closid);
                WRITE_ONCE(t->rmid, to->mon.rmid);
                /* group moves are serialized by rdt */
                t->resctrl_dirty = true;
        }
}

if (IS_ENABLED(CONFIG_SMP) && mask) {
        /* Order t->{closid,rmid} stores before loads in task_curr() */
        smp_mb();
        for_each_process_thread(p, t) {
                if (t->resctrl_dirty) {
                        if (task_curr(t))
                                cpumask_set_cpu(task_cpu(t), mask);
                        t->resctrl_dirty = false;
                }
        }
}

I repeated the `perf bench sched messaging -g 40 -l100000` benchmark
from before[1] to compare this with the baseline, and found that it
only increased the time to delete the benchmark's group from 1.65ms to
1.66ms, so it's an alternative to what I last posted.

I could do something similar in the single-task move, but I don't think
it makes much of a performance difference in that case. It's only a win
for the group move because the synchronization cost doesn't grow with
the group size.
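
For reference, the single-task variant would be roughly the following
(untested sketch only, assuming the existing __rdtgroup_move_task() /
update_task_closid_rmid() structure in rdtgroup.c stays as it is; the
barrier before the task_curr() check is the only change):

static void __rdtgroup_move_task(struct task_struct *tsk,
                                 struct rdtgroup *rdtgrp)
{
        /* ...group type/closid checks elided... */

        WRITE_ONCE(tsk->closid, rdtgrp->closid);
        WRITE_ONCE(tsk->rmid, rdtgrp->mon.rmid);

        /*
         * Order the closid/rmid stores above before the task_curr()
         * (and task_cpu()) loads in update_task_closid_rmid(), pairing
         * with the barrier implied by context switch on the remote CPU.
         */
        smp_mb();

        update_task_closid_rmid(tsk);
}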

[1] https://lore.kernel.org/lkml/20221129111055.953833-3-peternewman@xxxxxxxxxx/


>
> > faster than an smp_mb(). I'll take some measurements to see.
> >
> > The presumed behavior is __rdtgroup_move_task() not seeing t1 running
> > yet implies that it observes the updated values:
> >
> > CPU 0                                   CPU 1
> > -----                                   -----
> > (t1->{closid,rmid} -> {1,1})            (rq->curr -> t0)
> >
> > __rdtgroup_move_task():
> >   t1->{closid,rmid} <- {2,2}
> >   curr <- t1->cpu->rq->curr
> >                                         __schedule():
> >                                           rq->curr <- t1
> >                                         resctrl_sched_in():
> >                                           t1->{closid,rmid} -> {2,2}
> >   if (curr == t1) // false
> >     IPI(t1->cpu)
>
> I understand that the test is false when it may be expected to be true, but
> there does not seem to be a problem because of that. t1 was scheduled in with
> the correct CLOSID/RMID and its CPU did not get an unnecessary IPI.

Yes, this one was just a reminder of the correct behavior for the
reader. I can leave it out.

-Peter