Re: [PATCH] x86/resctrl: avoid compiler optimization in __resctrl_sched_in
From: Peter Zijlstra
Date: Tue Mar 07 2023 - 06:36:50 EST
On Mon, Mar 06, 2023 at 04:16:52PM -0800, Nick Desaulniers wrote:
> Start of Lore thread:
> https://lore.kernel.org/lkml/20230303231133.1486085-1-eranian@xxxxxxxxxx/
>
> On Mon, Mar 6, 2023 at 4:01 AM Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> >
> > On Fri, Mar 03, 2023 at 03:11:33PM -0800, Stephane Eranian wrote:
> >
> > > The problem is located in the __resctrl_sched_in() routine which rewrites
> > > the active closid via the PQR_ASSOC register. Because this is an expensive
> > > operation, the kernel only does it when the context switch involves tasks
> > > with different CLOSID. And to check that, it needs to access the current
> > > task's closid field using current->closid. current is actually a macro
> > > that reads the per-cpu variable pcpu_hot.current_task.
> > >
> > > After an investigation by compiler experts, the problem has been tracked down
> > > to the usage of the get_current() macro in the __resctrl_sched_in() code and
> > > in particular the per-cpu macro:
> > >
> > > static __always_inline struct task_struct *get_current(void)
> > > {
> > > 	return this_cpu_read_stable(pcpu_hot.current_task);
> > > }
> > >
> > > And as per percpu.h:
> > >
> > > /*
> > > * this_cpu_read() makes gcc load the percpu variable every time it is
> > > * accessed while this_cpu_read_stable() allows the value to be cached.
> > > * this_cpu_read_stable() is more efficient and can be used if its value
> > > * is guaranteed to be valid across cpus. The current users include
> > > * get_current() and get_thread_info() both of which are actually
> > > * per-thread variables implemented as per-cpu variables and thus
> > > * stable for the duration of the respective task.
> > > */
> > >
> > > The _stable version of the macro allows the value to be cached, meaning it
> > > does not force a reload.
> >
> > Right, so afaict the difference between this_cpu_read() and
> > this_cpu_read_stable() is the volatile qualifier.
> >
> > this_cpu_read() is asm volatile(), while this_cpu_read_stable() and
> > raw_cpu_read() are both an unqualified asm().
> >
> > Now, afaiu we're inlining all of this into __switch_to(), which has
> > raw_cpu_write(pcpu_hot.current_task, next_p).
> >
> > And I suppose what the compiler is doing is lifting the 'current' load
> > over that store, but how is it allowed to do that? I thought C was
> > supposed to have program-order consistency. That raw_cpu_write() should
> > be seen as a store to pcpu_hot.current_task, so why can it lift a load
> > over the store?
> >
> > Specifically, percpu_to_op() has a "+m" output constraint while
> > percpu_stable_op() has a "p" input constraint on the same address.
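
To spell out the shapes in question, they boil down to roughly this (a
simplified sketch, not the literal macros from
arch/x86/include/asm/percpu.h, and the pcpu_* names are made up):

	/* percpu_to_op() shape: the store, with a "+m" operand on the var. */
	#define pcpu_write(var, val)					\
		asm("movq %1, %%gs:%0" : "+m" (var) : "r" (val))

	/*
	 * this_cpu_read() shape: volatile asm with an "m" input, so it
	 * is reloaded on every access.
	 */
	#define pcpu_read(var)						\
	({								\
		typeof(var) v__;					\
		asm volatile("movq %%gs:%1, %0"				\
			     : "=r" (v__) : "m" (var));			\
		v__;							\
	})

	/*
	 * this_cpu_read_stable() shape: unqualified asm whose only input
	 * is the *address*, via "p". The compiler sees no memory read at
	 * all, so nothing ties the asm to stores to the same variable.
	 */
	#define pcpu_read_stable(var)					\
	({								\
		typeof(var) v__;					\
		asm("movq %%gs:%P1, %0"					\
		    : "=r" (v__) : "p" (&(var)));			\
		v__;							\
	})
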
>
> I definitely think the issue is specific to "p" constraints.
> https://godbolt.org/z/34YeG6WbY is the test case I reduced, which I
> think demonstrates the issue.
>
> https://reviews.llvm.org/D145416
> -> click "Show Older Changes" for the ongoing discussion.
So per that summary, I'm going to nit-pick and state we very much want
CSE. CSE good. What we don't want is it violating store-load ordering.
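
Concretely, using the sketch macros above, the inlined __switch_to()
shape reduces to something like this (hypothetical reduction; the real
one is in Nick's godbolt link):

	struct task_struct;
	/* stands in for pcpu_hot.current_task */
	struct task_struct *current_task;

	void example_switch_to(struct task_struct *next_p)
	{
		struct task_struct *cur;

		pcpu_write(current_task, next_p);	/* raw_cpu_write() */

		/* inlined __resctrl_sched_in() reading 'current': */
		cur = pcpu_read_stable(current_task);

		/*
		 * The asm behind pcpu_read_stable() only takes
		 * &current_task as input, and that address didn't change,
		 * so the compiler may reuse a value computed *before* the
		 * store: cur can still be the previous task.
		 */
	}
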
> I don't have a satisfactory answer yet, but am looking into this.
Oh, geez, what a twisty tale that is... So Linus knew back in '09 that
"p" was icky, but it sorta was the only thing and it 'worked' -- until
now :/
Is there a way to explicitly order these things? barrier() obviously
isn't going to help here.
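
(For the record, barrier() is just:

	#define barrier() __asm__ __volatile__("" : : : "memory")

The "memory" clobber invalidates whatever the compiler has cached from
memory, but the stable read's asm has no memory operand at all -- its
only input is the pointer value via "p", which the clobber doesn't
touch -- so the non-volatile asm can still be CSE'd straight across
it.)
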
So ideally we'd get something that respects the whole store-load
ordering but still allows aggressive CSE. And works for both toolchains.
Small ask, I know :-)
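
One (untested) thought: keep the asm unqualified but feed it a real "m"
input instead of "p", something like:

	/*
	 * Untested sketch: a non-volatile asm with an "m" input. The
	 * compiler now sees an actual read of the variable, so it must
	 * order it after a visible store to the same object, yet it can
	 * still CSE consecutive reads with no intervening store.
	 */
	#define pcpu_read_ordered(var)					\
	({								\
		typeof(var) v__;					\
		asm("movq %%gs:%1, %0"					\
		    : "=r" (v__) : "m" (var));				\
		v__;							\
	})

Though of course any potentially-aliasing store would then also force a
reload, which is presumably why "p" was used in the first place.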