Re: [PATCH] net: raise RCU qs after each threaded NAPI poll
From: Mark Rutland
Date: Thu Mar 07 2024 - 13:34:25 EST
On Thu, Mar 07, 2024 at 04:57:33PM +0000, Mark Rutland wrote:
> On Mon, Mar 04, 2024 at 04:16:01AM -0500, Joel Fernandes wrote:
> > On 3/2/2024 8:01 PM, Joel Fernandes wrote:
> > Case 1: For !CONFIG_DYNAMIC_FTRACE update of ftrace_trace_function
> >
> > This config is itself expected to be slow. However, looking at what it
> > does, it is trying to make sure the global function pointer
> > "ftrace_trace_function" is updated and that any readers of that pointer
> > have finished reading it. I don't personally think preemption has to be
> > disabled across the entirety of the section that calls into this
> > function, so sensitivity to preempt disabling should not be relevant for
> > this case IMO, but let's see if the ftrace folks disagree (on CC). It has
> > more to do with ensuring that any callers through this function pointer
> > are no longer calling into the old function.
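
For reference, the pattern Joel describes looks roughly like the sketch
below. This is a minimal sketch rather than the exact mainline code (the
usual kernel definitions of ftrace_func_t, WRITE_ONCE(), etc. are
assumed), and the wait primitive at the end is precisely what is under
discussion:

    /*
     * Sketch: publish a new global trace function, then wait until no
     * CPU can still be calling through the old value.
     */
    static void update_trace_function(ftrace_func_t new_func)
    {
            /* Publish; in-flight callers may still see the old func. */
            WRITE_ONCE(ftrace_trace_function, new_func);

            /*
             * Wait for pre-existing callers to finish. This is only
             * sound if callers cannot be preempted mid-call, which is
             * the assumption being questioned for PREEMPT_FULL.
             */
            synchronize_rcu_tasks();
    }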
>
> I've been looking into this case for the last couple of days, and the
> short story is that the existing code is broken today for PREEMPT_FULL,
> the code for CONFIG_DYNAMIC_FTRACE=y is similarly broken, and a number of
> architectures have also implemented the entry assembly incorrectly...
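
To illustrate the mismatch: a non-trampoline ftrace caller effectively
does the following (a C sketch of what the entry assembly does, not any
particular architecture's code):

    func = ftrace_trace_function;   /* load the global func pointer */
    /* <-- on PREEMPT_FULL, the task can be preempted here --> */
    ops = function_trace_op;        /* load the global ops pointer  */
    func(ip, parent_ip, ops, fregs);

If an update lands between the two loads, we invoke the new func with the
old ops (or the old func with the new ops, depending on the load order).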
> I believe our options are:
>
> * Avoid the mismatch by construction:
>
> - On architectures with trampolines, we could ensure that the list_ops gets
> its own trampoline and that we *always* use a trampoline, never using a
> common ftrace_caller. When we switch callers from one trampoline to another
> they'd atomically get the new func+ops.
>
> I reckon that might be a good option for x86-64?
>
> - On architectures without trampolines, we could require that the
> ftrace_caller loads ops->func from the ops pointer, so that the func
> and ops are always obtained together (see the sketch after this list).
>
> That'd mean removing the 'ftrace_trace_function' pointer and removing
> patching of the call to the trace function (but the actual tracee callsites
> would still be patched).
>
> I'd be in favour of this for arm64 since that matches the way CALL_OPS
> works; the only difference is we'd load a global ops pointer rather than a
> per-callsite ops pointer.
>
> * Use rcu_tasks_trace to synchronize updates?
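
For concreteness, the "load ops->func from the ops pointer" option above
would make every non-trampoline caller do something like the following
sketch, so that func is always derived from the ops it is passed:

    ops = READ_ONCE(function_trace_op);     /* single load of ops   */
    ops->func(ip, parent_ip, ops, fregs);   /* func derived from it */

With that, func and ops can never be mismatched, and an update becomes a
single pointer store plus whatever wait primitive we choose.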
Having acquainted myself with the RCU flavours, I think the RCU Tasks Trace
suggestion wouldn't help, but *some* flavour of RCU might give us what we need.
That said, my preference is for the "avoid the mismatch by construction"
approach: even if we need to wait for uses of the old func+ops to finish,
we'd have fewer transitions (and consequently less patching) if we have:
    switch_to_new_ops();
    wait_for_old_ops_usage_to_finish();

... rather than:

    switch_to_list_func();
    wait_for_old_ops_usage_to_finish();
    switch_to_new_ops();
    ensure_new_ops_are_visible();
    switch_to_new_func();
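
As a rough sketch of the shorter sequence (assuming an atomic func+ops
publish, with synchronize_rcu_tasks() standing in as a placeholder for
the yet-to-be-chosen wait primitive):

    /* Atomically publish the new func+ops pair... */
    rcu_assign_pointer(function_trace_op, new_ops);
    /* ...then wait out all users of the old pair; flavour TBD. */
    synchronize_rcu_tasks();
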
Mark.