Re: Tasks RCU vs Preempt RCU
From: Steven Rostedt
Date: Tue May 22 2018 - 07:44:28 EST
On Mon, 21 May 2018 21:54:14 -0700
Joel Fernandes <joel@xxxxxxxxxxxxxxxxx> wrote:
> Yes, let's brainstorm this if you like. One way I was thinking is if we can
> manually check every CPU and see what state it's in (usermode, kernel, idle,
> etc.) using an IPI mechanism. Once all CPUs have been seen to be in usermode,
> or idle, at least once - then we are done. You have probably already thought
Nope, it has nothing to do with CPUs, it really has to do with tasks.
CPU0
----
task 1: (pinned to CPU 0)
    call func_tracer_trampoline
    [ on trampoline ]
    [ timer tick, schedule ]
task 2: (higher priority, also pinned to CPU 0)
    goes to user space
    [ runs for a long time ]
We cannot free the trampoline until task 2 releases the CPU and lets
task 1 run again to get off the trampoline.
> about this so feel free to say why it's not a good idea, but to me there are
> 3 places that a task's quiescent state is recorded: during the timer tick,
> during task sleep, and during rcu_note_voluntary_context_switch in
> cond_resched_rcu_qs. Of these, I feel only the cond_resched_rcu_qs case isn't
> trackable with an IPI mechanism, which may make the detection a bit slower,
> but tasks-RCU in mainline is slow right now anyway (~1 second delay if any
> task was held).
The way I was originally going to handle this was with a per-task
counter that could be incremented at certain points via tracepoints.
Thus my synchronize tasks would have connected to a bunch of
tracepoints at known quiescent states that would increment the counter,
and then checked each task until they all pass a certain point, or are
in a quiescent state (userspace or idle). But this would be doing much
of what RCU does today, and that is why we decided to hook into the RCU
infrastructure.
I have to ask, what's your motivation for getting rid of RCU tasks?
-- Steve