[RFC PATCH 0/6] rcu: Userspace RCU extended quiescent state

From: Frederic Weisbecker
Date: Fri Jul 06 2012 - 08:00:22 EST


Although this feature is useless on its own, it is necessary to prepare
our kernel to be more tickless. With it, RCU no longer needs the tick
on a CPU that is running in userspace.

I've made it a standalone feature because maintaining it inside a big
tree like nohz cpusets doesn't scale. I'm now trying to integrate the
components piecewise as much as possible.

Once we have everything in place, this should be merged into the nohz
cpusets work.

So, what do you think? The version I use in my nohz cpusets tree does
things a bit differently: instead of hooking into entry.S, it hooks
into the higher-level exception handlers (do_debug(), do_page_fault(),
etc.) and the syscall slow path.

This version hooks into the low-level entry code instead. I'm not sure
which approach is best; both have their pros and cons, and this is mostly
a matter of detail.

I can cook up a patchset with hooks in the higher-level handlers to show
you, if you want.

I also don't know yet whether I should keep the previous TIF_NOHZ flag
and use the syscall slow path or not.

Yeah, I'm still brainstorming a bit...


Frederic Weisbecker (6):
rcu: Settle config for userspace extended quiescent state
rcu: Allow rcu_user_enter()/exit() to nest
rcu: Exit RCU extended QS on preemption in irq exit
x86: Use the new schedule_user API on user preemption
x86: Kernel entry/exit hooks for RCU
x86: Exit RCU extended QS on notify resume

arch/Kconfig | 13 +++++++++++++
arch/x86/Kconfig | 1 +
arch/x86/include/asm/rcu.h | 7 +++++++
arch/x86/kernel/entry_64.S | 33 +++++++++++++++++++++++++++++----
arch/x86/kernel/signal.c | 2 ++
init/Kconfig | 10 ++++++++++
kernel/rcutree.c | 42 ++++++++++++++++++++++++++++++++++--------
kernel/sched/core.c | 7 +++++++
8 files changed, 103 insertions(+), 12 deletions(-)
create mode 100644 arch/x86/include/asm/rcu.h

