Re: [PATCH -tip v4] kprobes: Use synchronize_rcu_tasks() for optprobe with CONFIG_PREEMPT
From: Paul E. McKenney
Date: Thu Oct 19 2017 - 19:56:01 EST
On Fri, Oct 20, 2017 at 08:43:39AM +0900, Masami Hiramatsu wrote:
> We want to wait for all potentially preempted kprobe trampoline
> executions to have completed. This guarantees that any freed
> trampoline memory is not in use by any task in the system anymore.
> synchronize_rcu_tasks() gives such a guarantee, so use it.
> It also guarantees waiting for all tasks potentially preempted
> on the instructions which will be replaced with a jump.
>
> Since this becomes a problem only when CONFIG_PREEMPT=y, enable
> CONFIG_TASKS_RCU=y for synchronize_rcu_tasks() in that case.
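
For readers unfamiliar with the tasks-RCU guarantee being relied on here,
a minimal sketch of the pattern (illustrative only, not from this patch;
release_optprobe_trampoline() and free_trampoline_pages() are hypothetical
names):

#include <linux/rcupdate.h>	/* synchronize_rcu_tasks() */

/*
 * Illustrative pattern: never free trampoline text while a preempted
 * task may still resume execution inside it.
 */
static void release_optprobe_trampoline(void *trampoline)
{
	/*
	 * Returns only after every task has passed through a voluntary
	 * context switch or user-mode execution, so no task can still
	 * be preempted in the middle of the trampoline text.
	 */
	synchronize_rcu_tasks();
	free_trampoline_pages(trampoline);	/* hypothetical free helper */
}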
>
> Signed-off-by: Masami Hiramatsu <mhiramat@xxxxxxxxxx>
;-)
Acked-by: Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx>
> ---
> arch/Kconfig | 2 +-
> kernel/kprobes.c | 14 ++++++++------
> 2 files changed, 9 insertions(+), 7 deletions(-)
>
> diff --git a/arch/Kconfig b/arch/Kconfig
> index d789a89cb32c..7e67191a4961 100644
> --- a/arch/Kconfig
> +++ b/arch/Kconfig
> @@ -90,7 +90,7 @@ config STATIC_KEYS_SELFTEST
> config OPTPROBES
> def_bool y
> depends on KPROBES && HAVE_OPTPROBES
> - depends on !PREEMPT
> + select TASKS_RCU if PREEMPT
>
> config KPROBES_ON_FTRACE
> def_bool y
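
For reference, the OPTPROBES entry as it reads after this hunk (assuming no
other pending changes to this block):

config OPTPROBES
	def_bool y
	depends on KPROBES && HAVE_OPTPROBES
	select TASKS_RCU if PREEMPT

That is, optprobes are no longer simply disabled on preemptive kernels;
instead CONFIG_PREEMPT=y pulls in CONFIG_TASKS_RCU=y so that
synchronize_rcu_tasks() is available below.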
> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> index 15fba7fe57c8..a8fc1492b308 100644
> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -573,13 +573,15 @@ static void kprobe_optimizer(struct work_struct *work)
> do_unoptimize_kprobes();
>
> /*
> - * Step 2: Wait for quiesence period to ensure all running interrupts
> - * are done. Because optprobe may modify multiple instructions
> - * there is a chance that Nth instruction is interrupted. In that
> - * case, running interrupt can return to 2nd-Nth byte of jump
> - * instruction. This wait is for avoiding it.
> + * Step 2: Wait for a quiescence period to ensure that all potentially
> + * preempted tasks have normally scheduled. Because an optprobe may
> + * modify multiple instructions, there is a chance that the Nth
> + * instruction is preempted. In that case, such tasks can return
> + * to the 2nd-Nth byte of the jump instruction. This wait avoids that.
> + * Note that on a non-preemptive kernel, this is transparently converted
> + * to synchronize_sched() to wait for all interrupts to have completed.
> */
> - synchronize_sched();
> + synchronize_rcu_tasks();
>
> /* Step 3: Optimize kprobes after quiesence period */
> do_optimize_kprobes();
>
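
The "transparently converted" note above relies on the CONFIG_TASKS_RCU
fallback in include/linux/rcupdate.h. From memory of that era's header (a
sketch, not part of this patch), the mapping is roughly:

#ifdef CONFIG_TASKS_RCU
void synchronize_rcu_tasks(void);
#else
/*
 * Without tasks RCU, a sched grace period is sufficient: on a
 * non-preemptive kernel no task can be preempted inside the
 * trampoline or the patched instructions in the first place.
 */
#define synchronize_rcu_tasks synchronize_sched
#endif

So a !PREEMPT build, which does not select TASKS_RCU via the Kconfig hunk
above, keeps the old synchronize_sched() behavior.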