Re: [PATCH] kprobes: Fix to delay the kprobes jump optimization

From: Paul E. McKenney
Date: Thu Feb 18 2021 - 12:58:34 EST


On Thu, Feb 18, 2021 at 11:29:23PM +0900, Masami Hiramatsu wrote:
> Commit 36dadef23fcc ("kprobes: Init kprobes in early_initcall")
> moved the kprobe setup into early_initcall(), which includes the
> kprobe jump optimization.
> The kprobes jump optimizer involves synchronize_rcu_tasks(), which
> depends on ksoftirqd and rcu_spawn_tasks_*(). However, since
> those are set up in core_initcall(), the kprobes jump optimizer
> cannot run in early_initcall().
>
> To avoid this issue, disable the kprobe optimization in
> early_initcall() and enable it in subsys_initcall().
>
> Note that non-optimized kprobes are still available after
> early_initcall(). Only the jump optimization is delayed.
>
> Fixes: 36dadef23fcc ("kprobes: Init kprobes in early_initcall")
> Reported-by: Paul E. McKenney <paulmck@xxxxxxxxxx>

Thank you, but the original report of the problem was from Sebastian,
and the connection to softirq was made by Uladzislau. So could you
please add these tags before (or even in place of) my Reported-by?

Reported-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
Reported-by: Uladzislau Rezki <urezki@xxxxxxxxx>

Other than that, looks good!

Acked-by: Paul E. McKenney <paulmck@xxxxxxxxxx>

Thanx, Paul

> Signed-off-by: Masami Hiramatsu <mhiramat@xxxxxxxxxx>
> Cc: stable@xxxxxxxxxxxxxxx
> ---
> kernel/kprobes.c | 31 +++++++++++++++++++++----------
> 1 file changed, 21 insertions(+), 10 deletions(-)
>
> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> index d5a3eb74a657..779d8322e307 100644
> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -861,7 +861,6 @@ static void try_to_optimize_kprobe(struct kprobe *p)
> cpus_read_unlock();
> }
>
> -#ifdef CONFIG_SYSCTL
> static void optimize_all_kprobes(void)
> {
> struct hlist_head *head;
> @@ -887,6 +886,7 @@ static void optimize_all_kprobes(void)
> mutex_unlock(&kprobe_mutex);
> }
>
> +#ifdef CONFIG_SYSCTL
> static void unoptimize_all_kprobes(void)
> {
> struct hlist_head *head;
> @@ -2497,18 +2497,14 @@ static int __init init_kprobes(void)
> }
> }
>
> -#if defined(CONFIG_OPTPROBES)
> -#if defined(__ARCH_WANT_KPROBES_INSN_SLOT)
> - /* Init kprobe_optinsn_slots */
> - kprobe_optinsn_slots.insn_size = MAX_OPTINSN_SIZE;
> -#endif
> - /* By default, kprobes can be optimized */
> - kprobes_allow_optimization = true;
> -#endif
> -
> /* By default, kprobes are armed */
> kprobes_all_disarmed = false;
>
> +#if defined(CONFIG_OPTPROBES) && defined(__ARCH_WANT_KPROBES_INSN_SLOT)
> + /* Init kprobe_optinsn_slots for allocation */
> + kprobe_optinsn_slots.insn_size = MAX_OPTINSN_SIZE;
> +#endif
> +
> err = arch_init_kprobes();
> if (!err)
> err = register_die_notifier(&kprobe_exceptions_nb);
> @@ -2523,6 +2519,21 @@ static int __init init_kprobes(void)
> }
> early_initcall(init_kprobes);
>
> +#if defined(CONFIG_OPTPROBES)
> +static int __init init_optprobes(void)
> +{
> + /*
> + * Enable kprobe optimization - this kicks the optimizer which
> + * depends on synchronize_rcu_tasks() and ksoftirqd, which are
> + * not spawned in early initcall. So delay the optimization.
> + */
> + optimize_all_kprobes();
> +
> + return 0;
> +}
> +subsys_initcall(init_optprobes);
> +#endif
> +
> #ifdef CONFIG_DEBUG_FS
> static void report_probe(struct seq_file *pi, struct kprobe *p,
> const char *sym, int offset, char *modname, struct kprobe *pp)
>