Re: x86/kprobes: kretprobe fails to triggered if kprobe at function entry is not optimized (trigger by int3 breakpoint)
From: Masami Hiramatsu
Date: Wed Aug 26 2020 - 05:06:53 EST
On Wed, 26 Aug 2020 17:22:39 +0900
Masami Hiramatsu <mhiramat@xxxxxxxxxx> wrote:
> On Wed, 26 Aug 2020 07:07:09 +0000
> "Eddy_Wu@xxxxxxxxxxxxxx" <Eddy_Wu@xxxxxxxxxxxxxx> wrote:
>
> >
> > > -----Original Message-----
> > > From: peterz@xxxxxxxxxxxxx <peterz@xxxxxxxxxxxxx>
> > > Sent: Tuesday, August 25, 2020 8:09 PM
> > > To: Masami Hiramatsu <mhiramat@xxxxxxxxxx>
> > > Cc: Eddy Wu (RD-TW) <Eddy_Wu@xxxxxxxxxxxxxx>; linux-kernel@xxxxxxxxxxxxxxx; x86@xxxxxxxxxx; David S. Miller
> > > <davem@xxxxxxxxxxxxx>
> > > Subject: Re: x86/kprobes: kretprobe fails to triggered if kprobe at function entry is not optimized (trigger by int3 breakpoint)
> > >
> > > Surely we can do a lockless list for this. We have llist_add() and
> > > llist_del_first() to make a lockless LIFO/stack.
> > >
> >
> > llist operations require an atomic cmpxchg; on architectures without CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG, the in_nmi() check may still be needed.
> > (HAVE_KRETPROBES && !CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG): arc, arm, csky, mips
>
> Good catch. In those cases, we can add an in_nmi() check in the arch-dependent code.
Oops, the in_nmi() check is needed in pre_kretprobe_handler(), which has no
arch-dependent code. Hmm, so we still need a weak function to check it...
Thanks,
--
Masami Hiramatsu <mhiramat@xxxxxxxxxx>