Re: [PATCH] x86/alternatives: remove false sharing in poke_int3_handler()

From: Eric Dumazet
Date: Mon Mar 24 2025 - 03:54:12 EST


On Mon, Mar 24, 2025 at 8:47 AM Eric Dumazet <edumazet@xxxxxxxxxx> wrote:
>
> On Mon, Mar 24, 2025 at 8:16 AM Ingo Molnar <mingo@xxxxxxxxxx> wrote:
> >
> >
> > * Eric Dumazet <edumazet@xxxxxxxxxx> wrote:
> >
> > > > What's the adversarial workload here? Spamming bpf_stats_enabled on all
> > > > CPUs in parallel? Or mixing it with some other text_poke_bp_batch()
> > > > user if bpf_stats_enabled serializes access?
> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> >
> > > > Does anything undesirable happen in that case?
> > >
> > > The case of multiple threads trying to flip bpf_stats_enabled is
> > > handled by bpf_stats_enabled_mutex.
> >
> > So my suggested workload wasn't adversarial enough due to
> > bpf_stats_enabled_mutex: how about some other workload that doesn't
> > serialize access to text_poke_bp_batch()?
>
> Do you have a specific case in mind that I can test on these big platforms?
>
> text_poke_bp_batch() calls themselves are serialized by text_mutex, so
> it is not clear what you are looking for.
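
For context, the two serialization layers in play look roughly like
this (an abridged sketch of kernel/sysctl.c and
arch/x86/kernel/alternative.c; the sysctl handler body is trimmed down
to just the locking):

/* Layer 1: only one flip of the bpf_stats_enabled key at a time. */
static DEFINE_MUTEX(bpf_stats_enabled_mutex);

static int bpf_stats_handler(const struct ctl_table *table, int write,
			     void *buffer, size_t *lenp, loff_t *ppos)
{
	struct static_key *key = (struct static_key *)table->data;
	int ret = 0;

	mutex_lock(&bpf_stats_enabled_mutex);
	/* parse the write, then static_key_slow_inc()/_dec(key),
	 * which eventually reaches text_poke_bp_batch() under text_mutex
	 */
	mutex_unlock(&bpf_stats_enabled_mutex);
	return ret;
}

/* Layer 2: every text poker must already hold text_mutex. */
void __ref text_poke_bp(void *addr, const void *opcode, size_t len,
			const void *emulate)
{
	struct text_poke_loc tp;

	lockdep_assert_held(&text_mutex);

	text_poke_loc_init(&tp, addr, opcode, len, emulate);
	text_poke_bp_batch(&tp, 1);
}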


BTW, the atomic_cond_read_acquire() path is never taken, even during my
stress test.

We could add this eventually:

diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index d7afbf822c45..5d364e990055 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -2418,7 +2418,7 @@ static void text_poke_bp_batch(struct text_poke_loc *tp, unsigned int nr_entries
 	for_each_possible_cpu(i) {
 		atomic_t *refs = per_cpu_ptr(&bp_refs, i);
 
-		if (!atomic_dec_and_test(refs))
+		if (unlikely(!atomic_dec_and_test(refs)))
 			atomic_cond_read_acquire(refs, !VAL);
 	}
 }
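
The idea being that the uncontended case (each CPU's ref reaching zero
on its own decrement) is by far the common one, so unlikely() keeps the
per-CPU decrement loop on the straight-line fast path and moves the
wait for stragglers still inside the INT3 handler out of line. Given
that the atomic_cond_read_acquire() path never fired even under the
stress test above, the hint should cost nothing.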