Re: [PATCH] x86/alternatives: remove false sharing in poke_int3_handler()
From: Ingo Molnar
Date: Mon Mar 24 2025 - 04:05:42 EST
* Eric Dumazet <edumazet@xxxxxxxxxx> wrote:
> On Mon, Mar 24, 2025 at 8:47 AM Eric Dumazet <edumazet@xxxxxxxxxx> wrote:
> >
> > On Mon, Mar 24, 2025 at 8:16 AM Ingo Molnar <mingo@xxxxxxxxxx> wrote:
> > >
> > >
> > > * Eric Dumazet <edumazet@xxxxxxxxxx> wrote:
> > >
> > > > > What's the adversarial workload here? Spamming bpf_stats_enabled on all
> > > > > CPUs in parallel? Or mixing it with some other text_poke_bp_batch()
> > > > > user if bpf_stats_enabled serializes access?
> > > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > >
> > > > > Does anything undesirable happen in that case?
> > > >
> > > > The case of multiple threads trying to flip bpf_stats_enabled is
> > > > handled by bpf_stats_enabled_mutex.
> > >
> > > So my suggested workload wasn't adversarial enough due to
> > > bpf_stats_enabled_mutex: how about some other workload that doesn't
> > > serialize access to text_poke_bp_batch()?
> >
> > Do you have a specific case in mind that I can test on these big platforms ?
> >
> > text_poke_bp_batch() calls themselves are serialized by text_mutex, it
> > is not clear what you are looking for.
>
>
> BTW the atomic_cond_read_acquire() part is never called even during my
> stress test.
Yeah, that code threw me off - can it really happen with text_mutex
serializing all of it?
> @@ -2418,7 +2418,7 @@ static void text_poke_bp_batch(struct text_poke_loc *tp, unsigned int nr_entries
> for_each_possible_cpu(i) {
> atomic_t *refs = per_cpu_ptr(&bp_refs, i);
>
> - if (!atomic_dec_and_test(refs))
> + if (unlikely(!atomic_dec_and_test(refs)))
> atomic_cond_read_acquire(refs, !VAL);
If it can never happen, then perhaps that condition should be a
WARN_ON_ONCE()?
Thanks,
Ingo