Re: [PATCH v9 03/17] x86/split_lock: Align x86_capability to unsigned long to avoid split locked access

From: Fenghua Yu
Date: Tue Jun 25 2019 - 20:04:34 EST


On Mon, Jun 24, 2019 at 03:12:49PM +0000, David Laight wrote:
> From: Fenghua Yu
> > Sent: 18 June 2019 23:41
> >
> > set_cpu_cap() calls locked BTS and clear_cpu_cap() calls locked BTR to
> > operate on bitmap defined in x86_capability.
> >
> > Locked BTS/BTR accesses a single unsigned long location. In 64-bit mode,
> > the location is at:
> > base address of x86_capability + (bit offset in x86_capability / 64) * 8
> >
> > Since the base address of x86_capability may not be aligned to unsigned
> > long, the single unsigned long location may cross two cache lines, and
> > accessing the location with locked BTS/BTR instructions will cause a
> > split lock.
> >
> > To fix the split lock issue, align x86_capability to the size of unsigned
> > long so that the location will always be within one cache line.
> >
> > Changing x86_capability's type to unsigned long may also fix the issue
> > because x86_capability would then be naturally aligned to the size of
> > unsigned long. But this needs additional code changes. So choose the
> > simpler solution of setting the array's alignment to the size of
> > unsigned long.
>
> As I've pointed out several times before, this isn't the only int[] data item
> in this code that gets passed to the bit operations.
> Just because you haven't got a 'splat' from the others doesn't mean they don't
> need fixing at the same time.

As Thomas suggested in https://lkml.org/lkml/2019/4/25/353, patch #0017
in this patch set implements WARN_ON_ONCE() to audit possible misalignment
in the atomic bit ops.
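
For illustration, a minimal sketch of what such an audit could look like
(this is not the actual patch #0017 code; the helper name is made up):

#include <linux/bug.h>
#include <linux/kernel.h>

/*
 * Sketch only -- not the code from patch #0017.  The idea is to warn
 * once when an atomic bit op is handed a bitmap pointer that is not
 * aligned to unsigned long, since the resulting locked access may
 * straddle a cache line and cause a split lock.
 */
static __always_inline void audit_bitop_alignment(const volatile void *addr)
{
	WARN_ON_ONCE(!IS_ALIGNED((unsigned long)addr, sizeof(unsigned long)));
}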

This patch set just enables split lock detection first. Fixing ALL split
lock issues will be more practical once the patch set is upstreamed and
widely used.
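
As an aside, the address calculation quoted in the commit message above can
be illustrated with a small user-space sketch (all names and values here are
illustrative only, not kernel code):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Force known alignment, then deliberately offset by 4 bytes to mimic
	 * a __u32 array that does not start on an unsigned long boundary. */
	_Alignas(sizeof(unsigned long)) unsigned char backing[128];
	uint32_t *x86_capability = (uint32_t *)(backing + 4);
	unsigned int nr = 100;	/* arbitrary bit number in the bitmap */

	/* A locked BTS/BTR on bit 'nr' touches the single unsigned long at
	 * base + (nr / 64) * 8 in 64-bit mode.  With a misaligned base, that
	 * word can straddle a cache-line boundary -> split lock. */
	uintptr_t word = (uintptr_t)x86_capability + (nr / 64) * 8;

	printf("word at %#lx is %saligned to 8 bytes\n",
	       (unsigned long)word, (word % 8) ? "NOT " : "");
	return 0;
}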

>
> > Signed-off-by: Fenghua Yu <fenghua.yu@xxxxxxxxx>
> > ---
> > arch/x86/include/asm/processor.h | 4 +++-
> > 1 file changed, 3 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
> > index c34a35c78618..d3e017723634 100644
> > --- a/arch/x86/include/asm/processor.h
> > +++ b/arch/x86/include/asm/processor.h
> > @@ -93,7 +93,9 @@ struct cpuinfo_x86 {
> > __u32 extended_cpuid_level;
> > /* Maximum supported CPUID level, -1=no CPUID: */
> > int cpuid_level;
> > - __u32 x86_capability[NCAPINTS + NBUGINTS];
> > + /* Aligned to size of unsigned long to avoid split lock in atomic ops */
>
> Wrong comment.
> Something like:
> /* Align to sizeof (unsigned long) because the array is passed to the
> * atomic bit-op functions which require an aligned unsigned long []. */

The problem we are trying to fix here is not that "the array is passed to the
atomic bit-op functions which require an aligned unsigned long []".

The problem is the possible split lock. If it were not for the split lock
issue, there would be no need for this patch.

So I think my comment is right to point out explicitly why we need
this alignment.

>
> > + __u32 x86_capability[NCAPINTS + NBUGINTS]
> > + __aligned(sizeof(unsigned long));
>
> It might be better to use a union (maybe unnamed) here.

That would be another patch. This one simply fixes the split lock
issue.
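
For reference, the union David suggests might look roughly like this (a
sketch only; the struct, macro, and member names are made up and the values
are illustrative, not the real cpuinfo_x86 layout):

#include <linux/types.h>

#define EXAMPLE_NCAPINTS	19	/* illustrative; the real values live in cpufeatures.h */
#define EXAMPLE_NBUGINTS	1

/*
 * An anonymous union gives the __u32 array the natural alignment of
 * unsigned long without changing its type or any of its users; the
 * extra member exists only to force the alignment.
 */
struct example_cpuinfo {
	union {
		__u32		x86_capability[EXAMPLE_NCAPINTS + EXAMPLE_NBUGINTS];
		unsigned long	x86_capability_align;
	};
};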

Thanks.

-Fenghua