Re: [PATCH v4 2/5] mm: move per-vma lock into vm_area_struct
From: Suren Baghdasaryan
Date: Wed Nov 20 2024 - 18:44:51 EST
On Wed, Nov 20, 2024 at 3:33 PM Shakeel Butt <shakeel.butt@xxxxxxxxx> wrote:
>
> On Tue, Nov 19, 2024 at 04:08:23PM -0800, Suren Baghdasaryan wrote:
> > Back when per-vma locks were introduced, vm_lock was moved out of
> > vm_area_struct in [1] because of the performance regression caused by
> > false cacheline sharing. Recent investigation [2] revealed that the
> > regression is limited to a rather old Broadwell microarchitecture and
> > even there it can be mitigated by disabling adjacent cacheline
> > prefetching, see [3].
> > Splitting a single logical structure into multiple ones leads to more
> > complicated management, extra pointer dereferences and overall less
> > maintainable code. When that split-away part is a lock, it complicates
> > things even further. With no remaining performance benefit, there is
> > no reason to keep the split. Merging the vm_lock back into
> > vm_area_struct also allows
> > vm_area_struct to use SLAB_TYPESAFE_BY_RCU later in this patchset.
> > Move vm_lock back into vm_area_struct, aligning it at the cacheline
> > boundary and changing the cache to be cacheline-aligned as well.
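> >
> > A minimal sketch of the cache side of that change (assuming the vma
> > cache is created in kernel/fork.c via KMEM_CACHE() as in current
> > mainline; SLAB_HWCACHE_ALIGN is the standard slab flag for
> > cacheline-aligning objects, though the exact hunk may differ):
> >
> >	/* align each vm_area_struct object on a hardware cacheline */
> >	vm_area_cachep = KMEM_CACHE(vm_area_struct,
> >			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT);
> >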
> > With a kernel compiled using defconfig, this grows VMA memory consumption
> > from 160 (vm_area_struct) + 40 (vm_lock) bytes to 256 bytes:
> >
> > slabinfo before:
> > <name>          ... <objsize> <objperslab> <pagesperslab> : ...
> > vma_lock        ...        40          102              1 : ...
> > vm_area_struct  ...       160           51              2 : ...
> >
> > slabinfo after moving vm_lock:
> > <name>          ... <objsize> <objperslab> <pagesperslab> : ...
> > vm_area_struct  ...       256           32              2 : ...
> >
> > Aggregate VMA memory consumption per 1000 VMAs grows from 50 to 64 pages,
> > which is 5.5MB per 100000 VMAs. Note that the size of this structure is
> > dependent on the kernel configuration, and the original size is typically
> > larger than 160 bytes, so these calculations are close to the worst-case
> > scenario. A more realistic vm_area_struct size before this change is:
> >
> > <name>          ... <objsize> <objperslab> <pagesperslab> : ...
> > vma_lock        ...        40          102              1 : ...
> > vm_area_struct  ...       176           46              2 : ...
> >
> > Aggregate VMA memory consumption per 1000 VMAs grows from 54 to 64 pages,
> > which is 3.9MB per 100000 VMAs.
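> > (Spelled out, rounding up to whole slabs: before, ceil(1000/46) = 22
> > vm_area_struct slabs * 2 pages + ceil(1000/102) = 10 vma_lock slabs *
> > 1 page = 54 pages; after, ceil(1000/32) = 32 slabs * 2 pages = 64 pages.)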
> > This memory consumption growth can be addressed later by optimizing the
> > vm_lock.
> >
> > [1] https://lore.kernel.org/all/20230227173632.3292573-34-surenb@xxxxxxxxxx/
> > [2] https://lore.kernel.org/all/ZsQyI%2F087V34JoIt@xsang-OptiPlex-9020/
> > [3] https://lore.kernel.org/all/CAJuCfpEisU8Lfe96AYJDZ+OM4NoPmnw9bP53cT_kbfP_pR+-2g@xxxxxxxxxxxxxx/
> >
> > Signed-off-by: Suren Baghdasaryan <surenb@xxxxxxxxxx>
> > Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@xxxxxxxxxx>
>
> Reviewed-by: Shakeel Butt <shakeel.butt@xxxxxxxxx>
Thanks!
>
>
> One question below.
>
> > --- a/include/linux/mm_types.h
> > +++ b/include/linux/mm_types.h
> > @@ -716,8 +716,6 @@ struct vm_area_struct {
> > * slowpath.
> > */
> > unsigned int vm_lock_seq;
> > - /* Unstable RCU readers are allowed to read this. */
> > - struct vma_lock *vm_lock;
> > #endif
> >
> > /*
> > @@ -770,6 +768,10 @@ struct vm_area_struct {
> > struct vma_numab_state *numab_state; /* NUMA Balancing state */
> > #endif
> > struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
> > +#ifdef CONFIG_PER_VMA_LOCK
> > + /* Unstable RCU readers are allowed to read this. */
> > + struct vma_lock vm_lock ____cacheline_aligned_in_smp;
> > +#endif
> > } __randomize_layout;
>
> Do we just want 'struct vm_area_struct' to be cacheline aligned or do we
> want 'struct vma_lock vm_lock' to be on a separate cacheline as well?
We want both, to minimize cacheline sharing: the cacheline-aligned cache
puts each vma on a cacheline boundary, and ____cacheline_aligned_in_smp
on the member keeps vm_lock from sharing a line with the fields before it.
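
A minimal sketch of the resulting invariant (not part of the patch; it
relies on ____cacheline_aligned_in_smp padding vm_lock to the next
SMP_CACHE_BYTES boundary, with the aligned cache making each object
itself start on a cacheline):

	/* holds by construction once vm_lock is cacheline-aligned */
	static_assert(offsetof(struct vm_area_struct, vm_lock) %
		      SMP_CACHE_BYTES == 0);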