Re: [PATCH v9 12/17] mm: move lesser used vma_area_struct members into the last cacheline

From: Suren Baghdasaryan
Date: Wed Jan 15 2025 - 11:39:40 EST


On Wed, Jan 15, 2025 at 2:51 AM Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>
> On Fri, Jan 10, 2025 at 08:25:59PM -0800, Suren Baghdasaryan wrote:
> > Move several vma_area_struct members which are rarely or never used
> > during page fault handling into the last cacheline to better pack
> > vm_area_struct. As a result vm_area_struct will fit into 3 as opposed
> > to 4 cachelines. New typical vm_area_struct layout:
> >
> > struct vm_area_struct {
> >         union {
> >                 struct {
> >                         long unsigned int vm_start;   /*     0     8 */
> >                         long unsigned int vm_end;     /*     8     8 */
> >                 };                                    /*     0    16 */
> >                 freeptr_t vm_freeptr;                 /*     0     8 */
> >         };                                            /*     0    16 */
> >         struct mm_struct * vm_mm;                     /*    16     8 */
> >         pgprot_t vm_page_prot;                        /*    24     8 */
> >         union {
> >                 const vm_flags_t vm_flags;            /*    32     8 */
> >                 vm_flags_t __vm_flags;                /*    32     8 */
> >         };                                            /*    32     8 */
> >         unsigned int vm_lock_seq;                     /*    40     4 */
>
> Does it not make sense to move this seq field near the refcnt?

In an earlier version, when vm_lock was not a refcount yet, I tried
that, and moving vm_lock_seq introduced a regression in the pft test.
We have that early vm_lock_seq check at the beginning of
vma_start_read(), and if it fails we bail out early without taking the
lock. I think that might be the reason why keeping vm_lock_seq in the
first cacheline is beneficial. But I'll try moving it again now that we
have vm_refcnt instead of the lock and see if pft still shows any
regression.
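
For reference, the shape of that early check is roughly the following.
This is a simplified sketch rather than the exact upstream code:
mm_lock_seq_of() and __vma_start_read_slowpath() are illustrative
placeholders for however the mm-side sequence read and the vm_refcnt
acquisition are actually spelled in the series.

static inline bool vma_start_read(struct vm_area_struct *vma)
{
	/*
	 * Lockless early check: if the VMA's vm_lock_seq already matches
	 * the mm's lock sequence, the VMA is write-locked (or about to
	 * be), so bail out without touching the lock/refcount at all.
	 * Only vm_lock_seq is read here, which is why keeping it in the
	 * first cacheline (next to vm_mm) can matter for pft.
	 */
	if (READ_ONCE(vma->vm_lock_seq) == mm_lock_seq_of(vma->vm_mm))
		return false;

	/* Slower path: this is where vm_refcnt (last cacheline) is used. */
	return __vma_start_read_slowpath(vma);
}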

>
> >         /* XXX 4 bytes hole, try to pack */
> >
> >         struct list_head anon_vma_chain;              /*    48    16 */
> >         /* --- cacheline 1 boundary (64 bytes) --- */
> >         struct anon_vma * anon_vma;                   /*    64     8 */
> >         const struct vm_operations_struct * vm_ops;   /*    72     8 */
> >         long unsigned int vm_pgoff;                   /*    80     8 */
> >         struct file * vm_file;                        /*    88     8 */
> >         void * vm_private_data;                       /*    96     8 */
> >         atomic_long_t swap_readahead_info;            /*   104     8 */
> >         struct mempolicy * vm_policy;                 /*   112     8 */
> >         struct vma_numab_state * numab_state;         /*   120     8 */
> >         /* --- cacheline 2 boundary (128 bytes) --- */
> >         refcount_t vm_refcnt (__aligned__(64));       /*   128     4 */
> >
> >         /* XXX 4 bytes hole, try to pack */
> >
> >         struct {
> >                 struct rb_node rb (__aligned__(8));   /*   136    24 */
> >                 long unsigned int rb_subtree_last;    /*   160     8 */
> >         } __attribute__((__aligned__(8))) shared;     /*   136    32 */
> >         struct anon_vma_name * anon_name;             /*   168     8 */
> >         struct vm_userfaultfd_ctx vm_userfaultfd_ctx; /*   176     8 */
> >
> >         /* size: 192, cachelines: 3, members: 18 */
> >         /* sum members: 176, holes: 2, sum holes: 8 */
> >         /* padding: 8 */
> >         /* forced alignments: 2, forced holes: 1, sum forced holes: 4 */
> > } __attribute__((__aligned__(64)));
>
>