Re: [PATCH v2 2/4] mm/sparse: Optimize sparse_add_one_section()
From: Baoquan He
Date: Tue Mar 26 2019 - 06:08:24 EST
On 03/26/19 at 10:29am, Michal Hocko wrote:
> On Tue 26-03-19 17:02:25, Baoquan He wrote:
> > Reorder the allocation of usemap and memmap since usemap allocation
> > is much simpler and easier. Otherwise hard work is done to make
> > memmap ready, then have to rollback just because of usemap allocation
> > failure.
>
> Is this really worth it? I can see that !VMEMMAP is doing memmap size
> allocation which would be 2MB aka costly allocation but we do not do
> __GFP_RETRY_MAYFAIL so the allocator backs off early.
In the !VMEMMAP case, it truly does a simple allocation directly. And
surely the usemap, whose size is 32 bytes, is smaller, so it doesn't
matter much which allocation comes first. However, this benefits the
VMEMMAP case a little. And it makes the code a little cleaner, e.g.
the error handling at the end is removed.
>
> > And also check if section is present earlier. Then don't bother to
> > allocate usemap and memmap if yes.
>
> Moving the check up makes some sense.
>
> > Signed-off-by: Baoquan He <bhe@xxxxxxxxxx>
>
> The patch is not incorrect but I am wondering whether it is really worth
> it for the current code base. Is it fixing anything real or it is a mere
> code shuffling to please an eye?
It's not a fix, just a tiny code refactoring inside
sparse_add_one_section(). It seems it doesn't worsen things, if I
understand the !VMEMMAP case correctly, though I'm not quite sure. I am
fine with dropping it if it's not worth it; I could be missing
something in other cases.
Thanks
Baoquan
>
> > ---
> > v1->v2:
> > Do section existence checking earlier to further optimize code.
> >
> > mm/sparse.c | 29 +++++++++++------------------
> > 1 file changed, 11 insertions(+), 18 deletions(-)
> >
> > diff --git a/mm/sparse.c b/mm/sparse.c
> > index b2111f996aa6..f4f34d69131e 100644
> > --- a/mm/sparse.c
> > +++ b/mm/sparse.c
> > @@ -714,20 +714,18 @@ int __meminit sparse_add_one_section(int nid, unsigned long start_pfn,
> > ret = sparse_index_init(section_nr, nid);
> > if (ret < 0 && ret != -EEXIST)
> > return ret;
> > - ret = 0;
> > - memmap = kmalloc_section_memmap(section_nr, nid, altmap);
> > - if (!memmap)
> > - return -ENOMEM;
> > - usemap = __kmalloc_section_usemap();
> > - if (!usemap) {
> > - __kfree_section_memmap(memmap, altmap);
> > - return -ENOMEM;
> > - }
> >
> > ms = __pfn_to_section(start_pfn);
> > - if (ms->section_mem_map & SECTION_MARKED_PRESENT) {
> > - ret = -EEXIST;
> > - goto out;
> > + if (ms->section_mem_map & SECTION_MARKED_PRESENT)
> > + return -EEXIST;
> > +
> > + usemap = __kmalloc_section_usemap();
> > + if (!usemap)
> > + return -ENOMEM;
> > + memmap = kmalloc_section_memmap(section_nr, nid, altmap);
> > + if (!memmap) {
> > + kfree(usemap);
> > + return -ENOMEM;
> > }
> >
> > /*
> > @@ -739,12 +737,7 @@ int __meminit sparse_add_one_section(int nid, unsigned long start_pfn,
> > section_mark_present(ms);
> > sparse_init_one_section(ms, section_nr, memmap, usemap);
> >
> > -out:
> > - if (ret < 0) {
> > - kfree(usemap);
> > - __kfree_section_memmap(memmap, altmap);
> > - }
> > - return ret;
> > + return 0;
> > }
> >
> > #ifdef CONFIG_MEMORY_HOTREMOVE
> > --
> > 2.17.2
> >
>
> --
> Michal Hocko
> SUSE Labs