Re: [PATCH v2] mm: hugetlb: optionally allocate gigantic hugepages using cma
From: Michal Hocko
Date: Tue Mar 10 2020 - 13:37:44 EST
On Tue 10-03-20 10:25:59, Roman Gushchin wrote:
> Hello, Michal!
>
> On Tue, Mar 10, 2020 at 09:45:44AM +0100, Michal Hocko wrote:
[...]
> > > + for_each_node_state(nid, N_ONLINE) {
> > > + unsigned long min_pfn = 0, max_pfn = 0;
> > > +
> > > + for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
> > > + if (!min_pfn)
> > > + min_pfn = start_pfn;
> > > + max_pfn = end_pfn;
> > > + }
> >
> > Do you want to compare the range to the size?
>
> You mean add a check that the range is big enough?
Yes, size, and I forgot to mention alignment.
> > But besides that, I
> > believe this really needs to be much more careful. I believe you do not
> > want to eat a considerable part of the kernel memory because the
> > resulting configuration will really struggle (yeah all the low mem/high
> > mem problems all over again).
>
> Well, so far I was focused on a particular case when the target cma size
> is significantly smaller than the total RAM size (~5-10%). What is the right
> thing to do here? Fall back to the current behavior if the requested size is
> more than x% of total memory? 1/2? What do you think?
I would start by excluding restricted kernel zones (<ZONE_NORMAL).
Cutting off 1G of ZONE_DMA32 might be a real problem.
> We've discussed it with Rik in private, and he expressed an idea to start
> with ~50% always and then shrink it on-demand. Something that we might
> have here long-term.
I would start simple. Simply make it a documented behavior. And if
somebody really cares enough then we can make something more clever.
Until then just avoid those zones as mentioned above. This would require
a few changes: 1) allow fallback from a CMA allocation failure, 2) do
not bail out of initialization on a CMA reservation failure.
--
Michal Hocko
SUSE Labs