Re: [PATCH v9 0/8] stg mail -e --version=v9 \
From: Mel Gorman
Date: Thu Sep 12 2019 - 12:35:30 EST
On Thu, Sep 12, 2019 at 11:19:25AM +0200, Michal Hocko wrote:
> On Wed 11-09-19 08:12:03, Alexander Duyck wrote:
> > On Wed, Sep 11, 2019 at 4:36 AM Michal Hocko <mhocko@xxxxxxxxxx> wrote:
> > >
> > > On Tue 10-09-19 14:23:40, Alexander Duyck wrote:
> > > [...]
> > > > We don't put any limitations on the allocator other than that it needs to
> > > > clean up the metadata on allocation, and that it cannot allocate a page
> > > > that is in the process of being reported since we pulled it from the
> > > > free_list. If the page is a "Reported" page then it decrements the
> > > > reported_pages count for the free_area and makes sure the page doesn't
> > > > exist in the "Boundary" array pointer value; if it does, it moves the
> > > > "Boundary" since it is pulling the page.
> > >
> > > This is still a non-trivial limitation on page allocation from
> > > external code IMHO. I cannot give any explicit reason why an ordering on
> > > the free list might matter (well except for page shuffling which uses it
> > > to make physical memory pattern allocation more random) but the
> > > architecture seems hacky and dubious to be honest. It sounds like the
> > > whole interface has been developed around a very particular and
> > > single-purpose optimization.
> > How is this any different than the code that moves a page that will
> > likely be merged to the tail though?
> I guess you are referring to the page shuffling. If that is the case
> then this is an integral part of the allocator for a reason and it is
> quite obvious in the code, including the consequences. I do not
> really like the idea of hiding similar constraints behind a
> generic-looking feature that is completely detached from the allocator,
> so any future change to the allocator might subtly break it.
It's not just that, compaction pokes into the free_area information as
well and directly takes pages from the free list without going through
the page allocator itself. It assumes that a free page is a free page
and only takes the zone and migratetype into account.
> > In our case the "Reported" page is likely going to be much more
> > expensive to allocate and use than a standard page because it will be
> > faulted back in. In such a case wouldn't it make sense for us to want
> > to keep the pages that don't require faults ahead of those pages in
> > the free_list so that they are more likely to be allocated?
> OK, I suspected this would pop up. And this is exactly why I didn't
> like the idea of external code imposing non-obvious constraints on the
> allocator. You simply cannot count on any ordering within the page
> allocator.
Indeed not. It can be arbitrary and compaction can interfere with the
ordering as well. While in theory that could be addressed by always
going through an interface maintained by the page allocator, it would be
tricky to test the virtio case in particular.
> We used to distinguish cache hot/cold pages in the past
> and pushed pages to the specific end of the free list but that has been
That was always best effort too, not a hard guarantee. It was eventually
removed as the cost of figuring out the ordering exceeded the benefit.
> There are other potential changes like that possible. Shuffling
> is a good recent example.
> Anyway I am not a maintainer of this code. I would really like to hear
> opinions from Mel and Vlastimil here (now CCed - the thread starts
I worry that poking too much into the internal state of the allocator
will be fragile long-term. There are the arch alloc/free hooks, but they
are typically about protections only and do not interfere with the
internal state of the allocator. Compaction pokes in as well, but once a
page is off the free list the page allocator no longer cares, so again
there is no interference with the internal state. If the state is
interfered with externally, it becomes unclear what happens when
something like page merging is deferred in a way the allocator cannot
control; high-order allocation requests may fail, for example. For THP
it would not matter, but allocation failures occurring while pages are
on the freelist, yet unsuitable for allocation because of their reported
state, would be hard to debug. Similarly, latency issues due to a
reported page being picked for allocation but requiring communication
with the hypervisor would be difficult to debug, and atomic allocations
may fail entirely. Finally, if merging were broken for
reported/unreported pages, it could be a long time before such bugs were
fixed.
That's a lot of caveats to optimise communication to the allocator about
unused free pages. I didn't read the patches particularly carefully, but
it was not clear why a best effort was not made to track free pages and,
if the metadata for tracking them fills up, to fall back to exhaustive
searches for the remaining pages. It might be difficult to stabilise
that, as the metadata may overflow again while the exhaustive search
takes place. Much would depend on the frequency at which pages are
entering/leaving