Re: [PATCH 0/8] mm/swap: optimize swap cache search space

From: Chris Li
Date: Sun Apr 28 2024 - 13:27:03 EST


On Sat, Apr 27, 2024 at 8:23 PM Huang, Ying <ying.huang@xxxxxxxxx> wrote:
>
> Chris Li <chrisl@xxxxxxxxxx> writes:
>
> > On Sat, Apr 27, 2024 at 6:16 PM Huang, Ying <ying.huang@xxxxxxxxx> wrote:
> >>
> >> Chris Li <chrisl@xxxxxxxxxx> writes:
> > Freeing the shadow swap entry will just set the pointer to NULL.
> > Are you concerned that the memory allocated for the pointer is not
> > freed to the system after the shadow swap entry is freed?
> >
> > It will be subject to fragmentation among the free swap entries.
> > In that regard, xarray is also subject to fragmentation. It will not
> > free an internal node if the node has even one xa_index still in use.
> > Even if the xarray node is freed to slab, there is fragmentation at
> > the slab level as well: the backing page might not be freed to the
> > system.
>
> Sorry, my words were confusing. What I wanted to say is that the
> xarray node may be freed.

Somehow I gathered that that was what you meant :-) My previous reply
still applies here: freeing an xarray node is still subject to
fragmentation at the slab level. The actual backing page might not be
released to the kernel after the node is freed.

>
> >> And, in current design, only swap_map[] is allocated if the swap space
> >> isn't used. That needs to be considered too.
> >
> > I am aware of that. I want to make the swap_map[] not static allocated
> > any more either.
>
> Yes. That's possible.

Of course there will be a price to pay for that. The current swap_map
costs only 1 byte per entry. That per-entry size is going to be hard
to beat with any alternative; hopefully the trade-off pays for itself
elsewhere.

>
> > The static allocation of swap_map forces the rest of the swap data
> > structures to find other means to sparsely allocate their data,
> > repeating the fragmentation elsewhere, in different ways. That is
> > also one major source of the pain points when hacking on the swap
> > code. The data structure is spread across too many different
> > places.
>
> Look forward to more details to compare :-)

Sure. I will post it when I make more progress.

Chris