Re: [PATCH v2 00/16] Multigenerational LRU Framework

From: Yu Zhao
Date: Sat Apr 24 2021 - 00:28:46 EST


On Fri, Apr 23, 2021 at 9:30 PM Andi Kleen <ak@xxxxxxxxxxxxxxx> wrote:
>
> > Now the question is how we build the bloom filter. A simple answer is
> > to let the rmap do the legwork, i.e., when it encounters dense
> > regions, add them to the filter. Of course this means we'll have to
> > use the rmap more than we do now, which is not ideal for some
> > workloads but necessary to avoid worst case scenarios.
>
> How would you maintain the bloom filter over time? Assume a process
> that always creates new mappings and unmaps old mappings. How
> do the stale old mappings get removed and avoid polluting it over time?
>
> Or are you thinking of one of the fancier bloom filter variants
> that support deletion? As I understand they're significantly less
> space efficient and more complicated.

Hi Andi,

That's where the double buffering technique comes in :)

Recap: the creation of each new generation starts with scanning page
tables to clear the accessed bit of pages referenced since the last
scan.
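
To make that concrete, here is a toy model of the scan step (not the
kernel walker; struct toy_pte and the flag below are made up for
illustration):

#include <stddef.h>

#define TOY_PTE_ACCESSED 0x1UL

struct toy_pte {
        unsigned long flags;
};

/*
 * Walk one (toy) page table, clear the accessed bit, and return how
 * many entries were referenced since the previous walk.
 */
static size_t scan_and_clear_accessed(struct toy_pte *ptes, size_t nr)
{
        size_t i, referenced = 0;

        for (i = 0; i < nr; i++) {
                if (ptes[i].flags & TOY_PTE_ACCESSED) {
                        ptes[i].flags &= ~TOY_PTE_ACCESSED;
                        referenced++;
                }
        }

        return referenced;
}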

We scan page tables according to the current bloom filter, and at the
same time, we build a new one and write it to the second buffer.
During this step, we eliminate regions that have become invalid, e.g.,
too sparse or completely unmapped. Note that the scan *will* miss
newly mapped regions, i.e., dense regions that the rmap hasn't
discovered. Once this step is done, we flip to the second buffer. And
from now on, all the new dense regions discovered by the rmap will be
recorded into this buffer.
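
Roughly, as a userspace sketch (struct gen_filters, filter_for_seq()
and the sizes below are all made up; the real patch obviously uses the
kernel's own data structures):

#include <stdint.h>
#include <string.h>

#define BLOOM_BITS      (1 << 15)       /* illustrative size */

struct gen_filters {
        unsigned long seq;                      /* current generation */
        uint8_t bits[2][BLOOM_BITS / 8];        /* the two buffers */
};

static uint8_t *filter_for_seq(struct gen_filters *f, unsigned long seq)
{
        return f->bits[seq & 1];        /* buffers alternate by parity */
}

/*
 * Flip at the end of a scan: the filter built for seq + 1 becomes the
 * current one, and the buffer the next scan will build into is cleared.
 */
static void flip_filters(struct gen_filters *f)
{
        f->seq++;
        memset(filter_for_seq(f, f->seq + 1), 0, sizeof(f->bits[0]));
}

The next scan reads filter_for_seq(f, f->seq) and rebuilds the other
buffer from scratch, so stale regions, e.g., from mappings that have
since been unmapped, simply age out; no explicit deletion is needed.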

Each element in the bloom filter is a hash of the address of a page
table and a node id, indicating that this page table contains a
worthwhile number of pages from that node.
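
Continuing the sketch above, the pair (page table address, node id)
could be hashed into two bit positions of the filter; hash_pmd_nid()
below is just a placeholder, not the hash the patch uses:

#include <stdint.h>

/* Placeholder mix (splitmix64 finalizer); two bit positions are
 * derived from the low and high halves of one 64-bit hash. */
static uint64_t hash_pmd_nid(const void *pmd, int nid)
{
        uint64_t x = (uint64_t)(uintptr_t)pmd ^ ((uint64_t)nid << 1);

        x ^= x >> 30; x *= 0xbf58476d1ce4e5b9ULL;
        x ^= x >> 27; x *= 0x94d049bb133111ebULL;
        x ^= x >> 31;

        return x;
}

static void filter_add(uint8_t *filter, const void *pmd, int nid)
{
        uint64_t h = hash_pmd_nid(pmd, nid);
        uint32_t b1 = (uint32_t)h % BLOOM_BITS;
        uint32_t b2 = (uint32_t)(h >> 32) % BLOOM_BITS;

        filter[b1 / 8] |= 1u << (b1 % 8);
        filter[b2 / 8] |= 1u << (b2 % 8);
}

static int filter_test(const uint8_t *filter, const void *pmd, int nid)
{
        uint64_t h = hash_pmd_nid(pmd, nid);
        uint32_t b1 = (uint32_t)h % BLOOM_BITS;
        uint32_t b2 = (uint32_t)(h >> 32) % BLOOM_BITS;

        return (filter[b1 / 8] & (1u << (b1 % 8))) &&
               (filter[b2 / 8] & (1u << (b2 % 8)));
}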

A single counting bloom filter would work too, but it doesn't seem to
offer any advantage over double buffering, and we'd also have to
handle counter overflow.
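
For comparison, the counting variant would need something like the
following (again made-up names); removal works until a counter
saturates, and a saturated counter can never be decremented safely, so
stale entries can linger:

#include <stdint.h>

#define CBF_SLOTS       (1 << 15)       /* counters, indexed by the same
                                           kind of hash as above */
#define CBF_MAX         15              /* saturation point */

static void cbf_add(uint8_t *counters, uint32_t slot)
{
        if (counters[slot] < CBF_MAX)
                counters[slot]++;
        /* else: saturated; the count is now permanently sticky */
}

static void cbf_remove(uint8_t *counters, uint32_t slot)
{
        if (counters[slot] && counters[slot] < CBF_MAX)
                counters[slot]--;
        /* A saturated counter can't be decremented without risking
         * false negatives, so its stale entries never go away. */
}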