Re: [PATCH] memcontrol: only transfer the memcg data for migration
From: Johannes Weiner
Date: Wed Oct 04 2023 - 10:17:27 EST
On Tue, Oct 03, 2023 at 04:14:22PM -0700, Nhat Pham wrote:
> For most migration use cases, only transfer the memcg data from the old
> folio to the new folio, and clear the old folio's memcg data. No
> charging and uncharging will be done. These use cases include the new
> hugetlb memcg accounting behavior (which was not previously handled).
>
> This shaves off some work on the migration path, and avoids the
> temporary double charging of a folio during its migration.
>
> The only exception is replace_page_cache_folio(), which will use the old
> mem_cgroup_migrate() (now renamed to mem_cgroup_replace_folio). In that
> context, the isolation of the old page isn't quite as thorough as with
> migration, so we cannot use our new implementation directly.
>
> This patch is the result of the following discussion on the new hugetlb
> memcg accounting behavior:
>
> https://lore.kernel.org/lkml/20231003171329.GB314430@monkey/
>
> Reported-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
> Closes: https://lore.kernel.org/lkml/20231003171329.GB314430@monkey/
> Suggested-by: Johannes Weiner <hannes@xxxxxxxxxxx>
> Signed-off-by: Nhat Pham <nphamcs@xxxxxxxxx>
For squashing, the patch title should be:
hugetlb: memcg: account hugetlb-backed memory in memory controller fix
However, I think this should actually be split out. It changes how all
pages are cgroup-migrated, which is a bit too large of a side effect
for the hugetlb accounting patch itself. Especially because the
reasoning outlined above will get lost once this fixup is folded.
IOW, send one prep patch, to go before the series, which splits out
mem_cgroup_replace_folio() and does the mem_cgroup_migrate()
optimization, with the above explanation in its changelog.
Then send a fixlet for the hugetlb accounting patch that removes the
!hugetlb-conditional for the mem_cgroup_migrate() call.
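Something along these lines - the exact call site and hunk depend on
how the series ends up, so take this as a sketch only:

```
-	if (!folio_test_hugetlb(old))
-		mem_cgroup_migrate(old, new);
+	mem_cgroup_migrate(old, new);
```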
If you're clear in the queueing instructions for both patches, Andrew
can probably do it in-place without having to resend everything :)
> +void mem_cgroup_migrate(struct folio *old, struct folio *new)
> +{
> + struct mem_cgroup *memcg;
> +
> + VM_BUG_ON_FOLIO(!folio_test_locked(old), old);
> + VM_BUG_ON_FOLIO(!folio_test_locked(new), new);
> + VM_BUG_ON_FOLIO(folio_test_anon(old) != folio_test_anon(new), new);
> + VM_BUG_ON_FOLIO(folio_nr_pages(old) != folio_nr_pages(new), new);
> +
> + if (mem_cgroup_disabled())
> + return;
> +
> + memcg = folio_memcg(old);
> + /*
> + * Note that it is normal to see !memcg for a hugetlb folio.
> + * It could have been allocated when memory_hugetlb_accounting was not
> + * selected, for e.g.
Is that sentence truncated?
> + */
> + VM_WARN_ON_ONCE_FOLIO(!memcg, old);
> + if (!memcg)
> + return;
If this is expected to happen, it shouldn't warn. Also note that
VM_WARN_ON_ONCE() takes only a condition; since you're passing the
folio, you want the _FOLIO variant:
	VM_WARN_ON_ONCE_FOLIO(!folio_test_hugetlb(old) && !memcg, old);