Re: CVE-2024-36000: mm/hugetlb: fix missing hugetlb_lock for resv uncharge

From: Peter Xu
Date: Thu May 23 2024 - 11:42:44 EST


On Thu, May 23, 2024 at 12:33:37PM +0200, Oscar Salvador wrote:
> On the theoretical part:
>
> And we could have
>
> CPU0                                  CPU1
> dequeue_huge_page_vma
>  dequeue_huge_page_node
>   move_page_to_active_list
> release_lock
>                                       hugetlb_cgroup_pre_destroy
>                                        for_each_page_in_active_list
>                                        <-- pages with empty cgroups are skipped -->
>                                        hugetlb_cgroup_move_parent
>                                         move_page_to_parent
> hugetlb_cgroup_commit_charge  <-- too late
>  page[2].lru.next = (void *)h_cg;

Would this happen with or without the patch? IIUC the patch didn't change
this path around hugetlb_cgroup_commit_charge(), and AFAIU hugetlb_lock
always covers the commit charge, with or without my patch:

        spin_lock_irq(&hugetlb_lock);
        folio = dequeue_hugetlb_folio_vma(h, vma, addr, avoid_reserve, gbl_chg);
        ...
        hugetlb_cgroup_commit_charge(idx, pages_per_huge_page(h), h_cg, folio);
        if (deferred_reserve) {
                hugetlb_cgroup_commit_charge_rsvd(idx, pages_per_huge_page(h),
                                                  h_cg, folio);
        }
        spin_unlock_irq(&hugetlb_lock);

What the previous patch changed, IMHO, is the handling of the rare race that
happens first on the reservation side, which I think Mike used to describe
with a rich comment; that race can be against a concurrent
hugetlb_reserve_pages():

        if (unlikely(map_chg > map_commit)) {
                /*
                 * The page was added to the reservation map between
                 * vma_needs_reservation and vma_commit_reservation.
                 * This indicates a race with hugetlb_reserve_pages.
                 * Adjust for the subpool count incremented above AND
                 * in hugetlb_reserve_pages for the same page.  Also,
                 * the reservation count added in hugetlb_reserve_pages
                 * no longer applies.
                 */
                long rsv_adjust;

                rsv_adjust = hugepage_subpool_put_pages(spool, 1);
                hugetlb_acct_memory(h, -rsv_adjust);
                if (deferred_reserve) {
                        spin_lock_irq(&hugetlb_lock);
                        hugetlb_cgroup_uncharge_folio_rsvd(hstate_index(h),
                                        pages_per_huge_page(h), folio);
                        spin_unlock_irq(&hugetlb_lock);
                }
        }

This happens after the point of hugetlb_cgroup_commit_charge(), and without
the lock the problem is that we can have a concurrent accessor / updater of
the hugetlb cgroup.
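
To make the ordering concrete, the flow in alloc_hugetlb_folio() is roughly
the below (heavily trimmed from the two snippets above, so only the locking
shape is meant to be accurate):

        spin_lock_irq(&hugetlb_lock);
        ...
        hugetlb_cgroup_commit_charge(idx, pages_per_huge_page(h), h_cg, folio);
        spin_unlock_irq(&hugetlb_lock);
        ...
        if (unlikely(map_chg > map_commit)) {
                ...
                /* before the fix, no hugetlb_lock was taken around this */
                hugetlb_cgroup_uncharge_folio_rsvd(hstate_index(h),
                                pages_per_huge_page(h), folio);
        }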

However, here after a 2nd look I don't even see the css offliner updating
_hugetlb_cgroup_rsvd at all.. so I'm not sure whether a race could happen.
I mean, hugetlb_cgroup_move_parent() doesn't even touch
_hugetlb_cgroup_rsvd, which is the object that can race. It only does:

        set_hugetlb_cgroup(folio, parent);

while in this case it's only about _hugetlb_cgroup.
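
For reference, both fields are written by the same helper; below is my
simplified paraphrase of __set_hugetlb_cgroup() (from
include/linux/hugetlb_cgroup.h, trimmed, so double check the real thing):

        static inline void __set_hugetlb_cgroup(struct folio *folio,
                                                struct hugetlb_cgroup *h_cg,
                                                bool rsvd)
        {
                if (rsvd)
                        folio->_hugetlb_cgroup_rsvd = h_cg;     /* reservation cg */
                else
                        folio->_hugetlb_cgroup = h_cg;          /* usage cg */
        }

And set_hugetlb_cgroup(folio, parent) is the rsvd==false flavor, so the rsvd
pointer is left alone on the move_parent path.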

It's pretty confusing to me here: doesn't it mean that when someone offlines
a child_cg we'll still leave the folio's _hugetlb_cgroup_rsvd pointing to
it, even though _hugetlb_cgroup starts to point to the parent?... I was
expecting hugetlb_cgroup_move_parent() to also move the rsvd cg here.

The other thing is, when hugetlb_cgroup_move_parent() does the cg movement,
does it need to css_put() the ref on the child cg and css_tryget() the
parent, just like what we do in __hugetlb_cgroup_charge_cgroup(), at least
for _hugetlb_cgroup?
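
For context, the charge side pins the css roughly like below (my trimmed
paraphrase of __hugetlb_cgroup_charge_cgroup() in mm/hugetlb_cgroup.c, with
the error handling elided):

        again:
                rcu_read_lock();
                h_cg = hugetlb_cgroup_from_task(current);
                if (!css_tryget(&h_cg->css)) {
                        /* the cg may be dying; retry until we pin a live one */
                        rcu_read_unlock();
                        goto again;
                }
                rcu_read_unlock();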

I really don't know enough about these areas to tell; perhaps I missed
something. But maybe any of you have some idea. In general, I think besides
LOCKDEP the lock is definitely needed to at least make sure things like:

        __set_hugetlb_cgroup(folio, NULL, rsvd);
        page_counter_uncharge(...);

happen as one atomic op, otherwise two threads can see the old cgroup
concurrently and maybe they'll uncharge the counter twice. But again,
currently I don't know how that can be triggered if
hugetlb_cgroup_move_parent() is not even touching the resv cg..
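
To spell out the interleaving I'm worried about, without the lock
(hypothetical, in the same style as the diagram above):

        CPU0                                    CPU1
        h_cg = __hugetlb_cgroup_from_folio()
                                                h_cg = __hugetlb_cgroup_from_folio()
                                                <-- both still see the old cg -->
        __set_hugetlb_cgroup(folio, NULL, rsvd)
                                                __set_hugetlb_cgroup(folio, NULL, rsvd)
        page_counter_uncharge(counter, n)
                                                page_counter_uncharge(counter, n)
                                                <-- counter uncharged twice -->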

Thanks,

--
Peter Xu