Re: [RFC PATCH] mm, kvm: account kvm_vcpu_mmap to kmemcg
From: Shakeel Butt
Date: Fri Mar 29 2019 - 12:00:48 EST
On Fri, Mar 29, 2019 at 12:52 AM Michal Hocko <mhocko@xxxxxxxxxx> wrote:
>
> On Thu 28-03-19 18:28:36, Shakeel Butt wrote:
> > A VCPU of a VM can allocate up to three pages which can be mmap'ed by
> > the user space application. At the moment this memory is not charged. On
> > a large machine running a large number of VMs (or a small number of VMs
> > having a large number of VCPUs), this unaccounted memory can be very
> > significant.
>
> Is this really the case? How many machines are we talking about? Say I
> have a virtual machine with 1K cpus; this will result in 12MB. Is this
> significant to the overall size of the virtual machine to even care?
>
Think of ~1K VMs, each with hundreds of vCPUs, where the page size can be
larger than 4k. This is not something happening now, but we are moving in
that direction.
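Also, for a rough sense of scale (illustrative numbers, not measured data,
assuming 3 pages per vCPU):

        1000 VMs * 100 vCPUs * 3 pages * 4 KiB  ~= 1.2 GB per host
        the same with 64 KiB pages              ~= 20 GB per host

all of it kernel memory that no memcg limit sees today.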
> > So, this memory should be charged to a kmemcg. However that is not
> > possible as these pages are mmapped to the userspace and PageKmemcg()
> > was designed with the assumption that such pages will never be mmapped
> > to the userspace.
> >
> > One way to solve this problem is by introducing an additional memcg
> > charging API similar to mem_cgroup_[un]charge_skmem(). However skmem
> > charging API usage is contained and shared and no new users are
> > expected but the pages which can be mmapped and should be charged to
> > kmemcg can and will increase. So, requiring the usage for such API will
> > increase the maintenance burden. The simplest solution is to remove the
> > assumption of no mmapping PageKmemcg() pages to user space.
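(To make the comparison concrete: the explicit-API route would mean every
such user open-coding something like the calls below at allocation and
teardown time. The *_mmapable() helpers are made up purely for
illustration; only mem_cgroup_[un]charge_skmem() exists today.)

        /*
         * Hypothetical, modeled on mem_cgroup_[un]charge_skmem(); shown
         * only to illustrate what each caller would have to add.
         */
        if (!mem_cgroup_charge_mmapable(memcg, nr_pages))
                return -ENOMEM;                 /* at allocation time */

        mem_cgroup_uncharge_mmapable(memcg, nr_pages);  /* at teardown */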
>
> IIRC the only purpose of PageKmemcg is to keep the accounting right in
> the legacy memcg. Spending a page flag for that is just a no-go.
PageKmemcg is used for both v1 and v2.
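For context on where the conflict comes from (my simplified reading of
struct page; a sketch, not verbatim kernel source): PageKmemcg() is
implemented as a page_type, and page_type shares a union with _mapcount,
so a page that is mapped to userspace cannot carry the marker at the same
time:

        /* Simplified from struct page in include/linux/mm_types.h. */
        union {
                atomic_t _mapcount;     /* used once the page is mapped */
                unsigned int page_type; /* PG_kmemcg is encoded here otherwise */
        };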
> If PageKmemcg
> cannot reuse mapping, then we have to find a better place for it (e.g. the
> bottom bit in the page->memcg pointer) or rethink the whole PageKmemcg
> approach.
>
Johannes has proposals; I will look more into those.
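Just to check that I understand the bottom-bit idea, something along these
lines (purely a sketch, all helper names made up, not from any posted
patch)?

        /* Encode the "charged as kmem" marker in bit 0 of page->mem_cgroup
         * instead of spending a page flag / page_type. */
        #define PAGE_MEMCG_KMEM 0x1UL

        static inline void page_set_kmem_memcg(struct page *page,
                                               struct mem_cgroup *memcg)
        {
                page->mem_cgroup = (struct mem_cgroup *)
                        ((unsigned long)memcg | PAGE_MEMCG_KMEM);
        }

        static inline bool page_memcg_kmem(struct page *page)
        {
                return (unsigned long)page->mem_cgroup & PAGE_MEMCG_KMEM;
        }

        static inline struct mem_cgroup *page_memcg_ptr(struct page *page)
        {
                return (struct mem_cgroup *)
                        ((unsigned long)page->mem_cgroup & ~PAGE_MEMCG_KMEM);
        }
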
Shakeel