Re: [PATCH v5 04/13] mm/shmem: Restrict MFD_INACCESSIBLE memory against RLIMIT_MEMLOCK
From: Jason Gunthorpe
Date: Wed Apr 13 2022 - 13:52:18 EST
On Wed, Apr 13, 2022 at 06:24:56PM +0200, David Hildenbrand wrote:
> On 12.04.22 16:36, Jason Gunthorpe wrote:
> > On Fri, Apr 08, 2022 at 08:54:02PM +0200, David Hildenbrand wrote:
> >
> >> RLIMIT_MEMLOCK was the obvious candidate, but as we discovered in the
> >> past already with secretmem, it's not 100% that good of a fit (unmovable
> >> is worse than mlocked). But it gets the job done for now at least.
> >
> > No, it doesn't. There are too many different interpretations of how
> > MEMLOCK is supposed to work.
> >
> > eg VFIO accounts per-process so hostile users can just fork to go past
> > it.
> >
> > RDMA is per-process but uses a different counter, so you can double up
> >
> > io_uring is per-user and uses a 3rd counter, so it can triple up on
> > the above two.
>
> Thanks for that summary, very helpful.
I kicked off a big discussion when I suggested changing vfio to use
the same accounting as io_uring.
We may still end up trying it, but the major concern is that libvirt
sets RLIMIT_MEMLOCK, and if we touch anything here - including
fixing RDMA, or anything really - it becomes a uAPI break for libvirt..
> >> So I'm open to alternatives for limiting the amount of unmovable memory
> >> we might allocate for user space, and then we could convert secretmem as well.
> >
> > I think it has to be cgroup based considering where we are now :\
>
> Most probably. I think the important lessons we learned are that
>
> * mlocked != unmovable.
> * RLIMIT_MEMLOCK should most probably never have been abused for
> unmovable memory (especially, long-term pinning)
The trouble is I'm not sure how anything can correctly/meaningfully
set a limit.
Consider qemu where we might have 3 different things all pinning the
same page (rdma, io_uring, vfio) - should the cgroup give 3x the limit?
What use is that really?
IMHO there are only two meaningful scenarios - either you are unpriv
and limited to a very small number for your user/cgroup - or you are
priv and you can do whatever you want.
The idea that we can fine-tune this to exactly the right amount for a
workload does not seem realistic and ends up exporting internal kernel
decisions into a uAPI..
Jason