Re: [PATCH 1/2] perf: Add closing sibling events' file descriptors

From: Andi Kleen
Date: Mon Aug 10 2020 - 13:50:46 EST


> It didn't. I can't figure out what to charge on the locked memory, as
> all that memory is in kernel-side objects. It also needs to make sense

I don't see how that makes a difference for the count. It just accounts
bytes. Can you elaborate?

> as iirc the default MLOCK_LIMIT is quite low, you'd hit it sooner than
> the file descriptor limit.

For a single process?

>
> > It has a minor issue that it might break some existing setups that rely
> > on the mmap fitting exactly into the mmap allocation, but that could
> > be solved by allowing a little slack, since the existing setups
> > likely don't have that many events.
>
> I don't see how to make this work in a sane way. Besides, if we have to
> have a limit anyway, sticking with the existing one is just easier and
> 1:1 is kind of more logical.

It's just very wasteful, because we need an extra inode and file descriptor
for each event*cgroup combination.

And of course it's very user unfriendly, because managing the fd limits
is a pain, apart from them not really working that well anyway
(since someone who really wants to do a memory DoS can still open
RLIMIT_NPROC*RLIMIT_NOFILE fds just by forking).

Unfortunately we're kind of stuck with the old NFILE=1024 default
even though it makes little sense on modern servers. Right now a lot
of very reasonable perf stat command lines don't work out of the box
on larger machines because of this (and since core counts are growing
all the time, the "larger machines" of today are the standard servers
of tomorrow).
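As a back-of-the-envelope example (the CPU and event counts below are
hypothetical, chosen only for illustration): perf opens one event fd per
event per CPU, so even a modest event list blows past the 1024 default
on a big machine.

```python
# Hedged sketch of why perf stat hits the default fd limit on large
# machines: the fd count scales as cpus * events. Hypothetical numbers:
cpus = 256            # logical CPUs on a larger server
events = 8            # events in a reasonable `perf stat -e ...` list
default_nofile = 1024  # the old NFILE default discussed above

needed_fds = cpus * events
print(needed_fds, needed_fds > default_nofile)  # 2048 True: over the limit
```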

Maybe an alternative would be to allow a multiplier: for each open fd
you can have N perf events, with N being a little higher than the current
cost of the inode + file descriptor together.
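A minimal sketch of that accounting (everything here is hypothetical,
not an existing kernel interface): each real fd in the budget buys N
fd-less sibling events, with N sized against the relative cost of an
inode plus struct file versus a bare event.

```python
# Hedged sketch of the proposed multiplier accounting. N would be chosen
# a little above the ratio
#   (cost of inode + file descriptor) / (cost of a bare perf event)
# so that fd-less sibling events cannot be used to consume more kernel
# memory than the equivalent number of real fds would.
N = 4  # hypothetical multiplier

def events_allowed(open_fd_budget: int) -> int:
    """Each fd in the budget permits N fd-less sibling events."""
    return open_fd_budget * N

print(events_allowed(1024))  # 4096 events within a 1024-fd budget
```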

But I would really prefer to have some kind of per-uid limit that is
actually sane.

-Andi