Re: [PATCH v1] kernel/trace: check the val against the available mem

From: Michal Hocko
Date: Tue Apr 03 2018 - 08:16:24 EST


On Tue 03-04-18 07:51:58, Steven Rostedt wrote:
> On Tue, 3 Apr 2018 13:06:12 +0200
> Michal Hocko <mhocko@xxxxxxxxxx> wrote:
>
> > > I wonder if I should have the ring buffer allocate groups of pages, to
> > > avoid this. Or try to allocate with NORETRY, one page at a time, and
> > > when that fails, allocate groups of pages with RETRY_MAYFAIL, and that
> > > may keep it from causing an OOM?
> >
> > I wonder why it really matters. The interface is root only and we expect
> > some sanity from an admin, right? So allocating such a large ring buffer
> > that it sends the system to the OOM is a sign that the admin should be
> > more careful. Balancing on the OOM edge is always a risk and the result
> > will highly depend on the workload running in parallel.
>
> This came up because there are scripts and programs that set the size of
> the ring buffer. The complaint was that an application would just set
> the size to something bigger than what was available and trigger an OOM
> that killed other applications. The final solution is to simply check the
> available memory before allocating the ring buffer:
>
> 	/* Check if the available memory is there first */
> 	i = si_mem_available();
> 	if (i < nr_pages)
> 		return -ENOMEM;
>
> And it works well.

Except that it doesn't work. si_mem_available() is not really suitable for
any allocation estimates. Its only purpose is to provide a very rough
estimate for userspace; any other use is basically an abuse of the
interface. The value it reports can change very quickly, so it is really
hard to be clever here given how volatile memory allocations can be.
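
For what it is worth, a less fragile direction than guessing up front is to
make the allocation itself fail gracefully. Below is a rough sketch, not the
actual ring_buffer.c code (the function name and the list handling are made
up for illustration): allocate the buffer pages one at a time with
__GFP_NORETRY (or __GFP_RETRY_MAYFAIL), so a failure simply unwinds and
returns -ENOMEM instead of waking the OOM killer.

	#include <linux/gfp.h>
	#include <linux/list.h>
	#include <linux/mm.h>

	/*
	 * Sketch only: allocate nr_pages one page at a time and bail out
	 * early on failure.  __GFP_NORETRY keeps a failing allocation from
	 * invoking the OOM killer, so an oversized request just gets
	 * -ENOMEM back instead of killing other tasks.
	 */
	static int rb_alloc_pages_sketch(struct list_head *pages, long nr_pages)
	{
		struct page *page, *tmp;
		long i;

		for (i = 0; i < nr_pages; i++) {
			page = alloc_page(GFP_KERNEL | __GFP_NORETRY);
			if (!page)
				goto free_pages;
			list_add_tail(&page->lru, pages);
		}
		return 0;

	free_pages:
		list_for_each_entry_safe(page, tmp, pages, lru) {
			list_del(&page->lru);
			__free_page(page);
		}
		return -ENOMEM;
	}

With something like that the up-front si_mem_available() check is not needed
at all; the allocator tells you whether the memory is there at the moment you
actually ask for it.
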
--
Michal Hocko
SUSE Labs