Re: [PATCH 00/10] OOM Debug print selection and additional information
From: Edward Chron
Date: Tue Aug 27 2019 - 22:47:39 EST
On Tue, Aug 27, 2019 at 6:32 PM Qian Cai <cai@xxxxxx> wrote:
>
>
>
> > On Aug 27, 2019, at 9:13 PM, Edward Chron <echron@xxxxxxxxxx> wrote:
> >
> > On Tue, Aug 27, 2019 at 5:50 PM Qian Cai <cai@xxxxxx> wrote:
> >>
> >>
> >>
> >>> On Aug 27, 2019, at 8:23 PM, Edward Chron <echron@xxxxxxxxxx> wrote:
> >>>
> >>>
> >>>
> >>> On Tue, Aug 27, 2019 at 5:40 AM Qian Cai <cai@xxxxxx> wrote:
> >>> On Mon, 2019-08-26 at 12:36 -0700, Edward Chron wrote:
> >>>> This patch series provides code that works as a debug option through
> >>>> debugfs to provide additional controls to limit how much information
> >>>> gets printed when an OOM event occurs and/or optionally print additional
> >>>> information about slab usage, vmalloc allocations, user process memory
> >>>> usage, the number of processes / tasks and some summary information
> >>>> about these tasks (number runnable, I/O wait), system information
> >>>> (#CPUs, Kernel Version and other useful state of the system),
> >>>> ARP and ND Cache entry information.
> >>>>
> >>>> Linux OOM can optionally provide a lot of information; what's missing?
> >>>> ----------------------------------------------------------------------
> >>>> Linux provides a variety of detailed information when an OOM event occurs
> >>>> but has limited options to control how much output is produced. The
> >>>> system-related information is produced unconditionally, and limited
> >>>> per-user-process information is produced as a default-enabled option.
> >>>> The per-user-process information may be disabled.
> >>>>
> >>>> Slab usage information was recently added and is output only if slab
> >>>> usage exceeds user memory usage.
> >>>>
> >>>> Many OOM events are due to user application memory usage, sometimes in
> >>>> combination with kernel resource usage, that exceeds expected memory
> >>>> usage. Detailed information about how memory was being used when the
> >>>> event occurred may be required to identify the root cause of the OOM
> >>>> event.
> >>>>
> >>>> However, some environments are very large and printing all of the
> >>>> information about processes, slabs and/or vmalloc allocations may
> >>>> not be feasible. For other environments printing as much information
> >>>> about these as possible may be needed to root cause OOM events.
> >>>>
> >>>
> >>> For more in-depth analysis of OOM events, people could use kdump to save a
> >>> vmcore by setting "panic_on_oom", and then use the crash utility to analyze the
> >>> vmcore which contains pretty much all the information you need.
> >>>
> >>> Certainly, this is the ideal. A full system dump would give you the maximum amount of
> >>> information.
> >>>
> >>> Unfortunately, some environments may lack space to store the dump,
> >>
> >> Kdump usually also supports dumping to a remote target via NFS, SSH etc.
> >>
> >>> let alone the time to dump the storage contents and restart the system. Some
> >>
> >> There is also "makedumpfile", which can compress and filter out unwanted memory
> >> to reduce the vmcore size and speed up the dumping process by utilizing multiple threads.
> >>
> >>> systems can take many minutes to fully boot up, to reset and reinitialize all the
> >>> devices. So unfortunately this is not always an option, and we need an OOM Report.
> >>
> >> I am not sure how the system needing some minutes to reboot is relevant to the
> >> discussion here. The idea is to save a vmcore, and it can be analyzed offline, even on
> >> another system, as long as it has a matching "vmlinux".
> >>
> >>
> >
> > If selecting a dump on an OOM event doesn't reboot the system, and if
> > it runs fast enough that it doesn't slow processing enough to
> > appreciably affect the system's responsiveness, then it would be the
> > ideal solution. For some it would be overkill, but since it is an
> > option it is a choice to consider or not.
>
> It sounds like you are looking for more of this,
If you want to supplement the OOM report and keep the information together,
then you could use eBPF to do that. If that really is the preference, it
might make sense to implement the entire report as an eBPF script that you
can modify however you choose. That would be very flexible: you can change
your configuration on the fly. As long as it has access to everything you
need, it should work.
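As a rough, hypothetical illustration of the kind of supplementary
processing such a script could do from userspace (the message format below
only approximates what recent kernels print and varies by version, and all
names here are my own, not from the patch set):

```python
import re

# Hypothetical post-processing of an OOM-kill log line. The exact message
# format varies by kernel version; this regex matches the general shape of
# "Out of memory: Killed process <pid> (<comm>) total-vm:<n>kB, anon-rss:<n>kB ...".
OOM_KILL_RE = re.compile(
    r"Killed process (?P<pid>\d+) \((?P<comm>[^)]+)\) "
    r"total-vm:(?P<total_vm>\d+)kB, anon-rss:(?P<anon_rss>\d+)kB"
)

def parse_oom_kill(line):
    """Return victim details parsed from an OOM-kill line, or None."""
    m = OOM_KILL_RE.search(line)
    if m is None:
        return None
    fields = m.groupdict()
    # Keep the command name as a string, convert the numeric fields.
    return {k: (v if k == "comm" else int(v)) for k, v in fields.items()}

sample = ("Out of memory: Killed process 4321 (postgres) "
          "total-vm:8388608kB, anon-rss:4194304kB, file-rss:0kB, shmem-rss:0kB")
info = parse_oom_kill(sample)
print(info)
```

An eBPF program attached at the kill site could of course emit the same
fields directly, without depending on the printed format at all.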
Michal would know what direction OOM is headed in and whether he thinks
that fits with where things are going.
I'm flexible, in the sense that I could change our submission to make
specific updates to the existing OOM code. We kept it as separate as
possible for ease of porting. But if we can build an acceptable case for
making updates to the existing OOM report code, that works.
Our current implementation has some knobs to allow some limited scaling,
which has advantages over print rate limiting, and it may allow
environments that didn't want to allow printing of processes or of slab or
vmalloc entry allocations to do so without generating a lot of output.
But the existing code could be modified to do the same thing, possibly
without a configuration interface if that is not desirable. It could look
at the number of entries to potentially print, for example, and if the
number is small it could print them all, or scale selection based on a
default memory-usage threshold. Do you really care about slab or vmalloc
entries using 1 MB or less of memory on a 256 GB system, for example?
Probably not. Our approach lets you size this and has a default that may
be reasonable for many environments, but it allows you to configure
things, which adds some complexity.
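A minimal sketch of that scaling idea, assuming a cutoff that is a fixed
fraction of total RAM (the 1/256Ki ratio, function names, and slab names
are illustrative, not taken from the patch set):

```python
GiB = 1 << 30

def report_cutoff_bytes(total_ram_bytes, ratio=1.0 / (256 * 1024)):
    """Scale the per-entry reporting cutoff with system memory.

    With the illustrative 1/256Ki ratio, a 256 GiB system gets a 1 MiB
    cutoff, so entries using about 1 MB or less would be skipped.
    """
    return int(total_ram_bytes * ratio)

def entries_worth_reporting(entries, total_ram_bytes):
    """Keep only (name, bytes) entries whose usage exceeds the cutoff."""
    cutoff = report_cutoff_bytes(total_ram_bytes)
    return [(name, size) for name, size in entries if size > cutoff]

# A 512 MiB slab cache is reported; a 256 KiB one is skipped as noise.
slabs = [("dentry", 512 << 20), ("kmalloc-64", 256 << 10)]
print(entries_worth_reporting(slabs, 256 * GiB))
```

The point of scaling rather than rate limiting is that what gets dropped is
the least interesting data, not whatever happened to print last.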
Now you could in theory produce the entire OOM report, plus anything we've
proposed, with an eBPF script. I haven't done it, but I assume it works
with 5.3. The problem with any type of plugin and/or configurable option
is testing, as Michal mentions, and the fact that it may or may not be
present.
For production systems, installing and updating eBPF scripts may someday
be very common, but I wonder how data center managers feel about it now?
Developers are very excited about it, and it is a very powerful tool, but
can I get permission to add or replace an eBPF script on production
systems? If there is reluctance due to security or reliability or any
other issue, then I would rather have the code in the kernel, so I know it
is there and is tested. Just as I would prefer not to have the config
options, for the reasons Michal cites, but I'll take that if that is the
best I can get.
Will be interested to hear what Michal advises.
>
> https://github.com/iovisor/bcc/blob/master/tools/oomkill.py
>