Re: INFO: rcu detected stall in shmem_fault
From: Dmitry Vyukov
Date: Wed Oct 10 2018 - 09:18:21 EST
On Wed, Oct 10, 2018 at 3:10 PM, Tetsuo Handa
<penguin-kernel@xxxxxxxxxxxxxxxxxxx> wrote:
>>>>>>> Just flooding out-of-memory messages can trigger RCU stall problems.
>>>>>>> For example, a severe skbuff_head_cache or kmalloc-512 leak bug is causing
>>>>>>
>>>>>> [...]
>>>>>>
>>>>>> Quite a lot of them, indeed! I guess we want to rate-limit the output.
>>>>>> What about the following?
>>>>>
>>>>> A bit unrelated, but while we are at it:
>>>>>
>>>>> I like it when we rate-limit printk-s that lock up the system.
>>>>> But it seems that the default rate-limit values are not always good enough:
>>>>> DEFAULT_RATELIMIT_INTERVAL / DEFAULT_RATELIMIT_BURST can still be too
>>>>> verbose. For instance, with a very slow IPMI-emulated serial console
>>>>> -- e.g. a baud rate of 57600 -- DEFAULT_RATELIMIT_INTERVAL and
>>>>> DEFAULT_RATELIMIT_BURST can let new OOM headers and backtraces pile up
>>>>> faster than we can flush them out.
>>>>>
>>>>> Does it sound reasonable to use larger-than-default rate-limits for the
>>>>> printk-s in OOM print-outs? OOM reports tend to be rather large, and the
>>>>> reported numbers are not always *very* unique.
>>>>>
>>>>> What do you think?
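For context, here is a minimal sketch of the ratelimit machinery being
discussed (illustrative only, not verbatim kernel code; the default
interval/burst values come from include/linux/ratelimit.h and
maybe_dump_oom_report() is a made-up name):

#include <linux/ratelimit.h>
#include <linux/printk.h>

/*
 * DEFAULT_RATELIMIT_INTERVAL is 5 * HZ and DEFAULT_RATELIMIT_BURST is 10,
 * i.e. up to 10 "hits" are allowed every 5 seconds.  The catch in the OOM
 * path is that a single allowed hit prints a whole multi-line report
 * (header, meminfo, task dump), not just one line.
 */
static DEFINE_RATELIMIT_STATE(oom_rs, DEFAULT_RATELIMIT_INTERVAL,
                              DEFAULT_RATELIMIT_BURST);

static void maybe_dump_oom_report(void)
{
        if (__ratelimit(&oom_rs))
                pr_warn("... multi-line OOM report would go here ...\n");
}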
>>>>
>>>> I do not really care about the current interval/burst values. This change
>>>> should be done separately and ideally with some numbers.
>>>
>>> I think Sergey meant that this place may need to use
>>> larger-than-default values because it prints lots of output per
>>> instance (whereas the default limit is more tuned for cases that print
>>> just 1 line).
>
> Yes. The OOM killer tends to print a lot of messages (and I suspect that
> contention on mutex_trylock(&oom_lock) wastes even more CPU time through
> preemption).
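The allocation-side pattern Tetsuo refers to is roughly the following
(a simplified paraphrase of the trylock/retry logic in the page allocator,
not verbatim kernel code; try_oom() is a made-up name):

#include <linux/mutex.h>
#include <linux/oom.h>
#include <linux/sched.h>

/*
 * If somebody else already holds oom_lock (e.g. because it is busy
 * printing an OOM report), treat that as progress, sleep for one jiffy
 * and let the caller retry the allocation.  With many allocating tasks
 * doing this while the lock holder is stuck in slow printk(), quite a
 * bit of CPU time is burnt in this trylock/sleep/retry loop.
 */
static bool try_oom(void)
{
        if (!mutex_trylock(&oom_lock)) {
                schedule_timeout_uninterruptible(1);
                return false;   /* caller retries the allocation */
        }
        /* ... invoke the OOM killer, which prints the report ... */
        mutex_unlock(&oom_lock);
        return true;
}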
>
>>>
>>> I've found at least 1 place that uses DEFAULT_RATELIMIT_INTERVAL*10:
>>> https://elixir.bootlin.com/linux/latest/source/fs/btrfs/extent-tree.c#L8365
>>> Probably we need something similar here.
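A larger-than-default state along those lines could look like the sketch
below (just an illustration of the shape; oom_dump_rs is a made-up name and
the exact interval/burst choice would need the numbers Michal asked for):

#include <linux/ratelimit.h>

/*
 * Widen the interval, as the btrfs code above does with
 * DEFAULT_RATELIMIT_INTERVAL * 10 (i.e. 50 seconds), and shrink the burst
 * so that at most one full multi-line OOM report goes out per interval.
 */
static DEFINE_RATELIMIT_STATE(oom_dump_rs,
                              DEFAULT_RATELIMIT_INTERVAL * 10,
                              1);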
>
> Since printk() is a significantly CPU-consuming operation, I think what we
> need to guarantee is that the interval between the end of one OOM killer
> report and the beginning of the next one is large enough. For example, set
> up a timer with a 5-second timeout at the end of an OOM killer report, and
> at the beginning of the next report check whether that timer has already
> fired.
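A jiffies-based sketch of that idea could look like this (all names are made
up; it only illustrates the "measure the gap from the end of the previous
report" part):

#include <linux/jiffies.h>

#define OOM_REPORT_QUIET_PERIOD (5 * HZ)        /* made-up knob */

static unsigned long oom_report_end;

/* Call this once an OOM report has been fully printed. */
static void oom_report_done(void)
{
        oom_report_end = jiffies;
}

/*
 * Call this before starting the next report: it only allows a new report
 * once at least OOM_REPORT_QUIET_PERIOD has passed since the *end* of the
 * previous one, which is what distinguishes this from the usual ratelimit
 * keyed on start times.
 */
static bool oom_report_allowed(void)
{
        return time_after(jiffies, oom_report_end + OOM_REPORT_QUIET_PERIOD);
}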
>
>>
>>
>> In parallel with the kernel changes, I've also made a change to syzkaller
>> that (1) stops it from using oom_score_adj=-1000, since this hard no-kill
>> setting looks like quite a risky thing, and (2) increases the memcg size
>> beyond the expected KASAN quarantine size:
>> https://github.com/google/syzkaller/commit/adedaf77a18f3d03d695723c86fc083c3551ff5b
>> If this stops the flow of hang/stall reports, then we can just close all
>> the old reports as invalid.
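(As an aside: oom_score_adj=-1000 is OOM_SCORE_ADJ_MIN, which makes a task
completely exempt from the OOM killer, which is why it is risky for a process
that can consume a lot of memory. A tiny userspace illustration of what that
setting amounts to, not taken from syzkaller itself:)

#include <stdio.h>

/*
 * Illustration only (not syzkaller code): writing -1000, i.e.
 * OOM_SCORE_ADJ_MIN, to /proc/self/oom_score_adj makes the calling task
 * ineligible for OOM killing, so a memory hog configured this way can
 * leave the OOM killer with no eligible victim at all.
 */
int main(void)
{
        FILE *f = fopen("/proc/self/oom_score_adj", "w");

        if (!f)
                return 1;
        fprintf(f, "-1000\n");
        fclose(f);
        return 0;
}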
>
> I don't think so. Only this report was different from the others: the
> printk() flood in this report came from memcg OOM events without eligible
> tasks, whereas in the others it came from global OOM events triggered by a
> severe slab memory leak.
Ack.
I guess I just hoped deep down that we would somehow magically get rid of
all these reports with some simple change like this :)