Re: [PATCH net v2 0/2] Revert the 'socket_alloc' life cycle change
From: Eric Dumazet
Date: Tue May 05 2020 - 12:37:47 EST
On 5/5/20 9:31 AM, Eric Dumazet wrote:
>
>
> On 5/5/20 9:25 AM, Eric Dumazet wrote:
>>
>>
>> On 5/5/20 9:13 AM, SeongJae Park wrote:
>>> On Tue, 5 May 2020 09:00:44 -0700 Eric Dumazet <edumazet@xxxxxxxxxx> wrote:
>>>
>>>> On Tue, May 5, 2020 at 8:47 AM SeongJae Park <sjpark@xxxxxxxxxx> wrote:
>>>>>
>>>>> On Tue, 5 May 2020 08:20:50 -0700 Eric Dumazet <eric.dumazet@xxxxxxxxx> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On 5/5/20 8:07 AM, SeongJae Park wrote:
>>>>>>> On Tue, 5 May 2020 07:53:39 -0700 Eric Dumazet <edumazet@xxxxxxxxxx> wrote:
>>>>>>>
>>>>>>
>>>>>>>> Why do we have 10,000,000 objects around ? Could this be because of
>>>>>>>> some RCU problem ?
>>>>>>>
>>>>>>> Mainly because of a long RCU grace period, as you guessed. I have no idea
>>>>>>> how the grace period became so long in this case.
>>>>>>>
>>>>>>> As my test machine was a virtual machine instance, I guess a problem like
>>>>>>> RCU reader preemption[1] might have affected this.
>>>>>>>
>>>>>>> [1] https://www.usenix.org/system/files/conference/atc17/atc17-prasad.pdf
>>>>>>>
>>>>>>>>
>>>>>>>> Once Al's patches are reverted, do you still have 10,000,000 sock_alloc objects around ?
>>>>>>>
>>>>>>> Yes, neither the old kernel from before Al's patches nor the recent kernel
>>>>>>> with Al's patches reverted reproduced the problem.
>>>>>>>
>>>>>>
>>>>>> I repeat my question : Do you have 10,000,000 (smaller) objects kept in slab caches ?
>>>>>>
>>>>>> TCP sockets use the (very complex, error prone) SLAB_TYPESAFE_BY_RCU, but not the struct socket_wq
>>>>>> object that was allocated in sock_alloc_inode() before Al's patches.
>>>>>>
>>>>>> These objects should be visible in kmalloc-64 kmem cache.
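>>>>>>
>>>>>> For instance, something like this should show the count (needs root; the
>>>>>> first two fields after the cache name are active_objs and num_objs):
>>>>>>
>>>>>>   grep '^kmalloc-64 ' /proc/slabinfo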
>>>>>
>>>>> Not exactly 10,000,000, as that is only the theoretical maximum, but I was
>>>>> able to observe a clear exponential increase in the number of objects using
>>>>> slabtop. Before the problematic workload started, the number of 'kmalloc-64'
>>>>> objects was 5,760; it then grew to 1,136,576.
>>>>>
>>>>>            OBJS  ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
>>>>> before:    5760    5088  88%    0.06K     90       64       360K kmalloc-64
>>>>> after:  1136576 1136576 100%    0.06K  17759       64     71036K kmalloc-64
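>>>>>
>>>>> (The numbers are consistent: 1,136,576 objects * 64 bytes = 71036K of
>>>>> cache, matching the CACHE SIZE column, i.e. roughly 200x the 360K baseline.)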
>>>>>
>>>>
>>>> Great, thanks.
>>>>
>>>> How recent is the kernel you are running for your experiment ?
>>>
>>> It's based on 5.4.35.
>>>
>>>>
>>>> Let's make sure the bug is not in RCU.
>>>
>>> One thing I can say at this point is that the grace period does pass
>>> eventually. I modified the benchmark to repeat only 5,000 times instead of
>>> 10,000, so that the test runs without OOM but still under easily observable
>>> memory pressure. As soon as the benchmark finished, the memory was freed.
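>>>
>>> (For reference, the drain is easy to watch live from another shell with,
>>> e.g., watch -n1 "grep '^kmalloc-64 ' /proc/slabinfo"; /proc/slabinfo
>>> needs root.)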
>>>
>>> If you need more tests, please let me know.
>>>
>>
>> I would ask Paul's opinion on this issue, because we have many objects
>> being freed after RCU grace periods.
>>
>> If the RCU subsystem can not keep up, I guess other workloads will also suffer.
>>
>> Sure, we can revert patches here and there trying to work around the issue,
>> but for objects allocated from process context we should not have these problems.
>>
>
> I wonder if simply adjusting rcu_divisor to 6 or 5 would help
>
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index d9a49cd6065a20936edbda1b334136ab597cde52..fde833bac0f9f81e8536211b4dad6e7575c1219a 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -427,7 +427,7 @@ module_param(qovld, long, 0444);
> static ulong jiffies_till_first_fqs = ULONG_MAX;
> static ulong jiffies_till_next_fqs = ULONG_MAX;
> static bool rcu_kick_kthreads;
> -static int rcu_divisor = 7;
> +static int rcu_divisor = 6;
> module_param(rcu_divisor, int, 0644);
>
> /* Force an exit from rcu_do_batch() after 3 milliseconds. */
>
To be clear, you can adjust the value without building a new kernel.
echo 6 >/sys/module/rcutree/parameters/rcu_divisor
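
If I read rcu_do_batch() correctly, rcu_divisor is used as a shift when
computing the per-invocation callback batch limit, roughly:

	/* sketch: effective batch limit inside rcu_do_batch() */
	bl = max(rdp->blimit, pending >> rcu_divisor);

so going from 7 to 6 doubles the fraction of queued callbacks processed per
pass, from ~1/128 to ~1/64 of the queue. You can read the current value back
with:

	cat /sys/module/rcutree/parameters/rcu_divisor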