Re: [memcg] 0f12156dff: will-it-scale.per_process_ops -33.6% regression
From: Vasily Averin
Date: Wed Sep 08 2021 - 04:14:26 EST
On 9/7/21 10:42 PM, Roman Gushchin wrote:
> On Tue, Sep 07, 2021 at 10:48:06AM -0700, Shakeel Butt wrote:
>> On Tue, Sep 7, 2021 at 10:31 AM Roman Gushchin <guro@xxxxxx> wrote:
>>>
>>> On Tue, Sep 07, 2021 at 07:14:45AM -1000, Tejun Heo wrote:
>>>> Hello,
>>>>
>>>> On Tue, Sep 07, 2021 at 10:11:21AM -0700, Roman Gushchin wrote:
>>>>> There are two polar cases:
>>>>> 1) a big number of relatively short-lived allocations, whose lifetime is well
>>>>> bounded (e.g. by the lifetime of a task),
>>>>> 2) a relatively small number of long-lived allocations, whose lifetime
>>>>> is potentially indefinite (e.g. struct mount).
>>>>>
>>>>> We can't use the same approach for both cases; otherwise we'll run into either
>>>>> performance or garbage collection problems (which also lead to performance
>>>>> problems, just delayed).
>>>>
>>>> Wouldn't a front cache that expires after a few seconds catch both cases?
>>>
>>> I'm not sure. For the second case we need to pack allocations from different
>>> tasks/cgroups into a small number of shared pages. That means the front cache
>>> should be really small or non-existent. For the first case we likely need a
>>> substantial cache. Maybe we can do something really smart with scattering
>>> the cache over multiple pages, but I really doubt it.
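
Just to make the front-cache idea a bit more concrete, below is a very rough
sketch of a per-cpu cache of pre-charged bytes that is drained by delayed work
after a short expiry, loosely in the spirit of the existing per-cpu obj_stock
in mm/memcontrol.c. The names (objcg_front_cache, front_cache_charge, ...) and
the batch/expiry values are made up for illustration only; reference counting
of the cached objcg and cpu-hotplug handling are omitted:

#include <linux/gfp.h>
#include <linux/jiffies.h>
#include <linux/memcontrol.h>
#include <linux/percpu.h>
#include <linux/smp.h>
#include <linux/workqueue.h>

#define FRONT_CACHE_BATCH	PAGE_SIZE
#define FRONT_CACHE_EXPIRY	msecs_to_jiffies(2000)

/* Per-cpu stash of bytes pre-charged to a single objcg. */
struct objcg_front_cache {
	struct obj_cgroup	*cached_objcg;
	unsigned int		nr_bytes;	/* pre-charged, not yet consumed */
	struct delayed_work	expire_work;	/* returns unused bytes after expiry */
};

static DEFINE_PER_CPU(struct objcg_front_cache, objcg_front_cache);

/* Give unused pre-charge back; called with irqs disabled on the owning cpu. */
static void front_cache_drain(struct objcg_front_cache *fc)
{
	if (fc->cached_objcg && fc->nr_bytes) {
		obj_cgroup_uncharge(fc->cached_objcg, fc->nr_bytes);
		fc->nr_bytes = 0;
	}
}

static void front_cache_expire(struct work_struct *work)
{
	struct objcg_front_cache *fc = container_of(to_delayed_work(work),
					struct objcg_front_cache, expire_work);
	unsigned long flags;

	/* The work was queued on this cpu, so fc is the local cache. */
	local_irq_save(flags);
	front_cache_drain(fc);
	local_irq_restore(flags);
}

/*
 * Fast path: consume the local pre-charge when it belongs to the same
 * objcg; otherwise drain and re-charge with a small batch on top.
 */
static int front_cache_charge(struct obj_cgroup *objcg, gfp_t gfp, size_t size)
{
	struct objcg_front_cache *fc;
	unsigned long flags;
	int ret = 0;

	local_irq_save(flags);
	fc = this_cpu_ptr(&objcg_front_cache);

	if (fc->cached_objcg == objcg && fc->nr_bytes >= size) {
		fc->nr_bytes -= size;
	} else {
		front_cache_drain(fc);
		ret = obj_cgroup_charge(objcg, gfp, size + FRONT_CACHE_BATCH);
		if (!ret) {
			fc->cached_objcg = objcg;	/* refcount omitted here */
			fc->nr_bytes = FRONT_CACHE_BATCH;
		}
	}
	/* No-op if already pending, so the drain runs at most once per period. */
	if (!ret)
		schedule_delayed_work_on(smp_processor_id(), &fc->expire_work,
					 FRONT_CACHE_EXPIRY);
	local_irq_restore(flags);

	return ret;
}

/* To be called once at boot, before the first charge. */
static int front_cache_init(void)
{
	int cpu;

	for_each_possible_cpu(cpu)
		INIT_DELAYED_WORK(&per_cpu_ptr(&objcg_front_cache, cpu)->expire_work,
				  front_cache_expire);
	return 0;
}

The tension Roman describes shows up directly in FRONT_CACHE_BATCH: for case 1
you want it large enough to amortize the charge, while for case 2 any bytes
cached per cpu per objcg are bytes that can't be packed with other cgroups'
objects.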
>>
>> I think we need to prototype this to evaluate it sensibly. Let me know if
>> you want to take a stab at this; otherwise I can try.
>
> If you have time and are ready to jump in, please go on. Otherwise I can start
> working on it in a few weeks. In any case, I'm happy to help with discussions, code
> reviews & whatever else I can do.
(Still contemplating the dubious achievement of having my upstream patch reverted personally by Linus.)
Please keep me informed about this work too. Unfortunately I cannot help right now;
however, we may need to backport these patches to the OpenVZ kernels.
Thank you,
Vasily Averin