Re: [PATCH v8 03/17] mm: Assign id to every memcg-aware shrinker
From: Kirill Tkhai
Date: Tue Jul 03 2018 - 11:47:14 EST
On 03.07.2018 18:27, Matthew Wilcox wrote:
> On Tue, Jul 03, 2018 at 06:09:05PM +0300, Kirill Tkhai wrote:
>> +++ b/mm/vmscan.c
>> @@ -169,6 +169,49 @@ unsigned long vm_total_pages;
>> static LIST_HEAD(shrinker_list);
>> static DECLARE_RWSEM(shrinker_rwsem);
>>
>> +#ifdef CONFIG_MEMCG_KMEM
>> +static DEFINE_IDR(shrinker_idr);
>> +static int shrinker_nr_max;
>
> So ... we've now got a list_head (shrinker_list) which contains all of
> the shrinkers, plus a shrinker_idr which contains the memcg-aware shrinkers?
>
> Why not replace the shrinker_list with the shrinker_idr? It's only used
> twice in vmscan.c:
>
> void register_shrinker_prepared(struct shrinker *shrinker)
> {
> 	down_write(&shrinker_rwsem);
> 	list_add_tail(&shrinker->list, &shrinker_list);
> 	up_write(&shrinker_rwsem);
> }
>
> 	list_for_each_entry(shrinker, &shrinker_list, list) {
> 		...
>
> The first is simply idr_alloc() and the second is
>
> idr_for_each_entry(&shrinker_idr, shrinker, id) {
>
> I understand there's a difference between allocating the shrinker's ID and
> adding it to the list. You can do this by calling idr_alloc with NULL
> as the pointer, and then using idr_replace() when you want to add the
> shrinker to the list. idr_for_each_entry() skips over NULL entries.
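
(For reference, the two-step pattern described above would look roughly like
this; it is only a sketch of the suggestion, not code from the patch:)

	/* reserve an id without exposing the shrinker to iteration yet */
	id = idr_alloc(&shrinker_idr, NULL, 0, 0, GFP_KERNEL);
	if (id < 0)
		return id;
	shrinker->id = id;
	...
	/* later, once the shrinker is fully set up, publish it;
	 * as noted above, idr_for_each_entry() skips NULL entries,
	 * so a reserved but unpublished shrinker is never called */
	idr_replace(&shrinker_idr, shrinker, shrinker->id);
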
shrinker_idr now contains only memcg-aware shrinkers, so all bits of the memcg map
may potentially be populated. If memcg-aware and !memcg-aware shrinkers share the
same ID space, as you suggest, the memcg maps will grow larger, which is bad for
memory consumption. So memcg-aware shrinkers should have their own IDR and their
own numbers. Tricks like allocating big IDs for !memcg-aware shrinkers look bad
to me, since they make the code more complicated.
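
To illustrate why the ids stay dense, allocation in the memcg-aware case is
roughly the following (a simplified sketch only: error handling is trimmed,
and memcg_expand_shrinker_maps() stands for whatever helper grows the
per-memcg bitmaps):

static int prealloc_memcg_shrinker(struct shrinker *shrinker)
{
	int id, ret = -ENOMEM;

	down_write(&shrinker_rwsem);
	/* Only memcg-aware shrinkers go into shrinker_idr, so the
	 * returned ids (and the per-memcg bitmaps indexed by them)
	 * stay as small as possible. */
	id = idr_alloc(&shrinker_idr, shrinker, 0, 0, GFP_KERNEL);
	if (id < 0)
		goto unlock;

	if (id >= shrinker_nr_max) {
		if (memcg_expand_shrinker_maps(id)) {
			idr_remove(&shrinker_idr, id);
			goto unlock;
		}
		shrinker_nr_max = id + 1;
	}
	shrinker->id = id;
	ret = 0;
unlock:
	up_write(&shrinker_rwsem);
	return ret;
}
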
> This will actually reduce the size of each shrinker and be more
> cache-efficient when calling the shrinkers. I think we can also get
> rid of the shrinker_rwsem eventually, but let's leave it for now.
This patchset does not make cache efficiency any worse: without the patchset the
situation is already so bad that it is impossible to talk about cache efficiency
at all. So let's leave lockless iteration etc. for future work.
Kirill