Re: [PATCH v1] mm/memory_hotplug: Don't take the cpu_hotplug_lock

From: David Hildenbrand
Date: Thu Sep 26 2019 - 03:26:19 EST


On 25.09.19 22:32, Qian Cai wrote:
> On Wed, 2019-09-25 at 21:48 +0200, David Hildenbrand wrote:
>> On 25.09.19 20:20, Qian Cai wrote:
>>> On Wed, 2019-09-25 at 19:48 +0200, Michal Hocko wrote:
>>>> On Wed 25-09-19 12:01:02, Qian Cai wrote:
>>>>> On Wed, 2019-09-25 at 09:02 +0200, David Hildenbrand wrote:
>>>>>> On 24.09.19 20:54, Qian Cai wrote:
>>>>>>> On Tue, 2019-09-24 at 17:11 +0200, Michal Hocko wrote:
>>>>>>>> On Tue 24-09-19 11:03:21, Qian Cai wrote:
>>>>>>>> [...]
>>>>>>>>> While at it, it might be a good time to rethink the whole locking over there, as
>>>>>>>>> it right now read files under /sys/kernel/slab/ could trigger a possible
>>>>>>>>> deadlock anyway.
>>>>>>>>>
>>>>>>>>
>>>>>>>> [...]
>>>>>>>>> [  442.452090][ T5224] -> #0 (mem_hotplug_lock.rw_sem){++++}:
>>>>>>>>> [  442.459748][ T5224]        validate_chain+0xd10/0x2bcc
>>>>>>>>> [  442.464883][ T5224]        __lock_acquire+0x7f4/0xb8c
>>>>>>>>> [  442.469930][ T5224]        lock_acquire+0x31c/0x360
>>>>>>>>> [  442.474803][ T5224]        get_online_mems+0x54/0x150
>>>>>>>>> [  442.479850][ T5224]        show_slab_objects+0x94/0x3a8
>>>>>>>>> [  442.485072][ T5224]        total_objects_show+0x28/0x34
>>>>>>>>> [  442.490292][ T5224]        slab_attr_show+0x38/0x54
>>>>>>>>> [  442.495166][ T5224]        sysfs_kf_seq_show+0x198/0x2d4
>>>>>>>>> [  442.500473][ T5224]        kernfs_seq_show+0xa4/0xcc
>>>>>>>>> [  442.505433][ T5224]        seq_read+0x30c/0x8a8
>>>>>>>>> [  442.509958][ T5224]        kernfs_fop_read+0xa8/0x314
>>>>>>>>> [  442.515007][ T5224]        __vfs_read+0x88/0x20c
>>>>>>>>> [  442.519620][ T5224]        vfs_read+0xd8/0x10c
>>>>>>>>> [  442.524060][ T5224]        ksys_read+0xb0/0x120
>>>>>>>>> [  442.528586][ T5224]        __arm64_sys_read+0x54/0x88
>>>>>>>>> [  442.533634][ T5224]        el0_svc_handler+0x170/0x240
>>>>>>>>> [  442.538768][ T5224]        el0_svc+0x8/0xc
>>>>>>>>
>>>>>>>> I believe the lock is not really needed here. We do not deallocate the
>>>>>>>> pgdat of a hotremoved node nor destroy the slab state, because existing
>>>>>>>> slabs would prevent hotremove from continuing in the first place.
>>>>>>>>
>>>>>>>> There are likely details to be checked of course but the lock just seems
>>>>>>>> bogus.
>>>>>>>
>>>>>>> Check 03afc0e25f7f ("slab: get_online_mems for
>>>>>>> kmem_cache_{create,destroy,shrink}"). It actually talks about the races during
>>>>>>> memory as well as cpu hotplug, so it might even be that the cpu_hotplug_lock
>>>>>>> removal is problematic?
>>>>>>>
>>>>>>
>>>>>> Which removal are you referring to? get_online_mems() does not mess with
>>>>>> the cpu hotplug lock (and therefore not with this patch).
>>>>>
>>>>> The one in your patch. I suspect there might be races among the whole NUMA node
>>>>> hotplug, kmem_cache_create, and show_slab_objects(). See bfc8c90139eb ("mem-
>>>>> hotplug: implement get/put_online_mems")
>>>>>
>>>>> "kmem_cache_{create,destroy,shrink} need to get a stable value of cpu/node
>>>>> online mask, because they init/destroy/access per-cpu/node kmem_cache parts,
>>>>> which can be allocated or destroyed on cpu/mem hotplug."
>>>>
>>>> I still have to grasp that code, but if the slub allocator really needs
>>>> a stable cpu mask then it should be using the explicit cpu hotplug
>>>> locking rather than relying on a side effect of the memory hotplug locking.
>>>>
>>>>> Both online_pages() and show_slab_objects() need to get a stable value of
>>>>> cpu/node online mask.
>>>>
>>>> Could you be more specific about why online_pages needs a stable cpu online
>>>> mask? I do not think that show_slab_objects is a real problem because a
>>>> potential race shouldn't be critical.
>>>
>>> build_all_zonelists()
>>> __build_all_zonelists()
>>> for_each_online_cpu(cpu)
>>>
>>
>> Two things:
>>
>> a) We currently always hold the device hotplug lock when onlining memory
>> and when onlining cpus (for CPUs at least via user space - we would have
>> to double check other call paths). So theoretically, that should guard
>> us from something like that already.
>>
>> b)
>>
>> commit 11cd8638c37f6c400cc472cc52b6eccb505aba6e
>> Author: Michal Hocko <mhocko@xxxxxxxx>
>> Date: Wed Sep 6 16:20:34 2017 -0700
>>
>> mm, page_alloc: remove stop_machine from build_all_zonelists
>>
>> Tells me:
>>
>> "Updates of the zonelists happen very seldom, basically only when a zone
>> becomes populated during memory online or when it loses all the memory
>> during offline. A racing iteration over zonelists could either miss a
>> zone or try to work on one zone twice. Both of these are something we
>> can live with occasionally because there will always be at least one
>> zone visible so we are not likely to fail allocation too easily for
>> example."
>>
>> Sounds like even if there were a race, we could live with it, if I am not
>> getting that totally wrong.
>>
>
> What's the problem you are trying to solve? Why is it more important to live
> with races than to keep the code correct?

I am trying to understand, fix, clean up and document the locking mess we
have in the memory hotplug code.

The cpu hotplug lock is one of these things nobody really has a clue why
it is still needed. It imposes a locking order (e.g., it has to be taken
before the memory hotplug lock), and we take the cpu hotplug lock even
when we do add_memory()/remove_memory(), not only when onlining pages.
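
For reference, the helpers in mm/memory_hotplug.c currently look roughly
like the following (a simplified sketch, not a verbatim copy). This is
the nesting that forces the cpu hotplug lock to be taken before the
memory hotplug lock for every hotplug operation:

void mem_hotplug_begin(void)
{
	/* cpu hotplug lock is taken first, for every memory hotplug operation */
	cpus_read_lock();
	percpu_down_write(&mem_hotplug_lock);
}

void mem_hotplug_done(void)
{
	percpu_up_write(&mem_hotplug_lock);
	cpus_read_unlock();
}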

So if we agree that we need it here, I'll add documentation - especially
to build_all_zonelists(). If we agree it can go, I'll add documentation
explaining why we don't need it in build_all_zonelists().
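
To make that concrete, the comment I have in mind for build_all_zonelists()
could read roughly like the sketch below (wording to be adjusted depending
on which way we decide):

/*
 * __build_all_zonelists() iterates the cpu online mask via
 * for_each_online_cpu().  Document here either that callers must hold
 * the cpu hotplug lock (cpus_read_lock()) for that, or why a race with
 * cpu hotplug is harmless (see commit 11cd8638c37f ("mm, page_alloc:
 * remove stop_machine from build_all_zonelists")).
 */
void build_all_zonelists(pg_data_t *pgdat);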

I am not yet convinced that we need the lock here. As I said, we do hold
the device_hotplug_lock, which all sysfs
/sys/devices/system/whatever/online modifications take, and Michal even
documented why we can live with very, very rare races (again, if they
are possible at all).
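
To illustrate why I think the device hotplug lock already serializes us:
the common sysfs "online" handler in drivers/base/core.c looks roughly
like the following (a simplified sketch), and both memory block and cpu
online/offline writes from user space funnel through it:

static ssize_t online_store(struct device *dev, struct device_attribute *attr,
			    const char *buf, size_t count)
{
	bool val;
	int ret;

	ret = strtobool(buf, &val);
	if (ret < 0)
		return ret;

	/* serializes all /sys/devices/system/.../online writes */
	ret = lock_device_hotplug_sysfs();
	if (ret)
		return ret;

	ret = val ? device_online(dev) : device_offline(dev);
	unlock_device_hotplug();
	return ret < 0 ? ret : count;
}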

I'd like to hear what Michal thinks. If we do want the cpu hotplug lock,
we can at least restrict it to the call paths (e.g., online_pages())
where the lock is really needed and document that.
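
Just to sketch that last option (illustration only, not part of this
patch): online_pages() could take the cpu hotplug lock only around the
zonelist rebuild, instead of mem_hotplug_begin() taking it for every
operation. Roughly, in mm/memory_hotplug.c (helper name made up):

static void rebuild_zonelists_stable_cpus(pg_data_t *pgdat)
{
	/* pin the cpu online mask only while zonelists are rebuilt */
	cpus_read_lock();
	build_all_zonelists(pgdat);
	cpus_read_unlock();
}

That would keep the documented reason for the cpu hotplug lock in one
place and drop the cpu_hotplug_lock -> mem_hotplug_lock ordering from
all other paths.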

--

Thanks,

David / dhildenb