Re: [PATCH v5 4/4] mm: Defer ZONE_DEVICE page initialization to the point where we init pgmap

From: Alexander Duyck
Date: Mon Oct 08 2018 - 17:38:14 EST

On 10/8/2018 2:01 PM, Dan Williams wrote:
> On Tue, Sep 25, 2018 at 1:29 PM Alexander Duyck
> <alexander.h.duyck@xxxxxxxxxxxxxxx> wrote:

>> The ZONE_DEVICE pages were being initialized in two locations. One was with
>> the memory_hotplug lock held and another was outside of that lock. The
>> problem with this is that it was nearly doubling the memory initialization
>> time. Instead of doing this twice, once while holding a global lock and
>> once without, I am opting to defer the initialization to the one outside of
>> the lock. This allows us to avoid serializing the overhead for memory init
>> and we can instead focus on per-node init times.
>>
>> One issue I encountered is that devm_memremap_pages and
>> hmm_devmem_pages_create were initializing only the pgmap field the same
>> way. One wasn't initializing hmm_data, and the other was initializing it to
>> a poison value. Since this is something that is exposed to the driver in
>> the case of hmm I am opting for a third option and just initializing
>> hmm_data to 0 since this is going to be exposed to unknown third party
>> drivers.
>>
>> Reviewed-by: Pavel Tatashin <pavel.tatashin@xxxxxxxxxxxxx>
>> Signed-off-by: Alexander Duyck <alexander.h.duyck@xxxxxxxxxxxxxxx>
>>
>> v4: Moved memmap_init_zone_device to below memmap_init_zone to avoid
>> merge conflicts with other changes in the kernel.
>> v5: No change

> This patch appears to cause a regression in the "" unit test
> in the ndctl test suite.

So all you had to do was run the script to see the issue? I just want to confirm there isn't any additional information needed before I try chasing this down.

> I tried to reproduce on -next with:
>
> 2302f5ee215e mm: defer ZONE_DEVICE page initialization to the point
> where we init pgmap
>
> ...but -next does not even boot for me at that commit.

Which version of -next? There are a couple of patches that are probably needed, depending on which version you are trying to boot.

> Here is a warning signature that precedes a hang with this patch
> applied against v4.19-rc6:
>
> percpu ref (blk_queue_usage_counter_release) <= 0 (-1530626) after
> switching to atomic
> WARNING: CPU: 24 PID: 7346 at lib/percpu-refcount.c:155
> CPU: 24 PID: 7346 Comm: modprobe Tainted: G OE 4.19.0-rc6+ #2458
> RIP: 0010:percpu_ref_switch_to_atomic_rcu+0x1f7/0x200
> Call Trace:
> ? percpu_ref_reinit+0x140/0x140
> RIP: 0010:lock_acquire+0xb8/0x1a0
> ? __put_page+0x55/0x150
> ? __put_page+0x55/0x150
> ? __put_page+0x55/0x150
> ? trace_hardirqs_off_thunk+0x1a/0x1c

So it looks like we are tearing down memory when this is triggered. Do we know if this happens at the end of the test, or if the test is running in parallel with anything else?