Re: [PATCH] mm: don't rely on system state to detect hot-plug operations

From: Michal Hocko
Date: Thu Sep 10 2020 - 08:54:36 EST


On Thu 10-09-20 14:47:56, Michal Hocko wrote:
> On Thu 10-09-20 14:03:48, Oscar Salvador wrote:
> > On Thu, Sep 10, 2020 at 01:35:32PM +0200, Laurent Dufour wrote:
> >
> > > That points has been raised by David, quoting him here:
> > >
> > > > IIRC, ACPI can hotadd memory while SCHEDULING, this patch would break that.
> > > >
> > > > Ccing Oscar, I think he mentioned recently that this is the case with ACPI.
> > >
> > > Oscar said that he needs to investigate this further.
> >
> > I think my reply got lost.
> >
> > We can see ACPI hotplug happening during SYSTEM_SCHEDULING:
> >
> > $QEMU -enable-kvm -machine pc -smp 4,sockets=4,cores=1,threads=1 -cpu host -monitor pty \
> > -m size=$MEM,slots=255,maxmem=4294967296k \
> > -numa node,nodeid=0,cpus=0-3,mem=512 -numa node,nodeid=1,mem=512 \
> > -object memory-backend-ram,id=memdimm0,size=134217728 -device pc-dimm,node=0,memdev=memdimm0,id=dimm0,slot=0 \
> > -object memory-backend-ram,id=memdimm1,size=134217728 -device pc-dimm,node=0,memdev=memdimm1,id=dimm1,slot=1 \
> > -object memory-backend-ram,id=memdimm2,size=134217728 -device pc-dimm,node=0,memdev=memdimm2,id=dimm2,slot=2 \
> > -object memory-backend-ram,id=memdimm3,size=134217728 -device pc-dimm,node=0,memdev=memdimm3,id=dimm3,slot=3 \
> > -object memory-backend-ram,id=memdimm4,size=134217728 -device pc-dimm,node=1,memdev=memdimm4,id=dimm4,slot=4 \
> > -object memory-backend-ram,id=memdimm5,size=134217728 -device pc-dimm,node=1,memdev=memdimm5,id=dimm5,slot=5 \
> > -object memory-backend-ram,id=memdimm6,size=134217728 -device pc-dimm,node=1,memdev=memdimm6,id=dimm6,slot=6 \
> >
> > kernel: [ 0.753643] __add_memory: nid: 0 start: 0100000000 - 0108000000 (size: 134217728)
> > kernel: [ 0.756950] register_mem_sect_under_node: system_state= 1
> >
> > kernel: [ 0.760811] register_mem_sect_under_node+0x4f/0x230
> > kernel: [ 0.760811] walk_memory_blocks+0x80/0xc0
> > kernel: [ 0.760811] link_mem_sections+0x32/0x40
> > kernel: [ 0.760811] add_memory_resource+0x148/0x250
> > kernel: [ 0.760811] __add_memory+0x5b/0x90
> > kernel: [ 0.760811] acpi_memory_device_add+0x130/0x300
> > kernel: [ 0.760811] acpi_bus_attach+0x13c/0x1c0
> > kernel: [ 0.760811] acpi_bus_attach+0x60/0x1c0
> > kernel: [ 0.760811] acpi_bus_scan+0x33/0x70
> > kernel: [ 0.760811] acpi_scan_init+0xea/0x21b
> > kernel: [ 0.760811] acpi_init+0x2f1/0x33c
> > kernel: [ 0.760811] do_one_initcall+0x46/0x1f4
>
> Is there any actual use case for a configuration like this? What is
> the point of statically defining additional memory like this when the
> same can be achieved on the same command line?

Forgot to ask one more thing. Who is going to online that memory when
userspace is not running yet?
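(Once userspace is up, distributions typically online hotplugged blocks
with a udev rule along these lines; the exact rule file varies by
distribution, but the mechanism is ATTR matching on the memory block's
state:)

```
# online memory blocks as they are hotplugged (typical distro rule)
SUBSYSTEM=="memory", ACTION=="add", ATTR{state}=="offline", ATTR{state}="online"
```

The same can be done by hand via
/sys/devices/system/memory/memoryN/state, or in-kernel with
CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE / the memhp_default_state= boot
parameter, which would be the only options before userspace exists.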
--
Michal Hocko
SUSE Labs