You can use qemu-kvm and seabios from these branches:
https://github.com/vliaskov/qemu-kvm/commits/memhp-v4
https://github.com/vliaskov/seabios/commits/memhp-v4
Instructions on how to use DIMM/memory hotplug are here:
http://lists.gnu.org/archive/html/qemu-devel/2012-12/msg02693.html
(These patchsets are not in mainline qemu/qemu-kvm or seabios.)
E.g. the following creates a VM with 2G initial memory on 2 nodes (1GB each).
There is also an extra 1GB DIMM on each node (described by the last three
lines below):
/opt/qemu/bin/qemu-system-x86_64 -bios /opt/devel/seabios-upstream/out/bios.bin \
-enable-kvm -M pc -smp 4,maxcpus=8 -cpu host -m 2G \
-drive file=/opt/images/debian.img,if=none,id=drive-virtio-disk0,format=raw,cache=none \
-device virtio-blk-pci,bus=pci.0,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
-netdev type=tap,id=guest0,vhost=on -device virtio-net-pci,netdev=guest0 \
-vga std -monitor stdio \
-numa node,mem=1G,cpus=2,nodeid=0 -numa node,mem=0,cpus=2,nodeid=1 \
-device dimm,id=dimm0,size=1G,node=0,bus=membus.0,populated=off \
-device dimm,id=dimm1,size=1G,node=1,bus=membus.0,populated=off
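
Before hotplugging, the initial topology can be checked from the QEMU monitor.
info numa is a standard HMP command (its exact output format depends on the
QEMU version):

(qemu) info numa
(this lists the number of nodes and the memory size and CPUs assigned to each)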
After startup I hotplug dimm0 on node 0 (or dimm1 on node 1, same result):

(qemu) device_add dimm,id=dimm0,size=1G,node=0,bus=membus.0

Then I reboot the VM. The kernel boots fine without "movablecore=acpi" but
panics with this option.
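
As a side note, the hotplugged memory can usually also be brought online from
inside the guest without a reboot, assuming CONFIG_ACPI_HOTPLUG_MEMORY is
enabled. A minimal sketch; the memory32 section name is hypothetical and
depends on the guest's memory block size:

grep MemTotal /proc/meminfo
ls /sys/devices/system/memory/
echo online > /sys/devices/system/memory/memory32/state   # online one new section
grep MemTotal /proc/meminfo                               # should now be larger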
Note that this qemu/seabios does not model initial memory (-m 2G) as memory
devices. Only the extra DIMMs ("-device dimm") are modeled as separate memory
devices.
Now in the kernel, we can recognize a node (by PXM in the SRAT), but we cannot
recognize a memory device. Are you saying that if we have this entry
granularity, we can hotplug a single memory device in a node? (Perhaps there
is more than one memory device in a node.)
Yes, this is what I mean. Multiple memory devices on one node are possible in
both a real machine and a VM.
In the VM case, seabios can present different DIMM devices for any number of
nodes. Each DIMM is also given a separate SRAT entry by seabios, so when the
kernel initially parses the entries, it sees multiple ones for the same node.
(These are merged together in numa_cleanup_meminfo though.)
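
You can see the per-entry parsing in the guest's boot log, before the merge
happens. A minimal sketch; the exact message format varies by kernel version:

dmesg | grep -i SRAT
# each memory affinity entry is printed as it is parsed, so a node backed by
# several DIMMs shows up as several "SRAT: Node N PXM N <range>" lines
ls /sys/devices/system/node/node0/
# after numa_cleanup_meminfo the node exposes one merged set of memory ranges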
If so, it makes sense. But I don't think the kernel is able to recognize which
device a memory range belongs to now, and I'm not sure we can do this.
The kernel knows which memory ranges belong to each DIMM (with ACPI enabled,
each DIMM is represented by an ACPI memory device; see
drivers/acpi/acpi_memhotplug.c).
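
A quick way to see this from the guest (PNP0C80 is the standard _HID for ACPI
memory devices; the device instance names below are hypothetical):

ls /sys/bus/acpi/devices/ | grep PNP0C80
# each hotpluggable DIMM appears as one PNP0C80 ACPI device; its _CRS gives
# the memory range that acpi_memhotplug.c passes to add_memory()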