dynamic oldmem in kdump kernel

From: Olaf Hering
Date: Thu Apr 07 2011 - 05:56:53 EST



I recently implemented kdump for pv-on-hvm Xen guests.

One issue remains:
The xen_balloon driver in the guest frees guest pages and returns them
to the hypervisor, which marks them as mmio. When such a page is read
through the /proc/vmcore interface, the hypervisor forwards the access
to the qemu-dm process. qemu-dm tries to map the page, the attempt
fails because the page is not backed by ram, and 0xff is returned.
Since the reads arrive as individual 8-byte requests, all of this
generates high load in dom0.
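
For reference, copy_oldmem_page() maps and copies each requested chunk
unconditionally, which is why every one of those 8-byte reads ends up
trapping to qemu-dm. A simplified sketch, close to but not verbatim
the 64-bit x86 implementation:

#include <linux/types.h>
#include <linux/errno.h>
#include <linux/string.h>
#include <linux/uaccess.h>
#include <linux/io.h>

ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
			 unsigned long offset, int userbuf)
{
	void *vaddr;

	if (!csize)
		return 0;

	/* The old kernel's page is mapped blindly, ballooned out or not. */
	vaddr = ioremap_cache(pfn << PAGE_SHIFT, PAGE_SIZE);
	if (!vaddr)
		return -ENOMEM;

	if (userbuf) {
		if (copy_to_user(buf, vaddr + offset, csize)) {
			iounmap(vaddr);
			return -EFAULT;
		}
	} else
		memcpy(buf, vaddr + offset, csize);

	iounmap(vaddr);
	return csize;
}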

There seems to be no way to make the crash kernel aware of the state
of individual pages in the crashed kernel; it knows nothing about
memory ballooning. Doing that bookkeeping from within the "kernel to
crash" seems error prone, and since fragmentation will only increase
over time, it would be best if the crash kernel itself queried the
state of oldmem pages.

If copy_oldmem_page() called a hook provided by the Xen pv-on-hvm
drivers to query whether the pfn to be read is actually backed by ram,
the load issue could be avoided. Unfortunately, even Xen needs a new
interface to query the state of individual hvm guest pfns for this
purpose.
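
A minimal sketch of what such a hook could look like, assuming a
registration interface in fs/proc/vmcore.c; the names
register_oldmem_pfn_is_ram() and pfn_is_ram() are hypothetical here,
not an existing kernel interface:

#include <linux/kernel.h>
#include <linux/errno.h>

/* Hypothetical: set by a Xen pv-on-hvm driver in the crash kernel. */
static int (*oldmem_pfn_is_ram)(unsigned long pfn);

int register_oldmem_pfn_is_ram(int (*fn)(unsigned long pfn))
{
	if (oldmem_pfn_is_ram)
		return -EBUSY;
	oldmem_pfn_is_ram = fn;
	return 0;
}

void unregister_oldmem_pfn_is_ram(void)
{
	oldmem_pfn_is_ram = NULL;
}

static int pfn_is_ram(unsigned long pfn)
{
	/* Without a registered hook, assume every pfn is backed by ram. */
	if (!oldmem_pfn_is_ram)
		return 1;
	return oldmem_pfn_is_ram(pfn);
}

read_from_oldmem() would then check pfn_is_ram() before calling
copy_oldmem_page() and zero-fill any page the hook rejects, so no
access to a ballooned-out page ever reaches qemu-dm.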



Another, slightly related issue is memory hotplug.
How is this currently handled for kdump? Is there code that
automatically reconfigures the kdump kernel with the new memory ranges?


Olaf
