Re: [Xen-devel] crash tool - problem with new Xen linear virtual mapped sparse p2m list
From: Jan Beulich
Date: Tue Nov 24 2015 - 08:55:47 EST
>>> On 24.11.15 at 14:46, <ian.campbell@xxxxxxxxxx> wrote:
> On Tue, 2015-11-24 at 10:35 +0000, Andrew Cooper wrote:
>> On 24/11/15 10:17, Petr Tesarik wrote:
>> > On Tue, 24 Nov 2015 10:09:01 +0000
>> > David Vrabel <david.vrabel@xxxxxxxxxx> wrote:
>> >
>> > > On 24/11/15 09:55, Malcolm Crossley wrote:
>> > > > On 24/11/15 08:59, Jan Beulich wrote:
>> > > > > > > > On 24.11.15 at 07:55, <JGross@xxxxxxxx> wrote:
>> > > > > > What about:
>> > > > > >
>> > > > > > 4) Instead of relying on the kernel-maintained p2m list for m2p
>> > > > > > conversion, use the hypervisor-maintained m2p list, which should
>> > > > > > be available in the dump as well. This is the way the live kernel
>> > > > > > works, so mimic it during crash dump analysis.
>> > > > > I fully agree; I have to admit that looking at the p2m when doing
>> > > > > page table walks for a PV Dom0 (having all machine addresses in
>> > > > > page table entries) seems kind of backwards. (But I say this
>> > > > > knowing nothing about the tool.)
>> > > > >
>> > > > I don't think we can reliably use the m2p for PV domains, because
>> > > > PV domains don't always issue an m2p update hypercall when they
>> > > > change their p2m mapping.
>> > > This only applies to foreign pages, which won't be very interesting
>> > > to a crash tool.
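
For context, the "m2p update hypercall" referred to here is an MMU_MACHPHYS_UPDATE
request issued through HYPERVISOR_mmu_update(); below is a minimal sketch per the
public xen.h ABI (kernel-side hypercall wrappers assumed, not crash code):

/*
 * Sketch of the "m2p update hypercall" mentioned above, per the public
 * Xen ABI (xen/include/public/xen.h): an MMU_MACHPHYS_UPDATE request
 * asks the hypervisor to set machine_to_phys_mapping[mfn] = pfn.
 */
#include <linux/types.h>
#include <asm/page.h>               /* PAGE_SHIFT                              */
#include <xen/interface/xen.h>      /* struct mmu_update, MMU_MACHPHYS_UPDATE  */
#include <asm/xen/hypercall.h>      /* HYPERVISOR_mmu_update()                 */

static int m2p_update(unsigned long mfn, unsigned long pfn)
{
    struct mmu_update u = {
        /* Low two bits of .ptr select the request type. */
        .ptr = ((u64)mfn << PAGE_SHIFT) | MMU_MACHPHYS_UPDATE,
        .val = pfn,
    };

    return HYPERVISOR_mmu_update(&u, 1, NULL, DOMID_SELF);
}

For a foreign page the MFN is owned by another domain, so the mapping guest
never issues this update; that is the case referred to above.
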
>> > True. I think the main reason crash hasn't done this is that it cannot
>> > find the hypervisor-maintained m2p list. It should be sufficient to add
>> > some more fields to XEN_VMCOREINFO, so that crash can locate the
>> > mapping in the dump.
>>
>> The M2P lives at an ABI-specified location in all virtual address spaces
>> for PV guests.
>>
>> Either 0xF5800000 (32-bit) or 0xFFFF800000000000 (64-bit), depending on
>> bitness.
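
For illustration, a minimal sketch of an M2P-based mfn-to-pfn lookup as a crash
tool could do it against a PV dump; the constants correspond to
MACH2PHYS_VIRT_START in the public xen-x86_32.h/xen-x86_64.h headers, and
read_dump_ulong() is a hypothetical dump accessor:

/*
 * Illustration only: mfn -> pfn translation against the ABI-specified
 * M2P table in a PV dump.  read_dump_ulong() is a hypothetical helper
 * that reads one guest-sized long from a guest virtual address in the dump.
 */
#include <stdint.h>

#define M2P_VIRT_START_32  0xF5800000ULL          /* 32-bit PV guests */
#define M2P_VIRT_START_64  0xFFFF800000000000ULL  /* 64-bit PV guests */

/* Hypothetical dump accessor: reads 'size' bytes (4 or 8) at guest VA. */
extern uint64_t read_dump_ulong(uint64_t vaddr, unsigned int size);

static uint64_t m2p_mfn_to_pfn(uint64_t mfn, int is_64bit)
{
    uint64_t base = is_64bit ? M2P_VIRT_START_64 : M2P_VIRT_START_32;
    unsigned int entry_size = is_64bit ? 8 : 4;

    /* machine_to_phys_mapping[] is a flat array of longs indexed by MFN. */
    return read_dump_ulong(base + mfn * entry_size, entry_size);
}
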
>
> In theory it can actually be dynamic. XENMEM_machphys_mapping is the way to
> get at it (for both bitnesses).
>
> For 64-bit guests I think that is mostly an "in theory" thing; in
> practice it has never actually changed.
>
> For the 32-bit guest case I don't recall whether it is just a 32on32 vs
> 32on64 thing, whether something (either guest or toolstack) gets to pick
> more dynamically, or even whether it is a dom0 vs domU thing.
It's only for 32-on-64 guests that this range can change (and there it's
the 64-bit address that crash would care about anyway).
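
For completeness, XENMEM_machphys_mapping (mentioned above) is how a guest
retrieves the actual range at run time; a minimal sketch using the definitions
from the public memory.h:

/*
 * Sketch: querying the actual M2P range with XENMEM_machphys_mapping.
 * Per the discussion above, for 64-bit PV guests this simply reports the
 * fixed range quoted earlier; only 32-on-64 can differ.
 */
#include <linux/printk.h>
#include <xen/interface/memory.h>   /* XENMEM_machphys_mapping, xen_machphys_mapping */
#include <asm/xen/hypercall.h>      /* HYPERVISOR_memory_op()                         */

static void report_m2p_range(void)
{
    struct xen_machphys_mapping map;

    if (HYPERVISOR_memory_op(XENMEM_machphys_mapping, &map) == 0)
        pr_info("M2P at 0x%lx-0x%lx, max MFN 0x%lx\n",
                (unsigned long)map.v_start,
                (unsigned long)map.v_end,
                (unsigned long)map.max_mfn);
}
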
Jan