RE: [RFC 1/2] kvm: host-side changes for tmem on KVM
From: Dan Magenheimer
Date: Sun Mar 18 2012 - 15:53:14 EST
> From: Akshay Karle [mailto:akshay.a.karle@xxxxxxxxx]
> Subject: RE: [RFC 1/2] kvm: host-side changes for tmem on KVM
>
> > > From: Akshay Karle [mailto:akshay.a.karle@xxxxxxxxx]
> > > Subject: Re: [RFC 1/2] kvm: host-side changes for tmem on KVM
> > >
> > > >> @@ -669,7 +670,6 @@ static struct zv_hdr *zv_create(struct x
> > > >> int chunks = (alloc_size + (CHUNK_SIZE - 1)) >> CHUNK_SHIFT;
> > > >> int ret;
> > > >>
> > > >> - BUG_ON(!irqs_disabled());
> > > >
> > > > Can you explain why?
> > >
> > > Zcache is normally used in a non-virtualized environment for page compression. Whenever
> > > a page is about to be evicted from the page cache, spin_lock_irq is held on the page mapping,
> > > and the BUG_ON(!irqs_disabled()) asserted that this was the case.
> > > But now the situation is different: we are using the zcache functions for KVM guests.
> > > If a guest page is to be evicted, interrupts should be disabled only in that
> > > guest and not in the host, so we removed the BUG_ON(!irqs_disabled()); line.
> >
> > I think irqs may still need to be disabled (in your code by the caller)
> > since the tmem code (in tmem.c) takes spinlocks with this assumption.
> > I'm not sure since I don't know what can occur with scheduling a
> > kvm guest during an interrupt... can a different vcpu of the same guest
> > be scheduled on this same host pcpu?
>
> The irqs are disabled, but only in the guest kernel, not in the host. We
> tried adding the spin_lock_irq code to the host, but that resulted
> in a host panic because the lock is taken on the entire mapping. If
> interrupts are disabled in the guest, is there a need to disable them on the
> host as well? The mappings may be different in the host and the
> guest.
The issue is that interrupts MUST be disabled in the code that is
called by zcache_put_page() and by zv_create(), because that
called code (tmem_put and xv_malloc) takes locks. This may
be difficult to reproduce, but if an interrupt occurs during
a critical region, a deadlock is possible.
You don't need to do a spin_lock_irq. You just need to do a local_irq_save
and restore in zcache_put_page if kvm_tmem_enabled. Look at zcache_get_page
as an example... the code in zcache_put_page would be something like:
{
	unsigned long flags;

	if (kvm_tmem_enabled)
		local_irq_save(flags);
	:
	:
out:
	if (kvm_tmem_enabled)
		local_irq_restore(flags);
	return ret;
}
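
To make the idea concrete, here is a minimal, self-contained sketch of the
same pattern. The names (my_pool_lock, my_put_page) and the body are made up
for illustration and are NOT the actual zcache code; kvm_tmem_enabled is the
flag from your RFC patch, declared extern here just so the sketch stands on
its own. The point is only that the conditional local_irq_save/local_irq_restore
pair prevents an interrupt from arriving while the lock is held on this cpu,
so an interrupt handler can never spin on a lock its own cpu already holds.

/*
 * Illustrative sketch only -- my_pool_lock and my_put_page are made-up
 * names, not the real zcache code.
 */
#include <linux/spinlock.h>
#include <linux/irqflags.h>
#include <linux/mm.h>

extern bool kvm_tmem_enabled;	/* flag from the RFC patch, assumed here */

static DEFINE_SPINLOCK(my_pool_lock);

static int my_put_page(struct page *page)
{
	unsigned long flags;
	int ret = -1;

	if (kvm_tmem_enabled)		/* only needed on the KVM path */
		local_irq_save(flags);	/* no irq can fire on this cpu now */

	spin_lock(&my_pool_lock);	/* safe: cannot be re-entered from irq */
	/* ... compress and store the page here ... */
	spin_unlock(&my_pool_lock);
	ret = 0;

	if (kvm_tmem_enabled)
		local_irq_restore(flags);
	return ret;
}

Using local_irq_save() here rather than spin_lock_irqsave() keeps the change
local to zcache_put_page() and leaves the existing locking inside tmem.c and
xvmalloc untouched.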
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/