Re: [PATCH v8 1/5] mm: introduce a common interface for balloon pages mobility

From: Rafael Aquini
Date: Wed Aug 22 2012 - 22:19:34 EST


On Wed, Aug 22, 2012 at 12:33:17PM +0300, Michael S. Tsirkin wrote:
> Hmm, so this will busy wait, which is inelegant.
> We need some event IMO.

No, it does not busy wait. leak_balloon() is mutually exclusive with the
migration steps, so when one races against the other we really want
leak_balloon() to drop the mutex temporarily and let migration finish its work
of refilling the vb->pages list. Also, leak_balloon() calls tell_host(), which
can end up scheduling for each round of vb->pfns that leak_balloon() releases.
So, when remove_common() calls leak_balloon() in a loop over vb->num_pages,
that does not become a tight loop.
The scheme was already working before this series, and it will remain working
after it.
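
To make that concrete, here is a rough sketch of the flow I have in mind. It is
not the patch code verbatim; take the helper names (balloon_page_dequeue(),
set_page_pfns(), release_pages_by_pfn()) as illustrative of the interface:

/*
 * Sketch only: leak_balloon() handles one array worth of pfns at a time,
 * under vb->balloon_lock.  If migration has temporarily isolated a page,
 * the dequeue helper comes back empty and we bail out, releasing the mutex
 * so migration can put the page back on vb->pages; the caller just retries.
 */
static void leak_balloon(struct virtio_balloon *vb, size_t num)
{
        struct page *page;

        /* We can only do one array worth at a time. */
        num = min(num, ARRAY_SIZE(vb->pfns));

        mutex_lock(&vb->balloon_lock);
        for (vb->num_pfns = 0; vb->num_pfns < num;
             vb->num_pfns += VIRTIO_BALLOON_PAGES_PER_PAGE) {
                page = balloon_page_dequeue(vb->vb_dev_info);
                if (!page)
                        break;
                set_page_pfns(vb->pfns + vb->num_pfns, page);
                vb->num_pages -= VIRTIO_BALLOON_PAGES_PER_PAGE;
        }

        /* tell_host() may sleep, and that is fine -- no busy waiting here */
        if (vb->num_pfns != 0)
                tell_host(vb, vb->deflate_vq);
        mutex_unlock(&vb->balloon_lock);
        release_pages_by_pfn(vb->pfns, vb->num_pfns);
}

Each leak_balloon() call drops the mutex on its way out, so between iterations
of remove_common()'s loop migration can grab vb->balloon_lock and refill
vb->pages.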


> Also, reading num_pages without a lock here
> which seems wrong.

I'll protect it with the vb->balloon_lock mutex. That will be consistent with
the locking scheme this patch introduces for the struct virtio_balloon
elements.
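
Something along these lines (just a sketch; balloon_num_pages() below is a
made-up helper to illustrate the locked read, not something from the patch):

/*
 * Read vb->num_pages under vb->balloon_lock so the value can't race with
 * fill_balloon()/leak_balloon() or with migration updating the counter.
 */
static size_t balloon_num_pages(struct virtio_balloon *vb)
{
        size_t num;

        mutex_lock(&vb->balloon_lock);
        num = vb->num_pages;
        mutex_unlock(&vb->balloon_lock);
        return num;
}

static void remove_common(struct virtio_balloon *vb)
{
        size_t num;

        /* There might be pages left in the balloon: free them. */
        while ((num = balloon_num_pages(vb)) != 0)
                leak_balloon(vb, num);

        /* ... then reset the device and tear down the virtqueues ... */
}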


> A similar concern applies to normal leaking
> of the balloon: here we might leak less than
> required, then wait for the next config change
> event.

Just as before, the same applies here: if leak_balloon() frees less than
required, balloon() will keep calling it until the balloon target is reached.
This scheme was working before, and it will keep working after this patch.
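
For reference, this is roughly what the driver's balloon thread does today
(simplified from memory -- freezer and stats handling omitted):

/*
 * The thread wakes up on config changes and keeps inflating/deflating until
 * towards_target() says the target has been met, so a leak_balloon() round
 * that frees fewer pages than requested just means another iteration.
 */
static int balloon(void *_vballoon)
{
        struct virtio_balloon *vb = _vballoon;

        while (!kthread_should_stop()) {
                s64 diff;

                wait_event_interruptible(vb->config_change,
                                         (diff = towards_target(vb)) != 0
                                         || kthread_should_stop());
                if (diff > 0)
                        fill_balloon(vb, diff);
                else if (diff < 0)
                        leak_balloon(vb, -diff);
                update_balloon_size(vb);
        }
        return 0;
}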


> How about we signal config_change
> event when pages are back to pages_list?

I really don't know what to tell you here, but, to me, it seems like an
overcomplication that isn't directly related to this patch's purpose.
Besides, you cannot expect compaction / migration to be happening and racing
against leak_balloon() all the time just to signal events to the latter, so we
might well be creating a wait-forever condition for leak_balloon(), IMHO.

Cheers!
