NO! Frontswap on Xen+tmem never *never* _never_ NEVER results
in host swapping.

That's a bug. You're giving the guest memory without the means to take
it back. The result is that you have to _undercommit_ your memory
resources.

Consider a machine running a guest, with most of its memory free. You
give the memory via frontswap to the guest. The guest happily swaps to
frontswap, and uses the freed memory for something unswappable, like
mlock()ed memory or hugetlbfs.

Now the second node dies and you need memory to migrate your guests
into. But you can't, and the hypervisor is at the mercy of the guest
for getting its memory back; and the guest can't do it (at least not
quickly).

Simple policies must exist and must be enforced by the hypervisor to
ensure this doesn't happen. Xen+tmem provides these policies and
enforces them. And it enforces them very _dynamically_ to constantly
optimize RAM utilization across multiple guests, each with dynamically
varying RAM usage. Frontswap fits nicely into this framework.
Host swapping is evil. Host swapping is the root of most of the bad
reputation that memory overcommit has gotten from VMware customers.
Host swapping can't be avoided with some memory overcommit
technologies (such as page sharing), but frontswap on Xen+tmem CAN
and DOES avoid it.

In this case the guest expects that swapped-out memory will be slow
(since it was freed via the swap API; it will be slow if the host
happened to run out of tmem). So by storing this memory on disk you
aren't reducing performance beyond what you promised to the guest.
Swapping guest RAM will indeed cause a performance hit, but sometimes
you need to do it.

Huge performance hits that are completely inexplicable to a user give
virtualization a bad reputation. If the user (i.e. the guest
administrator, not the host administrator) can at least see "Hmmm...
I'm doing a lot of swapping, guess I'd better pay for more (virtual)
RAM", then the user objections are greatly reduced.
So, to summarize:

1) You agreed that a synchronous interface for frontswap makes sense
for swap-to-in-kernel-compressed-RAM because it is truly swapping to
RAM.

Because the interface is internal to the kernel.

Xen+tmem uses the SAME internal kernel interface. The Xen-specific
code which performs the Xen-specific stuff (hypercalls) is only in
the Xen-specific directory.
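For reference, the "synchronous interface" at issue in 1) is nothing
more exotic than a store/load hook pair that either succeeds or fails
before returning. A minimal user-space sketch of the
swap-to-compressed-RAM style of backend (names, sizes and the flat
table are illustrative, compression is omitted, and none of this is
the actual frontswap or zcache code):

#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096
#define MAX_SLOTS 1024                /* pretend in-RAM pool capacity */

struct ram_slot {
    int      used;
    uint64_t offset;                  /* swap offset used as the key */
    uint8_t  data[PAGE_SIZE];
};

static struct ram_slot pool[MAX_SLOTS];

/* Synchronous store: the copy happens here; 0 on success, -1 if the
 * pool is full, in which case the caller writes the page to the real
 * swap device instead. No callback, no completion interrupt. */
int ram_store_page(uint64_t offset, const void *page)
{
    for (int i = 0; i < MAX_SLOTS; i++) {
        if (!pool[i].used) {
            pool[i].used = 1;
            pool[i].offset = offset;
            memcpy(pool[i].data, page, PAGE_SIZE);
            return 0;
        }
    }
    return -1;
}

/* Synchronous load: fills 'page' and returns 0, or -1 if absent. */
int ram_load_page(uint64_t offset, void *page)
{
    for (int i = 0; i < MAX_SLOTS; i++) {
        if (pool[i].used && pool[i].offset == offset) {
            memcpy(page, pool[i].data, PAGE_SIZE);
            return 0;
        }
    }
    return -1;
}

The argument is only about this calling convention: by the time the
store returns, the caller knows whether the page was taken.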
2) You have pointed out that an asynchronous interface for frontswap
makes more sense for KVM than a synchronous interface, because KVM
does host swapping.

kvm's host swapping is unrelated. Host swapping swaps guest-owned
memory; that's not what we want here. We want to cache guest swap in
RAM, and that's easily done by having a virtual disk cached in main
memory. We're simply presenting a disk with a large write-back cache
to the guest.
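With KVM/qemu that can be as simple as backing the guest's swap disk
with an image file that goes through the host page cache; the image
name below is only an illustration:

    qemu-system-x86_64 ... -drive file=guest-swap.img,if=virtio,cache=writeback

The host's free RAM then acts as the write-back cache and is
reclaimed whenever the host needs that memory for something else.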
The missing part again is dynamicity. How large is the virtual disk?
Or are you proposing that disks can dramatically vary in size across
time? I suspect that would be a very big patch.

And you're talking about a disk that doesn't have all the overhead of
blockio, right?
You could just as easily cache a block device in free RAM with Xen.
Have a tmem domain behave as the backend for your swap device. Use
ballooning to force tmem to disk, or to allow more cache when memory
is free.

A block device of what size? Again, I don't think this will be
dynamic enough.
Voila: you no longer depend on guests (you depend on the tmem domain,
but that's part of the host code), you don't need guest modifications,
so it works across a wider range of guests.

Ummm... no guest modifications, yet this special disk does everything
you've described above (and, to meet my dynamicity requirements,
varies in size as well)?
BUT frontswap on Xen+tmem always truly swaps to RAM.

AND that's a problem because it puts the hypervisor at the mercy of
the guest.

As I described in a separate reply, this is simply not true.
So there are two users of frontswap for which the synchronous
interface makes sense.

I believe there is only one. See below.

The problem is not the complexity of the patch itself. It's the fact
that it introduces a new external API. If we refactor swapping, that
stands in the way.

Could you please explicitly identify what you are referring to as a
new external API? The part that is different from the "only one"
internal user?
a synchronous single-page DMA API is a bad idea. Look at the Xen
network and block code: while they eventually do a memory copy for
every page they see, they try to batch multiple pages into an exit,
and make the response asynchronous.

As noted VERY early in this thread, if/when it makes sense, frontswap
can do exactly the same thing by adding a buffering layer invisible
to the internal kernel interfaces.
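As a sketch of what such an invisible buffering layer could look like
(the helper names are hypothetical; the kernel-facing call stays
page-at-a-time and synchronous, while the backend is handed pages a
batch at a time):

#include <stddef.h>
#include <string.h>

#define PAGE_SIZE   4096
#define BATCH_PAGES 16

static unsigned char staging[BATCH_PAGES][PAGE_SIZE];
static size_t        staged;

/* Assumed backend entry point that takes many pages per
 * exit/hypercall; its existence is the whole point of batching. */
int backend_put_batch(const void *pages, size_t count);

static int flush_staging(void)
{
    int ret = 0;
    if (staged) {
        /* A failed late flush would need real error handling;
         * that is glossed over in this sketch. */
        ret = backend_put_batch(staging, staged);
        staged = 0;
    }
    return ret;
}

/* Kernel-facing hook: copy the page into the staging buffer and
 * return immediately, so the original page is released right away;
 * the expensive transition to the backend is amortized over
 * BATCH_PAGES pages. */
int buffered_store_page(const void *page)
{
    memcpy(staging[staged++], page, PAGE_SIZE);
    if (staged == BATCH_PAGES)
        return flush_staging();
    return 0;
}

The cost is one extra copy per page while a batch fills; nothing the
rest of the kernel sees changes.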
As an example, with a batched API you could save/restore the fpu
context and use sse for copying the memory, while with a single page
API you'd probably lose out. Synchronous DMA, even for emulated
hardware, is out of place in 2010.

I think we agree that DMA makes sense when there is a lot of data to
copy and makes little sense when there is only a little (e.g. a
single page) to copy. So I guess we need to understand what the
tradeoff is. Do you have any idea what the breakeven point is for
your favorite DMA engine for the amount of data copied vs:
1) locking the memory pages
2) programming the DMA engine
3) responding to the interrupt from the DMA engine
And the simple act of waiting to collect enough pages to "batch"
means none of those pages can be used until the last page is collected
and the DMA engine is programmed and the DMA is complete.
A page-at-a-time interface synchronously releases the pages
for other (presumably more important) needs and thus, when
memory is under extreme pressure, also reduces the probability
of a (guest) OOM.
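One way to put the breakeven question above into concrete terms (all
costs are inputs to be measured for a given machine and DMA engine;
the numbers in main() are placeholders, not measurements):

#include <stdio.h>

/* CPU path:  n * cpu_copy_ns          (just memcpy each page)
 * DMA path:  n * lock_page_ns + program_ns + irq_ns
 * DMA wins once n exceeds the value returned below, or never if
 * locking a page already costs more than copying it. */
static long dma_breakeven_pages(long cpu_copy_ns, long lock_page_ns,
                                long program_ns, long irq_ns)
{
    long per_page_saving = cpu_copy_ns - lock_page_ns;

    if (per_page_saving <= 0)
        return -1;                  /* DMA never pays off */

    /* Smallest n with n * per_page_saving > program_ns + irq_ns. */
    return (program_ns + irq_ns) / per_page_saving + 1;
}

int main(void)
{
    long n = dma_breakeven_pages(1000 /* copy one 4K page */,
                                 300  /* lock one page    */,
                                 2000 /* program engine   */,
                                 4000 /* take interrupt   */);
    printf("breakeven at %ld pages (for these assumed costs)\n", n);
    return 0;
}

And this model charges nothing for the wait described above (the
latency of holding the first page of a batch until the last page
arrives and the DMA completes), which only pushes the breakeven
further out.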