> slots would be one implementation, if you can think of others then you'd
> add them.

I'm more interested in *how* you'd add them than in "if" we would add
them.  What I am getting at are the logistics of such a beast.

For instance, would I have /dev/slots-vas with ioctls for adding slots,
and /dev/foo-vas for adding foos?  And would each one instantiate a
different vas_struct object with its own vas_struct->ops?  Or were you
thinking of something different?
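To make the question concrete, here is the general shape I have in mind.
This is purely a hypothetical sketch; every name below (vas_struct,
vas_ops, and so on) is made up for illustration and does not exist in
any patch:

struct vas_struct;

struct vas_ops {
        /* the ioctl backend behind /dev/slots-vas, /dev/foo-vas, ... */
        int  (*add_region)(struct vas_struct *vas, u64 base, u64 size,
                           void *backing);
        void (*del_region)(struct vas_struct *vas, u64 base);
        unsigned long (*gpa_to_hva)(struct vas_struct *vas, u64 gpa);
        void (*release)(struct vas_struct *vas);
};

struct vas_struct {
        const struct vas_ops *ops;
        void                 *priv;     /* slot table, foo table, ... */
};

Each chardev would instantiate its own vas_struct with its own ops, and
the consumer (kvm.ko, vbus, etc.) would only ever see the generic object.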
> If you can't, I think it indicates that the whole thing isn't necessary
> and we're better off with slots and virtual memory.

I'm not sure if we are talking about the same thing yet, but if we are,
there are uses of a generalized interface outside of slots/virtual
memory (Ira's physical box being a good example).

In any case, I think the best approach is what I already proposed:
KVM's arrangement of memory is going to tend to be KVM specific, and
what better place to implement the interface than close to the kvm.ko
core?
> The only thing missing is dma, which you don't deal with anyway.

Afaict I do support dma in the generalized vbus::memctx, though I do not
use it on anything related to KVM or xinterface.  Can you elaborate on
the problem here?  Does the SG interface in 4/4 help get us closer to
what you envision?
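For reference, the kind of abstraction I mean by a generalized memctx is
roughly sketched below.  The names and signatures here are illustrative
only (this is not the actual vbus API); struct sg_table is just the
stock kernel scatterlist container:

#include <linux/scatterlist.h>

struct memctx;

struct memctx_ops {
        unsigned long (*copy_to)(struct memctx *ctx, void *to,
                                 const void *from, unsigned long len);
        unsigned long (*copy_from)(struct memctx *ctx, void *to,
                                   const void *from, unsigned long len);
        /* dma support: map a guest region and hand back an sg table */
        int  (*sg_map)(struct memctx *ctx, unsigned long gpa,
                       unsigned long len, struct sg_table *sgt);
        void (*sg_unmap)(struct memctx *ctx, struct sg_table *sgt);
        void (*release)(struct memctx *ctx);
};

struct memctx {
        const struct memctx_ops *ops;
        void                    *priv;  /* backend-specific state */
};

The point is that a kvm/xinterface backend and something like Ira's
physical-box backend could both sit behind the same ops table.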
> You'd have to copy the entire range since you don't know what the guest
> might put there.  I guess it's acceptable for small areas.

The vmap is presumably part of an ABI between guest and host, so the
host should always know what structure is present within the region, and
what is relevant from within that structure to migrate once that state
is "frozen".

These regions (for vbus, anyway) equate to things like virtqueue
metadata, and presumably the same problem exists for virtio-net in
userspace as it does here, since that is another form of a "vmap".  So
whatever solution works for virtio-net migrating its virtqueues in
userspace should be close to what will work here.  The primary
difference is the location of the serializer.
> rmb()s are only needed if an external agent can issue writes, otherwise
> you'd need one after every statement.

I was following lessons learned here:

http://lkml.org/lkml/2009/7/7/175

Perhaps mb() or barrier() are more appropriate than rmb()?  I'm CC'ing
David Howells in case he has more insight.
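For context, my understanding of the distinction is roughly the
following.  This is only a kernel-style sketch; the ring structure and
field names are made up here and are not from the patch:

struct ring {
        unsigned int  head;             /* published by the remote writer */
        unsigned int  tail;             /* owned by the local reader      */
        unsigned char data[256];
};

static int ring_consume(struct ring *r, unsigned char *out)
{
        if (r->tail == r->head)
                return -1;              /* empty */

        /*
         * The external agent wrote data[] *before* publishing 'head',
         * so the reader needs rmb() between loading the index and
         * loading the payload.  If only this CPU ever wrote the data,
         * barrier() (a pure compiler barrier) would suffice.
         */
        rmb();

        *out = r->data[r->tail % 256];
        r->tail++;
        return 0;
}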
> A simple per-vcpu cache (in struct kvm_vcpu) is likely to give better
> results.

per-vcpu will not work well here, unfortunately, since this is an
external interface mechanism.  The callers will generally be from a
kthread or some other non-vcpu related context.  Even if we could figure
out a vcpu to use as a basis, we would require some kind of
heavier-weight synchronization, which would not be as desirable.
Therefore, I opted to go per-cpu and use the presumably lighter-weight
get_cpu()/put_cpu() instead.

> This just assumes a low context switch rate.

It primarily assumes a low _migration_ rate, since you do not typically
have two contexts on the same cpu pounding on the memslots.  And even if
you did, there's a good chance of locality between the threads, since
the IO activity is likely related.  For the odd times where locality
fails to yield a hit, the time-slice or migration rate should be
sufficiently infrequent to still yield hit rates in the high 90s when it
matters.  For low volume, the lookup is in the noise, so I am not as
concerned.

IOW: where the lookup hurts the most is walking an SG list, since each
pointer is usually within the same slot.  For this case, at least, the
cache helps immensely, at least according to profiles.
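For illustration, the shape of the per-cpu cache being defended here is
roughly the following.  This is a sketch with made-up names, not the
actual patch; a real version must also invalidate the cached pointer
against memslot updates (e.g. under slots_lock):

#include <linux/kvm_host.h>
#include <linux/percpu.h>

struct slot_cache {
        struct kvm_memory_slot *slot;   /* last slot that hit */
};

static DEFINE_PER_CPU(struct slot_cache, slot_cache);

static struct kvm_memory_slot *
gfn_to_memslot_cached(struct kvm *kvm, gfn_t gfn)
{
        struct slot_cache *cache = &get_cpu_var(slot_cache);
        struct kvm_memory_slot *slot = cache->slot;

        if (!slot || gfn < slot->base_gfn ||
            gfn >= slot->base_gfn + slot->npages) {
                /* miss: do the normal walk and remember the result */
                slot = gfn_to_memslot(kvm, gfn);
                cache->slot = slot;
        }

        put_cpu_var(slot_cache);
        return slot;
}

When walking an SG list, nearly every lookup after the first becomes a
simple range check against the cached slot.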
> How about a gfn_to_pfn_cached(..., struct gfn_to_pfn_cache *cache)?
> Each user can place it in a natural place.

Sounds good.  I will incorporate this into the split patch.
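Something like the following is how I read the suggestion.  Purely a
sketch: the real helper would also need to manage page references and
invalidate the cache when memslots change, and none of this is from an
existing patch:

#include <linux/kvm_host.h>

struct gfn_to_pfn_cache {
        gfn_t gfn;
        pfn_t pfn;
        bool  valid;
};

static pfn_t gfn_to_pfn_cached(struct kvm *kvm, gfn_t gfn,
                               struct gfn_to_pfn_cache *cache)
{
        if (cache->valid && cache->gfn == gfn)
                return cache->pfn;

        cache->pfn   = gfn_to_pfn(kvm, gfn);
        cache->gfn   = gfn;
        cache->valid = !is_error_pfn(cache->pfn);

        return cache->pfn;
}

Each user (vcpu, xinterface vmap, etc.) embeds the cache struct wherever
is natural for its own locality.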
> > > > +static unsigned long
> > > > +xinterface_copy_to(struct kvm_xinterface *intf, unsigned long gpa,
> > > > +                   const void *src, unsigned long n)
> > > > +{
> > > > +        struct _xinterface *_intf = to_intf(intf);
> > > > +        unsigned long dst;
> > > > +        bool kthread = !current->mm;
> > > > +
> > > > +        down_read(&_intf->kvm->slots_lock);
> > > > +
> > > > +        dst = gpa_to_hva(_intf, gpa);
> > > > +        if (!dst)
> > > > +                goto out;
> > > > +
> > > > +        if (kthread)
> > > > +                use_mm(_intf->mm);
> > > > +
> > > > +        if (kthread || _intf->mm == current->mm)
> > > > +                n = copy_to_user((void *)dst, src, n);
> > > > +        else
> > > > +                n = _slow_copy_to_user(_intf, dst, src, n);
> > >
> > > Can't you switch the mm temporarily instead of this?
> >
> > I am not an mm expert, but iiuc you cannot call switch_to() from
> > anything other than kthread context.  That's what the doc says, anyway.
>
> Still, why can't you switch temporarily?

That's actually what I do for the fast-path (use_mm() does a switch_to()
internally).

The slow-path is only there for completeness, for when switching is not
possible (such as if called with an mm already active, i.e.
process context).

> > In practice, however, this doesn't happen.  Virtually 100% of the
> > calls in vbus hit the fast-path here, and I suspect most xinterface
> > clients would find the same conditions as well.
>
> So you have 100% untested code here.

Actually, no.  Before Michael enlightened me recently regarding
switch_to/use_mm, the '_slow_xx' functions were my _only_ path.  So they
have indeed had multiple months (and multiple GB) of testing, as it
turns out.  I only recently optimized them away to "backup" duty.
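For readers following along, the fast-path pattern being described (a
kthread temporarily adopting the guest process's mm so a plain
copy_to_user() works) looks roughly like this.  A sketch only, assuming
use_mm()/unuse_mm() are usable here as discussed above; the helper name
and the 'intf_mm'/'hva' parameters are made up and are not the actual
xinterface code:

#include <linux/sched.h>
#include <linux/uaccess.h>
#include <linux/mmu_context.h>          /* use_mm()/unuse_mm() */

static unsigned long copy_to_guest(struct mm_struct *intf_mm,
                                   void __user *hva,
                                   const void *src, unsigned long n)
{
        unsigned long left;

        if (current->mm)        /* process context: cannot borrow an mm */
                return n;       /* caller falls back to a slow path     */

        use_mm(intf_mm);        /* adopt the guest's user address space */
        left = copy_to_user(hva, src, n);
        unuse_mm(intf_mm);      /* and drop it again                    */

        return left;            /* bytes *not* copied, as usual         */
}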