[PATCH] KVM: use slowpath for cross page cached accesses

From: Radim Krčmář
Date: Wed Apr 08 2015 - 08:16:58 EST

2015-04-08 12:43+0200, Paolo Bonzini:
> On 08/04/2015 11:26, Radim Krčmář wrote:
>> 2015-04-08 10:49+0200, Paolo Bonzini:
>>> On 07/04/2015 22:34, Radim Krčmář wrote:
>>>> We dirtied only one page because writes originally couldn't span more.
>>>> Use improved syntax for '>> PAGE_SHIFT' while at it.
>>>> Fixes: 8f964525a121 ("KVM: Allow cross page reads and writes from cached translations.")
>>>> Signed-off-by: Radim Krčmář <rkrcmar@xxxxxxxxxx>
>>> Cross-page reads and writes should never get here; they have
>>> ghc->memslot set to NULL and go through the slow path in kvm_write_guest.
>> Only cross-memslot writes have NULL memslot.
> The power of wrong comments...
> Considering how kvm_gfn_to_hva_cache_init is used (one 1-byte field, two
> 4-byte fields, one 28-byte struct that is 32-byte aligned, one 32-byte
> field that is in practice cacheline-aligned), I wonder if we should just
> use ghc->memslot = NULL for cross page writes. This would bypass the bug
> you are fixing here, and avoid worries about partial writes.

Good idea, and it could make those comments right :)
(Though in general, I prefer fewer constraints on APIs ...)

Partial writes would be a pain; the copy_to_user API does not define
which bytes were left unwritten. I think the write can't fail mid-page,
which makes our implementation OK, but I still worry a bit about it.

Anyway, here's the patch:

kvm_write_guest_cached() does not mark all written pages as dirty, and
the code comments in kvm_gfn_to_hva_cache_init() talk about a NULL
memslot for cross page accesses. Fix both the easy way.

The check is '<= 1' so that a 'len = 0' cache gives the same result
anywhere in the page. (nr_pages_needed is 0 on a page boundary.)

Fixes: 8f964525a121 ("KVM: Allow cross page reads and writes from cached translations.")
Signed-off-by: Radim Krčmář <rkrcmar@xxxxxxxxxx>
virt/kvm/kvm_main.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index aadef264bed1..f3dc641f9640 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1637,8 +1637,8 @@ int kvm_gfn_to_hva_cache_init(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
ghc->generation = slots->generation;
ghc->len = len;
ghc->memslot = gfn_to_memslot(kvm, start_gfn);
- ghc->hva = gfn_to_hva_many(ghc->memslot, start_gfn, &nr_pages_avail);
- if (!kvm_is_error_hva(ghc->hva) && nr_pages_avail >= nr_pages_needed) {
+ ghc->hva = gfn_to_hva_many(ghc->memslot, start_gfn, NULL);
+ if (!kvm_is_error_hva(ghc->hva) && nr_pages_needed <= 1) {
ghc->hva += offset;
} else {