On Thu, Jan 28, 2010 at 12:37:57PM +0100, Joerg Roedel wrote:
This patch changes the implementation of
kvm_iommu_map_pages to map the pages with the host page size
into the io virtual address space.
Signed-off-by: Joerg Roedel <joerg.roedel@xxxxxxx>
---
virt/kvm/iommu.c | 106 ++++++++++++++++++++++++++++++++++++++++++-----------
1 files changed, 84 insertions(+), 22 deletions(-)
diff --git a/virt/kvm/iommu.c b/virt/kvm/iommu.c
index 65a5143..92a434d 100644
--- a/virt/kvm/iommu.c
+++ b/virt/kvm/iommu.c
@@ -32,12 +32,27 @@ static int kvm_iommu_unmap_memslots(struct kvm *kvm);
 static void kvm_iommu_put_pages(struct kvm *kvm,
 				gfn_t base_gfn, unsigned long npages);
 
+static pfn_t kvm_pin_pages(struct kvm *kvm, struct kvm_memory_slot *slot,
+			   gfn_t gfn, unsigned long size)
+{
+	gfn_t end_gfn;
+	pfn_t pfn;
+
+	pfn = gfn_to_pfn_memslot(kvm, slot, gfn);
If gfn_to_pfn_memslot returns pfn of bad_page, you might create a
large iommu translation for it?