[PATCH 3/7] Intel pci: Don't cache iova above 32bit
From: Mike Travis
Date: Sat May 28 2011 - 14:16:51 EST
Mike Travis and Mike Habeck reported an issue where iova allocation
would return a range that was larger than a device's dma mask.
https://lkml.org/lkml/2011/3/29/423
The dmar initialization code will reserve all PCI MMIO regions and copy
those reservations into a domain specific iova tree. It is possible for
one of those regions to be above the dma mask of a device. It is typical
to allocate iovas with a 32bit mask (even though the device's dma mask may
be larger) and cache the result until the lower 32bit address space is
exhausted. Freeing an iova range whose start is >= the cached iova in the
lower 32bit range, while an iova still exists above the 32bit range, corrupts
the cache by pointing the cached node at that region above 32bit.
If that region is also larger than the device's dma mask, a subsequent
allocation will return an unusable iova and cause dma failure.
Simply don't cache an iova that is above the 32bit caching boundary.
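
To make the failure mode concrete, here is a small self-contained toy model
(illustration only, not the kernel code): the rbtree is replaced by a sorted
array, the cached32_node hint by an index, and every name and constant below
is invented for the example; only the shape of the delete-update logic
mirrors drivers/pci/iova.c.

/*
 * Toy model of a stale 32bit cache hint.  NOT the kernel code.
 */
#include <stdio.h>

#define DMA_32BIT_PFN	0xfffffUL	/* pfn of the 4GiB boundary, assuming 4K pages */

struct range { unsigned long lo, hi; };

static struct range tree[] = {
	{ 0x10000,  0x10fff  },		/* ordinary iova below 32bit */
	{ 0x80000,  0x80fff  },		/* last iova below 32bit, about to be freed */
	{ 0x200000, 0x200fff },		/* reserved PCI MMIO region above 32bit */
};
static int cached32 = 1;		/* hint: highest entry below 32bit */

int main(void)
{
	int freed = 1;

	/* pre-patch behaviour: freeing the entry the hint points at just
	 * advances the hint to the next entry, with no boundary check */
	if (tree[freed].lo >= tree[cached32].lo)
		cached32 = freed + 1;	/* now points above 32bit */

	/* a later 32bit-limited allocation starts from the hint and carves
	 * space just below it -- above the device's dma mask */
	unsigned long candidate = tree[cached32].lo - 1;

	printf("candidate pfn 0x%lx usable with a 32bit mask? %s\n",
	       candidate, candidate <= DMA_32BIT_PFN ? "yes" : "no, dma fails");
	return 0;
}

Clearing the cache instead, as the hunk below does, is safe because the
lookup side (__get_cached_rbnode) treats a NULL cached32_node as "no hint"
and falls back to searching from the top of the tree.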
From: Chris Wright <chrisw@xxxxxxxxxxxx>
Reported-by: Mike Travis <travis@xxxxxxx>
Reported-by: Mike Habeck <habeck@xxxxxxx>
Cc: David Woodhouse <dwmw2@xxxxxxxxxxxxx>
Cc: stable@xxxxxxxxxx
Acked-by: Mike Travis <travis@xxxxxxx>
Tested-by: Mike Habeck <habeck@xxxxxxx>
Signed-off-by: Chris Wright <chrisw@xxxxxxxxxxxx>
---
v3: rb_next() can return NULL, found when testing on my hw
David, Mike Travis will collect and resubmit the full series when he's back.
drivers/pci/iova.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
--- linux.orig/drivers/pci/iova.c
+++ linux/drivers/pci/iova.c
@@ -63,8 +63,16 @@ __cached_rbnode_delete_update(struct iov
 	curr = iovad->cached32_node;
 	cached_iova = container_of(curr, struct iova, node);
 
-	if (free->pfn_lo >= cached_iova->pfn_lo)
-		iovad->cached32_node = rb_next(&free->node);
+	if (free->pfn_lo >= cached_iova->pfn_lo) {
+		struct rb_node *node = rb_next(&free->node);
+		struct iova *iova = container_of(node, struct iova, node);
+
+		/* only cache if it's below 32bit pfn */
+		if (node && iova->pfn_lo < iovad->dma_32bit_pfn)
+			iovad->cached32_node = node;
+		else
+			iovad->cached32_node = NULL;
+	}
 }
 
 /* Computes the padding size required, to make the
--