[RFC v2 1/2] powerpc/pseries/iommu: Share the per-cpu TCE page with the hypervisor.

From: Ram Pai
Date: Fri Nov 08 2019 - 18:00:57 EST


The hypervisor needs to access the contents of the page that holds the
TCE entries while it sets those entries up in the IOMMU's TCE table.

For Secure VMs this page is encrypted, so the hypervisor cannot read
valid entries from it. Share the page with the hypervisor. This
ensures that the hypervisor sees the valid entries.

Why is this safe?
The page contains only TCE entries, not any sensitive data belonging
to the Secure VM. The hypervisor has a genuine need to know the value
of the TCE entries, without which it will not be able to DMA to/from
the pages they point to. In a Secure VM the TCE entries point to pages
that are themselves shared with the hypervisor, for example pages
containing bounce buffers.

Signed-off-by: Ram Pai <linuxram@xxxxxxxxxx>
---
arch/powerpc/platforms/pseries/iommu.c | 23 ++++++++++++++++++++---
1 file changed, 20 insertions(+), 3 deletions(-)
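
To make the data flow concrete, below is a rough sketch (not part of
the patch) of how the shared per-cpu page is consumed: the staged TCE
entries are handed to the hypervisor by physical address through
H_PUT_TCE_INDIRECT, which is why the hypervisor must be able to read
the page. The helper name put_tce_batch() is made up for illustration;
plpar_tce_put_indirect() and the 512-entries-per-call limit come from
the existing pseries code.

#include <linux/types.h>
#include <asm/hvcall.h>
#include <asm/iommu.h>
#include <asm/page.h>
#include <asm/plpar_wrappers.h>

/*
 * Illustrative only: hand one batch of TCE entries, staged in the
 * per-cpu page returned by alloc_tce_page(), to the hypervisor.
 */
static long put_tce_batch(struct iommu_table *tbl, long tcenum,
			  __be64 *tcep, long npages)
{
	/* H_PUT_TCE_INDIRECT takes at most one 4K page of TCEs,
	 * i.e. 512 eight-byte entries, per call. */
	if (npages > 512)
		return H_PARAMETER;

	/* ... tcep[0..npages-1] hold the encoded TCE entries ... */

	/*
	 * The hypervisor reads the entries through the physical
	 * address of tcep; on a secure guest this only works because
	 * alloc_tce_page() shared the page via uv_share_page().
	 */
	return plpar_tce_put_indirect((u64)tbl->it_index,
				      (u64)tcenum << 12,
				      (u64)__pa(tcep), npages);
}

Since the same per-cpu page is reused for every subsequent batch,
sharing it once at allocation time (and relying on the implicit
UV_UNSHARE_ALL_PAGES at shutdown, as noted in the patch) avoids a
share/unshare ucall on every mapping.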

diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
index 8d9c2b1..a302aaa 100644
--- a/arch/powerpc/platforms/pseries/iommu.c
+++ b/arch/powerpc/platforms/pseries/iommu.c
@@ -37,6 +37,7 @@
#include <asm/mmzone.h>
#include <asm/plpar_wrappers.h>
#include <asm/svm.h>
+#include <asm/ultravisor.h>

#include "pseries.h"

@@ -179,6 +180,23 @@ static int tce_build_pSeriesLP(struct iommu_table *tbl, long tcenum,

static DEFINE_PER_CPU(__be64 *, tce_page);

+/*
+ * Allocate a tce page. If secure VM, share the page with the hypervisor.
+ *
+ * NOTE: the TCE page is shared with the hypervisor explicitly and remains
+ * shared for the lifetime of the kernel. It is implicitly unshared at kernel
+ * shutdown through a UV_UNSHARE_ALL_PAGES ucall.
+ */
+static __be64 *alloc_tce_page(void)
+{
+ __be64 *tcep = (__be64 *)__get_free_page(GFP_ATOMIC);
+
+ if (tcep && is_secure_guest())
+ uv_share_page(PHYS_PFN(__pa(tcep)), 1);
+
+ return tcep;
+}
+
static int tce_buildmulti_pSeriesLP(struct iommu_table *tbl, long tcenum,
long npages, unsigned long uaddr,
enum dma_data_direction direction,
@@ -206,8 +224,7 @@ static int tce_buildmulti_pSeriesLP(struct iommu_table *tbl, long tcenum,
* from iommu_alloc{,_sg}()
*/
if (!tcep) {
- tcep = (__be64 *)__get_free_page(GFP_ATOMIC);
- /* If allocation fails, fall back to the loop implementation */
+ tcep = alloc_tce_page();
if (!tcep) {
local_irq_restore(flags);
return tce_build_pSeriesLP(tbl, tcenum, npages, uaddr,
@@ -405,7 +422,7 @@ static int tce_setrange_multi_pSeriesLP(unsigned long start_pfn,
tcep = __this_cpu_read(tce_page);

if (!tcep) {
- tcep = (__be64 *)__get_free_page(GFP_ATOMIC);
+ tcep = alloc_tce_page();
if (!tcep) {
local_irq_enable();
return -ENOMEM;
--
1.8.3.1