On 08/23/2018 01:47 AM, Christian König wrote:
On 22.08.2018 at 22:57, Jiandi An wrote:
On 08/22/2018 02:09 PM, Christian König wrote:
On 22.08.2018 at 20:57, Jiandi An wrote:
The path where the issue comes in is booting a guest with AMD SEV enabled while using the virtio-vga device.
Framebuffer memory needs to be accessed decrypted. Ensure the memory encryption mask is not set for the ttm framebuffer mappings.

NAK, the memory not only needs to be decrypted while the CPU accesses it, but all the time.
ttm_page_alloc.c and ttm_page_alloc_dma.c should already take care of that while mapping the pages.
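(The DMA pool path is built on dma_alloc_coherent(), and on SEV guests the DMA API is expected to hand back memory that is already mapped decrypted, so no extra handling is needed there. A minimal sketch of that assumption:)

#include <linux/dma-mapping.h>

/* Sketch only: coherent DMA memory on an SEV guest. The DMA core,
 * not the caller, is assumed to clear the encryption bit for this
 * CPU mapping, so the buffer is shareable with the host as-is. */
static void *alloc_shared_buffer(struct device *dev, dma_addr_t *dma_handle)
{
	return dma_alloc_coherent(dev, PAGE_SIZE, dma_handle, GFP_KERNEL);
}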
Regards,
Christian.
The ttm->pages don't go through ttm_populate_and_map_pages().
And that is the point where you need to jump in and fix the problem.
When a driver implements the populate() and unpopulate() callbacks themselves it needs to take care of all that handling.
Thanks for the suggestion and guidance. So you want me to register callbacks something like virtio_gpu_ttm_populate() and virtio_gpu_ttm_unpopulate() in the virtio_gpu_bo_driver, and perform the mapping by calling ttm_bo_kmap()? Then move setting memory decrypted/encrypted via set_memory_decrypted()/set_memory_encrypted() out of ttm_bo_kmap(), and do it within the virtio_gpu_ttm_populate() and virtio_gpu_ttm_unpopulate() callbacks? Then get rid of the separate virtio_gpu_vmap_fb() call the virtio_gpu driver does?
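Something like the following is what I have in mind, as a rough sketch only (these callbacks are hypothetical and the signatures assume the 4.18-era struct ttm_bo_driver hooks):

#include <linux/mem_encrypt.h>		/* sev_active() */
#include <asm/set_memory.h>
#include <drm/ttm/ttm_page_alloc.h>

static int virtio_gpu_ttm_populate(struct ttm_tt *ttm,
				   struct ttm_operation_ctx *ctx)
{
	unsigned long i;
	int ret;

	ret = ttm_pool_populate(ttm, ctx);
	if (ret)
		return ret;

	/* Hypothetical: share the freshly allocated pages with the host. */
	if (sev_active())
		for (i = 0; i < ttm->num_pages; i++)
			set_memory_decrypted((unsigned long)
					     page_address(ttm->pages[i]), 1);
	return 0;
}

static void virtio_gpu_ttm_unpopulate(struct ttm_tt *ttm)
{
	unsigned long i;

	/* Re-encrypt before the pages go back to the page allocator. */
	if (sev_active())
		for (i = 0; i < ttm->num_pages; i++)
			set_memory_encrypted((unsigned long)
					     page_address(ttm->pages[i]), 1);
	ttm_pool_unpopulate(ttm);
}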
In the kernel path, the virtio_gpu driver calls virtio_gpu_alloc_object() and goes down to ttm to
allocate pages in ttm_pool_populate(). Driver in guest then does virtio_gpu_vmap_fb() which goes
down to ttm_bo_kmap_ttm() and does a vmap() for GVA to GPA for those pages. If SEV is enabled,
decryption should be set in this GVA to GPA mapping.
That assumption is what is incorrect here.
TTM can't work with encrypted pages, so you need to set the pages as decrypted directly after calling ttm_pool_populate().
And obviously set it to encrypted again before calling ttm_pool_unpopulate().
Probably best if we add that to ttm_tt_populate()/ttm_tt_unpopulate().
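In sketch form (untested, against the 4.18-era common helper in ttm_tt.c) that would be roughly:

int ttm_tt_populate(struct ttm_tt *ttm, struct ttm_operation_ctx *ctx)
{
	unsigned long i;
	int ret;

	if (ttm->state != tt_unpopulated)
		return 0;

	ret = ttm->bdev->driver->ttm_tt_populate(ttm, ctx);

	/* Sketch: clear the C-bit once, here, so every later kmap/vmap/
	 * page-fault mapping of ttm->pages sees shared pages. */
	if (!ret && sev_active())
		for (i = 0; i < ttm->num_pages; i++)
			set_memory_decrypted((unsigned long)
					     page_address(ttm->pages[i]), 1);
	return ret;
}

with the mirror-image set_memory_encrypted() loop at the start of ttm_tt_unpopulate().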
Here, when you say ttm_tt_populate() and ttm_tt_unpopulate(), do you mean the virtio_gpu_ttm_populate() and virtio_gpu_ttm_unpopulate() that I register in virtio_gpu_bo_driver?
Regards,
Christian.
The guest then sends an object-attach command to the host via virtio_gpu_object_attach(), which stuffs the pages' physical addresses (guest physical addresses) into a scatter list and sends them to the host QEMU. The host QEMU maps those guest pages (GPA to HPA) and accesses them via HVA, while the guest writes to those pages via GVA.
virtio_gpu_alloc_object()
    virtio_gpu_object_create()
        struct virtio_gpu_object *bo = kzalloc()
            ttm_bo_init()
            ...
                ttm_tt_create()
                    bo->ttm = bdev->driver->ttm_tt_create()
                        virtio_gpu_ttm_tt_create()
                        ...
                            ttm_tt_populate()
                                ttm_pool_populate()
                                    ttm_get_pages(ttm->pages, ttm->num_pages)
virtio_gpu_vmap_fb()
    virtio_gpu_object_kmap(obj, NULL)
        ttm_bo_kmap()
            ttm_mem_io_reserve()
            ttm_bo_kmap_ttm()
                vmap()
        bo->vmap = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);  // struct virtio_gpu_object
virtio_gpu_object_attach()    <<== sends the attach backing command
    virtio_gpu_object_get_sg_table()
        if (bo->tbo.ttm->state == tt_unpopulated)
            bo->tbo.ttm->bdev->driver->ttm_tt_populate(bo->tbo.ttm, &ctx);
        bo->pages = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
        sg_alloc_table_from_pages(bo->tbo.ttm->pages)  // get from ttm->pages and put into the sg table
            __sg_alloc_table_from_pages()
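Condensed into one helper, the attach path does roughly this (a sketch, not the verbatim driver code):

#include <linux/scatterlist.h>

/* Sketch: build a scatter list over the TTM pages so their guest
 * physical addresses can be handed to the host side. */
static struct sg_table *build_backing_sgt(struct ttm_tt *ttm)
{
	struct sg_table *sgt = kmalloc(sizeof(*sgt), GFP_KERNEL);

	if (!sgt)
		return ERR_PTR(-ENOMEM);
	if (sg_alloc_table_from_pages(sgt, ttm->pages, ttm->num_pages, 0,
				      (unsigned long)ttm->num_pages << PAGE_SHIFT,
				      GFP_KERNEL)) {
		kfree(sgt);
		return ERR_PTR(-ENOMEM);
	}
	return sgt;
}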
There is a separate userspace path: for example when the display manager kicks in, virtio_gpu_alloc_object() followed by virtio_gpu_object_attach() are called through the ioctl virtio_gpu_resource_create_ioctl(). The mapping of the pages created in this path is handled in the do_page_fault() path, in the ttm_bo_vm_ops ttm_bo_vm_fault() handler.
do_page_fault()
    __do_page_fault()
        handle_mm_fault()
            ttm_bo_vm_fault()
                ttm_bo_reserve()
                    __ttm_bo_reserve()
                ttm_mem_io_reserve_vm()
                    ttm_mem_io_reserve()
                        bdev->driver->io_mem_reserve()
                            virtio_gpu_ttm_io_mem_reserve()
                                mem->bus.is_iomem = false
                                mem->mem_type == TTM_PL_TT
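(For reference, the pgprot_decrypted() used in the patch below is, on SME/SEV-capable x86 kernels, approximately:)

/* Approximate definitions from arch/x86/include/asm/pgtable_types.h:
 * __sme_clr() masks the C-bit (sme_me_mask) out of the protection
 * bits, so mappings created with this pgprot are accessed unencrypted. */
#define pgprot_encrypted(prot)	__pgprot(__sme_set(pgprot_val(prot)))
#define pgprot_decrypted(prot)	__pgprot(__sme_clr(pgprot_val(prot)))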
I originally handled setting the pages decrypted for the kernel path's GVA-to-GPA mapping in the guest in virtio_gpu_vmap_fb(), at the virtio_gpu driver layer. But for the userspace path virtio_gpu_vmap_fb() is not called; the mapping is handled in the page fault handler. So I moved it to the ttm layer.

What layer do you recommend as more appropriate for setting memory decrypted for the GVA-to-GPA mapping of those ttm->pages?
Thanks
Jiandi
Signed-off-by: Jiandi An <jiandi.an@xxxxxxx>
---
 drivers/gpu/drm/ttm/ttm_bo_util.c | 12 +++++++++++-
 drivers/gpu/drm/ttm/ttm_bo_vm.c   |  6 ++++--
 2 files changed, 15 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
index 046a6dda690a..b3f5d26f571e 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -29,6 +29,7 @@
  * Authors: Thomas Hellstrom <thellstrom-at-vmware-dot-com>
  */
 
+#include <asm/set_memory.h>
 #include <drm/ttm/ttm_bo_driver.h>
 #include <drm/ttm/ttm_placement.h>
 #include <drm/drm_vma_manager.h>
@@ -639,7 +640,11 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
 	if (ret)
 		return ret;
 	if (!bo->mem.bus.is_iomem) {
-		return ttm_bo_kmap_ttm(bo, start_page, num_pages, map);
+		ret = ttm_bo_kmap_ttm(bo, start_page, num_pages, map);
+		if (!ret && sev_active())
+			set_memory_decrypted((unsigned long) map->virtual,
+					     num_pages);
+		return ret;
 	} else {
 		offset = start_page << PAGE_SHIFT;
 		size = num_pages << PAGE_SHIFT;
@@ -661,9 +666,14 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
 		iounmap(map->virtual);
 		break;
 	case ttm_bo_map_vmap:
+		if (sev_active())
+			set_memory_encrypted((unsigned long) map->virtual,
+					     bo->num_pages);
 		vunmap(map->virtual);
 		break;
 	case ttm_bo_map_kmap:
+		if (sev_active())
+			set_memory_encrypted((unsigned long) map->virtual, 1);
 		kunmap(map->page);
 		break;
 	case ttm_bo_map_premapped:
diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
index 6fe91c1b692d..211d3549fd9f 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
@@ -249,10 +249,12 @@ static vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf)
 	 * Speculatively prefault a number of pages. Only error on
 	 * first page.
 	 */
+
+	/* Mark framebuffer pages decrypted */
+	cvma.vm_page_prot = pgprot_decrypted(cvma.vm_page_prot);
+
 	for (i = 0; i < TTM_BO_VM_NUM_PREFAULT; ++i) {
 		if (bo->mem.bus.is_iomem) {
-			/* Iomem should not be marked encrypted */
-			cvma.vm_page_prot = pgprot_decrypted(cvma.vm_page_prot);
 			pfn = ttm_bo_io_mem_pfn(bo, page_offset);
 		} else {
 			page = ttm->pages[page_offset];