Re: [PATCH v2 3/4] drm/ttm, drm/vmwgfx: Correctly support AMD memory encryption

From: Thomas Hellström (VMware)
Date: Wed Sep 04 2019 - 02:49:23 EST


On 9/4/19 1:15 AM, Andy Lutomirski wrote:

On Sep 3, 2019, at 3:15 PM, Thomas Hellström (VMware) <thomas_os@xxxxxxxxxxxx> wrote:

On 9/4/19 12:08 AM, Thomas Hellström (VMware) wrote:
On 9/3/19 11:46 PM, Andy Lutomirski wrote:
On Tue, Sep 3, 2019 at 2:05 PM Thomas Hellström (VMware)
<thomas_os@xxxxxxxxxxxx> wrote:
On 9/3/19 10:51 PM, Dave Hansen wrote:
On 9/3/19 1:36 PM, Thomas Hellström (VMware) wrote:
So the question here should really be, can we determine already at mmap
time whether backing memory will be unencrypted and adjust the *real*
vma->vm_page_prot under the mmap_sem?

Possibly, but that requires populating the buffer with memory at mmap
time rather than at first fault time.
I'm not connecting the dots.

vma->vm_page_prot is used to create a VMA's PTEs regardless of if they
are created at mmap() or fault time. If we establish a good
vma->vm_page_prot, can't we just use it forever for demand faults?
With SEV I think that we could possibly establish the encryption flags
at vma creation time. But thinking of it, it would actually break with
SME, where buffer content can be moved between encrypted system memory
and unencrypted graphics card PCI memory behind user-space's back. That
would imply killing all user-space encrypted PTEs and setting up new
ones at fault time that point to unencrypted PCI memory.
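
To illustrate (a rough sketch only; the wrapper name is made up and this
is not the actual TTM code): when a buffer moves like that, the existing
user-space mappings would have to be torn down so that the next fault
can rebuild the PTEs with the right encryption state.

#include <linux/mm.h>

/* Illustrative helper, not from the actual patch. */
static void example_zap_user_mappings(struct address_space *mapping,
				      loff_t offset, loff_t size)
{
	/*
	 * Tear down every existing CPU mapping of the buffer; the next
	 * fault rebuilds the PTEs with the new encryption state.
	 */
	unmap_mapping_range(mapping, offset, size, 1);
}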

Or, are you concerned that if an attempt is made to demand-fault page
that's incompatible with vma->vm_page_prot that we have to SEGV?

And it still requires knowledge whether the device DMA is always
unencrypted (or if SEV is active).
I may be getting mixed up on MKTME (the Intel memory encryption) and
SEV. Is SEV supported on all memory types? Page cache, hugetlbfs,
anonymous? Or just anonymous?
SEV AFAIK encrypts *all* memory except DMA memory. To handle DMA it uses
a SWIOTLB backed by unencrypted memory, and it also flips coherent DMA
memory to unencrypted (which is a very slow operation; patch 4 deals
with caching such memory).
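
For reference, the "flip to unencrypted" step is essentially this
(simplified sketch, assuming a page-aligned kernel virtual address; the
wrapper name is made up):

#include <asm/set_memory.h>

/* Illustrative wrapper, not from the actual patch. */
static int example_make_unencrypted(unsigned long vaddr, int numpages)
{
	/*
	 * Clear the SME/SEV encryption bit in the PTEs mapping this
	 * range. Slow: involves page-table changes and cache flushing.
	 */
	return set_memory_decrypted(vaddr, numpages);
}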

I'm still lost. You have some fancy VMA where the backing pages
change behind the application's back. This isn't particularly novel
-- plain old anonymous memory and plain old mapped files do this too.
Can't you just use the insert_pfn APIs and call it a day? What's so
special that you need all this magic? ISTM you should be able to
allocate memory that's addressable by the device (dma_alloc_coherent()
or whatever) and then map it into user memory just like you'd map any
other page.

I feel like I'm missing something here.
Yes, so in this case we use dma_alloc_coherent().

With SEV, that gives us unencrypted pages (pages whose linear kernel mapping is marked unencrypted). With SME that (typically) gives us encrypted pages. In both these cases, vm_get_page_prot() returns
an encrypted page protection, which lands in vma->vm_page_prot.

In the SEV case, we therefore need to modify the page protection to unencrypted. Hence we need to know whether we're running under SEV and therefore need to modify the protection. If not, the user-space PTE would incorrectly have the encryption flag set.
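
In other words, what we'd need is roughly this (illustrative sketch
only, not the actual patch; the helper name is made up):

#include <linux/mm.h>
#include <linux/mem_encrypt.h>

/* Illustrative helper, not from the actual patch. */
static pgprot_t example_dma_page_prot(pgprot_t prot)
{
	/* Strip the encryption bit when running as an SEV guest. */
	if (sev_active())
		prot = pgprot_decrypted(prot);
	return prot;
}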

I'm still confused. You got unencrypted pages with an unencrypted PFN. Why do you need to fiddle? You have a PFN, and you're inserting it with vmf_insert_pfn(). This should just work, no?

OK now I see what causes the confusion.

With SEV, the encryption state is *physically* encoded in an address bit, but from what I can tell it is not *logically* encoded in the pfn; for cpu mapping purposes it lives in the page_prot. That is, page_to_pfn() returns the same pfn whether the page is encrypted or unencrypted. Hence nobody can tell from the pfn whether the page is encrypted or unencrypted.

For device DMA address purposes, the encryption status is encoded in the dma address by the dma layer in phys_to_dma().
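
Simplified (not the exact upstream code; the function name is made up),
the idea is:

#include <linux/mem_encrypt.h>

/* Illustrative helper, not the real phys_to_dma() implementation. */
static dma_addr_t example_set_encryption_bit(dma_addr_t addr)
{
	/* __sme_set() ORs in sme_me_mask, the physical encryption bit. */
	return __sme_set(addr);
}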


There doesn't seem to be any real funny business in dma_mmap_attrs() or dma_common_mmap().

No, from what I can tell, the call to dma_pgprot() in these functions generates an incorrect page protection, since it doesn't take unencrypted coherent memory into account. I don't think anybody has used these functions with SEV yet.


But, reading this, I have more questions:

Can't you get rid of cvma by using vmf_insert_pfn_prot()?

It looks like it, although there are comments in the code about serious performance problems using VM_PFNMAP / vmf_insert_pfn() with write-combining and PAT, so that would require some serious testing with hardware I don't have. But I guess there is definitely room for improvement here. Ideally we'd like to be able to change the vma->vm_page_prot within fault().
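
Something along these lines, I guess (rough sketch only; the "decrypt"
flag and the function name are made up, not from the actual code):

#include <linux/mm.h>
#include <linux/mem_encrypt.h>

/* Illustrative fault-handler fragment, not from the actual patch. */
static vm_fault_t example_fault_insert(struct vm_fault *vmf,
				       unsigned long pfn, bool decrypt)
{
	pgprot_t prot = vmf->vma->vm_page_prot;

	if (decrypt)
		prot = pgprot_decrypted(prot);

	/* Insert the PTE with an explicit protection; no cvma needed. */
	return vmf_insert_pfn_prot(vmf->vma, vmf->address, pfn, prot);
}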


Would it make sense to add a vmf_insert_dma_page() to directly do exactly what you're trying to do?

Yes, but as a longer-term solution I would prefer a general dma_pgprot() to be exported, so that we could, in a dma-compliant way, use coherent pages with other APIs, like kmap_atomic_prot() and vmap(). That is, basically split coherent page allocation into two steps: allocation and mapping.
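
Roughly like this (sketch only; "dma_prot" below just stands in for
whatever an exported dma_pgprot()-style helper would return, and the
function name is made up):

#include <linux/vmalloc.h>
#include <linux/mm.h>

/* Illustrative helper, not from the actual patch. */
static void *example_map_coherent_pages(struct page **pages,
					unsigned int count,
					pgprot_t dma_prot)
{
	/* Map already-allocated coherent pages with a DMA-compliant prot. */
	return vmap(pages, count, VM_MAP, dma_prot);
}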


And a broader question, just because I'm still confused: why isn't the encryption bit in the PFN? The whole SEV/SME system seems like it's trying a bit too hard to be fully invisible to the kernel.

I guess you'd have to ask AMD about that. But my understanding is that encoding it in an address bit makes it trivial to do decryption / encryption on the fly for DMA devices that are not otherwise aware of it, just by handing them a special physical address. For cpu mapping purposes it might become awkward to encode it in the pfn, since pfn_to_page() and friends would need knowledge about this. Personally I think it would have made sense to track it like PAT in track_pfn_insert().

Thanks,

Thomas