Re: [PATCH] dma-direct: Force unencrypted DMA under SME for certain DMA masks

From: Kirill A. Shutemov
Date: Wed Jul 24 2019 - 14:11:48 EST


On Wed, Jul 24, 2019 at 05:34:26PM +0000, Lendacky, Thomas wrote:
> On 7/24/19 12:06 PM, Robin Murphy wrote:
> > On 24/07/2019 17:42, Lendacky, Thomas wrote:
> >> On 7/24/19 10:55 AM, Kirill A. Shutemov wrote:
> >>> On Wed, Jul 10, 2019 at 07:01:19PM +0000, Lendacky, Thomas wrote:
> >>>> @@ -351,6 +355,32 @@ bool sev_active(void)
> >>>>   }
> >>>>   EXPORT_SYMBOL(sev_active);
> >>>> +/* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
> >>>> +bool force_dma_unencrypted(struct device *dev)
> >>>> +{
> >>>> +    /*
> >>>> +     * For SEV, all DMA must be to unencrypted addresses.
> >>>> +     */
> >>>> +    if (sev_active())
> >>>> +        return true;
> >>>> +
> >>>> +    /*
> >>>> +     * For SME, all DMA must be to unencrypted addresses if the
> >>>> +     * device does not support DMA to addresses that include the
> >>>> +     * encryption mask.
> >>>> +     */
> >>>> +    if (sme_active()) {
> >>>> +        u64 dma_enc_mask = DMA_BIT_MASK(__ffs64(sme_me_mask));
> >>>> +        u64 dma_dev_mask = min_not_zero(dev->coherent_dma_mask,
> >>>> +                        dev->bus_dma_mask);
> >>>> +
> >>>> +        if (dma_dev_mask <= dma_enc_mask)
> >>>> +            return true;
> >>>
> >>> Hm. What is wrong with the dev mask being equal to the enc mask? IIUC, it
> >>> means that the device mask is wide enough to cover the encryption bit,
> >>> doesn't it?
> >>
> >> Not really...  it's the way DMA_BIT_MASK works vs bit numbering. Let's say
> >> that sme_me_mask has bit 47 set. __ffs64 returns 47 and DMA_BIT_MASK(47)
> >> will generate a mask without bit 47 set (0x7fffffffffff). So the check
> >> will catch anything that does not support at least 48-bit DMA.
> >
> > Couldn't that be expressed as just:
> >
> >     if ((sme_me_mask & dma_dev_mask) == sme_me_mask)
>
> Actually !=, but yes, it could have been done like that, I just didn't
> think of it.
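
FWIW, here is a quick user-space sketch of the arithmetic being discussed
(DMA_BIT_MASK() is copied from include/linux/dma-mapping.h; the bit-47
position is just the example value from above, not anything read from
hardware), showing that the <= check and the masked comparison agree:

#include <stdint.h>
#include <stdio.h>

/* Same definition as the kernel's DMA_BIT_MASK(). */
#define DMA_BIT_MASK(n)	(((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

int main(void)
{
	uint64_t sme_me_mask = 1ULL << 47;		/* encryption bit at position 47 */
	uint64_t dma_enc_mask = DMA_BIT_MASK(47);	/* 0x00007fffffffffff */
	uint64_t dev48 = DMA_BIT_MASK(48);		/* 48-bit capable device */
	uint64_t dev32 = DMA_BIT_MASK(32);		/* 32-bit capable device */

	/* Original form: force unencrypted when the device mask is too narrow. */
	printf("%d %d\n", dev48 <= dma_enc_mask, dev32 <= dma_enc_mask);	/* 0 1 */

	/* Equivalent form: the device mask must cover the encryption bit. */
	printf("%d %d\n", (dev48 & sme_me_mask) != sme_me_mask,
			  (dev32 & sme_me_mask) != sme_me_mask);		/* 0 1 */

	return 0;
}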

I'm looking into generalizing the check to cover MKTME.

Leaving off the Kconfig changes and moving the check to another file, does
the change below look reasonable to you? It's only build-tested so far.

diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index fece30ca8b0c..6c86adcd02da 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -355,6 +355,8 @@ EXPORT_SYMBOL(sev_active);
 /* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
 bool force_dma_unencrypted(struct device *dev)
 {
+	u64 dma_enc_mask;
+
 	/*
 	 * For SEV, all DMA must be to unencrypted addresses.
 	 */
@@ -362,18 +364,20 @@ bool force_dma_unencrypted(struct device *dev)
 		return true;
 
 	/*
-	 * For SME, all DMA must be to unencrypted addresses if the
-	 * device does not support DMA to addresses that include the
-	 * encryption mask.
+	 * For SME and MKTME, all DMA must be to unencrypted addresses if the
+	 * device does not support DMA to addresses that include the encryption
+	 * mask.
 	 */
-	if (sme_active()) {
-		u64 dma_enc_mask = DMA_BIT_MASK(__ffs64(sme_me_mask));
-		u64 dma_dev_mask = min_not_zero(dev->coherent_dma_mask,
-						dev->bus_dma_mask);
+	if (!sme_active() && !mktme_enabled())
+		return false;
 
-		if (dma_dev_mask <= dma_enc_mask)
-			return true;
-	}
+	dma_enc_mask = sme_me_mask | mktme_keyid_mask();
+
+	if (dev->coherent_dma_mask && (dev->coherent_dma_mask & dma_enc_mask) != dma_enc_mask)
+		return true;
+
+	if (dev->bus_dma_mask && (dev->bus_dma_mask & dma_enc_mask) != dma_enc_mask)
+		return true;
 
 	return false;
 }
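
And a similarly rough user-space rendering of the generalized check above,
just to illustrate the combined-mask test (the SME/MKTME bit positions below
are made-up placeholders, and on real hardware only one of the two masks
would be non-zero):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define DMA_BIT_MASK(n)	(((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

/* Hypothetical masks: SME encryption bit 47, MKTME keyid bits 46:45. */
static const uint64_t sme_me_mask      = 1ULL << 47;
static const uint64_t mktme_keyid_mask = 0x3ULL << 45;

/* Mirrors the patched check, minus the sev/sme/mktme feature tests. */
static bool force_dma_unencrypted(uint64_t coherent_dma_mask, uint64_t bus_dma_mask)
{
	uint64_t dma_enc_mask = sme_me_mask | mktme_keyid_mask;

	if (coherent_dma_mask && (coherent_dma_mask & dma_enc_mask) != dma_enc_mask)
		return true;

	if (bus_dma_mask && (bus_dma_mask & dma_enc_mask) != dma_enc_mask)
		return true;

	return false;
}

int main(void)
{
	/* 64-bit device covers every encryption/keyid bit: stays encrypted. */
	printf("%d\n", force_dma_unencrypted(DMA_BIT_MASK(64), 0));	/* 0 */
	/* 48-bit device still covers bits 47..45: stays encrypted. */
	printf("%d\n", force_dma_unencrypted(DMA_BIT_MASK(48), 0));	/* 0 */
	/* 46-bit device misses bits 47..46: forced unencrypted. */
	printf("%d\n", force_dma_unencrypted(DMA_BIT_MASK(46), 0));	/* 1 */
	return 0;
}

A zero mask is treated as "no constraint" and skips that check, same as in
the patch above.
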
--
Kirill A. Shutemov