Re: [PATCH, RFC 13/62] x86/mm: Add hooks to allocate and free encrypted pages

From: Kirill A. Shutemov
Date: Fri Jun 14 2019 - 09:19:42 EST


On Fri, Jun 14, 2019 at 11:34:09AM +0200, Peter Zijlstra wrote:
> On Wed, May 08, 2019 at 05:43:33PM +0300, Kirill A. Shutemov wrote:
>
> > +/* Prepare page to be used for encryption. Called from page allocator. */
> > +void __prep_encrypted_page(struct page *page, int order, int keyid, bool zero)
> > +{
> > +	int i;
> > +
> > +	/*
> > +	 * The hardware/CPU does not enforce coherency between mappings
> > +	 * of the same physical page with different KeyIDs or
> > +	 * encryption keys. We are responsible for cache management.
> > +	 */
>
> On alloc we should flush the unencrypted (key=0) range, while on free
> (below) we should flush the encrypted (key!=0) range.
>
> But I seem to have missed where page_address() does the right thing
> here.

As you've seen by now, it will be addressed later in the patchset. I'll
update the changelog to indicate that page_address() handles KeyIDs
correctly.
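
Roughly, the later patches replicate the direct mapping once per KeyID and
make page_to_virt() offset into the copy that matches page_keyid(), so
page_address() (and hence the clflush_cache_range() below) ends up on a
mapping with the page's current KeyID. Sketch from memory, not the exact
code from the later patch:

	#define page_to_virt(x) \
		(__va(PFN_PHYS(page_to_pfn(x))) + page_keyid(x) * direct_mapping_size)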

> > +	clflush_cache_range(page_address(page), PAGE_SIZE * (1UL << order));
> > +
> > +	for (i = 0; i < (1 << order); i++) {
> > +		/* All pages coming out of the allocator should have KeyID 0 */
> > +		WARN_ON_ONCE(lookup_page_ext(page)->keyid);
> > +		lookup_page_ext(page)->keyid = keyid;
> > +
>
> So presumably page_address() is affected by this keyid, and the below
> clear_highpage() then accesses the 'right' location?

Yes. clear_highpage() -> kmap_atomic() -> page_address().
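
For reference, clear_highpage() is just:

	static inline void clear_highpage(struct page *page)
	{
		void *kaddr = kmap_atomic(page);	/* page_address() with !HIGHMEM */
		clear_page(kaddr);
		kunmap_atomic(kaddr);
	}

and with !CONFIG_HIGHMEM (always the case on x86-64) kmap_atomic() boils
down to page_address(), so the clear goes through the mapping that already
carries the new KeyID.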

> > +		/* Clear the page after the KeyID is set. */
> > +		if (zero)
> > +			clear_highpage(page);
> > +
> > +		page++;
> > +	}
> > +}
> > +
> > +/*
> > + * Handles freeing of encrypted page.
> > + * Called from page allocator on freeing encrypted page.
> > + */
> > +void free_encrypted_page(struct page *page, int order)
> > +{
> > +	int i;
> > +
> > +	/*
> > +	 * The hardware/CPU does not enforce coherency between mappings
> > +	 * of the same physical page with different KeyIDs or
> > +	 * encryption keys. We are responsible for cache management.
> > +	 */
>
> I still don't like that comment much; yes the hardware doesn't do it,
> and yes we have to do it, but it doesn't explain the actual scheme
> employed to do so.

Fair enough. I'll do better.
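
Something along these lines, perhaps (just a sketch of the wording, not the
final comment):

	/*
	 * The caches may still hold lines for this page that were filled
	 * through a mapping with a different KeyID, and the CPU does not
	 * keep such aliases coherent.  Flush the page through its current
	 * mapping before the KeyID changes, so that a stale line cannot be
	 * written back on top of data accessed through the new KeyID.
	 */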

> > +	clflush_cache_range(page_address(page), PAGE_SIZE * (1UL << order));
> > +
> > +	for (i = 0; i < (1 << order); i++) {
> > +		/* Check if the page has reasonable KeyID */
> > +		WARN_ON_ONCE(lookup_page_ext(page)->keyid > mktme_nr_keyids);
>
> It should also check keyid > 0, so maybe:
>
> (unsigned)(keyid - 1) > keyids-1
>
> instead?

Makes sense.
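
The (unsigned) trick catches both ends: for keyid == 0 the subtraction
wraps around, so a single compare rejects both keyid == 0 and
keyid > mktme_nr_keyids. Something like:

	/* Valid KeyIDs on this path are 1..mktme_nr_keyids. */
	WARN_ON_ONCE((unsigned)(lookup_page_ext(page)->keyid - 1) >
		     mktme_nr_keyids - 1);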

--
Kirill A. Shutemov