Re: [PATCHv3 10/17] x86/mm: Implement prep_encrypted_page() and arch_free_page()
From: Dave Hansen
Date: Wed Jun 13 2018 - 14:26:15 EST
On 06/12/2018 07:39 AM, Kirill A. Shutemov wrote:
> prep_encrypted_page() also takes care of zeroing the page. We have to
> do this after the KeyID is set for the page.
This is an implementation detail that has gone unmentioned until now but
has impacted at least half a dozen locations in previous patches. Can
you rectify that, please?
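
To spell out why the order matters: accesses go through the page's current
KeyID, so a mis-ordered version like this (purely illustrative, not code from
this series) would hand out garbage instead of zeros:

        /* WRONG (illustrative): zeroing before the KeyID is set */
        clear_highpage(page);                  /* zeros written via the old KeyID */
        lookup_page_ext(page)->keyid = keyid;  /* reads now use the new KeyID... */
        /* ...and decrypt the old-KeyID ciphertext to garbage, not zeros */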
> +void prep_encrypted_page(struct page *page, int order, int keyid, bool zero)
> +{
> +        int i;
> +
> +        /*
> +         * The hardware/CPU does not enforce coherency between mappings of the
> +         * same physical page with different KeyIDs or encrypt ion keys.
What are "encrypt ion"s? :)
> +         * We are responsible for cache management.
> +         *
> +         * We flush the cache before allocating an encrypted page.
> +         */
> +        clflush_cache_range(page_address(page), PAGE_SIZE << order);
> +
> +        for (i = 0; i < (1 << order); i++) {
> +                WARN_ON_ONCE(lookup_page_ext(page + i)->keyid);

        /* All pages coming out of the allocator should have KeyID 0 */

> +                lookup_page_ext(page + i)->keyid = keyid;
> +                /* Clear the page after the KeyID is set. */
> +                if (zero)
> +                        clear_highpage(page + i);
> +        }
> +}
How expensive is this?
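
For reference, and assuming the stock helper is used unmodified,
clflush_cache_range() issues roughly one CLFLUSHOPT per cache line between
two full memory barriers:

        void clflush_cache_range(void *vaddr, unsigned int size)
        {
                const unsigned long clflush_size = boot_cpu_data.x86_clflush_size;
                void *p = (void *)((unsigned long)vaddr & ~(clflush_size - 1));
                void *vend = vaddr + size;

                mb();
                for (; p < vend; p += clflush_size)
                        clflushopt(p);
                mb();
        }

so a 2MB allocation is on the order of 32k flushes, on top of the full
zeroing pass.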
> +void arch_free_page(struct page *page, int order)
> +{
> +        int i;
>

        /* KeyID-0 pages were not used for MKTME and need no work */
... or something

> +        if (!page_keyid(page))
> +                return;
Is page_keyid() optimized so that all this goes away automatically when
MKTME is compiled out or unsupported?
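
Something like this sketch is what I have in mind (mktme_enabled_key is an
illustrative name, not something this series defines):

        DECLARE_STATIC_KEY_FALSE(mktme_enabled_key);

        static inline int page_keyid(struct page *page)
        {
                /* Patched out at boot when MKTME is off or unsupported */
                if (!static_branch_unlikely(&mktme_enabled_key))
                        return 0;

                return lookup_page_ext(page)->keyid;
        }

That way the !page_keyid() check above costs a single patched-out branch on
non-MKTME systems.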
> +        for (i = 0; i < (1 << order); i++) {
> +                WARN_ON_ONCE(lookup_page_ext(page + i)->keyid > mktme_nr_keyids);
> +                lookup_page_ext(page + i)->keyid = 0;
> +        }
> +
> +        clflush_cache_range(page_address(page), PAGE_SIZE << order);
> +}
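
FWIW, for the compiled-out case the generic code already handles this: when
an architecture does not define HAVE_ARCH_FREE_PAGE, include/linux/gfp.h
supplies an empty stub, roughly:

        #ifndef HAVE_ARCH_FREE_PAGE
        static inline void arch_free_page(struct page *page, int order) { }
        #endif

so guarding the x86 definition (and HAVE_ARCH_FREE_PAGE) with the series'
CONFIG_X86_INTEL_MKTME would make the whole hook disappear when MKTME is not
built in.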