Re: [RFC, PATCH 19/22] x86/mm: Implement free_encrypt_page()

From: Kirill A. Shutemov
Date: Tue Mar 06 2018 - 03:54:34 EST


On Mon, Mar 05, 2018 at 11:07:16AM -0800, Dave Hansen wrote:
> On 03/05/2018 08:26 AM, Kirill A. Shutemov wrote:
> > +void free_encrypt_page(struct page *page, int keyid, unsigned int order)
> > +{
> > +	int i;
> > +	void *v;
> > +
> > +	for (i = 0; i < (1 << order); i++) {
> > +		v = kmap_atomic_keyid(page + i, keyid);
> > +		/* See comment in prep_encrypt_page() */
> > +		clflush_cache_range(v, PAGE_SIZE);
> > +		kunmap_atomic(v);
> > +	}
> > +}
>
> Have you measured how slow this is?

No, I have not.

> It's an optimization, but can we find a way to only do this dance when
> we *actually* change the keyid? Right now, we're mapping at alloc and
> free, clflushing at free, and zeroing at alloc. Let's say somebody does:
>
> ptr = malloc(PAGE_SIZE);
> *ptr = foo;
> free(ptr);
>
> ptr = malloc(PAGE_SIZE);
> *ptr = bar;
> free(ptr);
>
> And let's say ptr is in encrypted memory and that we actually munmap()
> at free(). We can theoretically skip the clflush, right?

Yes, we can, theoretically. We would need to find a way to keep the KeyID
around after the page is removed from the rmap, and that's not trivial as
far as I can see.
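
Something along these lines, maybe (just a sketch, not what the series
does: page_last_keyid()/set_page_last_keyid() are made-up helpers for
exactly the missing piece, remembering the old KeyID once the page is off
the rmap; kmap_atomic_keyid() and clflush_cache_range() are what the
series already uses):

static void prep_encrypt_page_lazy(struct page *page, int keyid,
				   unsigned int order)
{
	int i;

	for (i = 0; i < (1 << order); i++) {
		struct page *p = page + i;
		int last_keyid = page_last_keyid(p);
		void *v;

		/* Same KeyID as the previous user: no stale cache lines */
		if (last_keyid == keyid)
			continue;

		/*
		 * Flush the lines written under the old KeyID through a
		 * mapping that carries that KeyID, then record the new one.
		 */
		v = kmap_atomic_keyid(p, last_keyid);
		clflush_cache_range(v, PAGE_SIZE);
		kunmap_atomic(v);

		set_page_last_keyid(p, keyid);
	}
}

With something like that, free_encrypt_page() would only have to record
the KeyID, and the flush would happen lazily, on the next allocation that
uses a different KeyID.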

I will look into the optimization after I've got the functionality in place.

--
Kirill A. Shutemov