Re: [PATCH v10 02/15] set_memory: add folio_{zap, restore}_direct_map helpers

From: Nikita Kalyazin

Date: Fri Mar 06 2026 - 10:45:01 EST


On 06/03/2026 15:17, David Hildenbrand (Arm) wrote:
> On 3/6/26 15:48, Nikita Kalyazin wrote:
>> On 06/03/2026 14:17, David Hildenbrand (Arm) wrote:
>>> On 3/6/26 13:48, Nikita Kalyazin wrote:

>> Will update, thanks.


> Absolutely!


>> Yes, on x86 we need an explicit flush. Other architectures deal with it
>> internally.

> So, we call a _noflush function and it performs a ... flush. What.

Yeah, that's unfortunately the status quo, as pointed out by Aneesh [1].

[1] https://lore.kernel.org/kvm/yq5ajz07czvz.fsf@xxxxxxxxxx/


> Take a look at secretmem_fault(), where we do an unconditional
> flush_tlb_kernel_range().
>
> Do we end up double-flushing in that case?

Yes, looks like that. I'll remove the explicit flush and rely on
folio_zap_direct_map().
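
For concreteness, the change being described might look like this sketch (an illustration only, assuming folio_zap_direct_map() performs the TLB flush internally as discussed above; the surrounding fault-path details are elided):

```c
/*
 * Sketch: the secretmem path after dropping its explicit flush.
 * Not the actual patch; names follow the thread.
 */
static int secretmem_zap_folio(struct folio *folio)
{
	/*
	 * Before: zap the direct map entry, then flush explicitly,
	 * which double-flushes once the helper flushes itself:
	 *
	 *   err = folio_zap_direct_map(folio);
	 *   flush_tlb_kernel_range(addr, addr + folio_size(folio));
	 *
	 * After: a single call; the TLB flush happens inside.
	 */
	return folio_zap_direct_map(folio);
}
```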


>> Do you propose a bespoke implementation for x86 and a
>> "generic" one for others?

> We have to find a way to have a single set of functions for all archs
> that support directmap removal.

I believe Dave meant to address that with folio_{zap,restore}_direct_map() [2].

[2] https://lore.kernel.org/kvm/9409531b-589b-4a54-b122-06a3cf0846f3@xxxxxxxxx/


> One option might be to have some indication from the architecture that
> no flush_tlb_kernel_range() is required.
>
> Could be a config option or some simple helper function.

I'd like to hear what the arch maintainers think, because I don't have
a strong opinion on that.
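
As an illustration only, that indication could take a shape like the following; arch_direct_map_needs_flush() and everything around it are invented for this sketch and exist nowhere today:

```c
/* Hypothetical arch opt-out: all names below are invented. */
#ifndef arch_direct_map_needs_flush
static inline bool arch_direct_map_needs_flush(void)
{
	/*
	 * Conservative default: assume the arch's _noflush
	 * primitive really skipped the flush, so the generic
	 * flush is still required (the x86 case).
	 */
	return true;
}
#endif

int folio_zap_direct_map(struct folio *folio)
{
	const void *addr = folio_address(folio);
	int ret;

	ret = set_direct_map_valid_noflush(addr, folio_nr_pages(folio), false);

	/*
	 * Skip the extra flush on architectures whose
	 * set_direct_map_*_noflush() already flushed internally.
	 */
	if (arch_direct_map_needs_flush())
		flush_tlb_kernel_range((unsigned long)addr,
				       (unsigned long)addr + folio_size(folio));

	return ret;
}
```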

> You could also just perform a double flush, and let people whose
> _noflush() implementation performs a flush optimize that later.

Do you propose to just universalise the one from x86?

int folio_zap_direct_map(struct folio *folio)
{
	const void *addr = folio_address(folio);
	int ret;

	ret = set_direct_map_valid_noflush(addr, folio_nr_pages(folio), false);
	flush_tlb_kernel_range((unsigned long)addr,
			       (unsigned long)addr + folio_size(folio));

	return ret;
}

I'm fine with that too.
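
For completeness, the restore side of the pair named in the subject could plausibly stay flush-free, since a zapped (invalid) mapping leaves no stale TLB entries to shoot down. A sketch only, reusing the set_direct_map_valid_noflush() call shape from the snippet above:

```c
int folio_restore_direct_map(struct folio *folio)
{
	/*
	 * Re-validating a previously zapped range: no TLB flush
	 * should be needed, because invalid entries could not
	 * have been cached in the TLB.
	 */
	return set_direct_map_valid_noflush(folio_address(folio),
					    folio_nr_pages(folio), true);
}
```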


> I mean, that's what secretmem did :)

With the solution above, secretmem stays where it was: no optimisation so far :)


> --
> Cheers,
>
> David