Re: [PATCH v2 4/4] x86/vmalloc: Add TLB efficient x86 arch_vunmap
From: Nadav Amit
Date: Wed Dec 12 2018 - 16:17:39 EST
> On Dec 12, 2018, at 1:05 PM, Edgecombe, Rick P <rick.p.edgecombe@xxxxxxxxx> wrote:
> On Wed, 2018-12-12 at 06:30 +0000, Nadav Amit wrote:
>>> On Dec 11, 2018, at 4:03 PM, Rick Edgecombe <rick.p.edgecombe@xxxxxxxxx> wrote:
>>> This adds a more efficient x86 architecture specific implementation of
>>> arch_vunmap, that can free any type of special permission memory with
>>> only 1 TLB flush.
>>> In order to enable this, _set_pages_p and _set_pages_np are made
>>> non-static and renamed to set_pages_p_noflush and set_pages_np_noflush
>>> to better communicate their different (non-flushing) behavior from the
>>> rest of the set_pages_* functions.
>>> The method for doing this with only 1 TLB flush was suggested by Andy.
>>> + /*
>>> + * If the vm being freed has security sensitive capabilities such as
>>> + * executable we need to make sure there is no W window on the directmap
>>> + * before removing the X in the TLB. So we set not present first so we
>>> + * can flush without any other CPU picking up the mapping. Then we reset
>>> + * RW+P without a flush, since NP prevented it from being cached by
>>> + * other cpus.
>>> + */
>>> + set_area_direct_np(area);
>>> + vm_unmap_aliases();
>> Does vm_unmap_aliases() flush in the TLB the direct mapping range as well? I
>> can only find the flush of the vmalloc range.
> Hmmm. It should usually (I tested), but now I wonder if there are cases where
> it doesn't, and it could depend on the architecture as well. I'll have to
> trace through this to verify, thanks.
I think that it mostly does, since you try to flush more than 33 PTEs (the
threshold above which the whole TLB is flushed instead of individual
entries). But you shouldn't count on it; even this threshold is configurable.