Re: [PATCH 0/4] arch, mm: improve robustness of direct map manipulation

From: David Hildenbrand
Date: Tue Oct 27 2020 - 06:36:02 EST


On 27.10.20 10:47, Mike Rapoport wrote:
> On Tue, Oct 27, 2020 at 09:46:35AM +0100, David Hildenbrand wrote:
>> On 27.10.20 09:38, Mike Rapoport wrote:
>>> On Mon, Oct 26, 2020 at 06:05:30PM +0000, Edgecombe, Rick P wrote:
>>>> On Mon, 2020-10-26 at 11:05 +0200, Mike Rapoport wrote:
>>>>> On Mon, Oct 26, 2020 at 01:13:52AM +0000, Edgecombe, Rick P wrote:
>>>>>> On Sun, 2020-10-25 at 12:15 +0200, Mike Rapoport wrote:
>>>>>>> Indeed, for architectures that define CONFIG_ARCH_HAS_SET_DIRECT_MAP
>>>>>>> it is possible that __kernel_map_pages() would fail, but since this
>>>>>>> function is void, the failure will go unnoticed.

>>>>>> Could you elaborate on how this could happen? Do you mean during
>>>>>> runtime today, or if something new was introduced?

>>>>> A failure in __kernel_map_pages() may happen today. For instance, on
>>>>> x86, if the kernel is built with DEBUG_PAGEALLOC,
>>>>>
>>>>>         __kernel_map_pages(page, 1, 0);
>>>>>
>>>>> will need to split, say, a 2M page, and during the split an
>>>>> allocation of a page table could fail.
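>>>>>
>>>>> For illustration, a minimal sketch of why such a failure goes
>>>>> unnoticed (hypothetical body, not the actual x86 implementation;
>>>>> only the signatures of __kernel_map_pages() and the
>>>>> set_direct_map_*_noflush() helpers are real):
>>>>>
>>>>>         /* void signature: any error from the attribute change is dropped */
>>>>>         void __kernel_map_pages(struct page *page, int numpages, int enable)
>>>>>         {
>>>>>                 /*
>>>>>                  * set_direct_map_*_noflush() return an error, e.g.
>>>>>                  * -ENOMEM, when splitting a 2M mapping requires
>>>>>                  * allocating a new page table and that fails...
>>>>>                  */
>>>>>                 if (enable)
>>>>>                         set_direct_map_default_noflush(page);
>>>>>                 else
>>>>>                         set_direct_map_invalid_noflush(page);
>>>>>                 /* ...but the int return value is lost here */
>>>>>         }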

>>>> On x86 at least, DEBUG_PAGEALLOC expects to never have to break a page
>>>> on the direct map and even disables locking in cpa because it assumes
>>>> this. If this is happening somehow anyway then we should probably fix
>>>> that. Even if it's a debug feature, it will not be as useful if it is
>>>> causing its own crashes.
>>>>
>>>> I'm still wondering if there is something I'm missing here. It seems
>>>> like you are saying there is a bug in some arches, so let's add a WARN
>>>> in cross-arch code to log it as it crashes. A WARN and making things
>>>> clearer seem like good ideas, but if there is a bug we should fix it.
>>>> The code around the callers still functionally assumes re-mapping can't
>>>> fail.

>>> Oh, I meant an x86 kernel *without* DEBUG_PAGEALLOC, and indeed the call
>>> that unmaps pages back in safe_copy_page() will just reset a 4K page to
>>> NP, because whatever made it NP in the first place already did the split.
>>>
>>> Still, on arm64 with DEBUG_PAGEALLOC=n there is a possibility of a race
>>> between the map/unmap dance in __vunmap() and safe_copy_page() that may
>>> cause access to unmapped memory:
>>>
>>> __vunmap()
>>>     vm_remove_mappings()
>>>         set_direct_map_invalid()
>>>                                     safe_copy_page()
>>>                                         __kernel_map_pages()
>>>                                             return
>>>                                         do_copy_page() -> fault
>>>
>>> This is a theoretical bug, but it is still not nice :)
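>>>
>>> For reference, safe_copy_page() in kernel/power/snapshot.c looks
>>> roughly like this (paraphrased from memory, comments added to mark
>>> where the race bites):
>>>
>>> static void safe_copy_page(void *dst, struct page *s_page)
>>> {
>>>         if (kernel_page_present(s_page)) {
>>>                 do_copy_page(dst, page_address(s_page));
>>>         } else {
>>>                 /*
>>>                  * With DEBUG_PAGEALLOC=n this can be a no-op, so a page
>>>                  * that __vunmap() has just unmapped stays unmapped...
>>>                  */
>>>                 kernel_map_pages(s_page, 1, 1);
>>>                 do_copy_page(dst, page_address(s_page)); /* ...and this faults */
>>>                 kernel_map_pages(s_page, 1, 0);
>>>         }
>>> }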

>>>>> Currently, the only user of __kernel_map_pages() outside
>>>>> DEBUG_PAGEALLOC is hibernation, but I think it would be safer to
>>>>> entirely prevent usage of __kernel_map_pages() when DEBUG_PAGEALLOC=n.

>>>> I totally agree it's error prone FWIW. On x86, my mental model of how
>>>> it is supposed to work is: if a page is 4k and NP it cannot fail to be
>>>> remapped. set_direct_map_invalid_noflush() should result in 4k NP
>>>> pages, and DEBUG_PAGEALLOC should result in all 4k pages on the direct
>>>> map. Are you seeing this violated, or do I have wrong assumptions?

>>> You are right, there is a set of assumptions about the remapping of the
>>> direct map pages that makes it all work, at least on x86.
>>> But this is very subtle, and it's not easy to wrap one's head around it.
>>>
>>> That's why putting __kernel_map_pages() out of "common" use and keeping
>>> it only for DEBUG_PAGEALLOC would make things clearer.

>>>> Beyond whatever you are seeing, for the latter case of new things
>>>> getting introduced to an interface with hidden dependencies... another
>>>> edge case could be that a new caller of set_memory_np() results in
>>>> large NP pages. None of the callers today should cause this AFAICT, but
>>>> it's not great to rely on the callers to know these details.
>>> A caller of set_memory_*() or set_direct_map_*() should expect a failure
>>> and be ready for that. So adding a WARN to safe_copy_page() is the first
>>> step in that direction :)
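>>>
>>> Roughly the direction the patches take for hibernation (a sketch of
>>> the idea; names and details may differ from the final code):
>>>
>>> static void hibernate_map_page(struct page *page, int enable)
>>> {
>>>         if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
>>>                 unsigned long addr = (unsigned long)page_address(page);
>>>                 int ret;
>>>
>>>                 if (enable)
>>>                         ret = set_direct_map_default_noflush(page);
>>>                 else
>>>                         ret = set_direct_map_invalid_noflush(page);
>>>
>>>                 /* the failure is no longer silent */
>>>                 if (WARN_ON(ret))
>>>                         return;
>>>
>>>                 flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
>>>         } else {
>>>                 __kernel_map_pages(page, 1, enable);
>>>         }
>>> }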


>> I am probably missing something important, but why are we saving/restoring
>> the content of pages that were explicitly removed from the identity mapping
>> such that nobody will access them?
>>
>> Pages that are not allocated should contain garbage or be zero
>> (init_on_free). That should be easy to handle without ever reading the
>> page content.

> I'm not familiar enough with hibernation to say anything smart here, but
> the help text of DEBUG_PAGEALLOC in Kconfig says:
>
>         ... this option cannot be enabled in combination with
>         hibernation as that would result in incorrect warnings of memory
>         corruption after a resume because free pages are not saved to
>         the suspend image.
>
> Probably you are right and free pages need to be handled differently,
> but that does not seem to be the case now.

>> The other user seems to be vm_remove_mappings(), where we only
>> *temporarily* remove the mapping - while hibernating, that code shouldn't
>> be active anymore, I guess - or we could protect it from happening.

> Hmm, I _think_ vm_remove_mappings() shouldn't be active while
> hibernating, but I'm not 100% sure.

>> As I expressed in another mail, secretmem pages should rather not be saved
>> when hibernating - hibernation should rather be disabled.

> Agree.

>> What am I missing?

> I think I miscommunicated the purpose of this set, which was to hide
> __kernel_map_pages() under DEBUG_PAGEALLOC and make hibernation use
> set_direct_map_*() explicitly, without a major rework of free page
> handling during hibernation.
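>
> I.e., generic code would stop calling __kernel_map_pages() directly and
> instead go through a wrapper that compiles away when DEBUG_PAGEALLOC=n,
> along these lines (a sketch, modulo naming):
>
> #ifdef CONFIG_DEBUG_PAGEALLOC
> static inline void debug_pagealloc_map_pages(struct page *page,
>                                              int numpages, int enable)
> {
>         /* only touch the direct map when debug_pagealloc is active */
>         if (debug_pagealloc_enabled_static())
>                 __kernel_map_pages(page, numpages, enable);
> }
> #else
> static inline void debug_pagealloc_map_pages(struct page *page,
>                                              int numpages, int enable) {}
> #endif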

> Does it help?


Heh, as always, once you touch questionable code, people will beg for proper cleanups instead :)


--
Thanks,

David / dhildenb