Re: [kernel-hardening] [PATCH 4/6] Protectable Memory

From: Laura Abbott
Date: Tue Feb 13 2018 - 11:09:45 EST


On 02/12/2018 07:39 PM, Jann Horn wrote:
On Tue, Feb 13, 2018 at 2:25 AM, Kees Cook <keescook@xxxxxxxxxxxx> wrote:
On Mon, Feb 12, 2018 at 4:40 PM, Laura Abbott <labbott@xxxxxxxxxx> wrote:
On 02/12/2018 03:27 PM, Kees Cook wrote:

On Sun, Feb 4, 2018 at 7:05 AM, Igor Stoppa <igor.stoppa@xxxxxxxxxx> wrote:

On 04/02/18 00:29, Boris Lukashev wrote:

On Sat, Feb 3, 2018 at 3:32 PM, Igor Stoppa <igor.stoppa@xxxxxxxxxx> wrote:


[...]

What you are suggesting, if I have understood it correctly, is that,
when the pool is protected, the addresses already given out will
become traps that get resolved through a lookup table built from the
content of each allocation.

That seems to generate a lot of overhead, not to mention that it
might not play very well with the MMU.


That is effectively what I'm suggesting - as a form of protection for
consumers against direct reads of data which may have been corrupted
by some unrelated means. In the context of pmalloc, it would probably
be a separate type of ro+verified pool.
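
Concretely, one could imagine a read accessor that re-verifies a
checksum recorded at protect time before handing data out. A rough
sketch, with entirely hypothetical names (nothing like pmalloc_vhdr
or pmalloc_verified_read exists in the patches; crc32() is the
existing kernel helper from linux/crc32.h):

#include <linux/crc32.h>
#include <linux/errno.h>
#include <linux/string.h>

/* Hypothetical header prepended to each allocation in a "verified" pool. */
struct pmalloc_vhdr {
	size_t size;
	u32 csum;		/* checksum taken when the pool was protected */
};

/* Hypothetical accessor: refuse to hand out data that fails verification. */
static int pmalloc_verified_read(const struct pmalloc_vhdr *hdr,
				 void *dst, size_t len)
{
	const void *data = hdr + 1;	/* payload follows the header */

	if (len > hdr->size)
		return -EINVAL;
	if (crc32(0, data, hdr->size) != hdr->csum)
		return -EIO;		/* corruption detected */
	memcpy(dst, data, len);
	return 0;
}

The lookup-table variant described above would replace the direct
pointer with a handle resolved through such an accessor, which is
where the overhead comes from.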

ok, that seems more like an extension, though.

ATM I am having trouble gaining traction to get even the basic
functionality merged :-)

I would consider this a possibility for future work, unless it turns
out to be a requirement for pmalloc to be accepted ...


I would agree: let's get basic functionality in first. Both
verification and the physmap part can be done separately, IMO.


Skipping over the physmap leaves a pretty big area of exposure that
could be difficult to solve later. I appreciate that this might block
basic functionality, but I don't think we should just gloss over it
without at least some idea of what we would do.

What's our exposure on physmap for other regions? e.g. things that are
executable, or made read-only later (like __ro_after_init)?

I just checked on a system with a 4.9 kernel, and there seems to be no
physical memory that is mapped as writable in the init PGD and
executable elsewhere.

Ah, I think I missed something. At least on x86, set_memory_ro(),
set_memory_rw(), set_memory_nx() and set_memory_x() all use
change_page_attr_clear()/change_page_attr_set(), which use
change_page_attr_set_clr(). That in turn calls
__change_page_attr_set_clr() with a second parameter, "checkalias",
which is set to 1 unless the bit being changed is the NX bit. That
parameter causes the invocation of cpa_process_alias(), which, for
mapped ranges, also changes the attributes of the corresponding
physmap ranges. set_memory_ro() and so on are also used by the module
loading code.
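
For reference, a condensed paraphrase of the relevant 4.9-era logic
in arch/x86/mm/pageattr.c (simplified, not verbatim):

static int change_page_attr_set_clr(unsigned long *addr, int numpages,
				    pgprot_t mask_set, pgprot_t mask_clr,
				    int force_split, int in_flag,
				    struct page **pages)
{
	struct cpa_data cpa;
	int ret, checkalias;

	/* ... validate arguments and populate cpa ... */

	/* No alias checking for _NX bit modifications */
	checkalias = (pgprot_val(mask_set) |
		      pgprot_val(mask_clr)) != _PAGE_NX;

	/*
	 * With checkalias == 1, __change_page_attr() also invokes
	 * cpa_process_alias(), which applies the same change to the
	 * physmap alias (and the kernel image's high mapping).
	 */
	ret = __change_page_attr_set_clr(&cpa, checkalias);

	/* ... TLB/cache flushing ... */
	return ret;
}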

But in the ARM64 code, I don't see anything similar. Does anyone with
a better understanding of ARM64 want to check whether I missed
something? Or maybe, with a recent kernel, check whether executable
module pages show up with a second writable mapping in the
"kernel_page_tables" file in debugfs?


No, arm64 doesn't fix up the aliases, mostly because arm64 uses
larger page sizes which can't be broken down at runtime.
CONFIG_PAGE_POISONING does use 4K pages, which could be adjusted at
runtime. So yes, you are right: we would have physmap exposure on
arm64 as well.
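
If we did want to close that on arm64, one rough direction (purely a
sketch; the helper name is made up, while vmalloc_to_page(),
page_address() and set_memory_ro() are existing kernel APIs) would be
for pmalloc to tighten the linear-map alias of each pool page when
protecting a pool. This would only work where the linear map is
actually mapped at page granularity, and modulo per-arch restrictions
on which ranges set_memory_*() accepts:

#include <linux/mm.h>
#include <linux/set_memory.h>
#include <linux/vmalloc.h>

/* Hypothetical: mark the linear-map alias of each vmalloc'ed pool page RO. */
static int pmalloc_protect_linear_alias(void *vaddr, unsigned int nr_pages)
{
	unsigned int i;

	for (i = 0; i < nr_pages; i++) {
		struct page *page = vmalloc_to_page(vaddr + i * PAGE_SIZE);
		unsigned long lin = (unsigned long)page_address(page);
		int ret = set_memory_ro(lin, 1);	/* one page at a time */

		if (ret)
			return ret;
	}
	return 0;
}

The per-page loop is needed because vmalloc'ed pages are generally
not physically contiguous, so their linear-map aliases aren't either.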

To the original question, it does sound like we are actually okay
with the physmap.

Thanks,
Laura