Interesting mitigation and discussion!
Regarding the impact on the AnC attack, indeed fine-grained (or full)
mmap() randomization affects AnC in two ways: (i) it breaks the
contiguity of the mmap() region, crippling the sliding primitive AnC
relies on; (ii) it ensures an attacker leaking an address in a
particular VMA can't easily infer addresses in other VMAs. So, in
short, the mitigation does raise the bar against AnC-like attacks and
I see this as a useful addition.
Indeed, we're aware that some vendors have implemented a similar
randomization strategy in the browser as a mitigation against AnC.
Nonetheless, some additional notes on the two points I raised above:
- (i) [Sliding] Note that an attacker can do away with sliding
depending on the randomization entropy and other available side
channels. For instance, with the recent TagBleed, we show how to
combine a TLB side channel with AnC to exhaust the KASLR entropy;
similar attacks should be possible in userland, again depending on the
randomization entropy used. See
https://download.vusec.net/papers/tagbleed_eurosp20.pdf. Combining
side channels with transient/speculative execution attacks can further
lower the bar.
- (ii) [Leaks] Depending on the software vulnerability used for
exploitation, it might not be difficult for an attacker to break
fine-grained randomization across VMAs. That is, leak an address from
VMA 1, use the vulnerability to trigger a normally illegal access to
VMA 2, leak an address from VMA 2, and repeat. Of course, the exploit
might take much longer depending on how far along the pointer-chasing
chain the target is.
Best,
Cristiano
On Wed, Nov 18, 2020 at 6:40 PM Mike Rapoport <rppt@xxxxxxxxxx> wrote:
(added one of the AnC paper authors)
On Tue, Nov 17, 2020 at 10:21:30PM +0200, Topi Miettinen wrote:
On 17.11.2020 18.54, Matthew Wilcox wrote:
On Mon, Oct 26, 2020 at 06:05:18PM +0200, Topi Miettinen wrote:
Writing a new value of 3 to /proc/sys/kernel/randomize_va_space
enables full randomization of memory mappings created with mmap(NULL,
...). With 2, the base of the VMA used for such mappings is random,
but the mappings are created in predictable places within the VMA and
in sequential order. With 3, new VMAs are created to fully randomize
the mappings. Also, mremap(..., MREMAP_MAYMOVE) will move the mappings
even when a move is not strictly necessary.
Is this worth it?
https://www.ndss-symposium.org/ndss2017/ndss-2017-programme/aslrcache-practical-cache-attacks-mmu/
Thanks, very interesting. The paper presents an attack (AnC) which can break
ASLR even from JavaScript in browsers. In the process it compares the memory
allocators of Firefox and Chrome. Firefox relies on Linux mmap() to
randomize the memory location, but Chrome internally chooses the randomized
address. The paper doesn't present exact numbers for breaking ASLR in
the Chrome case, but it seems to require more effort. Chrome also aggressively
randomizes the memory on each allocation, which seems to enable further
possibilities for AnC to probe the MMU tables.
Disregarding the difference in aggressiveness of memory allocators, I think
with sysctl.kernel.randomize_va_space=3, the effort needed to break ASLR in
Firefox should move closer to the Chrome case, since mmap() will use the
address space more randomly.
I have used this setting now for a month without any visible performance
issues, so I think the extra bits (for some additional effort to attackers)
are definitely worth the low cost.
Furthermore, the paper does not describe in detail how the attack would
continue after breaking ASLR. Perhaps there are assumptions which are not
valid when the different memory areas are no longer sequential. For example,
if ASLR is initially broken with respect to the JIT buffer, but continuing
the attack requires other locations to be determined (like the stack, the
data segment of the main executable, or libc), further effort may be needed
to resolve those locations. With randomize_va_space=2, resolving any single
address (e.g. the JIT buffer) can reveal the addresses of many other memory
areas, but this is not the case with 3.
-Topi
--
Sincerely yours,
Mike.