Re: [kernel-hardening] [PATCH v5 04/10] arm64: Add __flush_tlb_one()

From: Tycho Andersen
Date: Wed Aug 30 2017 - 12:47:33 EST


On Wed, Aug 30, 2017 at 07:31:25AM +0200, Juerg Haefliger wrote:
>
>
> On 08/23/2017 07:04 PM, Mark Rutland wrote:
> > On Wed, Aug 23, 2017 at 10:58:42AM -0600, Tycho Andersen wrote:
> >> Hi Mark,
> >>
> >> On Mon, Aug 14, 2017 at 05:50:47PM +0100, Mark Rutland wrote:
> >>> That said, is there any reason not to use flush_tlb_kernel_range()
> >>> directly?
> >>
> >> So it turns out that there is a difference between __flush_tlb_one() and
> >> flush_tlb_kernel_range() on x86: flush_tlb_kernel_range() flushes all the TLBs
> >> via on_each_cpu(), whereas __flush_tlb_one() only flushes the local TLB (which
> >> I think is enough here).
> >
> > That sounds suspicious; I don't think that __flush_tlb_one() is
> > sufficient.
> >
> > If you only do local TLB maintenance, then the page is left accessible
> > to other CPUs via the (stale) kernel mappings. i.e. the page isn't
> > exclusively mapped by userspace.
>
> We flush all CPUs to get rid of stale entries when a new page is
> allocated to userspace that was previously allocated to the kernel.
> Is that the scenario you were thinking of?

I think there are two cases: the one you describe above, where the
pages are first allocated to userspace, and a second one, where the
pages are later mapped back into the kernel, e.g. for DMA. In the
first case, I think we're doing the right thing (which is why my test
worked correctly; it only exercised this case).
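
(For reference, the allocation-time flush we rely on looks roughly
like the sketch below; the helper name and the PTE-clearing step are
just illustrative, not the exact code from the series. The point is
that flush_tlb_kernel_range() shoots down the stale kernel mapping on
all CPUs before the page is handed to userspace.)

#include <linux/mm.h>
#include <asm/tlbflush.h>

/*
 * Illustrative sketch only: when a page that was previously mapped by
 * the kernel is allocated to userspace, the linear-map alias is
 * removed and then flushed on every CPU, so no stale TLB entry for it
 * survives anywhere.
 */
static void sketch_xpfo_flush_on_alloc(struct page *page)
{
	unsigned long addr = (unsigned long)page_address(page);

	/* ... clear the linear-map PTE for 'page' here ... */

	/*
	 * On x86, flush_tlb_kernel_range() goes through on_each_cpu(),
	 * so this invalidates the entry on all CPUs, not just locally.
	 */
	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
}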

In the second case, when the pages are unmapped again (i.e. the
kernel is done doing DMA), do we need to flush the other CPUs' TLBs?
I think the current code is not quite correct: if multiple tasks (on
different CPUs) map the pages, only the TLB of the last one is
flushed when the mapping is cleared, because the TLB is only flushed
when ->mapcount drops to zero. That leaves stale entries in the other
CPUs' TLBs. It's not clear to me what to do about this case.
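
(To make the problem concrete, the kunmap side is shaped roughly like
the sketch below; the names are illustrative, and the mapcount is
passed in explicitly here rather than living wherever the series
actually keeps it. Only the CPU that runs the final kunmap gets its
TLB entry invalidated.)

#include <linux/atomic.h>
#include <asm/tlbflush.h>

/*
 * Illustrative sketch only: the kernel mapping is torn down and
 * flushed only when the last mapper drops the per-page mapcount to
 * zero, and even then only on the CPU that happens to run this final
 * kunmap.  CPUs that mapped the page earlier keep stale TLB entries.
 */
static void sketch_xpfo_kunmap(void *kaddr, atomic_t *mapcount)
{
	if (atomic_dec_return(mapcount) == 0) {
		/* ... clear the kernel PTE for the page here ... */
		__flush_tlb_one((unsigned long)kaddr);	/* local CPU only */
	}
}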

Thoughts?

Tycho

> ...Juerg
>
>
> > Thanks,
> > Mark.
> >
>