Re: [PATCH] arm64: Add flush_cache_vmap call in __early_set_fixmap

From: Catalin Marinas
Date: Mon Jun 09 2014 - 07:04:20 EST


On Fri, Jun 06, 2014 at 04:09:33PM +0100, Mark Salter wrote:
> On Fri, 2014-06-06 at 15:53 +0100, Leif Lindholm wrote:
> > On Fri, Jun 06, 2014 at 10:37:29AM -0400, Mark Salter wrote:
> > > On Fri, 2014-06-06 at 11:29 +0100, Leif Lindholm wrote:
> > > > __early_set_fixmap does not do any synchronization when called to set a
> > > > fixmap entry. Add a call to flush_cache_vmap().

Did you hit a problem, or was this just for safety?

> > > > Tested on hardware.
> > > >
> > > > Signed-off-by: Leif Lindholm <leif.lindholm@xxxxxxxxxx>
> > > > Tested-by: Graeme Gregory <graeme.gregory@xxxxxxxxxx>
> > > > Cc: Steve Capper <steve.capper@xxxxxxxxxx>
> > > > ---
> > > > arch/arm64/mm/ioremap.c | 5 +++--
> > > > 1 file changed, 3 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/arch/arm64/mm/ioremap.c b/arch/arm64/mm/ioremap.c
> > > > index 7ec3283..5b8766c 100644
> > > > --- a/arch/arm64/mm/ioremap.c
> > > > +++ b/arch/arm64/mm/ioremap.c
> > > > @@ -176,9 +176,10 @@ void __init __early_set_fixmap(enum fixed_addresses idx,
> > > >
> > > > pte = early_ioremap_pte(addr);
> > > >
> > > > - if (pgprot_val(flags))
> > > > + if (pgprot_val(flags)) {
> > > > set_pte(pte, pfn_pte(phys >> PAGE_SHIFT, flags));
> > > > - else {
> > > > + flush_cache_vmap(addr, addr + PAGE_SIZE);
> > > > + } else {
> > > > pte_clear(&init_mm, addr, pte);
> > > > flush_tlb_kernel_range(addr, addr+PAGE_SIZE);
> > > > }
> > >
> > > I'm confused by the commit message mentioning synchronization but
> > > the code doing a cache flush. I see that the arm64 implementation of
> > > flush_cache_vmap() is just a dsb(). If it is synchronization that
> > > we need here (and it certainly looks like we do), why not just add
> > > the dsb() directly to make that clear?
> >
> > In terms of Linux semantics, it needs this for the same reason
> > remap_page_range() needs it. From the ARM architectural point of view,
> > the reason is that the translation table walk is considered a separate
> > observer from the core data interface.
> >
> > But since there is a common Linux semantic for this, I preferred
> > reusing that over just throwing in a dsb(). My interpretation of
> > flush_cache_vmap() was "flush mappings from cache, so they can be
> > picked up by table walk". While we don't technically need to flush the
> > cache here, the underlying requirement is the same.
>
> But the range you are flushing is not a range seen by the table walk
> observer. I just think it is clearer to explicitly show that it is
> the pte write which we want the table walk to see rather than to
> rely on the implicit behavior of a cache flush routine.

I think that's a valid point. flush_cache_vmap() is used to remove any
cached entries for the ioremap/vmap'ed range. That's not the aim here.
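
On arm64 the implementation Mark mentions doesn't actually do any cache
maintenance; roughly (a sketch, not the verbatim header):

  /* arch/arm64/include/asm/cacheflush.h (sketch, not verbatim) */
  static inline void flush_cache_vmap(unsigned long start, unsigned long end)
  {
          /*
           * No cache lines are touched; the barrier just makes the
           * preceding pte write(s) visible to the table walk observer.
           */
          dsb();
  }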

As an optimisation, set_pte() doesn't include a dsb(). We get the barrier
on the clearing/invalidating path via the TLB flushing routines, but not
on the re-enabling path; there we just added dsb() to the relevant
functions called from the generic code (flush_cache_vmap(),
update_mmu_cache()).
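
In other words, the pattern the generic code relies on is roughly this
(illustrative sketch only; the helper name map_one_page() is made up and
the includes are simplified):

  #include <linux/mm.h>
  #include <asm/cacheflush.h>

  /* Illustrative sketch, not verbatim kernel code. */
  static void map_one_page(pte_t *ptep, unsigned long addr,
                           phys_addr_t phys, pgprot_t prot)
  {
          /* plain store into the page table, no barrier in set_pte() */
          set_pte(ptep, pfn_pte(phys >> PAGE_SHIFT, prot));

          /* arch hook; on arm64 this is where the dsb() ends up */
          flush_cache_vmap(addr, addr + PAGE_SIZE);
  }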

A quick grep through the kernel shows that we have other set_pte() calls
without an additional dsb(), such as create_mapping() and, I think,
kvm_set_pte().
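
Those call sites have roughly this shape (a simplified sketch of the kind
of loop behind create_mapping(), not verbatim; the helper name init_ptes()
is made up):

  /* Simplified sketch, not verbatim kernel code. */
  static void init_ptes(pte_t *ptep, unsigned long addr, unsigned long end,
                        unsigned long pfn, pgprot_t prot)
  {
          do {
                  set_pte(ptep, pfn_pte(pfn, prot));
                  pfn++;
          } while (ptep++, addr += PAGE_SIZE, addr != end);
          /* no dsb() after the loop either */
  }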

So I'm proposing an alternative patch (which also needs some benchmarking
to see whether anything is affected, maybe application start-up time).

------------------8<-------------------------------