Re: [PATCH V5 06/30] csky: Cache and TLB routines

From: Guo Ren
Date: Thu Sep 27 2018 - 01:28:20 EST


On Tue, Sep 25, 2018 at 09:24:07AM +0200, Peter Zijlstra wrote:
> On Mon, Sep 24, 2018 at 10:36:22PM +0800, Guo Ren wrote:
> > diff --git a/arch/csky/abiv1/inc/abi/cacheflush.h b/arch/csky/abiv1/inc/abi/cacheflush.h
> > new file mode 100644
> > index 0000000..f0de49c
> > --- /dev/null
> > +++ b/arch/csky/abiv1/inc/abi/cacheflush.h
> > @@ -0,0 +1,43 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
> > +
> > +#ifndef __ABI_CSKY_CACHEFLUSH_H
> > +#define __ABI_CSKY_CACHEFLUSH_H
> > +
> > +#include <linux/compiler.h>
> > +#include <asm/string.h>
> > +#include <asm/cache.h>
> > +
> > +#define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
> > +extern void flush_dcache_page(struct page *);
> > +
> > +#define flush_cache_mm(mm) cache_wbinv_all()
> > +#define flush_cache_page(vma,page,pfn) cache_wbinv_all()
> > +#define flush_cache_dup_mm(mm) cache_wbinv_all()
> > +
> > +#define flush_cache_range(mm,start,end) cache_wbinv_range(start, end)
> ^^^ should be vma
Yes, I'll change it to:

#define flush_cache_range(vma, start, end) cache_wbinv_all()

I'll improve it later after testing.

>
> > +#endif /* __ABI_CSKY_CACHEFLUSH_H */
>
>
> > diff --git a/arch/csky/abiv1/inc/abi/tlb.h b/arch/csky/abiv1/inc/abi/tlb.h
> > new file mode 100644
> > index 0000000..6d461f3
> > --- /dev/null
> > +++ b/arch/csky/abiv1/inc/abi/tlb.h
> > @@ -0,0 +1,12 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
> > +
> > +#ifndef __ABI_CSKY_TLB_H
> > +#define __ABI_CSKY_TLB_H
> > +
> > +#define tlb_start_vma(tlb, vma) \
> > + do { \
> > + if (!tlb->fullmm) \
> > + cache_wbinv_all(); \
> > + } while (0)
> > +#endif /* __ABI_CSKY_TLB_H */
>
> That should be:
>
> if (!tlb->fullmm)
> flush_cache_range(vma, vma->vm_start, vma->vm_end);
>
> Because as per the whole abiv1 vs abiv2, you don't need write back
> invalidation for v2 at all, also, you only need to invalidate the vma
> range, no reason to shoot everything down.
>
> Also, I'll be shortly removing this:
>
> https://lkml.kernel.org/r/20180913092812.071989585@xxxxxxxxxxxxx
Ok, I'll follow the rules.
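
Something like this for abiv1, then (just a sketch, not tested yet):

#define tlb_start_vma(tlb, vma) \
	do { \
		if (!(tlb)->fullmm) \
			flush_cache_range(vma, (vma)->vm_start, (vma)->vm_end); \
	} while (0)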

>
> > diff --git a/arch/csky/abiv2/inc/abi/cacheflush.h b/arch/csky/abiv2/inc/abi/cacheflush.h
> > new file mode 100644
> > index 0000000..756beb7
> > --- /dev/null
> > +++ b/arch/csky/abiv2/inc/abi/cacheflush.h
> > @@ -0,0 +1,40 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +
> > +#ifndef __ABI_CSKY_CACHEFLUSH_H
> > +#define __ABI_CSKY_CACHEFLUSH_H
> > +
> > +/* Keep includes the same across arches. */
> > +#include <linux/mm.h>
> > +
> > +/*
> > + * The cache doesn't need to be flushed when TLB entries change when
> > + * the cache is mapped to physical memory, not virtual memory
> > + */
> > +#define flush_cache_all() do { } while (0)
> > +#define flush_cache_mm(mm) do { } while (0)
> > +#define flush_cache_dup_mm(mm) do { } while (0)
> > +#define flush_cache_range(vma, start, end) do { } while (0)
> ^^^ like here..
#define flush_cache_range(vma, start, end) \
	do { \
		if ((vma)->vm_flags & VM_EXEC) \
			icache_inv_all(); \
	} while (0)

Since the dcache on abiv2 is physically indexed, only the icache should need invalidating here, and only for executable mappings.

Hmm?

I'll improve it later after testing.

Best Regards
Guo Ren