Re: [RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
From: James Bottomley
Date: Tue Mar 12 2019 - 18:03:02 EST
On Tue, 2019-03-12 at 17:53 -0400, Andrea Arcangeli wrote:
> On Tue, Mar 12, 2019 at 02:19:15PM -0700, James Bottomley wrote:
> > I mean in the sequence
> >
> > flush_dcache_page(page);
> > flush_dcache_page(page);
> >
> > The first flush_dcache_page did all the work and the second is a
> > tightly pipelined no-op. That's what I mean by there not really
> > being a double hit.
>
> Ok I wasn't sure it was clear there was a double (profiling) hit on
> that function.
>
> void flush_kernel_dcache_page_addr(void *addr)
> {
>         unsigned long flags;
>
>         flush_kernel_dcache_page_asm(addr);
>         purge_tlb_start(flags);
>         pdtlb_kernel(addr);
>         purge_tlb_end(flags);
> }
>
> #define purge_tlb_start(flags)  spin_lock_irqsave(&pa_tlb_lock, flags)
> #define purge_tlb_end(flags)    spin_unlock_irqrestore(&pa_tlb_lock, flags)
>
> You've got a system-wide spinlock in there that won't just go away
> the second time. So it's a bit more than a tightly pipelined "no-op".
Well, yes, guilty as charged. That particular bit of code is a
workaround for an N class system which has an internal cross-CPU
coherency bus but helpfully crashes if two different CPUs try to use it
at once. Since the N class was a huge power hog, I thought they'd all
been decommissioned and this was an irrelevant anachronism (or at the
very least runtime patched).
> Your logic of adding the flush on kunmap makes sense, all I'm saying
> is that it's sacrificing some performance for safety. You asked
> "optimized what"; I meant to optimize away all the above quoted code
> that will end up running twice for each vhost set_bit when it should
> run just once like in other archs. And it clearly paid off until now
> (until now it ran just once and it was the only safe one).
I'm sure there must be workarounds elsewhere in the other arch code,
otherwise things like this, which appear all over drivers/, wouldn't
work:
drivers/scsi/isci/request.c:1430

        kaddr = kmap_atomic(page);
        memcpy(kaddr + sg->offset, src_addr, copy_len);
        kunmap_atomic(kaddr);
The sequence dirties the kernel virtual address but doesn't flush
before doing kunmap. There are hundreds of other examples, which is
why I think adding flush_kernel_dcache_page() is already a lost cause.
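For illustration, the explicitly flushed variant of that sequence would
look something like this (a sketch only; note flush_kernel_dcache_page()
takes the struct page, not the kernel address):

        kaddr = kmap_atomic(page);
        memcpy(kaddr + sg->offset, src_addr, copy_len);
        /* flush the kernel alias before dropping the mapping */
        flush_kernel_dcache_page(page);
        kunmap_atomic(kaddr);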
> Before we can leverage your idea to flush the dcache on kunmap in
> common code without having to sacrifice performance in arch code,
> we'd need to change all other archs to add the cache flushes on
> kunmap too, and then remove the cache flushes from the other places
> like copy_page or we'd waste CPU. Then you'd have the best of both
> worlds: no double flush, and kunmap would be enough.
Actually copy_user_page() is unused in the main kernel. The big
problem is copy_user_highpage(), but that's mostly highly optimised by
the VIPT architectures (in other words, you can fiddle with kmap
without impacting it).
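If memory serves, the generic fallback in include/linux/highmem.h
(only used when the arch doesn't define __HAVE_ARCH_COPY_USER_HIGHPAGE)
is roughly:

        static inline void copy_user_highpage(struct page *to, struct page *from,
                unsigned long vaddr, struct vm_area_struct *vma)
        {
                char *vfrom, *vto;

                vfrom = kmap_atomic(from);
                vto = kmap_atomic(to);
                copy_user_page(vto, vfrom, vaddr, to);
                kunmap_atomic(vto);
                kunmap_atomic(vfrom);
        }

The VIPT architectures provide their own copy_user_highpage(), so they
never go through this kmap_atomic path in the first place.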
James