Re: [PATCH 1/1] riscv: prevent pipeline stall in __asm_to/copy_from_user
From: Akira Tsukamoto
Date: Wed Jun 16 2021 - 06:24:28 EST
On Sat, Jun 12, 2021 at 9:17 PM David Laight <David.Laight@xxxxxxxxxx> wrote:
>
> From: Palmer Dabbelt
> > Sent: 12 June 2021 05:05
> ...
> > > I don't know the architecture, but unless there is a stunning
> > > pipeline delay for memory reads a simple interleaved copy
> > > may be fast enough.
> > > So something like:
> > > a = src[0];
> > > do {
> > >         b = src[1];
> > >         src += 2;
> > >         dst[0] = a;
> > >         dst += 2;
> > >         a = src[0];
> > >         dst[-1] = b;
> > > } while (src != src_end);
> > > dst[0] = a;
> > >
> > > It is probably worth doing benchmarks of the copy loop
> > > in userspace.
> >
> > I also don't know this microarchitecture, but this seems like a pretty
> > wacky load-use delay.
>
> It is quite sane really.
>
> While many cpu can use the result of the ALU in the next clock
> (there is typically special logic to bypass the write to the
> register file) this isn't always true for memory (cache) reads.
> It may even be that the read itself takes more than one cycle
> (probably pipelined so they can happen every cycle).
>
> So a simple '*dest = *src' copy loop suffers the 'memory read'
> penalty every iteration.
> An out-of-order execution unit that uses register renaming
> (like most x86) will just defer the writes until the data
> is available - so isn't impacted.
>
> Interleaving the reads and writes means you issue a read
> while waiting for the value from the previous read to
> get to the register file - and be available for the
> write instruction.
>
> Moving the 'src/dst += 2' into the loop gives a reasonable
> chance that they are executed in parallel with a memory
> access (on an in-order superscalar cpu) rather than bunching
> them up at the end where they start adding clocks.
>
> If your cpu can only do one memory read or one memory write
> per clock then you only need it to execute two instructions
> per clock for the loop above to run at maximum speed.
> Even with a 'read latency' of two clocks.
> (Especially since riscv has 'mips like' 'compare and branch'
> instructions that probably execute in 1 clock when predicted
> taken.)
>
> If the cpu can do a read and a write in one clock then the
> loop may still run at the maximum speed.
> For this to happen you do need the read data to be available
> the next clock and to run load, store, add and compare instructions
> in a single clock.
> Without that much parallelism it might be necessary to add
> an extra read/write interleave (and maybe a 4th to avoid a
> divide by three).
This is turning into a computer architecture discussion, but I agree
with David's reasoning that a simple interleaved copy would speed up
on the same hardware.
I used to get this kind of confirmation from the cpu designers when we
were working on the same floor.

I am fine either way. I used the simple unrolling just because all the
other existing copy functions, for riscv and for other cpus, do the same,
and I was too lazy to port the interleaved C memcpy to assembly.
As I wrote in the cover letter, the reason for using assembler inside
uaccess.S is that the page faults taken by __asm_to/copy_from_user()
must be handled manually inside the functions.
Akira
>
> The 'elephant in the room' is a potential additional stall
> on reads if the previous cycle is a write to the same cache area.
> For instance the nios2 (a soft cpu for altera fpga) can do
> back to back reads or back to back writes, but since the reads
> are done speculatively (regardless of the opcode!) they have to
> be deferred when a write is using the memory block.
>
> David
>
> -
> Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
> Registration No: 1397386 (Wales)