Re: [PATCH v3 01/13] x86/retpoline: Add initial retpoline support

From: Paul Turner
Date: Fri Jan 05 2018 - 05:28:35 EST


On Thu, Jan 04, 2018 at 07:27:58PM +0000, David Woodhouse wrote:
> On Thu, 2018-01-04 at 10:36 -0800, Alexei Starovoitov wrote:
> >
> > Pretty much.
> > Paul's writeup: https://support.google.com/faqs/answer/7625886
> > tldr: jmp *%r11 gets converted to:
> > call set_up_target;
> > capture_spec:
> >   pause;
> >   jmp capture_spec;
> > set_up_target:
> >   mov %r11, (%rsp);
> >   ret;
> > where capture_spec part will be looping speculatively.
>
> That is almost identical to what's in my latest patch set, except that
> the capture_spec loop has 'lfence' instead of 'pause'.

When choosing this sequence I benchmarked several alternatives here, including
nothing, nops, fences, and other serializing instructions such as cpuid.

The "pause; jmp" sequence proved minutely faster than "lfence;jmp" which is why
it was chosen.

"pause; jmp" 33.231 cycles/call 9.517 ns/call
"lfence; jmp" 33.354 cycles/call 9.552 ns/call

(Timings are for a complete retpolined indirect branch.)
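
(For concreteness, the sequence being timed is roughly the following; this is
a sketch following the structure quoted above, with label names purely
illustrative, and the instruction in the speculation trap being the only
difference between the two variants:)

        call set_up_target          # push a return address; the predicted
                                    # return target is capture_spec below
capture_spec:                       # reached only speculatively
        pause                       # ("lfence" here in the alternative variant)
        jmp capture_spec            # keep speculation spinning in this trap
set_up_target:
        mov %r11, (%rsp)            # overwrite the return address with the
                                    # real indirect branch target
        ret                         # architecturally branches to *%r11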
>
> As Andi says, I'd want to see explicit approval from the CPU architects
> for making that change.

Beyond guaranteeing that speculative execution is constrained, the choice of
sequence here is a performance detail and not one of correctness.

>
> We've already had false starts there - for a long time, Intel thought
> that a much simpler option with an lfence after the register load was
> sufficient, and then eventually worked out that in some rare cases it
> wasn't. While AMD still seem to think it *is* sufficient for them,
> apparently.

As an interesting aside, the fact that speculation proceeds beyond lfence can
be trivially proven using the timings above. In fact, if we substitute only:
"lfence" (with no jmp)

We see:
29.573 cycles/call 8.469 ns/call

Now, the only way for this timing to differ is if the speculation beyond the
lfence was executed differently.
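
(Concretely, a sketch of the "lfence (with no jmp)" variant, with the same
illustrative labels as above; with no loop in the trap, speculation that
passes the fence presumably just falls through into the code below:)

        call set_up_target
capture_spec:
        lfence                      # speculation trap is only the fence;
                                    # no jmp, so speculative execution past
                                    # the fence falls through to set_up_target
set_up_target:
        mov %r11, (%rsp)
        ret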

That said, while this is a negative result, it does suggest that the jmp is
contributing a larger-than-realized cost to our speculative loop. We can likely
shave off some additional time with some unrolling (sketched below). I did try
this previously but did not see results above the noise floor; it seems worth
trying again, so I will take a look tomorrow.
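
(As a sketch of what the unrolled trap might look like, with a purely
illustrative repeat count that has not been measured, the aim being that each
speculative pass spends proportionally less time on the jmp:)

capture_spec:
        pause                       # unrolled so the speculative loop
        pause                       # executes fewer jmps per pause
        pause
        pause
        jmp capture_spec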