Re: [PATCH 3/4] x86,asm: Re-work smp_store_mb()
From: Linus Torvalds
Date: Tue Jan 12 2016 - 16:37:45 EST
On Tue, Jan 12, 2016 at 12:59 PM, Andy Lutomirski <luto@xxxxxxxxxxxxxx> wrote:
>
> Here's an article with numbers:
>
> http://shipilev.net/blog/2014/on-the-fence-with-dependencies/
Well, that's with a busy loop and one particular set of code
generation. It doesn't show the "oops, the deeper stack isn't even in
the cache any more due to call chains" issue.
But yes:
> I think they're suggesting using a negative offset, which is safe as
> long as it doesn't page fault, even though we have the redzone
> disabled.
I think a negative offset might work very well, partly exactly
*because* we have the redzone disabled: we know that inside the
kernel, we'll never have any live stack frame accesses under the
stack pointer, so "-4(%rsp)" sounds good to me. There should never be
any pending writes to that word sitting in the store buffer (so the
locked access shouldn't hit a store-forwarding stall): even if the
word *was* live as part of some deeper frame, the value would have
been read off first.
Yeah, it potentially does extend the stack cache footprint by another
4 bytes, but that sounds very benign.
So it might be worth trying to switch the "mfence" over to "lock ;
addl $0,-4(%rsp)" in the kernel for x86-64, and to remove the mfence
alternative for x86-32.
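Something like this, maybe (a completely untested sketch; the real
barrier.h is more involved, the exact 32-bit form here is a guess,
and whether plain mb() for MMIO can also lose the mfence is a
separate question):

/*
 * Sketch: use a locked no-op RMW on the stack as the full memory
 * barrier instead of mfence on x86-64, and drop the mfence
 * alternative on x86-32.
 */
#ifdef CONFIG_X86_64
/* was: asm volatile("mfence" ::: "memory") */
#define mb()	asm volatile("lock; addl $0,-4(%%rsp)" ::: "memory", "cc")
#else
/* was: alternative("lock; addl $0,0(%%esp)", "mfence", X86_FEATURE_XMM2) */
#define mb()	asm volatile("lock; addl $0,0(%%esp)" ::: "memory", "cc")
#endif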
I'd still want to see somebody actually benchmark it. I doubt the
difference is noticeable, but making changes because you think they
might save a few cycles, without even measuring it, is just wrong.
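Even a stupid user-space loop would be a start, something like the
sketch below (the iteration count is pulled out of thin air, and yes,
this is exactly the kind of busy-loop test I complained about above,
so it only tells part of the story):

#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>

#define ITERS 100000000UL

int main(void)
{
	uint64_t t0, t1;
	unsigned long i;

	/* mfence in a tight loop */
	t0 = __rdtsc();
	for (i = 0; i < ITERS; i++)
		asm volatile("mfence" ::: "memory");
	t1 = __rdtsc();
	printf("mfence:    %.2f cycles/iter\n", (double)(t1 - t0) / ITERS);

	/*
	 * Locked no-op add just below the stack pointer. Adding zero
	 * leaves the memory unchanged, so this is safe even in user
	 * space where the red zone below %rsp is in live use.
	 */
	t0 = __rdtsc();
	for (i = 0; i < ITERS; i++)
		asm volatile("lock; addl $0,-4(%%rsp)" ::: "memory", "cc");
	t1 = __rdtsc();
	printf("lock addl: %.2f cycles/iter\n", (double)(t1 - t0) / ITERS);

	return 0;
}

The interesting numbers would then come from real call chains with
cold stack lines, which a loop like this won't show.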
Linus