Re: [GIT PULL] vsprintf: Replace memory barrier with static_key for random_ptr_key update

From: Tobin C. Harding
Date: Wed May 16 2018 - 17:08:28 EST


On Wed, May 16, 2018 at 09:12:41AM -0400, Steven Rostedt wrote:
>
> Linus,
>
> The memory barrier usage in updating the random ptr hash for %p in
> vsprintf is incorrect. Instead of adding a read memory barrier to
> vsprintf(), which would slightly degrade a commonly used kernel
> function just to solve a very unlikely race condition that can only
> happen at boot up, change the code from using a variable branch to a
> static_branch. Not only does this solve the race condition, it also
> improves the performance of vsprintf() by removing the conditional
> branch that is only needed at boot.
>
>
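As an aside for anyone following along: the static_key pattern being
described ends up looking roughly like the sketch below. This is
abridged from my reading of the patch (the authoritative version is in
the commit quoted further down), so take the exact shape with a grain
of salt:

    /* Branch starts out true: %p prints "(ptrval)" until the key is set. */
    static DEFINE_STATIC_KEY_TRUE(not_filled_random_ptr_key);
    static siphash_key_t ptr_key __read_mostly;

    static void enable_ptr_key_workfn(struct work_struct *work)
    {
            get_random_bytes(&ptr_key, sizeof(ptr_key));
            /* static_branch_disable() must run from preemptible context. */
            static_branch_disable(&not_filled_random_ptr_key);
    }

    static DECLARE_WORK(enable_ptr_key_work, enable_ptr_key_workfn);

    static void fill_random_ptr_key(struct random_ready_callback *unused)
    {
            /* The random-ready callback can fire in interrupt context. */
            queue_work(system_unbound_wq, &enable_ptr_key_work);
    }
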
> Please pull the latest trace-v4.17-rc5-vsprintf tree, which can be found at:
>
>
> git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace.git
> trace-v4.17-rc5-vsprintf
>
> Tag SHA1: 3e2a2dfc8987e9a2b4e185b65a9b48c374c80791
> Head SHA1: 85f4f12d51397f1648e1f4350f77e24039b82d61
>
>
> Steven Rostedt (VMware) (1):
> vsprintf: Replace memory barrier with static_key for random_ptr_key update
>
> ----
> lib/vsprintf.c | 26 +++++++++++++++-----------
> 1 file changed, 15 insertions(+), 11 deletions(-)
> ---------------------------
> commit 85f4f12d51397f1648e1f4350f77e24039b82d61
> Author: Steven Rostedt (VMware) <rostedt@xxxxxxxxxxx>
> Date: Tue May 15 22:24:52 2018 -0400
>
> vsprintf: Replace memory barrier with static_key for random_ptr_key update
>
> Reviewing Tobin's patches for getting pointers out early before
> entropy has been established, I noticed that there's a lone smp_mb() in
> the code. As with most lone memory barriers, this one appears to be
> incorrectly used.
>
> We currently have essentially this:
>
>     get_random_bytes(&ptr_key, sizeof(ptr_key));
>     /*
>      * have_filled_random_ptr_key==true is dependent on get_random_bytes().
>      * ptr_to_id() needs to see have_filled_random_ptr_key==true
>      * after get_random_bytes() returns.
>      */
>     smp_mb();
>     WRITE_ONCE(have_filled_random_ptr_key, true);
>
> And later we have:
>
>     if (unlikely(!have_filled_random_ptr_key))
>             return string(buf, end, "(ptrval)", spec);
>
>     /* Missing memory barrier here. */
>
>     hashval = (unsigned long)siphash_1u64((u64)ptr, &ptr_key);
>
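(The textbook repair for that marked gap would be a read barrier
pairing with the writer's smp_mb(), something like the sketch below.
That is deliberately not what the patch does, since it would add an
smp_rmb() to the fast path of every %p:

    if (unlikely(!READ_ONCE(have_filled_random_ptr_key)))
            return string(buf, end, "(ptrval)", spec);

    /* Pairs with the smp_mb() after get_random_bytes() on the writer side. */
    smp_rmb();

    hashval = (unsigned long)siphash_1u64((u64)ptr, &ptr_key);

The static_branch conversion removes the need for any barrier here.)
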
> As the CPU can perform speculative loads, we could have a situation
> like the following:
>
>    CPU0                             CPU1
>    ----                             ----
>                                     load ptr_key = 0
>    store ptr_key = random
>    smp_mb()
>    store have_filled_random_ptr_key
>
>                                     load have_filled_random_ptr_key = true
>
>    BAD BAD BAD! (you're so bad!)

The additional clarification in this line, added for the pull request, is pure gold.
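
For completeness, the reader side after the conversion boils down to
roughly this (again abridged from my reading of the patch):

    if (static_branch_unlikely(&not_filled_random_ptr_key))
            return string(buf, end, "(ptrval)", spec);

    hashval = (unsigned long)siphash_1u64((u64)ptr, &ptr_key);

As I understand it, the race disappears because there is no flag load
left to reorder: static_branch_disable() is only called after
get_random_bytes() has filled ptr_key, and the jump-label code patching
machinery synchronizes all CPUs, so any CPU that falls through to the
hash is guaranteed to see the filled key.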


Tobin