Re: clean up and streamline probe_kernel_* and friends v2
From: Daniel Borkmann
Date: Wed May 13 2020 - 19:04:46 EST
On 5/13/20 6:00 PM, Christoph Hellwig wrote:
> Hi all,
>
> this series starts cleaning up the safe kernel and user memory probing
> helpers in mm/maccess.c, and then allows architectures to implement
> the kernel probing without overriding the address space limit and
> temporarily allowing access to user memory. It then switches x86
> over to this new mechanism by reusing the unsafe_* uaccess logic.
>
> This version also switches to the saner copy_{from,to}_kernel_nofault
> naming suggested by Linus.
>
> I kept the x86 helpers as-is without calling unsafe_{get,put}_user as
> that avoids a number of hard to trace casts, and it will still work
> with the asm-goto based version easily.
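For reference, the renamed interface is used roughly as in the minimal sketch below. The prototypes are the ones exported via include/linux/uaccess.h; the wrapper function peek_kernel_word() is purely a hypothetical example, not part of the series.

```c
#include <linux/uaccess.h>
#include <linux/errno.h>

/* Hypothetical example: safely peek at a kernel address that may fault. */
static int peek_kernel_word(const void *addr, unsigned long *out)
{
	unsigned long val;

	/*
	 * copy_from_kernel_nofault() returns 0 on success and a negative
	 * error if the source address faults; it never touches user
	 * address space.
	 */
	if (copy_from_kernel_nofault(&val, addr, sizeof(val)))
		return -EFAULT;

	*out = val;
	return 0;
}
```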
Aside from the comments on the list, the series looks reasonable to me. For BPF,
the bpf_probe_read() helper would be slightly penalized when probing user
memory, since we now try copy_from_kernel_nofault() first and only fall back to
copy_from_user_nofault() if that fails. That overhead seems small enough not to
matter much, and we have the newer bpf_probe_read_kernel() and
bpf_probe_read_user() helpers that BPF programs should be using instead anyway,
so I think it's okay.
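Concretely, the fallback ordering described above looks roughly like the sketch below. This is only an illustration of the ordering, not the actual dispatch in kernel/trace/bpf_trace.c, and legacy_probe_read() is a made-up name.

```c
#include <linux/uaccess.h>
#include <linux/string.h>

/*
 * Illustrative sketch of the legacy bpf_probe_read() fallback order:
 * try the kernel copy first, and only on failure retry the same
 * address as a user pointer.  Not the actual kernel implementation.
 */
static int legacy_probe_read(void *dst, size_t size, const void *unsafe_ptr)
{
	int ret;

	ret = copy_from_kernel_nofault(dst, unsafe_ptr, size);
	if (ret < 0)
		ret = copy_from_user_nofault(dst,
				(__force const void __user *)unsafe_ptr, size);
	if (ret < 0)
		memset(dst, 0, size);	/* zero the buffer on error */
	return ret;
}
```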
For patches 14 and 15, do you have a rough idea of the performance gain from
the new probe_kernel_read_loop() + arch_kernel_read() approach?
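(For readers who haven't looked at those patches: the idea is a chunked copy
loop built around an arch-provided read primitive. A rough, non-authoritative
sketch of that shape, with arch_kernel_read() standing in for whatever
primitive the series actually defines, might look like this.)

```c
#include <linux/uaccess.h>
#include <linux/types.h>

/*
 * Sketch only: copy the largest naturally sized unit the remaining
 * length allows, using an arch read primitive that branches to
 * err_label on a fault.  The real macro and primitive live in the
 * series, not here.
 */
#define probe_kernel_read_loop(dst, src, len, type, err_label)		\
	while (len >= sizeof(type)) {					\
		arch_kernel_read(dst, src, type, err_label);		\
		dst += sizeof(type);					\
		src += sizeof(type);					\
		len -= sizeof(type);					\
	}

long copy_from_kernel_nofault(void *dst, const void *src, size_t size)
{
	pagefault_disable();
	probe_kernel_read_loop(dst, src, size, u64, Efault);
	probe_kernel_read_loop(dst, src, size, u32, Efault);
	probe_kernel_read_loop(dst, src, size, u16, Efault);
	probe_kernel_read_loop(dst, src, size, u8, Efault);
	pagefault_enable();
	return 0;
Efault:
	pagefault_enable();
	return -EFAULT;
}
```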
Thanks,
Daniel