Re: [PATCH v4 0/3] KASLR feature to randomize each loadable module
From: Edgecombe, Rick P
Date: Thu Aug 30 2018 - 14:25:10 EST
On Wed, 2018-08-29 at 19:27 -0700, Alexei Starovoitov wrote:
> On Wed, Aug 29, 2018 at 03:59:36PM -0700, Rick Edgecombe wrote:
> > Changes for V3:
> >  - Code cleanup based on internal feedback. (thanks to Dave Hansen and
> >    Andriy Shevchenko)
> >  - Slight refactor of existing algorithm to more cleanly live alongside new
> >    one.
> >  - BPF synthetic benchmark
> I don't see this benchmark in this patch set.
> Could you prepare it as a test in tools/testing/selftests/bpf/ ?
> so we can double check what is being tested and run it regularly
> like we do for all other tests in there.
Sure.
There were two benchmarks I ran with BPF in mind. One was timing the
module_alloc() function in different scenarios, to make sure there were no
slowdowns for insertions.
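The first one is roughly along these lines (a simplified sketch of the idea,
not the exact harness; it assumes the timing code is built into the kernel,
since module_alloc() and module_memfree() are not exported, and the allocation
size and count are arbitrary):

/*
 * Sketch: time repeated module_alloc() calls to look for insertion
 * slowdowns.  Assumes kernel context; not the exact benchmark I used.
 */
#include <linux/moduleloader.h>
#include <linux/ktime.h>
#include <linux/mm.h>
#include <linux/printk.h>

#define ALLOC_SIZE	(4 * 1024)	/* stand-in for a small JIT image */
#define NR_ALLOCS	10000		/* arbitrary */

static void time_module_alloc(void)
{
	void **ptrs;
	ktime_t start, end;
	int i;

	ptrs = kvmalloc_array(NR_ALLOCS, sizeof(*ptrs), GFP_KERNEL);
	if (!ptrs)
		return;

	start = ktime_get();
	for (i = 0; i < NR_ALLOCS; i++)
		ptrs[i] = module_alloc(ALLOC_SIZE);
	end = ktime_get();

	pr_info("module_alloc: %lld ns for %d allocations\n",
		ktime_to_ns(ktime_sub(end, start)), NR_ALLOCS);

	for (i = 0; i < NR_ALLOCS; i++)
		if (ptrs[i])
			module_memfree(ptrs[i]);
	kvfree(ptrs);
}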
The other checked whether the fragmentation caused any measurable runtime
performance impact:
"For runtime performance, a synthetic benchmark was run that does 5000000 BPF
JIT invocations each, from varying numbers of parallel processes, while the
kernel compiles sharing the same CPU to stand in for the cache impact of a real
workload. The seccomp filter invocations were just Jann Horn's seccomp filtering
test from this thread http://openwall.com/lists/kernel-hardening/2018/07/18/2,
except non-real time priority. The kernel was configured with KPTI and
retpoline, and pcid was disabled. There wasn't any significant difference
between the new and the old."
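The runtime side boils down to something like this (a simplified stand-in
only: the real filter was Jann Horn's test from the link above, and the
parallel processes and background kernel compile are left out):

/*
 * Simplified sketch of the runtime benchmark: install a trivial
 * always-allow seccomp filter, then time a large number of syscalls
 * going through the JITed filter.  Not the exact benchmark quoted above.
 */
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <linux/filter.h>
#include <linux/seccomp.h>

int main(void)
{
	struct sock_filter insns[] = {
		{ BPF_RET | BPF_K, 0, 0, SECCOMP_RET_ALLOW },
	};
	struct sock_fprog prog = {
		.len = sizeof(insns) / sizeof(insns[0]),
		.filter = insns,
	};
	const long iterations = 5000000;
	struct timespec start, end;
	long long ns;
	long i;

	if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) ||
	    prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog)) {
		perror("prctl");
		return 1;
	}

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < iterations; i++)
		syscall(SYS_getpid);
	clock_gettime(CLOCK_MONOTONIC, &end);

	ns = (end.tv_sec - start.tv_sec) * 1000000000LL +
	     (end.tv_nsec - start.tv_nsec);
	printf("%ld filtered syscalls in %lld ns\n", iterations, ns);
	return 0;
}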
From what I know about the bpf kselftests, the first one would probably be a
better fit. Not sure the second one would fit, with the kernel compile sharing
the same CPU, a special config, and a huge number of processes being
spawned... I can try to add a micro-benchmark instead if that sounds good.
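For the micro-benchmark, something along these lines could work (a rough
sketch only; the iteration count is arbitrary, and it assumes the JIT is
enabled via bpf_jit_enable and enough privileges to load programs, so that
each load/unload cycle goes through the JIT's module_alloc()/module_memfree()
path):

/*
 * Sketch: time repeated loads of a trivial BPF program, which forces
 * a JIT compile (and hence a module_alloc()) on every iteration.
 */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

static int load_trivial_prog(void)
{
	/* r0 = 0; exit; -- smallest valid program */
	struct bpf_insn insns[] = {
		{ .code = BPF_ALU64 | BPF_MOV | BPF_K, .dst_reg = 0, .imm = 0 },
		{ .code = BPF_JMP | BPF_EXIT },
	};
	union bpf_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.prog_type = BPF_PROG_TYPE_SOCKET_FILTER;
	attr.insns = (__u64)(unsigned long)insns;
	attr.insn_cnt = sizeof(insns) / sizeof(insns[0]);
	attr.license = (__u64)(unsigned long)"GPL";

	return syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
}

int main(void)
{
	const int iterations = 100000;	/* arbitrary */
	struct timespec start, end;
	long long ns;
	int i, fd;

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < iterations; i++) {
		fd = load_trivial_prog();
		if (fd < 0) {
			perror("BPF_PROG_LOAD");
			return 1;
		}
		close(fd);
	}
	clock_gettime(CLOCK_MONOTONIC, &end);

	ns = (end.tv_sec - start.tv_sec) * 1000000000LL +
	     (end.tv_nsec - start.tv_nsec);
	printf("%d loads in %lld ns (%lld ns/load)\n",
	       iterations, ns, ns / iterations);
	return 0;
}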
Rick