Re: [PATCH bpf-next v3 0/3] bpf, riscv: use BPF prog pack allocator in BPF JIT

From: Puranjay Mohan
Date: Thu Aug 31 2023 - 09:16:09 EST


On Thu, Aug 31, 2023 at 12:48 AM Daniel Borkmann <daniel@xxxxxxxxxxxxx> wrote:
>
> On 8/30/23 3:54 PM, Palmer Dabbelt wrote:
> > On Wed, 30 Aug 2023 01:18:46 PDT (-0700), daniel@xxxxxxxxxxxxx wrote:
> >> On 8/29/23 12:06 PM, Björn Töpel wrote:
> >>> Puranjay Mohan <puranjay12@xxxxxxxxx> writes:
> >>>
> >>>> Changes in v2 -> v3:
> >>>> 1. Fix maximum width of code in patches from 80 to 100 columns. [All patches]
> >>>> 2. Add checks for ctx->ro_insns == NULL. [Patch 3]
> >>>> 3. Fix the check for the edge case where the amount of text to set is > 2 * page size.
> >>>> [Patch 1 and 2]
> >>>> 4. Add Reviewed-by tags to the patches.
> >>>> 5. Add the selftest results here, using the command ./test_progs on qemu:
> >>>> Without the series: Summary: 336/3162 PASSED, 56 SKIPPED, 90 FAILED
> >>>> With this series: Summary: 336/3162 PASSED, 56 SKIPPED, 90 FAILED
> >>>>
> >>>> Changes in v1 -> v2:
> >>>> 1. Implement a new function, patch_text_set_nosync(), to be used in bpf_arch_text_invalidate().
> >>>> The v1 implementation called patch_text_nosync() in a loop, which was very slow because it
> >>>> called flush_icache_range() for every word. This was found by running the test_tag selftest,
> >>>> which took very long to complete; see the sketch below.
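> >>>>
> >>>> For illustration, a rough sketch of the difference (simplified pseudo-kernel code, not the
> >>>> actual implementation; see patch_text_set_nosync() in patch 2 for the real thing, and note
> >>>> that write_text_range() below is a hypothetical helper):
> >>>>
> >>>> /* v1 approach: patch word by word; patch_text_nosync() flushes the
> >>>>  * icache for every word it writes, so a large range means thousands
> >>>>  * of flushes. */
> >>>> static int text_set_slow(void *addr, u8 c, size_t len)
> >>>> {
> >>>> 	u32 val = (u32)c * 0x01010101;	/* replicate byte into a word */
> >>>> 	size_t i;
> >>>> 	int ret;
> >>>>
> >>>> 	for (i = 0; i < len; i += sizeof(val)) {
> >>>> 		ret = patch_text_nosync(addr + i, &val, sizeof(val));
> >>>> 		if (ret)
> >>>> 			return ret;
> >>>> 	}
> >>>> 	return 0;
> >>>> }
> >>>>
> >>>> /* v2 approach: write the whole range first (write_text_range() is a
> >>>>  * hypothetical stand-in for the fixmap-based copy loop), then flush
> >>>>  * the icache once for the complete range. */
> >>>> static int text_set_fast(void *addr, u8 c, size_t len)
> >>>> {
> >>>> 	int ret = write_text_range(addr, c, len);
> >>>>
> >>>> 	if (!ret)
> >>>> 		flush_icache_range((unsigned long)addr,
> >>>> 				   (unsigned long)addr + len);
> >>>> 	return ret;
> >>>> }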
> >>>>
> >>>> Here is some data showing that v2 fixes the problem:
> >>>>
> >>>> Without this series:
> >>>> root@rv-selftester:~/src/kselftest/bpf# time ./test_tag
> >>>> test_tag: OK (40945 tests)
> >>>>
> >>>> real 7m47.562s
> >>>> user 0m24.145s
> >>>> sys 6m37.064s
> >>>>
> >>>> With this series applied:
> >>>> root@rv-selftester:~/src/selftest/bpf# time ./test_tag
> >>>> test_tag: OK (40945 tests)
> >>>>
> >>>> real 7m29.472s
> >>>> user 0m25.865s
> >>>> sys 6m18.401s
> >>>>
> >>>> BPF programs currently consume one page each on RISC-V. For systems with many BPF
> >>>> programs, this adds significant pressure on the instruction TLB, and high iTLB pressure
> >>>> usually slows down the whole system.
> >>>>
> >>>> Song Liu introduced the BPF prog pack allocator[1] to mitigate the above issue.
> >>>> It packs multiple BPF programs into a single huge page. It is currently only
> >>>> enabled for the x86_64 BPF JIT.
> >>>>
> >>>> I have enabled this allocator in the ARM64 BPF JIT[2]; that series is currently under review.
> >>>>
> >>>> This patch series enables the BPF prog pack allocator for the RISCV BPF JIT.
> >>>> This series needs a patch[3] from the ARM64 series to work.
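> >>>>
> >>>> For context, a rough sketch of how a JIT consumes the pack allocator (the helper names
> >>>> are the ones in kernel/bpf/core.c as of this series, but the flow below is an
> >>>> illustration, not the exact riscv JIT code; jit_fill_hole is the arch callback that
> >>>> fills free space with illegal instructions):
> >>>>
> >>>> struct bpf_binary_header *ro_header, *rw_header;
> >>>> u8 *ro_image, *rw_image;
> >>>>
> >>>> /* Carve a read-only chunk out of a shared huge page, plus a temporary
> >>>>  * writable buffer for the JIT to emit into. */
> >>>> ro_header = bpf_jit_binary_pack_alloc(prog_size, &ro_image, align,
> >>>> 				      &rw_header, &rw_image,
> >>>> 				      jit_fill_hole);
> >>>> if (!ro_header)
> >>>> 	return orig_prog;
> >>>>
> >>>> /* ... emit instructions into rw_image ... */
> >>>>
> >>>> /* Copy rw_image into the read-only huge page (internally via
> >>>>  * bpf_arch_text_copy()) and free the writable buffer. */
> >>>> if (bpf_jit_binary_pack_finalize(prog, ro_header, rw_header))
> >>>> 	return orig_prog;
> >>>>
> >>>> prog->bpf_func = (void *)ro_image;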
> >>>>
> >>>> ======================================================
> >>>> Performance Analysis of prog pack allocator on RISCV64
> >>>> ======================================================
> >>>>
> >>>> Test setup:
> >>>> ===========
> >>>>
> >>>> Host machine: Debian GNU/Linux 11 (bullseye)
> >>>> Qemu Version: QEMU emulator version 8.0.3 (Debian 1:8.0.3+dfsg-1)
> >>>> u-boot-qemu Version: 2023.07+dfsg-1
> >>>> opensbi Version: 1.3-1
> >>>>
> >>>> To test the performance of the BPF prog pack allocator on RISC-V, the stresser
> >>>> tool[4] linked below was built. It loads 8 BPF programs on the system and triggers 5 of
> >>>> them in an infinite loop by making system calls.
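> >>>>
> >>>> The trigger side of the tool is essentially a tight syscall loop; a minimal self-contained
> >>>> sketch (the choice of syscall here is illustrative, the real programs and attach points are
> >>>> in the tool [4]):
> >>>>
> >>>> /* The BPF programs are attached to syscall entry points, so issuing
> >>>>  * syscalls in a tight loop keeps the JITed programs executing. */
> >>>> #include <unistd.h>
> >>>> #include <sys/syscall.h>
> >>>>
> >>>> int main(void)
> >>>> {
> >>>> 	for (;;)
> >>>> 		syscall(SYS_getpid); /* illustrative syscall choice */
> >>>> 	return 0;
> >>>> }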
> >>>>
> >>>> The runner script starts 20 instances of the above, loading 8*20=160 BPF
> >>>> programs on the system, 5*20=100 of which are constantly being triggered.
> >>>> The script is passed a command, which is then run in this environment.
> >>>>
> >>>> The script was run with the following perf command:
> >>>> ./run.sh "perf stat -a \
> >>>> -e iTLB-load-misses \
> >>>> -e dTLB-load-misses \
> >>>> -e dTLB-store-misses \
> >>>> -e instructions \
> >>>> --timeout 60000"
> >>>>
> >>>> The output of this command, before and after enabling the BPF prog pack
> >>>> allocator, is discussed below.
> >>>>
> >>>> The tests were run on qemu-system-riscv64 with 8 CPUs and 16G of memory. The rootfs
> >>>> was created using Björn's riscv-cross-builder[5] docker container linked below.
> >>>>
> >>>> Results
> >>>> =======
> >>>>
> >>>> Before enabling prog pack allocator:
> >>>> ------------------------------------
> >>>>
> >>>> Performance counter stats for 'system wide':
> >>>>
> >>>>       4939048      iTLB-load-misses
> >>>>       5468689      dTLB-load-misses
> >>>>        465234      dTLB-store-misses
> >>>> 1441082097998      instructions
> >>>>
> >>>> 60.045791200 seconds time elapsed
> >>>>
> >>>> After enabling prog pack allocator:
> >>>> -----------------------------------
> >>>>
> >>>> Performance counter stats for 'system wide':
> >>>>
> >>>>       3430035      iTLB-load-misses
> >>>>       5008745      dTLB-load-misses
> >>>>        409944      dTLB-store-misses
> >>>> 1441535637988      instructions
> >>>>
> >>>> 60.046296600 seconds time elapsed
> >>>>
> >>>> Improvements in metrics
> >>>> =======================
> >>>>
> >>>> The iTLB-load-misses were expected to decrease, as all the BPF programs are now kept
> >>>> in a single huge page instead of one page per program as before.
> >>>>
> >>>> --------------------------------------------
> >>>> The improvement in iTLB-load-misses: -30.5 %
> >>>> --------------------------------------------
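> >>>>
> >>>> (Computed from the counters above: (3430035 - 4939048) / 4939048 = -1509013 / 4939048,
> >>>> i.e. about -30.5 %.)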
> >>>>
> >>>> I repeated this experiment more than 100 times in different setups, and the
> >>>> improvement was always greater than 30%.
> >>>>
> >>>> This patch series is boot-tested on the StarFive VisionFive 2 board[6].
> >>>> The performance analysis was not done on the board because it doesn't
> >>>> expose iTLB-load-misses and related counters. The stresser program was run on the
> >>>> board to test the loading and unloading of BPF programs.
> >>>>
> >>>> [1] https://lore.kernel.org/bpf/20220204185742.271030-1-song@xxxxxxxxxx/
> >>>> [2] https://lore.kernel.org/all/20230626085811.3192402-1-puranjay12@xxxxxxxxx/
> >>>> [3] https://lore.kernel.org/all/20230626085811.3192402-2-puranjay12@xxxxxxxxx/
> >>>> [4] https://github.com/puranjaymohan/BPF-Allocator-Bench
> >>>> [5] https://github.com/bjoto/riscv-cross-builder
> >>>> [6] https://www.starfivetech.com/en/site/boards
> >>>>
> >>>> Puranjay Mohan (3):
> >>>> riscv: extend patch_text_nosync() for multiple pages
> >>>> riscv: implement a memset like function for text
> >>>> bpf, riscv: use prog pack allocator in the BPF JIT
> >>>
> >>> Thank you! For the series:
> >>>
> >>> Acked-by: Björn Töpel <bjorn@xxxxxxxxxx>
> >>> Tested-by: Björn Töpel <bjorn@xxxxxxxxxxxx>
> >>>
> >>> @Alexei @Daniel This series depends on a core BPF patch from the Arm
> >>> series [3].
> >>>
> >>> @Palmer LMK if you have any concerns taking the RISC-V text patching
> >>> stuff via the BPF tree.
> >>
> >> Palmer, did the riscv PR already go to Linus?
> >
> > Not yet, I usually send on Friday mornings -- and I also generally send two, as there are some stragglers/fixes for the second week. I'm fine taking it (Bjorn just poked me); can someone provide a base commit? Bjorn says it depends on something in Linus' tree, so I'll just pick it up as a straggler for next week.
>
> Okay, sgtm.
>
> > Also, do you mind sending an Ack?
>
> Björn / Puranjay, just to clarify: since the arm64 series did not land, you are referring
> to this one as a dependency [0], right? Meaning, you'd route [0] + this series via the riscv
> PR to Linus during this merge window.
>
> If yes, could one of you send the complete 4-patch series ([0] plus this series, with the
> prior Acks collected) to both the bpf and riscv lists, with the small request to extend the
> commit description in [0] a bit to better document the implications of the change itself for
> other JITs? After a final look, and if BPF CI goes through, we can then ack as well and
> unblock the routing.
>
> Thanks,
> Daniel
>
> [0] https://lore.kernel.org/all/20230626085811.3192402-2-puranjay12@xxxxxxxxx/

Hi Daniel,

I have sent v4[0] of this series with the core patch included.

[0] https://lore.kernel.org/all/20230831131229.497941-1-puranjay12@xxxxxxxxx/

Thanks.
Puranjay