Re: [PATCH v6 0/7] refactor file signing program
From: Masahiro Yamada
Date: Mon Aug 07 2023 - 14:36:27 EST
On Mon, Aug 7, 2023 at 5:17 PM Shreenidhi Shedi <yesshedi@xxxxxxxxx> wrote:
>
> On 07/08/23 07:53, Masahiro Yamada wrote:
> > On Thu, Jun 1, 2023 at 6:08 PM Greg KH <gregkh@xxxxxxxxxxxxxxxxxxx> wrote:
> >>
> >> On Thu, Jun 01, 2023 at 02:33:23PM +0530, Shreenidhi Shedi wrote:
> >>> On Wed, 31-May-2023 22:20, Greg KH wrote:
> >>>> On Wed, May 31, 2023 at 09:01:24PM +0530, Shreenidhi Shedi wrote:
> >>>>> On Wed, 31-May-2023 20:08, Greg KH wrote:
> >>>>>> On Tue, Apr 25, 2023 at 04:14:49PM +0530, Shreenidhi Shedi wrote:
> >>>>>>> On Wed, 22-Mar-2023 01:03, Shreenidhi Shedi wrote:
> >>>>>>> Can you please review the latest patch series? I think I have addressed your
> >>>>>>> concerns. Thanks.
> >>>>>>
> >>>>>> The big question is, "who is going to use these new features"? This
> >>>>>> tool is only used by the in-kernel build scripts, and if they do not
> >>>>>> take advantage of these new options you have added, why are they needed?
> >>>>>>
> >>>>>> thanks,
> >>>>>>
> >>>>>> greg k-h
> >>>>>
> >>>>> Hi Greg,
> >>>>>
> >>>>> Thanks for the response.
> >>>>>
> >>>>> We use it in VMware Photon OS. Following is the link for the same.
> >>>>> https://github.com/vmware/photon/blob/master/SPECS/linux/spec_install_post.inc#L4
> >>>>>
> >>>>> If this change goes in, it will give a slight push to our build performance.
> >>>>
> >>>> What exactly do you mean by "slight push"?
> >>>
> >>> Instead of invoking the signing tool binary for each module, we can pass
> >>> modules in bulk, and it will reduce the build time by a couple of seconds.
> >>
> >> Then why not modify the in-kernel build system to also do this, allowing
> >> everyone to save time and money (i.e. energy)?
> >>
> >> Why keep the build savings to yourself?
> >>
> >> thanks,
> >>
> >> greg k-h
> >
> >
> > If I understand correctly,
> > "sign-file: add support to sign modules in bulk"
> > is the only benefit in the patchset.
> >
> > I tested the bulk option, but I did not see build savings.
> >
> >
> >
> > My evaluation:
> > 1. 'make allmodconfig all', then 'make modules_install'.
> > (9476 modules installed)
> >
> > 2. I ran 'perf stat' for single signing vs bulk signing
> > (5 runs for each).
> > I changed the -n option in scripts/signfile.sh
> >
> >
> >
> >
> > A. single sign
> >
> > Sign one module per scripts/sign-file invocation.
> >
> > find "${MODULES_PATH}" -name '*.ko' -type f -print0 | \
> > xargs -r -0 -P$(nproc) -x -n1 sh -c "..."
> >
> >
> >
> > Performance counter stats for './signfile-single.sh' (5 runs):
> >
> > 22.33 +- 2.26 seconds time elapsed ( +- 10.12% )
> >
> >
> >
> >
> > B. bulk sign
> >
> > Sign 32 modules per scripts/sign-file invocation.
> >
> > find "${MODULES_PATH}" -name '*.ko' -type f -print0 | \
> > xargs -r -0 -P$(nproc) -x -n32 sh -c "..."
> >
> >
> > Performance counter stats for './signfile-bulk.sh' (5 runs):
> >
> > 24.78 +- 3.01 seconds time elapsed ( +- 12.14% )
> >
> >
> >
> >
> > The bulk option decreases the number of scripts/sign-file process forks,
> > but I did not get even a "slight push".
> >
> >
> >
>
> That's some really interesting data. I'm surprised that in a stand-alone
> run, bulk signing is not performing as expected. Can you give the full
> command you used for bulk signing?
Attached.
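Just to make the shape of the attached bulk script clearer (the attachment
is authoritative; the sh -c body below is only an illustrative sketch that
assumes the patched sign-file accepts multiple module paths, with the usual
in-tree key/cert paths filled in as placeholders):

find "${MODULES_PATH}" -name '*.ko' -type f -print0 | \
    xargs -r -0 -P"$(nproc)" -x -n32 sh -c \
    'scripts/sign-file sha256 certs/signing_key.pem certs/signing_key.x509 "$@"' _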
> A reduced number of forks should
> eventually lead to getting more done in less time.
Not necessarily.
You sign 32 modules per process.
Assume you have 4 CPUs and 160 modules to sign.
For simplicity, assume every module takes the same time to sign.
[Sign 32 modules per process]
CPU0: <=Sign 32 mods=><=Sign 32 mods=> Finish
CPU1: <=Sign 32 mods=><==== Idle ====>
CPU2: <=Sign 32 mods=><==== Idle ====>
CPU3: <=Sign 32 mods=><==== Idle ====>
[Sign 1 module per process]
CPU0: <===Sign 40 mods===> Finish
CPU1: <===Sign 40 mods===>
CPU2: <===Sign 40 mods===>
CPU3: <===Sign 40 mods===>
In your approach, if CPU0 eats the last 32 modules
from the command line, the other CPUs end up
just waiting for CPU0 to complete its batch.
The more CPUs you have, the more CPUs sit idle
during the last portion of the signing stage.
Do not try to save such a small cost of process forking.
Leave the task scheduling to GNU Make.
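To see the tail effect concretely, here is a toy demo, not a benchmark:
160 fake modules on 4 workers, with 'sleep 0.1' standing in for signing
one module.

# One module per process: the 160 sleeps spread evenly over 4 workers,
# so the wall time is about (160 / 4) * 0.1s = 4.0s plus fork overhead.
time sh -c 'seq 160 | xargs -P4 -n1 sh -c "sleep 0.1" _'

# 32 modules per process: 160 / 32 = 5 batches on 4 workers, so the
# fifth batch runs alone at the end; the wall time is about
# 2 * 32 * 0.1s = 6.4s, even though far fewer processes are forked.
time sh -c 'seq 160 | xargs -P4 -n32 sh -c "for m; do sleep 0.1; done" _'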
>
> But I got a ~1.4 second boost when I did "make modules_install".
>
> I gave the data in my other response as well, and I am copying it here
> because it fits the context better.
>
> root@ph5dev:~/linux-6.3.5 # ./test.sh orig
>
> real 0m14.699s
> user 0m55.519s
> sys 0m9.036s
>
> root@ph5dev:~/linux-6.3.5 # ./test.sh new
>
> real 0m13.327s
> user 0m46.885s
> sys 0m6.770s
>
> Here is my test script.
> ```
> #!/bin/bash
>
> set -e
>
> if [ "$1" != "new" ] && [ "$1" != "orig" ]; then
>     echo "invalid arg, ($0 [orig|new])" >&2
>     exit 1
> fi
>
> rm -rf "$PWD/tmp"
>
> # Swap in either the original or the patched copies of the files under test.
> s="scripts/sign-file.c"
> m="scripts/Makefile.modinst"
> fns=("$s" "$m")
>
> for f in "${fns[@]}"; do
>     cp "$f.$1" "$f"
> done
>
> # Rebuild sign-file from the selected source.
> cd scripts
> gcc -o sign-file sign-file.c -lcrypto
> cd -
>
> time make modules_install INSTALL_MOD_PATH="$PWD/tmp" -s -j"$(nproc)"
> ```
Just in case, I repeated your 'time make modules_install' measurement.
allmodconfig based on the mainline.
[Setup]
masahiro@oscar:~$ nproc
24
masahiro@oscar:~$ for ((cpu=0; cpu<$(nproc); cpu++)); do
    sudo sh -c "echo performance > /sys/devices/system/cpu/cpu${cpu}/cpufreq/scaling_governor"
done
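(A quick way to confirm the governor actually took effect, printing one
'filename:value' line per CPU:)

grep . /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor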
[Mainline (3 runs)]
masahiro@oscar:~/ref/linux(master)$ git log -1 --oneline
0108963f14e9 (HEAD -> master, origin/master, origin/HEAD) Merge tag
'v6.5-rc5.vfs.fixes' of
git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs
masahiro@oscar:~/ref/linux(master)$ make -s -j24
masahiro@oscar:~/ref/linux(master)$ wc modules.order
9476 9476 314661 modules.order
masahiro@oscar:~/ref/linux(master)$ rm -rf /tmp/modules; sudo sh -c
"echo 1 > /proc/sys/vm/drop_caches"; time make -s -j24
INSTALL_MOD_PATH=/tmp/modules modules_install
real 0m21.743s
user 1m49.849s
sys 0m18.345s
masahiro@oscar:~/ref/linux(master)$ rm -rf /tmp/modules; sudo sh -c
"echo 1 > /proc/sys/vm/drop_caches"; time make -s -j24
INSTALL_MOD_PATH=/tmp/modules modules_install
real 0m22.246s
user 1m49.303s
sys 0m18.390s
masahiro@oscar:~/ref/linux(master)$ rm -rf /tmp/modules; sudo sh -c
"echo 1 > /proc/sys/vm/drop_caches"; time make -s -j24
INSTALL_MOD_PATH=/tmp/modules modules_install
real 0m28.210s
user 1m46.584s
sys 0m17.678s
[Mainline + your patch set (3 runs)]
masahiro@oscar:~/ref/linux(sig-file)$ git log -9 --oneline
7c95522599c5 (HEAD -> sig-file) kbuild: modinst: do modules_install step by step
1cc212890d9a sign-file: fix do while styling issue
0f9c5c01d378 sign-file: use const with a global string constant
6ddc845c5976 sign-file: improve help message
4573855b7e90 sign-file: add support to sign modules in bulk
f9f0c2c4200c sign-file: move file signing logic to its own function
41cb7c94595d sign-file: inntroduce few new flags to make argument
processing easy.
af85d6eaa481 sign-file: use getopt_long_only for parsing input args
0108963f14e9 (origin/master, origin/HEAD, master) Merge tag
'v6.5-rc5.vfs.fixes' of
git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs
masahiro@oscar:~/ref/linux(sig-file)$ make -s -j24
masahiro@oscar:~/ref/linux(sig-file)$ wc modules.order
9476 9476 314661 modules.order
masahiro@oscar:~/ref/linux(sig-file)$ rm -rf /tmp/modules; sudo sh -c
"echo 1 > /proc/sys/vm/drop_caches"; time make -s -j24
INSTALL_MOD_PATH=/tmp/modules modules_install
real 0m25.931s
user 1m40.787s
sys 0m8.390s
masahiro@oscar:~/ref/linux(sig-file)$ rm -rf /tmp/modules; sudo sh -c
"echo 1 > /proc/sys/vm/drop_caches"; time make -s -j24
INSTALL_MOD_PATH=/tmp/modules modules_install
real 0m33.393s
user 1m41.782s
sys 0m8.390s
masahiro@oscar:~/ref/linux(sig-file)$ rm -rf /tmp/modules; sudo sh -c
"echo 1 > /proc/sys/vm/drop_caches"; time make -s -j24
INSTALL_MOD_PATH=/tmp/modules modules_install
real 0m26.311s
user 1m39.205s
sys 0m8.196s
With your approach, 'sys' is much shorter (roughly 8s vs 18s),
presumably due to the smaller number of process forks.
But 'real' (roughly 28s vs 24s on average over the three runs)
is not attractive.
--
Best Regards
Masahiro Yamada
Attachments:
signfile-single.sh (application/shellscript)
signfile-bulk.sh (application/shellscript)