Re: [PATCH-RT sched v1 0/2] Optimize the RT group scheduling
From: Michal Koutný
Date: Mon Jul 29 2024 - 05:33:00 EST
On Fri, Jun 28, 2024 at 01:21:54AM GMT, Xavier <xavier_qy@xxxxxxx> wrote:
> The first patch optimizes the enqueue and dequeue of rt_se; the strategy
> employs a bottom-up removal approach.
I haven't read the patches; I only have a remark about the numbers.
> The second patch provides validation for the efficiency improvements made
> by patch 1. The test case counts the number of infinite-loop executions
> across all threads.
>
> original        optimized
>
> 10242794134 10659512784
> 13650210798 13555924695
> 12953159254 13733609646
> 11888973428 11742656925
> 12791797633 13447598015
> 11451270205 11704847480
> 13335320346 13858155642
> 10682907328 10513565749
> 10173249704 10254224697
> 8309259793 8893668653
^^^ This is fine; that's what you measured.
> avg 11547894262 11836376429
But providing averages with that many significant digits is nonsensical
(most of them are noise).
If I put your columns into D (Octave) and estimate the relative standard error of each mean:
(std(D)/sqrt(10)) ./ mean(D)
ans =
0.046626 0.046755
The relative error itself rounds to ~5%, so the measured averages should
be rounded accordingly:
avg 11500000000 11800000000
or even more conservatively
avg 12000000000 12000000000
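For reference, the same estimate in absolute terms (a quick sketch, assuming D
still holds the two columns as above):

% absolute standard error of each 10-sample mean
std(D) / sqrt(10)
% roughly 5.4e+08 and 5.5e+08 for your data

so in the averages, digits below the hundreds of millions carry no information.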
> Run two QEMU emulators simultaneously, one running the original kernel and the
> other running the optimized kernel, and compare the average of the results over
> 10 runs. After optimizing, the number of iterations in the infinite loop increased
> by approximately 2.5%.
Notice that the measured change is on par with the noise in the data (i.e.
it may be accidental). You may need more iterations to get a cleaner
result (more convincing data).
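For example, a Welch-style t statistic built from basic Octave functions (a
rough sketch, again assuming D holds your two columns; the variable names are
mine):

n  = rows(D);
se = sqrt(var(D(:,1))/n + var(D(:,2))/n);   % std error of the difference
t  = (mean(D(:,2)) - mean(D(:,1))) / se     % ~0.4 for the numbers above

comes out well below the ~2.1 needed for 95% confidence with these sample
sizes. And since the standard error only shrinks as 1/sqrt(n), resolving a
~2.5% effect against the ~15% per-run spread seen here would likely take on
the order of a few hundred runs per kernel rather than 10.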
HTH,
Michal