Re: [RFC PATCH 0/4] Scheduler time slice extension

From: Prakash Sangappa
Date: Wed Nov 13 2024 - 14:57:19 EST

> On Nov 12, 2024, at 9:43 PM, K Prateek Nayak <kprateek.nayak@xxxxxxx> wrote:
>
> Hello Prakash,
>
> Few questions around the benchmarks.
>
> On 11/13/2024 5:31 AM, Prakash Sangappa wrote:
>> [..snip..]
>> Test results:
>> =============
>> Test system 2 socket AMD Genoa
>> Lock table test: a simple database test to grab a table lock (spin lock).
>> Simulates SQL query executions.
>> 300 clients + 400 CPU hog tasks to generate load.
>
> Have you tried running the 300 clients with a nice value of -20 and 400
> CPU hogs with the default nice value / nice 19? Does that help this
> particular case?

Have not tried this with the database. Will have to try it.


>
>> Without extension : 182K SQL exec/sec
>> With extension : 262K SQL exec/sec
>> 44% improvement.
>> Swingbench - standard database benchmark
>> Cached (database files on tmpfs) run, with 1000 clients.
>
> In this case, how does the performance fare when running the clients
> under SCHED_BATCH? What does the "TASK_PREEMPT_DELAY_REQ" count vs
> "TASK_PREEMPT_DELAY_GRANTED" count look like for the benchmark run?

Have not tried SCHED_BATCH yet.
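For reference, putting the client threads under SCHED_BATCH would be a one-time
policy change per thread along these lines (plain sched_setscheduler() usage,
nothing specific to this patch set; set_batch_policy() is just a placeholder name):

/* Switch the calling client thread to SCHED_BATCH (illustration only). */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

static void set_batch_policy(void)
{
	struct sched_param sp = { .sched_priority = 0 };  /* must be 0 for SCHED_BATCH */

	if (sched_setscheduler(0, SCHED_BATCH, &sp) == -1)
		perror("sched_setscheduler(SCHED_BATCH)");
}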

With this run, there were on average about 166 TASK_PREEMPT_DELAY_GRANTED grants per task, collected from the scheduler stats captured at the end of the run. The test runs for about 5 minutes. I don't have a count of how many times the preempt delay was requested: when the task completes its critical section it clears the TASK_PREEMPT_DELAY_REQ flag, so in many cases the kernel would not see the request, since the critical section may not fall near the end of the time slice. We would have to capture that count in the application.
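To give a rough idea of what that application-side counting would look like, here
is a minimal sketch. It assumes a hypothetical per-thread flags word shared with
the kernel; the flag names, the way the shared word gets registered, and the
sched_yield() step after a grant are assumptions for illustration, not the exact
interface from this patch set:

/*
 * Illustrative sketch only -- not the actual RFC API. Assumes a per-thread
 * flags word shared with the kernel, where PREEMPT_DELAY_REQ is set by
 * userspace around a critical section and PREEMPT_DELAY_GRANTED is set by
 * the kernel if it extended the time slice.
 */
#include <sched.h>
#include <stdatomic.h>
#include <stdint.h>

#define PREEMPT_DELAY_REQ	(1u << 0)	/* set/cleared by userspace */
#define PREEMPT_DELAY_GRANTED	(1u << 1)	/* set by kernel (assumed)  */

/* Registered with the kernel elsewhere (mechanism not shown here). */
static __thread _Atomic uint32_t *preempt_delay_flags;

static __thread uint64_t delay_req_count;	/* requests made by this thread   */
static __thread uint64_t delay_grant_seen;	/* grants observed by this thread */

static inline void critical_section_enter(void)
{
	atomic_fetch_or_explicit(preempt_delay_flags, PREEMPT_DELAY_REQ,
				 memory_order_relaxed);
	delay_req_count++;		/* app-side counter the kernel can't provide */
}

static inline void critical_section_exit(void)
{
	uint32_t old;

	old = atomic_fetch_and_explicit(preempt_delay_flags,
					~(PREEMPT_DELAY_REQ | PREEMPT_DELAY_GRANTED),
					memory_order_release);
	if (old & PREEMPT_DELAY_GRANTED) {
		delay_grant_seen++;
		sched_yield();		/* assumed protocol: give the CPU back once done */
	}
}

Comparing delay_req_count against the schedstat grant count would then show how
often a request actually coincided with the end of the time slice.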


>
> I'm trying to understand what the performance looks like when using
> existing features that inhibit preemption vs putting forward the
> preemption when the userspace is holding a lock. Feel free to quote
> the latency comparisons too if using the existing features lead to
> unacceptable avg/tail latencies.
>
>> Without extension : 99K SQL exec/sec
>> with extension : 153K SQL exec/sec
>> 55% improvement in throughput.
>> [..snip..]
>
> --
> Thanks and Regards,
> Prateek