Re: [RFC PATCH 00/16] Core scheduling v6

From: benbjiang(蒋彪)
Date: Fri Aug 14 2020 - 00:04:18 EST




> On Aug 14, 2020, at 9:36 AM, Li, Aubrey <aubrey.li@xxxxxxxxxxxxxxx> wrote:
>
> On 2020/8/14 8:26, benbjiang(蒋彪) wrote:
>>
>>
>>> On Aug 13, 2020, at 12:28 PM, Li, Aubrey <aubrey.li@xxxxxxxxxxxxxxx> wrote:
>>>
>>> On 2020/8/13 7:08, Joel Fernandes wrote:
>>>> On Wed, Aug 12, 2020 at 10:01:24AM +0800, Li, Aubrey wrote:
>>>>> Hi Joel,
>>>>>
>>>>> On 2020/8/10 0:44, Joel Fernandes wrote:
>>>>>> Hi Aubrey,
>>>>>>
>>>>>> Apologies for replying late as I was still looking into the details.
>>>>>>
>>>>>> On Wed, Aug 05, 2020 at 11:57:20AM +0800, Li, Aubrey wrote:
>>>>>> [...]
>>>>>>> +/*
>>>>>>> + * Core scheduling policy:
>>>>>>> + * - CORE_SCHED_DISABLED: core scheduling is disabled.
>>>>>>> + * - CORE_COOKIE_MATCH: tasks with same cookie can run
>>>>>>> + * on the same core concurrently.
>>>>>>> + * - CORE_COOKIE_TRUST: trusted task can run with kernel
>>>>>>> + * thread on the same core concurrently.
>>>>>>> + * - CORE_COOKIE_LONELY: tasks with cookie can run only
>>>>>>> + * with idle thread on the same core.
>>>>>>> + */
>>>>>>> +enum coresched_policy {
>>>>>>> + CORE_SCHED_DISABLED,
>>>>>>> + CORE_SCHED_COOKIE_MATCH,
>>>>>>> + CORE_SCHED_COOKIE_TRUST,
>>>>>>> + CORE_SCHED_COOKIE_LONELY,
>>>>>>> +};
>>>>>>>
>>>>>>> We can set the policy of the uperf cgroup to CORE_COOKIE_TRUST and fix this
>>>>>>> kind of performance regression. Not sure if this sounds attractive?
>>>>>>
>>>>>> Instead of this, I think it can be something simpler IMHO:
>>>>>>
>>>>>> 1. Consider all cookie-0 tasks as trusted. (Even right now, if you apply the
>>>>>> core-scheduling patchset, such tasks will share a core and sniff on each
>>>>>> other. So let us not pretend that such tasks are not trusted).
>>>>>>
>>>>>> 2. All kernel threads and the idle task would have cookie 0 (so that will
>>>>>> cover the ksoftirqd case reported in your original issue).
>>>>>>
>>>>>> 3. Add a config option (CONFIG_SCHED_CORE_DEFAULT_TASKS_UNTRUSTED). Enable it
>>>>>> by default. Setting this option would tag all tasks that are forked from a
>>>>>> cookie-0 task with their own cookie. Later on, such tasks can be added to
>>>>>> a group. (This covers PeterZ's ask about having 'default untrusted'.)
>>>>>> (Users like ChromeOS that don't want userspace system processes to be
>>>>>> tagged can disable this option so such tasks will be cookie-0).
>>>>>>
>>>>>> 4. Allow prctl/cgroup interfaces to create groups of tasks and override the
>>>>>> above behaviors.
>>>>>
>>>>> How does uperf in a cgroup work with ksoftirqd? Are you suggesting I set uperf's
>>>>> cookie to be cookie-0 via prctl?
>>>>
>>>> Yes, but let me try to understand better. There are 2 problems here I think:
>>>>
>>>> 1. ksoftirqd getting idled when HT is turned on, because uperf is sharing a
>>>> core with it: This should not be any worse than SMT OFF, because even SMT OFF
>>>> would also reduce ksoftirqd's CPU time, just as core sched is doing. Sure,
>>>> core-scheduling adds some overhead with IPIs, but such a huge drop in
>>>> performance is strange. Peter, any thoughts on that?
>>>>
>>>> 2. Interface: To solve the performance problem, you are saying you want uperf
>>>> to share a core with ksoftirqd so that it is not forced into idle. Why not
>>>> just keep uperf out of the cgroup?
>>>
>>> I guess this is unacceptable for those who run their apps in containers and VMs.
>> IMHO, just as Joel proposed,
>> 1. Consider all cookie-0 tasks as trusted.
>> 2. All kernel threads and the idle task would have cookie 0.
>> In that way, all tasks with cookies (including uperf in a cgroup) could run
>> concurrently with kernel threads.
>> That could be a good solution for the issue. :)
>
> From uperf's point of view, it can trust cookie-0 (I assume we still need
> some modifications to change cookie-match to cookie-compatible to allow
> ZERO and NONZERO to run together).
>
> But from the kernel thread's point of view, it can NOT trust uperf, unless
> we set uperf's cookie to 0.
That’s right. :)
Could we set the cookie of the cgroup where uperf lies to 0?
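
For the cookie-compatible modification you mentioned, what I have in mind is
roughly something like the following. Just a sketch to illustrate the idea;
the helper name core_cookie_compatible() and the exact form of the check are
my assumptions, not taken from the v6 code:

#include <stdbool.h>

/* Sketch only: cookie 0 = trusted (kernel threads, idle, untagged tasks). */
static inline bool core_cookie_compatible(unsigned long a, unsigned long b)
{
	/* a trusted (cookie-0) task may share a core with anyone */
	if (a == 0 || b == 0)
		return true;

	/* otherwise keep the existing exact-match rule */
	return a == b;
}

With something like that, a cookie-0 kernel thread such as ksoftirqd could
pair with the tagged uperf task, which is the ZERO/NONZERO case you described.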

Thx.
Regards,
Jiang

>
> Thanks,
> -Aubrey
>