Re: [RFC][PATCH] sched: Use lightweight hazard pointers to grab lazy mms
From: Andy Lutomirski
Date: Thu Jun 17 2021 - 10:11:14 EST
On Thu, Jun 17, 2021, at 2:08 AM, Peter Zijlstra wrote:
> On Wed, Jun 16, 2021 at 10:32:15PM -0700, Andy Lutomirski wrote:
> > Here it is. Not even boot tested!
>
> It is now, it even builds a kernel.. so it must be perfect :-)
>
> > https://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git/commit/?h=sched/lazymm&id=ecc3992c36cb88087df9c537e2326efb51c95e31
>
> Since I had to turn it into a patch to post, so that I could comment on
> it, I've cleaned it up a little for you.
>
> I'll reply to self with some notes, but I think I like it.
>
> ---
> arch/x86/include/asm/mmu.h | 5 ++
> include/linux/sched/mm.h | 3 +
> kernel/fork.c | 2 +
> kernel/sched/core.c | 138 ++++++++++++++++++++++++++++++++++++---------
> kernel/sched/sched.h | 10 +++-
> 5 files changed, 130 insertions(+), 28 deletions(-)
>
> diff --git a/arch/x86/include/asm/mmu.h b/arch/x86/include/asm/mmu.h
> index 5d7494631ea9..ce94162168c2 100644
> --- a/arch/x86/include/asm/mmu.h
> +++ b/arch/x86/include/asm/mmu.h
> @@ -66,4 +66,9 @@ typedef struct {
> void leave_mm(int cpu);
> #define leave_mm leave_mm
>
> +/* On x86, mm_cpumask(mm) contains all CPUs that might be lazily using mm */
> +#define for_each_possible_lazymm_cpu(cpu, mm) \
> + for_each_cpu((cpu), mm_cpumask((mm)))
> +
> +
> #endif /* _ASM_X86_MMU_H */
> diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
> index e24b1fe348e3..5c7eafee6fea 100644
> --- a/include/linux/sched/mm.h
> +++ b/include/linux/sched/mm.h
> @@ -77,6 +77,9 @@ static inline bool mmget_not_zero(struct mm_struct *mm)
>
> /* mmput gets rid of the mappings and all user-space */
> extern void mmput(struct mm_struct *);
> +
> +extern void mm_unlazy_mm_count(struct mm_struct *mm);
You didn't like mm_many_words_in_the_name_of_the_function()? :)
> - if (mm) {
> - membarrier_mm_sync_core_before_usermode(mm);
> - mmdrop(mm);
> - }
What happened here?
I think that my membarrier work should land before this patch. Specifically, I want the scheduler to be in a state where nothing depends on the barrier-ness of mmdrop() so that we can change the mmdrop() calls to stop being barriers without our brains exploding trying to understand two different fancy synchronization schemes at the same time.
Other than that I like your cleanups.