Re: [PATCH 08/20] powerpc: Preemptible mmu_gather

From: Benjamin Herrenschmidt
Date: Tue Aug 31 2010 - 02:28:15 EST


On Sat, 2010-08-28 at 16:16 +0200, Peter Zijlstra wrote:
> Fix up powerpc to the new mmu_gather stuffs.

Unfortunately, I think this is broken...

First there's an actual bug here:

> last = _switch(old_thread, new_thread);
>
> +#ifdef CONFIG_PPC64
> + if (task_thread_info(new)->local_flags & _TLF_LAZY_MMU) {
> + task_thread_info(new)->local_flags &= ~_TLF_LAZY_MMU;
> + batch = &__get_cpu_var(ppc64_tlb_batch);
> + batch->active = 1;
> + }
> +#endif
> +

Here, you are coming out of _switch(), which will have swapped the
stack and non-volatile registers back to the state they were in when the
new task was originally switched out. Thus "new", being a local variable
(either on the stack or in a non-volatile register), will now refer to
whatever the next task was back then.

I suspect that's what's causing the similar patch you have in -rt to
fail, btw. This can be fixed easily by using "current" instead.

However, I have another concern here.

> PPC has an extra batching queue to RCU free the actual pagetable
> allocations, use the ARCH extensions for that for now.

Right, so far that looks fine (at least after a quick look).

> For the ppc64_tlb_batch, which tracks the vaddrs to unhash from the
> hardware hash-table, keep using per-cpu arrays but flush on context
> switch and use a TLF bit to track the lazy_mmu state.

However, that doesn't seem necessary at all, at least not for !-rt,
unless you broke something that I would then need to look at very
closely :-)

I.e., we enable/disable the batch only within "lazy_mmu_mode" sections.
We do that in large part because we do not want non-flushed pages to
exist outside of the pte spinlock.

The reason is that if we let that happen, a small possibility exists for
our MMU hash page handling to try to insert a duplicate entry for a
given PTE into the hash table, which is basically fatal.

Thus, the batch only exists during that lazy period, which means with a
spinlock held. Hence we can't schedule, and the changes you made
regarding get/put_cpu_var are unnecessary.

Another "trick" here btw is that fork() is currently not using a batch,
but with our technique, we do get batching there too.

So unless something else is broken that makes the above no longer true,
which would be a concern, most of the changes you made to the flush
batch are unnecessary for your preemptible mmu_gather on non-rt
kernels.

Of course, with -rt and the pte lock becoming a mutex, all of your
changes do become necessary (and I suppose that's where they come from).

Now, those changes won't technically hurt on a non-rt kernel, though
they will add a tiny bit of overhead. I'll see if I can measure it.

Cheers,
Ben.
