Re: [PATCH] kasan: convert kasan/quarantine_lock to raw_spinlock

From: Sebastian Andrzej Siewior
Date: Wed Oct 10 2018 - 05:29:38 EST


On 2018-10-10 10:25:42 [+0200], Dmitry Vyukov wrote:
> > That loop should behave like your on_each_cpu() except it does not
> > involve the remote CPU.
>
>
> The problem is that it can squeeze in between:
>
> + spin_unlock(&q->lock);
>
> spin_lock(&quarantine_lock);
>
> as far as I see. And then some objects can be left in the quarantine.

Okay. But then once you are at CPU10 (in the on_each_cpu() loop) there
can be objects which are added to CPU0, right? So based on that, I
assumed it would be okay to drop the lock here.
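
For reference, a rough sketch of the loop I had in mind (simplified;
"struct cpu_quarantine" with ->lock/->list is just my shorthand for the
per-CPU state in the refactoring, irq handling omitted), with the window
you describe marked:

	/* Sketch only: drain each CPU's quarantine for a dying cache by
	 * walking the CPUs directly instead of via on_each_cpu().
	 */
	static void remove_cache_all_cpus(struct kmem_cache *cache)
	{
		struct qlist_head to_free = QLIST_INIT;
		int cpu;

		for_each_possible_cpu(cpu) {
			struct cpu_quarantine *q = per_cpu_ptr(&cpu_quarantine, cpu);

			spin_lock(&q->lock);
			qlist_move_cache(&q->list, &to_free, cache);
			spin_unlock(&q->lock);
			/*
			 * <-- the window: a free on that CPU can queue new
			 * objects of this cache here, after we moved on but
			 * before quarantine_lock is taken below.
			 */
		}

		spin_lock(&quarantine_lock);
		/* ... move matching objects out of the global batches ... */
		spin_unlock(&quarantine_lock);

		qlist_free_all(&to_free, cache);
	}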

> > But this is debug code anyway, right? And it is highly complex imho.
> > Well, maybe only for me after I looked at it for the first time…
>
> It is debug code - yes.
> Nothing about its performance matters - no.
>
> That's the way to produce unusable debug tools.
> With too much overhead timeouts start to fire and code behaves not the
> way it behaves in production.
> The tool is used in continuous integration and developers wait for
> test results before merging code.
> The tool is used on canary devices and directly contributes to usage experience.

Completely understood. What I meant is that debug code in general (from
the RT perspective) increases latency to a level where the device cannot
operate. Take lockdep, for instance: a debug facility which is required
for RT because it spots locking problems early. It increases latency
(depending on the workload) by 50ms+ and can't be used in production.
The same goes for SLUB debug and most others.

> We of course don't want to trade a page of assembly code for cutting
> few cycles here (something that could make sense for some networking
> code maybe). But otherwise let's not introduce spinlocks on fast paths
> just for refactoring reasons.

Sure. As I said, I'm fine with the patch Clark initially proposed. I
assumed the refactoring would make things simpler and that avoiding the
cross-CPU call would be a good thing.
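
For contrast, the on_each_cpu() variant that stays with Clark's patch
looks roughly like this (simplified sketch from memory, not the exact
code in mm/kasan/quarantine.c): the per-CPU list is drained on its
owning CPU with interrupts off, so it needs no extra lock:

	/* Sketch only: runs on each CPU via IPI, with interrupts disabled. */
	static void per_cpu_remove_cache(void *arg)
	{
		struct kmem_cache *cache = arg;
		struct qlist_head to_free = QLIST_INIT;
		struct qlist_head *q = this_cpu_ptr(&cpu_quarantine);

		qlist_move_cache(q, &to_free, cache);
		qlist_free_all(&to_free, cache);
	}

	/* ... and the caller: */
	on_each_cpu(per_cpu_remove_cache, cache, 1);
	spin_lock_irqsave(&quarantine_lock, flags);
	/* ... drain matching objects from the global batches ... */
	spin_unlock_irqrestore(&quarantine_lock, flags);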

> > Can you take it as-is or should I repost it with an acked-by?
>
> Perhaps it's the problem with the way RT kernel changes things then?
> This is not specific to quarantine, right?

We got rid of _a lot_ of local_irq_disable/save() + spin_lock() combos
which were there for reasons that are no longer true, or due to a lack
of the proper API. This kasan thing is just something Clark stumbled
upon recently, and I am trying to negotiate something everyone can agree
on.

> Should that mutex detect
> that IRQs are disabled and not try to schedule? If this would happen
> in some networking code, what would we do?

It is not only about not scheduling. Assuming the "mutex" is not owned,
you could acquire it right away without scheduling. However, you would
still record current() as the owner of the lock, which is wrong, and you
would get into other trouble later on. The list goes on :)
As for networking: if something there breaks, it will be addressed. A
fix will be forwarded upstream if it is something where it is reasonable
to assume that RT won't change. So networking isn't special.
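
Just to recap the gist of Clark's patch (a rough sketch, not the actual
diff): on PREEMPT_RT, spinlock_t becomes a sleeping rtmutex-based lock,
so quarantine_lock, which is taken in atomic context, is switched to
raw_spinlock_t, which remains a real spinning lock on RT:

  -static DEFINE_SPINLOCK(quarantine_lock);
  +static DEFINE_RAW_SPINLOCK(quarantine_lock);
   ...
  -	spin_lock_irqsave(&quarantine_lock, flags);
  +	raw_spin_lock_irqsave(&quarantine_lock, flags);
   ...
  -	spin_unlock_irqrestore(&quarantine_lock, flags);
  +	raw_spin_unlock_irqrestore(&quarantine_lock, flags);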

Should I repost Clark's patch?

Sebastian