Re: [Linuxarm] Re: [PATCH for next v1 0/2] gpio: few clean up patches to replace spin_lock_irqsave with spin_lock
From: Andy Shevchenko
Date: Thu Feb 11 2021 - 05:04:14 EST
On Wed, Feb 10, 2021 at 10:42 PM Song Bao Hua (Barry Song)
<song.bao.hua@xxxxxxxxxxxxx> wrote:
> > -----Original Message-----
> > From: Andy Shevchenko [mailto:andy.shevchenko@xxxxxxxxx]
> > Sent: Thursday, February 11, 2021 3:57 AM
> > On Wed, Feb 10, 2021 at 11:50:45AM +0000, Song Bao Hua (Barry Song) wrote:
> > > > -----Original Message-----
> > > > From: Andy Shevchenko [mailto:andy.shevchenko@xxxxxxxxx]
> > > > Sent: Wednesday, February 10, 2021 11:51 PM
> > > > On Wed, Feb 10, 2021 at 5:43 AM luojiaxing <luojiaxing@xxxxxxxxxx> wrote:
> > > > > On 2021/2/9 17:42, Andy Shevchenko wrote:
> >
> > ...
> >
> > > > > Between IRQ handler A and IRQ handler A, there's no need for spin_lock_irqsave().
> > > >
> > > > Right, but it's not the case in the patches you provided.
> > >
> > > The code still holds the spin_lock. So if two CPUs call the same IRQ
> > > handler, the spin_lock makes one of them spin; and if interrupts are
> > > threaded, the spin_lock makes the two threads run the same handler one by one.
> >
> > If you run on an SMP system and it happens that spin_lock_irqsave() comes
> > immediately after spin_unlock(), you will get into trouble. Am I mistaken?
>
> Hi Andy,
> Thanks for your reply.
>
> But I don't agree that spin_lock_irqsave() coming immediately after spin_unlock()
> could be a problem on SMP.
> When the 1st CPU releases the spinlock via spin_unlock(), it has completed its
> section of accessing the critical data; only then does the 2nd CPU get the
> spinlock. The two CPUs never overlap in accessing the same data.
>
> >
> > I think this entire activity is a carefully crafted mine field for the future
> > syzcaller and fuzzers alike. I don't believe there are no side effects in a
> > long
> > term on all possible systems and configurations (including forced threaded IRQ
> > handlers).
>
> Also, I don't understand why a forced-threaded IRQ could be a problem. Once the
> IRQ handler runs in a thread, the situation is actually simpler than with a
> non-threaded IRQ: all threads, including the IRQ threads, must hold the
> spin_lock, so they cannot access the same critical data at the same time either.
>
> >
> > I would love to see a better explanation in the commit message of such patches
> > which makes it clear that there are *no* side effects.
> >
>
> People have asked the same questions before, but I think the discussion on
> this commit led to a better commit log:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=4eb7d0cd59
>
> > For time being, NAK to the all patches of this kind.
>
> Fair enough; if you expect a better explanation, I agree the commit log is
> too short.
Yes, my main concern is that a commit message in the style of "I feel it's
wrong" is inappropriate for this kind of patch. The one you pointed out
above is better; you could strengthen it even further by explaining why the
lock was taken that way in the first place and what changed between the time
the driver gained this type of spinlock and your patch.
--
With Best Regards,
Andy Shevchenko