Re: [Question] iforce_serio lockup
From: Dmitry Torokhov
Date: Mon Aug 29 2022 - 14:13:45 EST
On Sun, Aug 28, 2022 at 05:05:09PM +0800, Hillf Danton wrote:
> On Fri, 29 Jul 2022 21:24:13 +0200 Greg Tulli <greg.iforce@xxxxxxxxx> wrote:
> >
> > Chain exists of:
> > &iforce->xmit_lock --> &port_lock_key --> &serport->lock
> >
> > Possible unsafe locking scenario:
> >
> >        CPU0                    CPU1
> >        ----                    ----
> >   lock(&serport->lock);
> >                                lock(&port_lock_key);
> >                                lock(&serport->lock);
> >   lock(&iforce->xmit_lock);
> >
> > *** DEADLOCK ***
> >
> > 2022-07-11 11:32 GMT+02:00, Greg T <greg.iforce@xxxxxxxxx>:
> >
> > > That problem vanishes if we don't call iforce_process_packet directly
> > > from iforce_serio_irq, but from a tasklet instead. Is that the right approach?
>
> Another option is to do the wakeup without serport->lock held, relying
> on iforce->xmit_lock alone. Your test should show whether any races
> come up from dropping serport->lock there.
No, I think we should use a work item to process the outbound
buffer/queue, as the comment in include/linux/tty_ldisc.h recommends. I
believe a single work instance in struct iforce_serio will suffice:
iforce_serio_xmit() should already be able to handle concurrent
invocations, so we just need to schedule the work from
iforce_serio_write_wakeup() (it is fine if it is already scheduled) and
let iforce_serio_xmit() do its job.
In iforce_serio_stop_io() we can wait for the buffer to empty (which
should ensure the work is no longer running, but we may also call
cancel_work_sync() for good measure).
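A rough sketch of what I have in mind (the struct layout and the
container_of()/drvdata plumbing here are assumptions for illustration,
not the actual driver code):

	/* Sketch only: assumes struct iforce_serio gains a work item. */
	struct iforce_serio {
		struct iforce iforce;
		struct serio *serio;
		struct work_struct xmit_work;	/* defers xmit out of the wakeup path */
		/* ... */
	};

	static void iforce_serio_xmit_work(struct work_struct *work)
	{
		struct iforce_serio *iforce_serio =
			container_of(work, struct iforce_serio, xmit_work);

		/* iforce_serio_xmit() should already tolerate concurrent callers. */
		iforce_serio_xmit(&iforce_serio->iforce);
	}

	static void iforce_serio_write_wakeup(struct serio *serio)
	{
		struct iforce_serio *iforce_serio = serio_get_drvdata(serio);

		/* No locks taken here; schedule_work() is a no-op if the
		 * work is already pending, which is fine. */
		schedule_work(&iforce_serio->xmit_work);
	}

	static void iforce_serio_stop_io(struct iforce *iforce)
	{
		struct iforce_serio *iforce_serio =
			container_of(iforce, struct iforce_serio, iforce);

		/* cancel_work_sync() waits for a running instance to finish;
		 * waiting for the xmit buffer to drain first would cover the
		 * same thing, but this is cheap insurance either way. */
		cancel_work_sync(&iforce_serio->xmit_work);
	}

The key point is that the wakeup path no longer takes iforce->xmit_lock
while serport->lock is held, which breaks the lock chain lockdep
complained about.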
Thanks.
--
Dmitry