Re: [PATCH v3 3/3] Drivers: hv: vmbus: Fix race condition with new ring_buffer_info mutex
From: Kimberly Brown
Date: Thu Mar 28 2019 - 00:31:03 EST
On Thu, Mar 21, 2019 at 04:04:20PM +0000, Michael Kelley wrote:
> From: Kimberly Brown <kimbrownkd@xxxxxxxxx> Sent: Wednesday, March 20, 2019 8:48 PM
> > > > > Adding more locks will solve the problem, but it seems like overkill.
> > > > > Why not either use a reference count or RCU-style access for the
> > > > > ring buffer?
> > > >
> > > > I agree that a reference count or RCU could also solve this problem.
> > > > Using mutex locks seemed like the most straightforward solution, but
> > > > I'll certainly switch to a different approach if it's better!
> > > >
> > > > Are you concerned about the extra memory required for the mutex locks,
> > > > read performance, or something else?
> > >
> > > Locks in the control path are OK, but my concern is the performance of the
> > > data path, which puts packets in/out of the rings. To keep reasonable
> > > performance, no additional locking should be added in those paths.
> > >
> > > So if the data path is using RCU, can/should the control operations also
> > > use it?
> >
Hi Stephen,
Do you have any additional questions or suggestions regarding this race
condition and the mutex locks? I believe your initial questions were
addressed in the responses below. If there's anything else, please let
me know!
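
For anyone following the thread, the locking pattern under discussion is
roughly the sketch below. The struct and function names here are
placeholders, not the identifiers used in the actual patch; the point is
only that the sysfs show path and the "delete ringbuffers" path serialize
on the same mutex, and that the show path checks whether the ring buffer
still exists before touching it. kfree() is just a stand-in for however
the ring memory is really freed/unmapped.

#include <linux/mutex.h>
#include <linux/slab.h>
#include <linux/types.h>

struct ring_info_sketch {
	struct mutex ring_info_mutex;	/* serializes sysfs reads vs. teardown */
	void *ring_buffer;		/* set to NULL once the ring is freed */
	u32 ring_size;
};

/* sysfs show path: take the mutex and bail out if teardown already ran */
static ssize_t ring_attr_show(struct ring_info_sketch *rbi, char *buf)
{
	ssize_t ret;

	mutex_lock(&rbi->ring_info_mutex);
	if (!rbi->ring_buffer) {
		mutex_unlock(&rbi->ring_info_mutex);
		return -EINVAL;
	}
	ret = sprintf(buf, "%u\n", rbi->ring_size);
	mutex_unlock(&rbi->ring_info_mutex);
	return ret;
}

/* "delete ringbuffers" path: free the memory under the same mutex */
static void ring_teardown(struct ring_info_sketch *rbi)
{
	mutex_lock(&rbi->ring_info_mutex);
	kfree(rbi->ring_buffer);	/* stand-in for the actual free/unmap */
	rbi->ring_buffer = NULL;
	mutex_unlock(&rbi->ring_info_mutex);
}
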
Thanks,
Kim
> > The data path doesn't use RCU to protect the ring buffers.
>
> My $.02: The mutex is obtained only in the sysfs path and the "delete
> ringbuffers" path, neither of which is performance or concurrency sensitive.
> There's no change to any path that reads or writes data from/to the ring
> buffers. It seems like the mutex is the most straightforward solution to
> preventing sysfs from accessing the ring buffer info while the memory is
> being freed as part of "delete ringbuffers".
>
> Michael