Re: [PATCH] drm: Take mmap_sem up front to avoid lock order violations.

From: Eric Anholt
Date: Fri Feb 20 2009 - 21:33:46 EST


On Thu, 2009-02-19 at 13:57 +0100, Nick Piggin wrote:
> On Thu, Feb 19, 2009 at 10:19:05AM +0100, Peter Zijlstra wrote:
> > On Wed, 2009-02-18 at 11:38 -0500, krh@xxxxxxxxxxxxx wrote:
> > > From: Kristian Høgsberg <krh@xxxxxxxxxx>
> > >
> > > A number of GEM operations (and legacy drm ones) want to copy data to
> > > or from userspace while holding the struct_mutex lock. However, the
> > > fault handler calls us with the mmap_sem held and thus enforces the
> > > opposite locking order. This patch downs the mmap_sem up front for
> > > those operations that access userspace data under the struct_mutex
> > > lock to ensure the locking order is consistent.
> > >
> > > Signed-off-by: Kristian Høgsberg <krh@xxxxxxxxxx>
> > > ---
> > >
> > > Here's a different and simpler attempt to fix the locking order
> > > problem. We can just down_read() the mmap_sem preemptively up
> > > front, and the locking order is respected. It avoids the
> > > mutex_trylock() game and doesn't introduce a new mutex.
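
For concreteness, the ordering this proposes looks roughly like the
sketch below. The function and parameter names are made up for
illustration; this is not the actual patch:

	/* Hypothetical ioctl path: take mmap_sem before struct_mutex,
	 * matching the order the fault handler establishes (the fault
	 * handler runs with mmap_sem held and then takes struct_mutex).
	 */
	static int gem_pwrite_sketch(struct drm_device *dev, void *dst,
				     const void __user *user_data,
				     size_t size)
	{
		int ret = 0;

		down_read(&current->mm->mmap_sem);	/* outer lock */
		mutex_lock(&dev->struct_mutex);		/* inner lock */

		/* The usercopy now happens with both locks held, in a
		 * consistent mmap_sem -> struct_mutex order. */
		if (copy_from_user(dst, user_data, size))
			ret = -EFAULT;

		mutex_unlock(&dev->struct_mutex);
		up_read(&current->mm->mmap_sem);
		return ret;
	}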
>
> The "simple" way to fix this is to just allocate a temporary buffer
> to copy a snapshot of the data going to/from userspace. Then do the
> real usercopy to/from that buffer outside the locks.
>
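
For comparison, that temporary-buffer approach would look roughly like
this (again, the names are illustrative, not actual driver code):

	/* Hypothetical bounce-buffer variant: snapshot the user data
	 * with no locks held, then operate on the kernel copy under
	 * struct_mutex.  A fault during copy_from_user() can then take
	 * mmap_sem without any ordering problem.
	 */
	static int gem_pwrite_bounce_sketch(struct drm_device *dev,
					    void *dst,
					    const void __user *user_data,
					    size_t size)
	{
		void *tmp;

		tmp = kmalloc(size, GFP_KERNEL);
		if (!tmp)
			return -ENOMEM;

		if (copy_from_user(tmp, user_data, size)) {
			kfree(tmp);
			return -EFAULT;
		}

		mutex_lock(&dev->struct_mutex);
		memcpy(dst, tmp, size);	/* no userspace access under the lock */
		mutex_unlock(&dev->struct_mutex);

		kfree(tmp);
		return 0;
	}

The cost is the extra allocation and copy, hence the question about
copy sizes below.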
> You don't have any performance critical bulk copies (ie. that will
> blow the L1 cache), do you?

16KB is the most common size (batchbuffers). 32KB is common on the 915
(vertex data), and sizes vary between 0 and 128KB on the 965 (vertex
data). The pwrite path generally accounts for 10-30% of CPU consumption
in CPU-bound apps.

--
Eric Anholt
eric@xxxxxxxxxx eric.anholt@xxxxxxxxx

