Re: [PATCH v8 3/3] dmaengine: pl330: Don't require irq-safe runtime PM
From: Vinod Koul
Date: Mon Feb 13 2017 - 10:50:19 EST
On Mon, Feb 13, 2017 at 04:32:32PM +0100, Ulf Hansson wrote:
> [...]
>
> >> Although I don't know of other examples, besides the runtime PM use
> >> case, where non-atomic channel prepare/unprepare would make sense. Do
> >> you?
> >
> > The primary ask for that has been to enable runtime_pm for drivers. It's not
> > a new ask, but we somehow haven't gotten around to doing it.
>
> Okay, I see.
>
> >
> >> > As I said earlier, if we want to solve that problem, a better idea is to
> >> > actually split the prepare, as we discussed in [1].
> >> >
> >> > This way we can get a non-atomic descriptor allocate/prepare and release.
> >> > Yes, we need to redesign the APIs to solve this, but if you guys are up for
> >> > it, I think we can do it and avoid any further roundabouts :)
> >>
> >> Adding/re-designing dma APIs is a viable option to solve the runtime PM case.
> >>
> >> Changes would be needed for all related dma client drivers as well,
> >> although if that's what we need to do - let's do it.
> >
> > Yes, but do bear in mind that some cases do need atomic prepare. The primary
> > DMA use cases had that in mind, along with submitting the next transaction
> > from the callback (tasklet) context, so that won't go away.
> >
> > It would help in other cases where clients know that they will not be in
> > atomic context, so we provide an additional non-atomic "allocation" followed
> > by prepare. Drivers can then split the work between these, and people can do
> > runtime_pm and other things.
>
> Thanks for sharing the details.
>
> It seems like some dma expert really needs to be heavily involved if we
> are ever going to complete this work. :-)
Sure, I will help out :)
If any of you are in Portland next week, we can discuss this f2f. I
will try to take a stab at the new API design next week.
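
To make the direction concrete, here is a very rough sketch of how a
client could use a split API. Nothing here is decided, and the
*_nonatomic name below is made up; it is only meant to show where the
sleepable and the atomic parts would end up:

        struct dma_async_tx_descriptor *desc;
        dma_cookie_t cookie;

        /* sleepable: may take mutexes, do runtime PM, use GFP_KERNEL */
        desc = dmaengine_prep_slave_sg_nonatomic(chan, sgl, sg_len,
                                                 DMA_MEM_TO_DEV, flags);
        if (!desc)
                return -ENOMEM;

        desc->callback = xfer_done;
        desc->callback_param = data;

        /* from here on atomic context is fine, e.g. a completion tasklet */
        cookie = dmaengine_submit(desc);
        dma_async_issue_pending(chan);

The existing prep_* calls would of course stay around for clients that
must prepare from atomic context.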
>
> [...]
>
> >>
> >> 1) Dependencies between dma drivers and dma client drivers during system
> >> PM. For example, a dma client driver needs the dma controller to be
> >> operational (remain system resumed), until the dma client driver itself
> >> becomes system suspended.
> >>
> >> The *only* currently available solution for this is to try to system suspend
> >> the dma controller later than the dma client, by using the *late or *noirq
> >> system PM callbacks. This works for most cases, but it becomes a problem
> >> when the dma client also needs to be system suspended at the *late or the
> >> *noirq phase. Clearly, this solution doesn't scale.
> >>
> >> Using device links explicitly solves this problem as it allows to specify
> >> this dependency between devices.
> >
> > Yes, this is an interesting point. Till now people have been doing the above
> > to work around this problem, but this is not unique to dmaengine. Any
> > subsystem which provides services to others has this issue, so the solution
> > must be in the driver or PM framework and not unique to dmaengine.
>
> I definitely agree, these problems aren't unique to the dmaengine
> subsystem. Exactly how/where to manage them is, I guess, the key
> question.
>
> However, I can't resist finding the device links useful, as those
> really do address and solve our issues from a runtime/system PM point
> of view.
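
For concreteness, the link being discussed is the one created with
device_link_add(), with the dma client as consumer and the dma
controller as supplier. Roughly, with made-up variable names and the
exact flags still to be worked out:

        struct device_link *link;

        /* client_dev: the dma client, dmac_dev: the dma controller */
        link = device_link_add(client_dev, dmac_dev, DL_FLAG_PM_RUNTIME);
        if (!link)
                dev_err(dmac_dev, "failed to link consumer %s\n",
                        dev_name(client_dev));

A managed link like this makes the PM core suspend the supplier after
its consumers during system sleep, and DL_FLAG_PM_RUNTIME ties their
runtime PM states together as well.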
>
> >
> >> 2) We can't prevent dma clients from getting -EPROBE_DEFER when requesting
> >> their dma channels in their ->probe() routines. This would be possible if
> >> we could set up the device links at device initialization.
> >
> > Well, setting those links up is not practical at initialization time. Most
> > modern dma controllers feature a SW mux, with multiple clients connecting
> > and requesting; would we link all of them? Most of the time the dmaengine
> > driver won't know about those.
>
> Okay, I see!
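
For reference, the -EPROBE_DEFER case above is the usual pattern where
the client requests its channel in ->probe() and has to bail out and be
reprobed later if the dma controller isn't up yet, something like:

        struct dma_chan *chan;

        chan = dma_request_chan(&pdev->dev, "tx");
        if (IS_ERR(chan))
                return PTR_ERR(chan);   /* may be -EPROBE_DEFER */

A link created at channel request time obviously comes too late to help
with that first deferral.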
>
> Kind regards
> Uffe
--
~Vinod