Re: [RFC 5/8] drm/fence: add fence to drm_pending_event
From: Gustavo Padovan
Date: Fri Apr 15 2016 - 14:59:09 EST
2016-04-15 Daniel Vetter <daniel@xxxxxxxx>:
> On Thu, Apr 14, 2016 at 06:29:38PM -0700, Gustavo Padovan wrote:
> > From: Gustavo Padovan <gustavo.padovan@xxxxxxxxxxxxxxx>
> >
> > Now a drm_pending_event can either send a real drm_event or signal a
> > fence, or both. It allows us to signal via fences when the buffer is
> > displayed on the screen, which in turn means that the previous buffer
> > is not in use anymore and can be freed or sent back to another driver
> > for processing.
> >
> > Signed-off-by: Gustavo Padovan <gustavo.padovan@xxxxxxxxxxxxxxx>
>
> Do you have an atomic igt testcase that exercises all the combinations of
> drm_event and in/out-fences and makes sure it all keeps working? Tomeu
> already converted that over to be a generic testcase.
Not yet. I'll check for test cases.
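Roughly I'd expect it to commit with and without DRM_MODE_PAGE_FLIP_EVENT
and with and without the fence properties, and check what comes back in each
combination. A minimal sketch of the drm_event leg, libdrm only
(commit_with_event() is just illustrative here, and the fence property names
are still an assumption until the uapi in this series settles):

#include <stdint.h>
#include <unistd.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

static int commit_with_event(int fd, drmModeAtomicReq *req, uint32_t crtc_id)
{
        struct drm_event_vblank ev;
        int ret;

        /* Nonblocking atomic commit that asks for a completion event. */
        ret = drmModeAtomicCommit(fd, req,
                                  DRM_MODE_ATOMIC_NONBLOCK |
                                  DRM_MODE_PAGE_FLIP_EVENT,
                                  (void *)(unsigned long)crtc_id);
        if (ret)
                return ret;

        /* The drm_event must still arrive when fences are requested too. */
        if (read(fd, &ev, sizeof(ev)) != sizeof(ev))
                return -1;

        return 0;
}
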
>
> > ---
> > drivers/gpu/drm/drm_atomic.c | 19 +++++++++++++------
> > drivers/gpu/drm/drm_fops.c | 5 +++--
> > drivers/gpu/drm/drm_irq.c | 7 +++++++
> > include/drm/drmP.h | 1 +
> > 4 files changed, 24 insertions(+), 8 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/drm_atomic.c b/drivers/gpu/drm/drm_atomic.c
> > index 6702502..0b95526 100644
> > --- a/drivers/gpu/drm/drm_atomic.c
> > +++ b/drivers/gpu/drm/drm_atomic.c
> > @@ -1431,7 +1431,8 @@ EXPORT_SYMBOL(drm_atomic_async_commit);
> > */
> >
> > static struct drm_pending_vblank_event *create_vblank_event(
> > - struct drm_device *dev, struct drm_file *file_priv, uint64_t user_data)
> > + struct drm_device *dev, struct drm_file *file_priv,
> > + struct fence *fence, uint64_t user_data)
> > {
> > struct drm_pending_vblank_event *e = NULL;
> > int ret;
> > @@ -1444,12 +1445,17 @@ static struct drm_pending_vblank_event *create_vblank_event(
> > e->event.base.length = sizeof(e->event);
> > e->event.user_data = user_data;
> >
> > - ret = drm_event_reserve_init(dev, file_priv, &e->base, &e->event.base);
> > - if (ret) {
> > - kfree(e);
> > - return NULL;
> > + if (file_priv) {
> > + ret = drm_event_reserve_init(dev, file_priv, &e->base,
> > + &e->event.base);
> > + if (ret) {
> > + kfree(e);
> > + return NULL;
> > + }
> > }
> >
> > + e->base.fence = fence;
> > +
> > return e;
> > }
> >
> > @@ -1676,7 +1682,8 @@ retry:
> > for_each_crtc_in_state(state, crtc, crtc_state, i) {
> > struct drm_pending_vblank_event *e;
> >
> > - e = create_vblank_event(dev, file_priv, arg->user_data);
> > + e = create_vblank_event(dev, file_priv, NULL,
> > + arg->user_data);
> > if (!e) {
> > ret = -ENOMEM;
> > goto out;
> > diff --git a/drivers/gpu/drm/drm_fops.c b/drivers/gpu/drm/drm_fops.c
> > index aeef58e..38def49 100644
> > --- a/drivers/gpu/drm/drm_fops.c
> > +++ b/drivers/gpu/drm/drm_fops.c
> > @@ -801,8 +801,9 @@ void drm_send_event_locked(struct drm_device *dev, struct drm_pending_event *e)
> > {
> > assert_spin_locked(&dev->event_lock);
> >
> > - if (!e->file_priv) {
> > - e->destroy(e);
> > + if (!e->file_priv || !e->event) {
>
> This would be a bug: e->file_priv != NULL iff e->event != NULL. How did
> this happen?
Not sure now, but I needed this to prevent a crash. I don't have logs of
it anymore; I'll check this again.
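If I had to guess, it was the fence-only case: with a NULL file_priv the
drm_event_reserve_init() call above is skipped, so e->base.event (and most
likely e->base.destroy) is never set, and the old unconditional e->destroy(e)
call would oops. Roughly the caller side I had in mind (sketch only, assuming
a driver that already holds an out-fence in 'fence'):

        struct drm_pending_vblank_event *e;

        /* Fence-only completion: nothing is sent back to userspace. */
        e = create_vblank_event(dev, NULL, fence, 0);

        /*
         * file_priv == NULL means drm_event_reserve_init() was never run,
         * so e->base.event and e->base.file_priv are both NULL and only
         * e->base.fence is set.  drm_send_event_locked() then has nothing
         * to deliver and only needs to clean up, hence the extra checks.
         */
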
>
> > + if (e->destroy)
> > + e->destroy(e);
> > return;
> > }
> >
> > diff --git a/drivers/gpu/drm/drm_irq.c b/drivers/gpu/drm/drm_irq.c
> > index 3c1a6f1..0c5d7cb 100644
> > --- a/drivers/gpu/drm/drm_irq.c
> > +++ b/drivers/gpu/drm/drm_irq.c
> > @@ -41,6 +41,7 @@
> >
> > #include <linux/vgaarb.h>
> > #include <linux/export.h>
> > +#include <linux/fence.h>
> >
> > /* Access macro for slots in vblank timestamp ringbuffer. */
> > #define vblanktimestamp(dev, pipe, count) \
> > @@ -1124,6 +1125,12 @@ void drm_send_vblank_event(struct drm_device *dev, unsigned int pipe,
> > now = get_drm_timestamp();
> > }
> > e->pipe = pipe;
> > +
> > + if (e->base.fence) {
> > + fence_signal(e->base.fence);
> > + fence_put(e->base.fence);
> > + }
>
> I'd even put this into drm_send_event_locked, in case we send out fences
> for other events too (e.g. exynos uses drm_event for rendering ...).
Okay. Looking into that.
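Something along these lines, I suppose (untested sketch, with the tail of
the function elided):

void drm_send_event_locked(struct drm_device *dev, struct drm_pending_event *e)
{
        assert_spin_locked(&dev->event_lock);

        /* Signal the fence for every kind of event, not just vblank. */
        if (e->fence) {
                fence_signal(e->fence);
                fence_put(e->fence);
        }

        if (!e->file_priv) {
                if (e->destroy)
                        e->destroy(e);
                return;
        }

        /* ... queue the drm_event on file_priv as before ... */
}

and then drm_send_vblank_event() just drops its copy of that hunk.
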
Gustavo