Re: [Linaro-mm-sig] [PATCH 6/7] reservation: cross-device reservation support

From: Daniel Vetter
Date: Mon Feb 04 2013 - 09:57:05 EST


On Mon, Feb 04, 2013 at 10:57:22AM +0100, Maarten Lankhorst wrote:
> On 04-02-13 08:06, Inki Dae wrote:
> >> +/**
> >> + * ticket_commit - commit a reservation with a new fence
> >> + * @ticket: [in] the reservation_ticket returned by
> >> + * ticket_reserve
> >> + * @entries: [in] a linked list of struct reservation_entry
> >> + * @fence: [in] the fence that indicates completion
> >> + *
> >> + * This function will call reservation_ticket_fini, no need
> >> + * to do it manually.
> >> + *
> >> + * This function should be called after a hardware command submission is
> >> + * completed successfully. The fence is used to indicate completion of
> >> + * those commands.
> >> + */
> >> +void
> >> +ticket_commit(struct reservation_ticket *ticket,
> >> + struct list_head *entries, struct fence *fence)
> >> +{
> >> + struct list_head *cur;
> >> +
> >> + if (list_empty(entries))
> >> + return;
> >> +
> >> + if (WARN_ON(!fence)) {
> >> + ticket_backoff(ticket, entries);
> >> + return;
> >> + }
> >> +
> >> + list_for_each(cur, entries) {
> >> + struct reservation_object *bo;
> >> + bool shared;
> >> +
> >> + reservation_entry_get(cur, &bo, &shared);
> >> +
> >> + if (!shared) {
> >> + int i;
> >> + for (i = 0; i < bo->fence_shared_count; ++i) {
> >> + fence_put(bo->fence_shared[i]);
> >> + bo->fence_shared[i] = NULL;
> >> + }
> >> + bo->fence_shared_count = 0;
> >> + if (bo->fence_excl)
> >> + fence_put(bo->fence_excl);
> >> +
> >> + bo->fence_excl = fence;
> >> + } else {
> >> + if (WARN_ON(bo->fence_shared_count >=
> >> + ARRAY_SIZE(bo->fence_shared))) {
> >> + mutex_unreserve_unlock(&bo->lock);
> >> + continue;
> >> + }
> >> +
> >> + bo->fence_shared[bo->fence_shared_count++] = fence;
> >> + }
> > Hi,
> >
> > I have some questions about fence_excl and fence_shared. In the above
> > code, if bo->fence_excl is not NULL it puts bo->fence_excl and then
> > installs the new fence in its place. This looks like whoever committed
> > the new fence wants exclusive access to the given dmabuf, even while
> > someone else may still be accessing it.
> Yes. Whoever added a shared fence had to issue a wait for the previous exclusive fence, so if you add
> (possibly hardware) wait ops on those shared fences to your command stream, it follows that you have also
> implicitly waited for the previous exclusive fence.
>
> If there is only an exclusive fence, you have to issue an explicit wait op on it before you start the rest
> of the commands.
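
To illustrate that rule with a quick sketch (not part of the patch, and it
assumes a fence_wait() helper along the lines of the proposed fence api),
the cpu-side wait on a reserved buffer would roughly be:

static void example_wait_for_bo(struct reservation_object *bo)
{
	unsigned i;

	if (bo->fence_shared_count) {
		/*
		 * Each shared fence already waited for the previous
		 * exclusive fence, so waiting on the shared fences alone
		 * is enough.
		 */
		for (i = 0; i < bo->fence_shared_count; ++i)
			fence_wait(bo->fence_shared[i], false);
	} else if (bo->fence_excl) {
		/* Only an exclusive fence attached: wait on it explicitly. */
		fence_wait(bo->fence_excl, false);
	}
}
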
> > On the other hand, in the fence_shared case someone wants to access
> > that dmabuf non-exclusively, so the given dmabuf could be accessed by
> > two or more devices at once. So I guess fence_excl would be used for
> > write access (which may need buffer sync, i.e. blocking) and
> > fence_shared for read access (which may not need buffer sync). I'm not
> > sure I understand these two things correctly, so could you please
> > comment on them?
> Correct, if you create a shared fence, you still have to emit a wait op for the exclusive fence.
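
To make that read vs. write mapping concrete, the submission path in a
driver would look roughly like the below. This is just a sketch, not code
from the patch: reservation_entry_add() is a made-up helper that packs the
bo and the shared flag into a reservation_entry on the list, and I'm
assuming ticket_reserve() takes the ticket plus the entry list.

static int example_submit(struct reservation_object *bo,
			  struct fence *fence, bool write)
{
	struct reservation_ticket ticket;
	LIST_HEAD(entries);
	int ret;

	/* writes want the exclusive slot, reads go in as shared */
	reservation_entry_add(&entries, bo, !write);

	ret = ticket_reserve(&ticket, &entries);
	if (ret)
		return ret;

	/*
	 * ... emit waits on the fences currently attached to bo, then the
	 * actual commands, and create a fence for their completion ...
	 */

	ticket_commit(&ticket, &entries, fence);

	return 0;
}

So reads stack up in the shared slots, while a write replaces everything
with a single exclusive fence, exactly as the ticket_commit() hunk above
does.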

Just a quick note: we limit the number of non-exclusive fences to avoid
reallocating memory, which would be a bit of a pain. Otoh if we support some
form of fence timeline concept, the fence core code could overwrite
redundant fences, which would reasonably bound the total attached fence
count. Still, we'd need to thread potential -ENOMEM failures through all
callers.
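
Roughly like the below for the shared path - purely hypothetical, since it
assumes struct fence carries some context/timeline id to identify fences
from the same timeline:

static int example_add_shared_fence(struct reservation_object *bo,
				    struct fence *fence)
{
	unsigned i;

	for (i = 0; i < bo->fence_shared_count; ++i) {
		if (bo->fence_shared[i]->context == fence->context) {
			/* a newer fence on the same timeline supersedes it */
			fence_put(bo->fence_shared[i]);
			bo->fence_shared[i] = fence;
			return 0;
		}
	}

	if (i >= ARRAY_SIZE(bo->fence_shared))
		return -ENOSPC; /* would need to be threaded to callers */

	bo->fence_shared[bo->fence_shared_count++] = fence;
	return 0;
}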

Do you see a use-case for more than just a few non-exclusive fences?
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch