Andrey Grodzovsky <Andrey.Grodzovsky@xxxxxxx> writes:

> On 04/24/2018 12:30 PM, Eric W. Biederman wrote:
>> "Panariti, David" <David.Panariti@xxxxxxx> writes:
>>> Andrey Grodzovsky <andrey.grodzovsky@xxxxxxx> writes:
>>>> Kind of dma_fence_wait_killable, except that we don't have such an
>>>> API (maybe worth adding?)
>>> Depends on how many places it would be called from, or how many you
>>> think it might be called from.  You can always factor it out the 2nd
>>> time it's needed.
>>> Factoring, IMO, rarely hurts.  The factored function can easily be
>>> visited using `M-.' ;->
>>> Also, if the wait could be very long, would a log message, something
>>> like "xxx has run for Y seconds.", help?
>>> I personally hate hanging with no info.
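A minimal sketch of that logging idea (the helper name and the one-second
slice are invented here for illustration; this is not an existing API):

	/* needs <linux/dma-fence.h>, <drm/drm_print.h> */
	static signed long fence_wait_logged(struct dma_fence *fence)
	{
		signed long r;
		unsigned long secs = 0;

		for (;;) {
			/* Wait in one-second slices so a hung fence is
			 * at least visible in the kernel log. */
			r = dma_fence_wait_timeout(fence, false, HZ);
			if (r != 0)
				return r; /* signaled (>0) or error (<0) */

			secs++;
			DRM_INFO("fence %p has run for %lu seconds\n",
				 fence, secs);
		}
	}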
>> Ugh.  This loop appears susceptible to losing wake-ups.  There are
>> races between when a wake-up happens, when we clear the sleeping state,
>> and when we test the state to see if we should stay awake.  So yes,
>> implementing a dma_fence_wait_killable that handles all of that
>> correctly sounds like a very good idea.
> I am not clear here - could you be more specific about what races will
> happen here?  More below.
>> Eric
>>>> If the ring is hanging for some reason, allow recovering the wait by
>>>> sending a fatal signal.
>>>>
>>>> Originally-by: David Panariti <David.Panariti@xxxxxxx>
>>>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@xxxxxxx>
>>>> ---
>>>>  drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 14 ++++++++++----
>>>>  1 file changed, 10 insertions(+), 4 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
>>>> index eb80edf..37a36af 100644
>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
>>>> @@ -421,10 +421,16 @@ int amdgpu_ctx_wait_prev_fence(struct amdgpu_ctx *ctx, unsigned ring_id)
>>>>
>>>>  	if (other) {
>>>>  		signed long r;
>>>> -		r = dma_fence_wait_timeout(other, false, MAX_SCHEDULE_TIMEOUT);
>>>> -		if (r < 0) {
>>>> -			DRM_ERROR("Error (%ld) waiting for fence!\n", r);
>>>> -			return r;
>>>> +
>>>> +		while (true) {
>>>> +			if ((r = dma_fence_wait_timeout(other, true,
>>>> +					MAX_SCHEDULE_TIMEOUT)) >= 0)
>>>> +				return 0;
>>>> +
> Do you mean that by the time I reach here some other thread from my
> group already might have dequeued SIGKILL, since it's a shared signal,
> and hence fatal_signal_pending will return false?  Or are you talking
> about the dma_fence_wait_timeout implementation in
> dma_fence_default_wait with schedule_timeout?
Given Oleg's earlier comment about the scheduler having special cases
for signals, I might be wrong.  But in general there is a pattern:
	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (loop_is_done())
			break;
		schedule();
	}
	set_current_state(TASK_RUNNING);
If you violate that pattern by testing for a condition without having
first set your task to TASK_UNINTERRUPTIBLE (or whatever your sleep
state is), then it is possible to miss a wake-up that arrives after you
test the condition but before you actually go to sleep.
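Concretely, the racy ordering looks like this (an illustrative snippet
only, reusing the loop_is_done() placeholder from the pattern above):

	for (;;) {
		if (loop_is_done())	/* 1: tested while still TASK_RUNNING */
			break;
		/*
		 * Window: if the waker sets the condition and calls
		 * wake_up() right here, it finds this task TASK_RUNNING
		 * and the wake-up is a no-op.
		 */
		set_current_state(TASK_UNINTERRUPTIBLE);	/* 2 */
		schedule();		/* 3: sleeps; the wake-up was lost */
	}
	set_current_state(TASK_RUNNING);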
Thus I am quite concerned that there is a subtle corner case where you
can miss a wakeup and not retest fatal_signal_pending().

Given that there is a timeout, the worst case might have you sleep for
MAX_SCHEDULE_TIMEOUT instead of indefinitely.
Without a comment explaining why this is safe, or without the
fatal_signal_pending check integrated into dma_fence_wait_timeout, I am
not comfortable with this loop.
Eric
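For illustration, integrating the check might look roughly like this (a
sketch only, not a tested patch: it is modelled loosely on
dma_fence_default_wait(), with the locking and callback setup elided,
and it assumes, as that function does, that a callback has been
installed which wakes the waiter when the fence signals):

	/* needs <linux/dma-fence.h>, <linux/sched/signal.h> */
	static signed long
	dma_fence_default_wait_killable(struct dma_fence *fence,
					signed long timeout)
	{
		signed long ret = timeout;

		for (;;) {
			/* Mark ourselves asleep *before* testing the
			 * conditions, per the pattern above, so no
			 * wake-up can be lost. */
			set_current_state(TASK_KILLABLE);

			if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT,
				     &fence->flags))
				break;
			if (fatal_signal_pending(current)) {
				ret = -ERESTARTSYS;
				break;
			}
			if (ret <= 0)
				break;	/* timed out */

			ret = schedule_timeout(ret);
		}
		__set_current_state(TASK_RUNNING);

		return ret;
	}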
>>>> +			if (fatal_signal_pending(current)) {
>>>> +				DRM_ERROR("Error (%ld) waiting for fence!\n", r);
>>>> +				return r;
>>>> +			}
>>>>  		}
>>>>  	}
>>>> --
>>>> 2.7.4
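Following up on the concern above: the caller-side loop could also be
made safe without any new core API by bounding each wait, so that
fatal_signal_pending() is re-tested at least once per slice even if a
wake-up is lost (a sketch with an invented one-second slice; waiting
non-interruptibly also avoids busy-looping while a non-fatal signal
stays pending):

	if (other) {
		signed long r;

		for (;;) {
			/* Uninterruptible but bounded: a lost wake-up now
			 * costs at most one slice, not MAX_SCHEDULE_TIMEOUT. */
			r = dma_fence_wait_timeout(other, false, HZ);
			if (r > 0)
				break;			/* fence signaled */
			if (r < 0) {
				DRM_ERROR("Error (%ld) waiting for fence!\n", r);
				return r;		/* hard error */
			}
			if (fatal_signal_pending(current))
				return -ERESTARTSYS;	/* killed while waiting */
			/* r == 0: slice timed out; wait again. */
		}
	}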