[PATCH 3.14 077/110] target: Use complete_all for se_cmd->t_transport_stop_comp
From: Greg Kroah-Hartman
Date: Sat Jun 28 2014 - 14:37:52 EST
3.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nicholas Bellinger <nab@xxxxxxxxxxxxxxx>
commit a95d6511303b848da45ee27b35018bb58087bdc6 upstream.
This patch fixes a bug where multiple waiters on ->t_transport_stop_comp
occur when a concurrent ABORT_TASK and a session reset both invoke
transport_wait_for_tasks() while waiting for the associated se_cmd
descriptor's backend processing to complete.
In this case, complete_all() must be invoked so that both waiters, in the
core_tmr_abort_task() and transport_generic_free_cmd() process contexts,
are woken up.
Cc: Thomas Glanzmann <thomas@xxxxxxxxxxxx>
Cc: Charalampos Pournaris <charpour@xxxxxxxxx>
Signed-off-by: Nicholas Bellinger <nab@xxxxxxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
drivers/target/target_core_transport.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
--- a/drivers/target/target_core_transport.c
+++ b/drivers/target/target_core_transport.c
@@ -560,7 +560,7 @@ static int transport_cmd_check_stop(stru
spin_unlock_irqrestore(&cmd->t_state_lock, flags);
- complete(&cmd->t_transport_stop_comp);
+ complete_all(&cmd->t_transport_stop_comp);
return 1;
}
@@ -676,7 +676,7 @@ void target_complete_cmd(struct se_cmd *
if (cmd->transport_state & CMD_T_ABORTED &&
cmd->transport_state & CMD_T_STOP) {
spin_unlock_irqrestore(&cmd->t_state_lock, flags);
- complete(&cmd->t_transport_stop_comp);
+ complete_all(&cmd->t_transport_stop_comp);
return;
} else if (!success) {
INIT_WORK(&cmd->work, target_complete_failure_work);
@@ -1748,7 +1748,7 @@ void target_execute_cmd(struct se_cmd *c
cmd->se_tfo->get_task_tag(cmd));
spin_unlock_irq(&cmd->t_state_lock);
- complete(&cmd->t_transport_stop_comp);
+ complete_all(&cmd->t_transport_stop_comp);
return;
}
--
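For illustration only (a hypothetical standalone module, not part of this patch):
the sketch below shows the wakeup semantics the fix relies on. With two kernel
threads blocked in wait_for_completion() on the same struct completion, a single
complete() would release only one of them, whereas complete_all() releases both,
matching the concurrent ABORT_TASK + session reset scenario described above.

	#include <linux/module.h>
	#include <linux/kthread.h>
	#include <linux/completion.h>
	#include <linux/delay.h>

	/* Stand-in for se_cmd->t_transport_stop_comp. */
	static DECLARE_COMPLETION(stop_comp);

	static int waiter_fn(void *data)
	{
		pr_info("waiter %s: waiting on stop_comp\n", (char *)data);
		wait_for_completion(&stop_comp);
		pr_info("waiter %s: woken up\n", (char *)data);
		return 0;
	}

	static int __init demo_init(void)
	{
		/* Two waiters, analogous to ABORT_TASK and session reset contexts. */
		kthread_run(waiter_fn, "A", "demo-waiter-a");
		kthread_run(waiter_fn, "B", "demo-waiter-b");

		msleep(100);
		/*
		 * complete(&stop_comp) would release only one waiter and leave
		 * the other blocked forever; complete_all() marks the completion
		 * done for every current and future waiter.
		 */
		complete_all(&stop_comp);
		return 0;
	}

	static void __exit demo_exit(void)
	{
	}

	module_init(demo_init);
	module_exit(demo_exit);
	MODULE_LICENSE("GPL");

Note that complete_all() also leaves the completion in the done state, so a
waiter that calls wait_for_completion() afterwards returns immediately rather
than blocking, which is why it is the safer choice when the number of waiters
is not known in advance.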