Hi,
+	/* Take it off the tree of receive intents */
+	if (!intent->reuse) {
+		spin_lock(&channel->intent_lock);
+		idr_remove(&channel->liids, intent->id);
+		spin_unlock(&channel->intent_lock);
+	}
+
+	/* Schedule the sending of a rx_done indication */
+	spin_lock(&channel->intent_lock);
+	list_add_tail(&intent->node, &channel->done_intents);
+	spin_unlock(&channel->intent_lock);
+
+	schedule_work(&channel->intent_work);

The idea is, by design, to have parallel non-blocking paths for rx and tx
(that is done as a part of rx by sending the rx_done command). Otherwise,
trying to send the rx_done command in the rx ISR context is a problem, since
the tx side can wait for FIFO space and, in the worst case, can even lead to
a potential deadlock if both the local and the remote side try the same.
Having said that, adding one more parallel path will hit performance if this
worker cannot get CPU cycles, or is blocked by other RT or HIGH_PRIO workers
on the global worker pool. So instead of queuing this work on the global
queue, could it be put on a local glink-edge-owned queue, or handled in a
threaded ISR? Downstream does the rx_done in a client-specific worker.
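
For illustration, the edge-owned queue suggestion could look roughly like the
sketch below. This is only a sketch, not the actual driver change: the
'rx_wq' field and its placement in the probe path are assumptions I am
making for the example.

```c
/* Sketch: give each glink edge its own ordered workqueue so the rx_done
 * work is not at the mercy of the global worker pool. 'rx_wq' is a
 * hypothetical field added to struct qcom_glink. */

/* in the edge probe/init path: */
glink->rx_wq = alloc_ordered_workqueue("glink_rx_done", WQ_HIGHPRI);
if (!glink->rx_wq)
	return -ENOMEM;

/* in the rx_done path, instead of schedule_work(): */
queue_work(glink->rx_wq, &channel->intent_work);

/* on remove, after the channels are torn down: */
destroy_workqueue(glink->rx_wq);
```

An ordered workqueue also keeps the rx_done indications in order per edge,
which schedule_work() on the system workqueue does not guarantee across
rescuer/concurrency decisions.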
Regards,
Sricharan