> Let me think about this over the weekend... Do you have performance
> numbers for this change?

Thank you, yes we tested mainly for the SPI cases (Master and Slave mode),
where we saw a peak delay of 400ms for transaction completion, and this
varied with CPU load. After adding the patch to not wait for DMA TX
completion and instead rely on the EOW interrupt, the peak latency reduced
to 2ms.

> If we make sure that this only affects non-cyclic transfers, with an
> in-code comment to explain the expectations from the user, I think this
> can be safe.

Sure, I will add this in the next revision.
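For reference, a rough sketch of the expectation on the consumer side
(illustration only, the function and variable names below are hypothetical
and not from an actual driver): a peripheral driver that can detect
completion on its own, e.g. via the SPI EOW interrupt, simply omits
DMA_PREP_INTERRUPT from the prep flags, while a consumer that still needs
the UDMA driver to verify the data drained to the peer keeps passing it:

#include <linux/dmaengine.h>
#include <linux/scatterlist.h>

/* Hypothetical consumer helper, for illustration only. */
static int example_submit_tx(struct dma_chan *chan, struct scatterlist *sgl,
			     unsigned int sglen, bool need_completion_check)
{
	struct dma_async_tx_descriptor *desc;
	unsigned long flags = DMA_CTRL_ACK;

	/*
	 * With DMA_PREP_INTERRUPT the UDMA driver keeps the current
	 * behaviour and reports the descriptor as done only once the peer
	 * byte count shows the data has drained towards the PDMA. Without
	 * it, completion is reported right away and the consumer must make
	 * sure no stale data is left in the DMA fabric (e.g. by waiting for
	 * its own EOW interrupt).
	 */
	if (need_completion_check)
		flags |= DMA_PREP_INTERRUPT;

	desc = dmaengine_prep_slave_sg(chan, sgl, sglen, DMA_MEM_TO_DEV, flags);
	if (!desc)
		return -EIO;

	dmaengine_submit(desc);
	dma_async_issue_pending(chan);

	return 0;
}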
Signed-off-by: Vaishnav Achath <vaishnav.a@xxxxxx>
---
drivers/dma/ti/k3-udma.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c
index 39b330ada200..03d579068453 100644
--- a/drivers/dma/ti/k3-udma.c
+++ b/drivers/dma/ti/k3-udma.c
@@ -263,6 +263,7 @@ struct udma_chan_config {
 	enum udma_tp_level channel_tpl; /* Channel Throughput Level */
 
 	u32 tr_trigger_type;
+	unsigned long tx_flags;
 
 	/* PKDMA mapped channel */
 	int mapped_channel_id;
@@ -1057,7 +1058,7 @@ static bool udma_is_desc_really_done(struct udma_chan *uc, struct udma_desc *d)
 
 	/* Only TX towards PDMA is affected */
 	if (uc->config.ep_type == PSIL_EP_NATIVE ||
-	    uc->config.dir != DMA_MEM_TO_DEV)
+	    uc->config.dir != DMA_MEM_TO_DEV || !(uc->config.tx_flags & DMA_PREP_INTERRUPT))
 		return true;
 
 	peer_bcnt = udma_tchanrt_read(uc, UDMA_CHAN_RT_PEER_BCNT_REG);
@@ -3418,6 +3419,8 @@ udma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
 	if (!burst)
 		burst = 1;
 
+	uc->config.tx_flags = tx_flags;
+
 	if (uc->config.pkt_mode)
 		d = udma_prep_slave_sg_pkt(uc, sgl, sglen, dir, tx_flags,
 					   context);