On Tue, Mar 08, 2016 at 08:25:47AM +0530, Vinod Koul wrote:
On Mon, Mar 07, 2016 at 09:30:24PM +0100, Maxime Ripard wrote:
On Mon, Mar 07, 2016 at 04:08:57PM +0100, Boris Brezillon wrote:
Hi Vinod,
On Mon, 7 Mar 2016 20:24:29 +0530
Vinod Koul <vinod.koul@xxxxxxxxx> wrote:
On Mon, Mar 07, 2016 at 10:59:31AM +0100, Boris Brezillon wrote:
+/* Dedicated DMA parameter register layout */
+#define SUN4I_DDMA_PARA_DST_DATA_BLK_SIZE(n)	(((n) - 1) << 24)
+#define SUN4I_DDMA_PARA_DST_WAIT_CYCLES(n)	(((n) - 1) << 16)
+#define SUN4I_DDMA_PARA_SRC_DATA_BLK_SIZE(n)	(((n) - 1) << 8)
+#define SUN4I_DDMA_PARA_SRC_WAIT_CYCLES(n)	(((n) - 1) << 0)
+
+/**
+ * struct sun4i_dma_chan_config - DMA channel config
+ *
+ * @para: contains information about block size and time before checking
+ * DRQ line. This is device specific and only applicable to dedicated
+ * DMA channels
What information? Can you elaborate? And why can't you use the existing
dma_slave_config for this?
Block size is related to the device FIFO size. I guess it allows the
DMA channel to launch a transfer of X bytes without having to check the
DRQ line (the line telling the DMA engine it can transfer more data
to/from the device). The wait cycles value is apparently the number of
clock cycles the engine should wait before polling/checking the DRQ
line status between each block transfer. I'm not sure what is gained by
setting WAIT_CYCLES() to something != 1, but in their BSP, Allwinner
tweaks it depending on the device.
We already have a block size, aka src/dst_maxburst; why not use that one?
I'm not sure it's really the same thing. The DMA controller also has a
burst parameter, which is either 1 byte or 8 bytes, and that ends up
being different from this one.
Why does the dmaengine need to wait? Can you explain that?
We have no idea, we thought you might have one :)
It doesn't really make sense to us, but it does have a significant
impact on the throughput.