On Thu, 2012-05-03 at 13:11 +0530, Laxman Dewangan wrote:

+ * Initial number of descriptors to allocate for each channel during
+ * allocation. More descriptors will be allocated dynamically if
+ * the client needs them.
+ */
+#define DMA_NR_DESCS_PER_CHANNEL	4
+#define DMA_NR_REQ_PER_DESC		8

All of these should be namespaced; APB and AHB are fairly generic names.

Fine, then I will go with APB_DMA_NR_DESC_xxx and APB_DMA_NR_REQ_xxx.
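So, for instance (illustrative only; the exact names are still to be settled):

	/* hypothetical namespaced form, per the APB_DMA_ scheme above */
	#define APB_DMA_NR_DESCS_PER_CHANNEL	4
	#define APB_DMA_NR_REQS_PER_DESC	8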
+enum dma_transfer_mode {
+	DMA_MODE_NONE,
+	DMA_MODE_ONCE,
+	DMA_MODE_CYCLE,
+	DMA_MODE_CYCLE_HALF_NOTIFY,
+};

I don't understand why you would need to keep track of these.

I am allowing multiple desc requests in dma through prep_slave and
prep_cyclic. So when the dma channel does not have any request, it can
set its mode to NONE.
You get a request for a DMA transfer, you have the properties of the
transfer, you prepare your descriptors and then submit. Why would you
need to create the above modes and remember them?
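For reference, the generic dmaengine flow from the client side already
carries that information (a sketch; chan, sgl, nents and slave_cfg are
assumed to be set up elsewhere, error handling omitted):

	struct dma_async_tx_descriptor *txd;
	dma_cookie_t cookie;

	/* transfer properties are given up front via the slave config */
	dmaengine_slave_config(chan, &slave_cfg);

	/* prepare descriptors for a once-off slave transfer */
	txd = dmaengine_prep_slave_sg(chan, sgl, nents, DMA_MEM_TO_DEV,
				      DMA_PREP_INTERRUPT);

	/* submit and kick the engine; no per-channel "mode" is stored */
	cookie = dmaengine_submit(txd);
	dma_async_issue_pending(chan);

The cyclic case comes in through a separate prep call
(device_prep_dma_cyclic), so the driver can derive once-vs-cyclic from
which prep was used rather than remembering a mode.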
+struct tegra_dma_desc {
+	struct dma_async_tx_descriptor	txd;
+	int				bytes_requested;
+	int				bytes_transferred;
+	enum dma_status			dma_status;
+	struct list_head		node;
+	struct list_head		tx_list;
+	struct list_head		cb_node;
+	bool				ack_reqd;
+	bool				cb_due;
+};
What are these two for? They seem redundant to me.

+static int allocate_tegra_desc(struct tegra_dma_channel *tdc,
+		int ndma_desc, int nsg_req)

Please follow a single convention for naming in the driver: either
xxx_tegra_yyy or tegra_xxx_yyy... BUT not both!

Sure, I will do it.

+	/* Initialize DMA descriptors */
+	for (i = 0; i < ndma_desc; ++i) {
+		dma_desc = kzalloc(sizeof(struct tegra_dma_desc), GFP_ATOMIC);

Kernel convention is kzalloc(sizeof(*dma_desc), ...).

Yes, I will do it. I missed this cleanup.
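That is, the allocation becomes:

		/* sizeof(*ptr) stays correct if the type ever changes */
		dma_desc = kzalloc(sizeof(*dma_desc), GFP_ATOMIC);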
+
+	/* Check from free list desc */
+	if (!list_empty(&tdc->free_dma_desc)) {
+		dma_desc = list_first_entry(&tdc->free_dma_desc,
+					typeof(*dma_desc), node);
+		list_del(&dma_desc->node);
+		goto end;
+	}

Sorry, I don't like this jumping and returning for two lines of code.
It makes much more sense to return from here.

Then the goto will be replaced with a direct return.
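Something along these lines (a sketch only; a tegra_dma_desc_get()
helper shaped like the patch code above is assumed):

	static struct tegra_dma_desc *tegra_dma_desc_get(
			struct tegra_dma_channel *tdc)
	{
		struct tegra_dma_desc *dma_desc;

		/* reuse a descriptor from the free list when available */
		if (!list_empty(&tdc->free_dma_desc)) {
			dma_desc = list_first_entry(&tdc->free_dma_desc,
						typeof(*dma_desc), node);
			list_del(&dma_desc->node);
			return dma_desc;
		}

		/* otherwise allocate a fresh descriptor dynamically */
		return kzalloc(sizeof(*dma_desc), GFP_ATOMIC);
	}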
+static void tegra_dma_desc_done_locked(struct tegra_dma_channel *tdc,
+		struct tegra_dma_desc *dma_desc)
+{
+	if (dma_desc->ack_reqd)
+		list_add_tail(&dma_desc->node, &tdc->wait_ack_dma_desc);
+	else
+		list_add_tail(&dma_desc->node, &tdc->free_dma_desc);
+}

Do you have an unlocked version of this function? The name suggests so...

I did not require the unlocked version of the function. I will remove
the _locked suffix and explain in a comment.

What does ack_reqd mean?

The client needs to ack this desc before it can be reused.
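For what it's worth, the dmaengine core already tracks this through the
DMA_CTRL_ACK flag in txd.flags, so the check could lean on the core
helper instead of a private bool (a sketch; the list names are from the
patch):

	#include <linux/dmaengine.h>

	static void tegra_dma_desc_done(struct tegra_dma_channel *tdc,
			struct tegra_dma_desc *dma_desc)
	{
		/* async_tx_test_ack() is true once the client acked txd */
		if (async_tx_test_ack(&dma_desc->txd))
			list_add_tail(&dma_desc->node, &tdc->free_dma_desc);
		else
			list_add_tail(&dma_desc->node,
					&tdc->wait_ack_dma_desc);
	}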
+
+static struct tegra_dma_sg_req *tegra_dma_sg_req_get(
+		struct tegra_dma_channel *tdc)
In this and other functions you have used goto to unlock and return.
Rather than that, why don't you simplify the code by calling these
while holding the lock?
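i.e. take the lock once in the caller and let the helpers assume it is
held, something like (sketch only):

	unsigned long flags;
	struct tegra_dma_sg_req *sg_req;

	spin_lock_irqsave(&tdc->lock, flags);
	/* helper expects tdc->lock to be held */
	sg_req = tegra_dma_sg_req_get(tdc);
	if (sg_req)
		list_add_tail(&sg_req->node, &tdc->pending_sg_req);
	spin_unlock_irqrestore(&tdc->lock, flags);

	return sg_req;	/* single unlock point, no goto needed */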
+		dma_cookie_complete(&dma_desc->txd);
+	}

Does this make sense? You are marking an aborted descriptor as complete.

If I don't call it, the completed cookie of that channel will not get
updated, and on query it will be reported as DMA_IN_PROGRESS.

+	if (list_empty(&tdc->pending_sg_req)) {
+		dev_err(tdc2dev(tdc),
+			"%s(): Dma is running without any req list\n",
+			__func__);

This is just a waste of real estate and very ugly; move it to one or
two lines.

+		tegra_dma_stop(tdc);
+		return false;
+	}

You should mark the descriptor as complete before calling the callback.

It is there. I am calling dma_cookie_complete() from
handle_once_dma_done(), which gets called from the ISR.
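On that ordering point, the completion path would then look roughly
like this (a sketch; names follow the patch, and tdc->lock is assumed
held on entry):

	/* mark the cookie complete first, while still under the lock */
	dma_cookie_complete(&dma_desc->txd);
	callback = dma_desc->txd.callback;
	callback_param = dma_desc->txd.callback_param;
	spin_unlock_irqrestore(&tdc->lock, flags);

	/* then invoke the client callback outside the lock */
	if (callback)
		callback(callback_param);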
+
+	/*
+	 * Check that the head req on the list is in flight.
+	 * If it is not in flight, abort the transfer, as
+	 * transfer looping cannot continue.
+	 */
+	hsgreq = list_first_entry(&tdc->pending_sg_req, typeof(*hsgreq), node);
+	if (!hsgreq->configured) {
+		tegra_dma_stop(tdc);
+		dev_err(tdc2dev(tdc),
+			"Error in dma transfer loop, aborting dma\n");
+		tegra_dma_abort_all(tdc);
+		return false;
+	}
+
+	/* Configure next request in single buffer mode */
+	if (!to_terminate && (tdc->dma_mode == DMA_MODE_CYCLE))
+		tdc_configure_next_head_desc(tdc);
+	return true;
+}
+
+static void tegra_dma_tasklet(unsigned long data)
+{
+	struct tegra_dma_channel *tdc = (struct tegra_dma_channel *)data;
+	unsigned long flags;
+	dma_async_tx_callback callback = NULL;
+	void *callback_param = NULL;
+	struct tegra_dma_desc *dma_desc;
+	struct list_head cb_dma_desc_list;
+
+	INIT_LIST_HEAD(&cb_dma_desc_list);
+	spin_lock_irqsave(&tdc->lock, flags);
+	if (list_empty(&tdc->cb_desc)) {
+		spin_unlock_irqrestore(&tdc->lock, flags);
+		return;
+	}
+	list_splice_init(&tdc->cb_desc, &cb_dma_desc_list);
+	spin_unlock_irqrestore(&tdc->lock, flags);
+
+	while (!list_empty(&cb_dma_desc_list)) {
+		dma_desc = list_first_entry(&cb_dma_desc_list,
+					typeof(*dma_desc), cb_node);
+		list_del(&dma_desc->cb_node);
+
+		callback = dma_desc->txd.callback;
+		callback_param = dma_desc->txd.callback_param;
+		dma_desc->cb_due = false;
+		if (callback)
+			callback(callback_param);
Also, the tasklet is supposed to move the next pending descriptor to
the engine; I don't see that happening here.
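Something like the following at the tail of the tasklet is what I would
expect (a sketch; the busy flag and the tdc_start_head_req() helper are
hypothetical here):

	/* after the callbacks: push the next pending request, if idle */
	spin_lock_irqsave(&tdc->lock, flags);
	if (!list_empty(&tdc->pending_sg_req) && !tdc->busy)
		tdc_start_head_req(tdc);	/* hypothetical helper */
	spin_unlock_irqrestore(&tdc->lock, flags);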
+	/* Call callbacks if they were pending before aborting requests */
+	while (!list_empty(&cb_dma_desc_list)) {
+		dma_desc = list_first_entry(&cb_dma_desc_list,
+					typeof(*dma_desc), cb_node);
+		list_del(&dma_desc->cb_node);
+		callback = dma_desc->txd.callback;
+		callback_param = dma_desc->txd.callback_param;

Again, the callback would be called from the tasklet, so why would it
need to be called from here, and why would it be pending?

What happens if some callbacks are pending because the tasklet has not
been scheduled yet and, in the meantime, the dma gets terminated
(especially on a multi-core system)?
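One way to close that race is to quiesce the tasklet in the terminate
path and simply drop the unserviced callbacks there (a sketch; the
tdc->tasklet member is assumed, and the channel interrupt is assumed to
be quiesced first):

	static int tegra_dma_terminate_all(struct tegra_dma_channel *tdc)
	{
		struct tegra_dma_desc *dma_desc;
		unsigned long flags;

		/* ensure the tasklet is not running and won't run again */
		tasklet_kill(&tdc->tasklet);

		spin_lock_irqsave(&tdc->lock, flags);
		tegra_dma_stop(tdc);
		tegra_dma_abort_all(tdc);

		/* anything left on cb_desc was never serviced; drop it */
		while (!list_empty(&tdc->cb_desc)) {
			dma_desc = list_first_entry(&tdc->cb_desc,
						typeof(*dma_desc), cb_node);
			list_del(&dma_desc->cb_node);
			dma_desc->cb_due = false;
		}
		spin_unlock_irqrestore(&tdc->lock, flags);

		return 0;
	}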