Re: [PATCH v9 2/3] dmaengine: ptdma: register PTDMA controller as a DMA resource

From: Vinod Koul
Date: Wed Jun 16 2021 - 00:18:16 EST


On 15-06-21, 17:04, Sanjay R Mehta wrote:
>
>
> On 6/9/2021 12:26 AM, Vinod Koul wrote:
>
> [snipped]
>
> >> +static struct pt_dma_desc *pt_alloc_dma_desc(struct pt_dma_chan *chan,
> >> +                                             unsigned long flags)
> >> +{
> >> +        struct pt_dma_desc *desc;
> >> +
> >> +        desc = kmem_cache_zalloc(chan->pt->dma_desc_cache, GFP_NOWAIT);
> >> +        if (!desc)
> >> +                return NULL;
> >> +
> >> +        vchan_tx_prep(&chan->vc, &desc->vd, flags);
> >> +
> >> +        desc->pt = chan->pt;
> >> +        desc->issued_to_hw = 0;
> >> +        INIT_LIST_HEAD(&desc->cmdlist);
> >
> > why do you need your own list, the lists in vc should suffice?
> >
>
> Do you think this should be a major blocker for pulling this series into 5.14?
> Would you be okay with accepting this change in a subsequent driver update?

Sorry, that is not how upstream works; I would like things to be in
better shape before we merge this.
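
On the list question above: the descriptors are already tracked on the
virt-dma channel lists, so a private cmdlist inside the descriptor should
only be needed if one descriptor really maps to several hardware commands.
Just as an untested sketch of what I mean (field names purely
illustrative):

struct pt_dma_desc {
        struct virt_dma_desc vd;
        struct pt_device *pt;
        bool issued_to_hw;
        struct pt_dma_cmd cmd;          /* single command, no private list */
};

static struct pt_dma_desc *pt_next_dma_desc(struct pt_dma_chan *chan)
{
        /* caller holds chan->vc.lock; walk the vchan issued list */
        struct virt_dma_desc *vd = vchan_next_desc(&chan->vc);

        return vd ? container_of(vd, struct pt_dma_desc, vd) : NULL;
}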

>
> >> +static int pt_resume(struct dma_chan *dma_chan)
> >> +{
> >> +        struct pt_dma_chan *chan = to_pt_chan(dma_chan);
> >> +        struct pt_dma_desc *desc = NULL;
> >> +        unsigned long flags;
> >> +
> >> +        spin_lock_irqsave(&chan->vc.lock, flags);
> >> +        pt_start_queue(&chan->pt->cmd_q);
> >> +        desc = __pt_next_dma_desc(chan);
> >> +        spin_unlock_irqrestore(&chan->vc.lock, flags);
> >> +
> >> +        /* If there was something active, re-start */
> >> +        if (desc)
> >> +                pt_cmd_callback(desc, 0);
> >
> > this doesn't sound correct. In pause you stop the queue, so starting the
> > queue should be done here... Why grab a descriptor?
> >
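
To spell that out, an untested sketch of what I would expect resume to
look like, reusing only the helpers already present in this patch:

static int pt_resume(struct dma_chan *dma_chan)
{
        struct pt_dma_chan *chan = to_pt_chan(dma_chan);
        unsigned long flags;

        spin_lock_irqsave(&chan->vc.lock, flags);
        /* undo pt_pause(): just restart the command queue */
        pt_start_queue(&chan->pt->cmd_q);
        spin_unlock_irqrestore(&chan->vc.lock, flags);

        return 0;
}
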
> >> +static int pt_terminate_all(struct dma_chan *dma_chan)
> >> +{
> >> +        struct pt_dma_chan *chan = to_pt_chan(dma_chan);
> >> +
> >> +        vchan_free_chan_resources(&chan->vc);
> >
> > what about the descriptors? Are you not going to clear the lists and
> > free them...
> >
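
For reference, terminate_all in a virt-dma based driver usually ends up
looking something like the below (untested sketch); the point is that the
queued descriptors get collected and freed explicitly:

static int pt_terminate_all(struct dma_chan *dma_chan)
{
        struct pt_dma_chan *chan = to_pt_chan(dma_chan);
        unsigned long flags;
        LIST_HEAD(head);

        spin_lock_irqsave(&chan->vc.lock, flags);
        vchan_get_all_descriptors(&chan->vc, &head);
        spin_unlock_irqrestore(&chan->vc.lock, flags);

        /* drop everything still queued on this channel */
        vchan_dma_desc_free_list(&chan->vc, &head);

        return 0;
}
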
> >> +int pt_dmaengine_register(struct pt_device *pt)
> >> +{
> >> +        struct pt_dma_chan *chan;
> >> +        struct dma_device *dma_dev = &pt->dma_dev;
> >> +        char *cmd_cache_name;
> >> +        char *desc_cache_name;
> >> +        int ret;
> >> +
> >> +        pt->pt_dma_chan = devm_kzalloc(pt->dev, sizeof(*pt->pt_dma_chan),
> >> +                                       GFP_KERNEL);
> >> +        if (!pt->pt_dma_chan)
> >> +                return -ENOMEM;
> >> +
> >> +        cmd_cache_name = devm_kasprintf(pt->dev, GFP_KERNEL,
> >> +                                        "%s-dmaengine-cmd-cache",
> >> +                                        pt->name);
> >> +        if (!cmd_cache_name)
> >> +                return -ENOMEM;
> >> +
> >> +        pt->dma_cmd_cache = kmem_cache_create(cmd_cache_name,
> >> +                                              sizeof(struct pt_dma_cmd),
> >> +                                              sizeof(void *),
> >> +                                              SLAB_HWCACHE_ALIGN, NULL);
> >> +        if (!pt->dma_cmd_cache)
> >> +                return -ENOMEM;
> >> +
> >> +        desc_cache_name = devm_kasprintf(pt->dev, GFP_KERNEL,
> >> +                                         "%s-dmaengine-desc-cache",
> >> +                                         pt->name);
> >> +        if (!desc_cache_name) {
> >> +                ret = -ENOMEM;
> >> +                goto err_cache;
> >> +        }
> >> +
> >> +        pt->dma_desc_cache = kmem_cache_create(desc_cache_name,
> >> +                                               sizeof(struct pt_dma_desc),
> >> +                                               sizeof(void *),
> >
> > sizeof void ptr?

This and many more comments have been left unanswered. Do you agree with
them, do you disagree? Hard to tell from silence...
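
On the sizeof(void *) question specifically: an align of 0 would normally
do here, since SLAB_HWCACHE_ALIGN already takes care of cache-line
alignment, e.g.:

pt->dma_desc_cache = kmem_cache_create(desc_cache_name,
                                       sizeof(struct pt_dma_desc),
                                       0, SLAB_HWCACHE_ALIGN, NULL);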

> >
> >> +                                               SLAB_HWCACHE_ALIGN, NULL);
> >> +        if (!pt->dma_desc_cache) {
> >> +                ret = -ENOMEM;
> >> +                goto err_cache;
> >> +        }
> >> +
> >> +        dma_dev->dev = pt->dev;
> >> +        dma_dev->src_addr_widths = DMA_SLAVE_BUSWIDTH_64_BYTES;
> >> +        dma_dev->dst_addr_widths = DMA_SLAVE_BUSWIDTH_64_BYTES;
> >> +        dma_dev->directions = DMA_MEM_TO_MEM;
> >> +        dma_dev->residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
> >> +        dma_cap_set(DMA_MEMCPY, dma_dev->cap_mask);
> >> +        dma_cap_set(DMA_INTERRUPT, dma_dev->cap_mask);
> >> +        dma_cap_set(DMA_PRIVATE, dma_dev->cap_mask);
> >
> > Why DMA_PRIVATE ? this is a dma mempcy controller ...
>
> This DMA controller is intended to be used with AMD Non-Transparent
> Bridge devices and not for general-purpose peripheral DMA, hence it is
> marked as DMA_PRIVATE.

Okay, maybe add a comment so that people know.
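
Something along these lines next to the cap setting would do:

        /*
         * PTDMA is intended for use with AMD Non-Transparent Bridge
         * devices, not as a general purpose peripheral DMA channel,
         * hence DMA_PRIVATE.
         */
        dma_cap_set(DMA_PRIVATE, dma_dev->cap_mask);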

--
~Vinod