Hi Can/Asutosh,
A few questions about MCQ configuration:
On Tue, 2022-07-19 at 00:01 -0700, Can Guo wrote:
Yes, looks like it should be. Let me take another look at this logic.
From: Asutosh Das <quic_asutoshd@xxxxxxxxxxx>
Adds MCQ support to the UFS driver.
Co-developed-by: Can Guo <quic_cang@xxxxxxxxxxx>
Signed-off-by: Asutosh Das <quic_asutoshd@xxxxxxxxxxx>
Signed-off-by: Can Guo <quic_cang@xxxxxxxxxxx>
---
+void ufshcd_mcq_config_mac(struct ufs_hba *hba)
+{
+ u32 val = ufshcd_readl(hba, REG_UFS_MCQ_CFG);
+
+ val &= ~MCQ_CFG_MAC_MASK;
+ val |= hba->dev_info.bqueuedepth << MCQ_CFG_MAC_OFFSET;
+ ufshcd_writel(hba, val, REG_UFS_MCQ_CFG);
Here you set MaxActiveCommand to dev_info.bqueuedepth (this limit comes
from the UFS device). I see in the qsize configuration that you set the
queue depth of each HW queue to hba->nutrs (this limit comes from
UFSHCI); shouldn't it be min(device limit, UFSHCI limit)?
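Something like this is what I have in mind, just as a sketch (min_t is
from the kernel's minmax helpers; bqueuedepth == 0, i.e. no limit
reported by the device, may need special-casing):

void ufshcd_mcq_config_mac(struct ufs_hba *hba)
{
	u32 val = ufshcd_readl(hba, REG_UFS_MCQ_CFG);
	/* honor both limits: UFSHCI (hba->nutrs) and device (bQueueDepth) */
	u32 mac = min_t(u32, hba->nutrs, hba->dev_info.bqueuedepth);

	val &= ~MCQ_CFG_MAC_MASK;
	val |= mac << MCQ_CFG_MAC_OFFSET;
	ufshcd_writel(hba, val, REG_UFS_MCQ_CFG);
}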
Good point. Will check and fix it.
+}
...
+EXPORT_SYMBOL_GPL(ufshcd_mcq_config_mac);
+
+
+ for_each_hw_queue(hba, i) {
+ hwq = &hba->uhq[i];
+ hwq->id = i;
+ qsize = hwq->max_entries * MCQ_ENTRY_SIZE_IN_DWORD - 1;
qsize here is based on hba->nutrs: 32 * 8 - 1 = 255, i.e. 256 DWORDs.
Per the draft spec, shouldn't it be 8 DWORDs in 4.0?
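Spelling out my reading, just to illustrate the two interpretations
(taking MCQ_ENTRY_SIZE_IN_DWORD as 8, i.e. a 32-byte SQ entry):

	/* as written: (entries * entry size in DWORDs) - 1 */
	qsize = hwq->max_entries * MCQ_ENTRY_SIZE_IN_DWORD - 1;
		/* = 32 * 8 - 1 = 255, which encodes 256 DWORDs total */

	/* my reading of the 4.0 draft: the field takes the per-entry
	 * size instead, i.e. 8 DWORDs */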
+
+ /* SQLBA */
+ ufsmcq_writel(hba, lower_32_bits(hwq->sqe_dma_addr),
+ MCQ_CFG_n(REG_SQLBA, i));
+ /* SQUBA */
+ ufsmcq_writel(hba, upper_32_bits(hwq->sqe_dma_addr),
+ MCQ_CFG_n(REG_SQUBA, i));
+ /* SQDAO */
+ ufsmcq_writel(hba, MCQ_ROP_OFFSET_n(ROP_SQD, i),
+ MCQ_CFG_n(REG_SQDAO, i));
...
}
+
+out:
+ hba->mcq_base = res->base;
+ return 0;
+
+out_err:
+ ufshcd_mcq_release_resource(hba);
+ return ret;
+}
+
+int ufshcd_mcq_init(struct ufs_hba *hba)
+{
+ struct Scsi_Host *host = hba->host;
+ struct ufs_hw_queue *hwq;
+ int i, ret = 0;
+
+ if (!is_mcq_supported(hba))
+ return 0;
+
+ ret = ufshcd_mcq_config_resource(hba);
+ if (ret) {
+ dev_err(hba->dev, "Failed to config MCQ resource\n");
+ return ret;
+ }
+
+ ret = ufshcd_vops_config_mcq_rop(hba);
+ if (ret) {
+ dev_err(hba->dev, "MCQ Runtime Operation Pointers not configured\n");
+ goto out_err;
+ }
+
+ hba->nr_queues[HCTX_TYPE_DEFAULT] = num_possible_cpus();
UFSHCI 4.0 supports a maximum of 32 queues. For systems with fewer than
32 CPUs the CPU-to-queue mapping will be 1:1, but what about systems
with more than 32 CPUs?
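For example, capping the default queues, just as a sketch
(UFS_MCQ_MAX_QUEUES is a name I am making up here for the 32-queue
limit; the -2 leaves room for the poll queue and the reserved dev_cmd
queue below):

#define UFS_MCQ_MAX_QUEUES	32	/* hypothetical: UFSHCI 4.0 limit */

	hba->nr_queues[HCTX_TYPE_DEFAULT] = min_t(unsigned int,
						  num_possible_cpus(),
						  UFS_MCQ_MAX_QUEUES - 2);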
Yes, will take a look.
+ hba->nr_queues[HCTX_TYPE_READ] = 0;
+ hba->nr_queues[HCTX_TYPE_POLL] = 1;
+
+ for (i = 0; i < HCTX_MAX_TYPES; i++)
+ host->nr_hw_queues += hba->nr_queues[i];
+
+ host->can_queue = hba->nutrs;
Also here, can_queue is aligned with the UFSHCI limitation, not the UFS
device limit.
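i.e., something along these lines (sketch only, same min() idea as
above; again, bqueuedepth == 0 would need special-casing):

	/* expose no more slots than both host and device support */
	host->can_queue = min_t(int, hba->nutrs, hba->dev_info.bqueuedepth);
	host->cmd_per_lun = host->can_queue;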
Let me check what eMMC does and see if that can be adopted here.
+ host->cmd_per_lun = hba->nutrs;
...
+
+ /* One more reserved for dev_cmd_queue */
+ hba->nr_hw_queues = host->nr_hw_queues + 1;
+
+ hba->uhq = devm_kmalloc(hba->dev,
ufshcd_tune_unipro_params(hba);
@@ -9641,6 +9775,10 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
goto out_disable;
}
+ err = ufshcd_mcq_init(hba);
The driver will force the customer to use MCQ; how about adding a
configuration option that lets the customer choose (like eMMC CMDQ
does)?
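For instance, a module parameter, just as a sketch (use_mcq_mode is a
hypothetical name, and the hba->mcq_sup flag stands in for however the
driver records controller MCQ capability):

static bool use_mcq_mode = true;
module_param(use_mcq_mode, bool, 0444);
MODULE_PARM_DESC(use_mcq_mode, "Control MCQ mode for controllers supporting it");

static inline bool is_mcq_supported(struct ufs_hba *hba)
{
	return hba->mcq_sup && use_mcq_mode;
}

A Kconfig option would work just as well; the point is only that MCQ
stays optional on controllers that support it.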
Kind regards,
Bean