Re: [PATCH v2 4/7] scsi: ufs: core: Add hwq print for debug
From: Slade's Kernel Patch Bot
Date: Wed Mar 01 2023 - 13:55:45 EST
On 2/28/23 21:17, Powen Kao (高伯文) wrote:
> Hi Bao,
>
> Sure, we can first integrate your patch and see if anything is missing
> that needs further upstreaming. Due to a compact schedule, I would
> kindly ask if it will be ready by the end of this week? :) Thanks
>
>
This is Slade's kernel patch bot. When scanning his mailbox, I came across
this message, which appears to be a top-post. Please do not top-post on Linux
mailing lists.
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing in e-mail?
Please bottom-post to Linux mailing lists in the future. See also:
https://daringfireball.net/2007/07/on_top
If you believe this is an error, please address a message to Slade Watkins
<srw@xxxxxxxxxxxxxxxx>.
Thank you,
-- Slade's kernel patch bot
>
> On Mon, 2023-02-27 at 18:57 -0800, Bao D. Nguyen wrote:
>> On 2/26/2023 7:14 PM, Ziqi Chen wrote:
>>> Hi Powen,
>>>
>>> Bao D. Nguyen (quic_nguyenb@xxxxxxxxxxx) from QCOM has already made
>>> a patch to support MCQ abort.
>>>
>>> ++ Bao here so he is aware of it, in case your error handling patch
>>> conflicts with his abort handling patch.
>>>
>>>
>>> Best Regards,
>>>
>>> Ziqi
>>>
>>>
>>> On 2/23/2023 10:13 PM, Powen Kao (高伯文) wrote:
>>>> Hi Ziqi,
>>>>
>>>> Thanks for your comments.
>>>>
>>>> This piece of code successfully dumps the relevant registers on our
>>>> platform. As you know, the MCQ error handling flow is not ready yet,
>>>> so the insertion point might not seem reasonable.
>>>>
>>>> Maybe drop this patch for now; I will send it later with the error
>>>> handling patches.
>>>>
>>>>
>>>> On Thu, 2023-02-23 at 18:14 +0800, Ziqi Chen wrote:
>>>>> Hi Po-Wen,
>>>>>
>>>>> On 2/22/2023 11:04 AM, Po-Wen Kao wrote:
>>>>>> +void ufshcd_mcq_print_hwqs(struct ufs_hba *hba, unsigned long bitmap)
>>>>>> +{
>>>>>> +	int id, i;
>>>>>> +	char prefix[15];
>>>>>> +
>>>>>> +	if (!is_mcq_enabled(hba))
>>>>>> +		return;
>>>>>> +
>>>>>> +	for_each_set_bit(id, &bitmap, hba->nr_hw_queues) {
>>>>>> +		snprintf(prefix, sizeof(prefix), "q%d SQCFG: ", id);
>>>>>> +		ufshcd_hex_dump(prefix,
>>>>>> +				hba->mcq_base + MCQ_QCFG_SIZE * id,
>>>>>> +				MCQ_QCFG_SQ_SIZE);
>>>>>
>>>>> Is your purpose to dump the per-hardware-queue registers here? If
>>>>> yes, why not use ufsmcq_readl() to save them to a buffer and then
>>>>> use ufshcd_hex_dump() to dump the buffer? Are you sure
>>>>> ufshcd_hex_dump() can dump registers directly?
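>>>>>
>>>>> For example, an untested sketch of what I mean, reusing the id and
>>>>> prefix from your loop (assuming MCQ_QCFG_SQ_SIZE is a byte count of
>>>>> 32-bit registers):
>>>>>
>>>>> 	u32 buf[MCQ_QCFG_SQ_SIZE / sizeof(u32)];
>>>>> 	int j;
>>>>>
>>>>> 	/* Read each SQ config register into a local buffer first. */
>>>>> 	for (j = 0; j < ARRAY_SIZE(buf); j++)
>>>>> 		buf[j] = ufsmcq_readl(hba, MCQ_QCFG_SIZE * id +
>>>>> 				      j * sizeof(u32));
>>>>>
>>>>> 	/* Then hex-dump the buffer instead of the iomem pointer. */
>>>>> 	ufshcd_hex_dump(prefix, buf, sizeof(buf));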
>>>>>
>>>>>> +
>>>>>> +		snprintf(prefix, sizeof(prefix), "q%d CQCFG: ", id);
>>>>>> +		ufshcd_hex_dump(prefix,
>>>>>> +				hba->mcq_base + MCQ_QCFG_SIZE * id +
>>>>>> +					MCQ_QCFG_SQ_SIZE, MCQ_QCFG_CQ_SIZE);
>>>>>
>>>>> Same as the above comment.
>>>>>> +
>>>>>> +		for (i = 0; i < OPR_MAX; i++) {
>>>>>> +			snprintf(prefix, sizeof(prefix), "q%d OPR%d: ", id, i);
>>>>>> +			ufshcd_hex_dump(prefix, mcq_opr_base(hba, i, id),
>>>>>> +					mcq_opr_size[i]);
>>>>>
>>>>> Same.
>>>>>> + }
>>>>>> + }
>>>>>> +}
>>>>>> +
>>>>>>
>>>>>> @@ -574,7 +569,16 @@ void ufshcd_print_trs(struct ufs_hba *hba, unsigned long bitmap, bool pr_prdt)
>>>>>>  		if (pr_prdt)
>>>>>>  			ufshcd_hex_dump("UPIU PRDT: ", lrbp->ucd_prdt_ptr,
>>>>>>  					ufshcd_sg_entry_size(hba) * prdt_length);
>>>>>> +
>>>>>> +		if (is_mcq_enabled(hba)) {
>>>>>> +			cmd = lrbp->cmd;
>>>>>> +			if (!cmd)
>>>>>> +				return;
>>>>>> +			hwq = ufshcd_mcq_req_to_hwq(hba, scsi_cmd_to_rq(cmd));
>>>>>> +			ufshcd_mcq_print_hwqs(hba, 1 << hwq->id);
>>>>>
>>>>> Calling a register dump function from ufshcd_print_trs() is not
>>>>> reasonable; e.g., for each aborted request it would print out all
>>>>> of the hwq registers, which does not make sense.
>>>>>
>>>>> I think we should move it out of ufshcd_print_trs().
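>>>>>
>>>>> As a rough, untested sketch of what I have in mind, the dump could
>>>>> be done once from the abort path instead, using the helpers from
>>>>> this series:
>>>>>
>>>>> 	/* in ufshcd_abort(), after the existing debug prints */
>>>>> 	if (is_mcq_enabled(hba)) {
>>>>> 		struct ufs_hw_queue *hwq =
>>>>> 			ufshcd_mcq_req_to_hwq(hba, scsi_cmd_to_rq(cmd));
>>>>>
>>>>> 		/* dump only the affected queue's registers, once */
>>>>> 		ufshcd_mcq_print_hwqs(hba, 1 << hwq->id);
>>>>> 	}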
>>>>>
>>>>>> + }
>>>>>> }
>>>>>> +
>>>>>> }
>>>>>
>>>>> Best Regards,
>>>>>
>>>>> Ziqi
>>>>>
>>
>> Hi Powen,
>>
>> I am going to push the MCQ abort handling and MCQ error handling code
>> upstream for review in a couple of days. Would that work for you?
>>
>> Regards,
>> Bao
>>