Re: [PATCH v4 3/7] media: platform: amd: Add isp4 fw and hw interface

From: Du, Bin
Date: Wed Sep 24 2025 - 23:56:41 EST


Thanks Sultan.

On 9/24/2025 3:09 PM, Sultan Alsawaf wrote:
On Tue, Sep 23, 2025 at 05:24:11PM +0800, Du, Bin wrote:
On 9/22/2025 5:55 AM, Sultan Alsawaf wrote:
On Thu, Sep 11, 2025 at 06:08:43PM +0800, Bin Du wrote:
+	if (!mem_info)
+		return NULL;
+
+	mem_info->mem_size = mem_size;

mem_info->mem_size is not used anywhere, remove it.


Actually, mem_size will be used in the following patches in isp4_subdev.c, so I'd
like to keep it.

Ah, I didn't notice, my apologies.


It's okay. Actually, it is not used in this patch, so I don't think you are wrong :)

+	for (i = 0; i < buf_size / sizeof(u32); i++)
+		checksum += buffer[i];
+
+	surplus_ptr = (u8 *)&buffer[i];

Change cast to `(const u32 *)`


Sure, will modify it in the next version. I guess you mean `(const u8 *)`.

Yes, it should be `(const u8 *)`, apologies for the typo.


No worries, these are just coding details; what matters most is the idea, thanks for that.
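For illustration, here is a minimal userspace sketch of the checksum scheme discussed above: sum whole u32 words first, then add the surplus tail bytes that do not fill a full word. The function name and types are assumptions for this sketch, not the driver's actual code.

```c
#include <stdint.h>
#include <stddef.h>

/*
 * Hypothetical userspace model of the checksum loop quoted above:
 * whole u32 words are summed, then any surplus tail bytes are
 * added individually via a (const u8 *) pointer past the last word.
 */
static uint32_t compute_check_sum(const void *buf, size_t buf_size)
{
	const uint32_t *words = buf;
	const uint8_t *surplus;
	uint32_t checksum = 0;
	size_t i;
	size_t nwords = buf_size / sizeof(uint32_t);

	for (i = 0; i < nwords; i++)
		checksum += words[i];

	/* Surplus bytes that do not fill a whole u32 word */
	surplus = (const uint8_t *)&words[nwords];
	for (i = 0; i < buf_size % sizeof(uint32_t); i++)
		checksum += surplus[i];

	return checksum;
}
```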

+
+	guard(mutex)(&ispif->cmdq_mutex);
+
+	list_for_each_entry_safe(buf_node, tmp_node, &ispif->cmdq, list) {
+		list_del(&buf_node->list);
+		kfree(buf_node);
+	}

Move the whole list to a local LIST_HEAD(free_list) variable and then release
the lock. Then you can list_for_each_entry_safe() without needing to do a
list_del() every time, and you won't need to hold the lock the whole time.


Thanks for the suggestion, but it seems that would make the code more
complicated. E.g. this is the implementation I have in mind, if I understand you correctly:

void isp4if_clear_cmdq(struct isp4_interface *ispif)
{
	struct isp4if_cmd_element *buf_node;
	struct isp4if_cmd_element *tmp_node;
	LIST_HEAD(free_list);

	{
		guard(mutex)(&ispif->cmdq_mutex);
		if (list_empty(&ispif->cmdq))
			return;
		free_list = ispif->cmdq;
		INIT_LIST_HEAD(&ispif->cmdq);
	}

	list_for_each_entry_safe(buf_node, tmp_node, &free_list, list) {
		bool quit = false;

		if (buf_node->list.next == &ispif->cmdq)
			quit = true;
		kfree(buf_node);
		if (quit)
			return;
	}
}
So I'd like to keep the previous implementation, removing list_del() and adding
INIT_LIST_HEAD(); this would be the code after refinement:

void isp4if_clear_cmdq(struct isp4_interface *ispif)
{
	struct isp4if_cmd_element *buf_node;
	struct isp4if_cmd_element *tmp_node;

	guard(mutex)(&ispif->cmdq_mutex);

	list_for_each_entry_safe(buf_node, tmp_node, &ispif->cmdq, list)
		kfree(buf_node);
	INIT_LIST_HEAD(&ispif->cmdq);
}
It's much simpler, and based on our testing, the command and buffer queues
won't hold more than 5 elements by the time this runs, so the lock hold time
will be very short. What do you think?

This is what I am thinking (with cmdq_mutex replaced with a spin lock):

void isp4if_clear_cmdq(struct isp4_interface *ispif)
{
	struct isp4if_cmd_element *buf_node, *tmp_node;
	LIST_HEAD(free_list);

	scoped_guard(spinlock, &ispif->cmdq_lock)
		list_splice_init(&ispif->cmdq, &free_list);

	list_for_each_entry_safe(buf_node, tmp_node, &free_list, list)
		kfree(buf_node);
}


Many thanks for the reference code; it's concise and easy to understand. I will incorporate it into the next version.
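For readers outside kernel land, the same "detach under lock, free outside" pattern can be sketched with POSIX primitives; the types and names below are illustrative stand-ins for the kernel's list_head machinery, not the driver's actual code.

```c
#include <pthread.h>
#include <stdlib.h>

/* Minimal singly linked queue standing in for the kernel list */
struct node {
	struct node *next;
};

struct queue {
	pthread_mutex_t lock;
	struct node *head;
};

static void queue_clear(struct queue *q)
{
	struct node *n, *tmp;

	/* Detach the whole list while holding the lock... */
	pthread_mutex_lock(&q->lock);
	n = q->head;
	q->head = NULL;
	pthread_mutex_unlock(&q->lock);

	/* ...then free the nodes with the lock already released,
	 * keeping the critical section as short as possible. */
	while (n) {
		tmp = n->next;
		free(n);
		n = tmp;
	}
}
```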

+	struct isp4if_cmd_element *cmd_ele = NULL;
+	struct isp4if_rb_config *rb_config;
+	struct device *dev = ispif->dev;
+	struct isp4fw_cmd cmd = {};

Use memset() to guarantee padding bits of cmd are zeroed, since this may not
guarantee it on all compilers.


Sure, will do it in the next version. Just a question: the padding bits never
seem to be used, so will it cause any problem if they are not zeroed?

Padding bits, if there are any, are used by isp4if_compute_check_sum() and are
also sent to the firmware.


Yes, this will affect the checksum value. However, based on my understanding, it will not affect the error-detection outcome, since the firmware uses the same padding bits during both checksum calculation and comparison. I apologize for the minor disagreement; I just want to avoid introducing redundant code, especially since similar cases appear in many places. We originally used memset() in the initial version, but switched to `{ }` initialization in subsequent versions based on review feedback. Please feel free to share your thoughts; if you believe it is still necessary, we will add it back.
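As a side note, the padding concern can be sketched in userspace: memset() zeroes every byte of the struct, padding included, so a byte-wise checksum over the struct is fully deterministic, whereas an `= {}` initializer only guarantees the named members are zeroed and may leave padding bytes unspecified on some compilers. The struct and helper below are hypothetical, purely for illustration.

```c
#include <stdint.h>
#include <string.h>

/* Illustrative struct with internal padding (3 bytes typically
 * follow `id` on most ABIs); names are hypothetical. */
struct fw_cmd {
	uint8_t  id;
	uint32_t param;
};

/* memset() zeroes every byte, padding included, before the named
 * members are set, so two identically initialized commands are
 * byte-for-byte identical. */
static void init_cmd(struct fw_cmd *cmd, uint8_t id, uint32_t param)
{
	memset(cmd, 0, sizeof(*cmd));
	cmd->id = id;
	cmd->param = param;
}
```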

+
+	ret = isp4if_insert_isp_fw_cmd(ispif, stream, &cmd);
+	if (ret) {
+		dev_err(dev, "fail for insert_isp_fw_cmd camId (0x%08x)\n", cmd_id);
+		if (cmd_ele) {
+			isp4if_rm_cmd_from_cmdq(ispif, cmd_ele->seq_num, cmd_ele->cmd_id);

Using isp4if_rm_cmd_from_cmdq() sort of implies that there is a risk that
cmd_ele may have been removed from the list somehow, even though the fw cmd
insertion failed? In any case, either make it truly safe by assuming that it's
unsafe to dereference cmd_ele right now, or just delete cmd_ele directly from
the list under the lock.


Good point, so I will change it to the following in the next version:
ret = isp4if_insert_isp_fw_cmd(ispif, stream, &cmd);
if (ret) {
	dev_err(dev, "fail for insert_isp_fw_cmd camId %s(0x%08x)\n",
		isp4dbg_get_cmd_str(cmd_id), cmd_id);
	if (cmd_ele) {
		cmd_ele = isp4if_rm_cmd_from_cmdq(ispif, seq_num, cmd_id);
		kfree(cmd_ele);
	}
}
The final cmd_ele to be freed will come from the cmdq, which is protected by
the mutex, so it will be safe.

Looks good to me!


Thanks for the confirmation.

+static int isp4if_send_buffer(struct isp4_interface *ispif, struct isp4if_img_buf_info *buf_info)
+{
+	struct isp4fw_cmd_send_buffer cmd = {};

Use memset() to guarantee padding bits are zeroed, since this may not guarantee
it on all compilers.


Sure, will do it in the next version. As mentioned above, the padding bits
never seem to be used; will it cause any problem if they are not zeroed?

Padding bits, if there are any, are used by isp4if_compute_check_sum() and are
also sent to the firmware.


Same comments as above.

+
+	guard(mutex)(&ispif->bufq_mutex);
+
+	list_for_each_entry_safe(buf_node, tmp_node, &ispif->bufq, node) {
+		list_del(&buf_node->node);
+		kfree(buf_node);
+	}

Move the whole list to a local LIST_HEAD(free_list) variable and then release
the lock. Then you can list_for_each_entry_safe() without needing to do a
list_del() every time, and you won't need to hold the lock the whole time.


Same comments as for the cmdq above.

This is what I am thinking (with bufq_mutex replaced with a spin lock):

void isp4if_clear_bufq(struct isp4_interface *ispif)
{
	struct isp4if_img_buf_node *buf_node, *tmp_node;
	LIST_HEAD(free_list);

	scoped_guard(spinlock, &ispif->bufq_lock)
		list_splice_init(&ispif->bufq, &free_list);

	list_for_each_entry_safe(buf_node, tmp_node, &free_list, node)
		kfree(buf_node);
}


Very good reference, I will update it in the next version.

+struct isp4_interface {
+	struct device *dev;
+	void __iomem *mmio;
+
+	struct mutex cmdq_mutex; /* used for cmdq access */
+	struct mutex bufq_mutex; /* used for bufq access */

It makes more sense for cmdq_mutex and bufq_mutex to be spin locks since they
are only held briefly for list traversal.


I'd like to keep them as mutexes, because: 1. no synchronization with
hard/soft interrupts is needed; 2. they are not very time critical; 3. it
keeps the guard(mutex) optimization possible. What do you think?

For very quick/short critical sections that don't sleep, it makes more sense to
use a spin lock. A mutex lock is heavy when it needs to sleep on contention,
which isn't worth it if the critical section has very few instructions.

Also, guard and scoped_guard are available for spin locks too, as shown in my
replies above.


Thank you for the detailed explanation. I really appreciate the insights and will make sure to include these updates in the next version.

+
+#endif

Add /* _ISP4_INTERFACE_ */


Sure, will add it in the next version and check all the header files. BTW, I
will change the macro name to _ISP4_INTERFACE_H_ to align with the others.
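For reference, the refined guard might look like the fragment below, assuming the header file is isp4_interface.h; this is just a sketch of the agreed change.

```c
#ifndef _ISP4_INTERFACE_H_
#define _ISP4_INTERFACE_H_

/* declarations ... */

#endif /* _ISP4_INTERFACE_H_ */
```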

Good catch, sounds good.

Sultan

--
Regards,
Bin