Re: [PATCH v2 1/3] net: ethernet: ti: am65-cpts: Enable PTP RX HW timestamp using CPTS FIFO

From: Chintan Vankar
Date: Mon Mar 18 2024 - 07:13:27 EST

On 12/03/24 16:20, Paolo Abeni wrote:
> On Tue, 2024-03-12 at 15:32 +0530, Chintan Vankar wrote:
> > diff --git a/drivers/net/ethernet/ti/am65-cpts.c b/drivers/net/ethernet/ti/am65-cpts.c
> > index c66618d91c28..6c1d571c5e0b 100644
> > --- a/drivers/net/ethernet/ti/am65-cpts.c
> > +++ b/drivers/net/ethernet/ti/am65-cpts.c
> > @@ -859,29 +859,6 @@ static long am65_cpts_ts_work(struct ptp_clock_info *ptp)
> >  	return delay;
> >  }
> > -/**
> > - * am65_cpts_rx_enable - enable rx timestamping
> > - * @cpts: cpts handle
> > - * @en: enable
> > - *
> > - * This functions enables rx packets timestamping. The CPTS can timestamp all
> > - * rx packets.
> > - */
> > -void am65_cpts_rx_enable(struct am65_cpts *cpts, bool en)
> > -{
> > -	u32 val;
> > -
> > -	mutex_lock(&cpts->ptp_clk_lock);
> > -	val = am65_cpts_read32(cpts, control);
> > -	if (en)
> > -		val |= AM65_CPTS_CONTROL_TSTAMP_EN;
> > -	else
> > -		val &= ~AM65_CPTS_CONTROL_TSTAMP_EN;
> > -	am65_cpts_write32(cpts, val, control);
> > -	mutex_unlock(&cpts->ptp_clk_lock);
> > -}
> > -EXPORT_SYMBOL_GPL(am65_cpts_rx_enable);
>
> It looks like the above chunk will cause a transient build break, as
> the function is still in use and the caller will be removed by the next
> patch. I guess you should move this chunk there.

Thank you for your suggestion. I will update the patch with this
change in the next version.


> > -
> >  static int am65_skb_get_mtype_seqid(struct sk_buff *skb, u32 *mtype_seqid)
> >  {
> >  	unsigned int ptp_class = ptp_classify_raw(skb);
> > @@ -906,6 +883,72 @@ static int am65_skb_get_mtype_seqid(struct sk_buff *skb, u32 *mtype_seqid)
> >  	return 1;
> >  }
> > +static u64 am65_cpts_find_rx_ts(struct am65_cpts *cpts, struct sk_buff *skb,
> > +				int ev_type, u32 skb_mtype_seqid)
> > +{
> > +	struct list_head *this, *next;
> > +	struct am65_cpts_event *event;
> > +	unsigned long flags;
> > +	u32 mtype_seqid;
> > +	u64 ns = 0;
> > +
> > +	am65_cpts_fifo_read(cpts);
> > +	spin_lock_irqsave(&cpts->lock, flags);
> > +	list_for_each_safe(this, next, &cpts->events) {
> > +		event = list_entry(this, struct am65_cpts_event, list);
> > +		if (time_after(jiffies, event->tmo)) {
> > +			list_del_init(&event->list);
> > +			list_add(&event->list, &cpts->pool);
> > +			continue;
> > +		}
> > +
> > +		mtype_seqid = event->event1 &
> > +			      (AM65_CPTS_EVENT_1_MESSAGE_TYPE_MASK |
> > +			       AM65_CPTS_EVENT_1_SEQUENCE_ID_MASK |
> > +			       AM65_CPTS_EVENT_1_EVENT_TYPE_MASK);
> > +
> > +		if (mtype_seqid == skb_mtype_seqid) {
> > +			ns = event->timestamp;
> > +			list_del_init(&event->list);
> > +			list_add(&event->list, &cpts->pool);
> > +			break;
> > +		}
> > +	}
> > +	spin_unlock_irqrestore(&cpts->lock, flags);

> Ouch, you have to acquire an additional lock per packet, and that
> causes a lot of cacheline dithering.
>
> Not strictly necessary for this series, but I suggest investing some
> time in reconsidering this scheme; it looks bad from a performance pov.
>
> Possibly protecting the list with RCU and leaving the recycling to the
> producer could help.

Thank you for pointing this out. I will consider your point and spend
some time on it.
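
Just to make sure I read the idea correctly, below is a rough, untested
sketch of the kind of scheme I understand you to suggest. It assumes a
new "consumed" flag (an unsigned long used as a bit field) is added to
struct am65_cpts_event; the names are made up and this is not meant as
a drop-in replacement for the hunk above:

static u64 am65_cpts_find_rx_ts_rcu(struct am65_cpts *cpts,
				    u32 skb_mtype_seqid)
{
	struct am65_cpts_event *event;
	u64 ns = 0;

	/* Lockless per-packet lookup: the RX path never unlinks
	 * entries, it only marks a matching event as consumed.
	 */
	rcu_read_lock();
	list_for_each_entry_rcu(event, &cpts->events, list) {
		u32 mtype_seqid = event->event1 &
			(AM65_CPTS_EVENT_1_MESSAGE_TYPE_MASK |
			 AM65_CPTS_EVENT_1_SEQUENCE_ID_MASK |
			 AM65_CPTS_EVENT_1_EVENT_TYPE_MASK);

		if (mtype_seqid == skb_mtype_seqid &&
		    !test_and_set_bit(0, &event->consumed)) {
			ns = event->timestamp;
			break;
		}
	}
	rcu_read_unlock();

	return ns;
}

/* The producer (FIFO read path) stays the only writer: it unlinks
 * consumed or expired events with list_del_rcu() and recycles them
 * to the pool once a grace period has elapsed.
 */
static void am65_cpts_recycle_events(struct am65_cpts *cpts)
{
	struct am65_cpts_event *event, *tmp;

	list_for_each_entry_safe(event, tmp, &cpts->events, list) {
		if (!test_bit(0, &event->consumed) &&
		    !time_after(jiffies, event->tmo))
			continue;

		list_del_rcu(&event->list);
		/* A synchronize_rcu() per event would be too slow;
		 * batching the unlinked entries and paying one grace
		 * period per FIFO drain looks more realistic.
		 */
		synchronize_rcu();
		clear_bit(0, &event->consumed);
		list_add(&event->list, &cpts->pool);
	}
}

The FIFO drain and the recycling would then both live on the producer
side, so the per-packet RX path no longer takes cpts->lock at all.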


> Additionally, net-next is currently closed for the merge window;
> please post the new revision (including the target tree in the subj
> prefix) when net-next re-opens in ~2 weeks.

Thank you for the information. I will update the patch and post the
next version with the above-mentioned changes.

> Cheers,
>
> Paolo