Re: [PATCH v3 net 0/4] qbv cycle time extension/truncation

From: Abdul Rahim, Faizal
Date: Tue Dec 19 2023 - 22:25:29 EST

On 20/12/2023 12:56 am, Vladimir Oltean wrote:
> On Tue, Dec 19, 2023 at 03:14:49AM -0500, Faizal Rahim wrote:
> > According to IEEE Std. 802.1Q-2018 section Q.5 CycleTimeExtension,
> > the Cycle Time Extension variable allows this extension of the last old
> > cycle to be done in a defined way. If the last complete old cycle would
> > normally end less than OperCycleTimeExtension nanoseconds before the new
> > base time, then the last complete cycle before AdminBaseTime is reached
> > is extended so that it ends at AdminBaseTime.

> > Changes in v3:
> > - Removed the last 3 patches related to fixing cycle time adjustment
> >   for the "current entry". This simplifies the series, which now only
> >   covers cycle time adjustment for the "next entry".
> > - Negative correction calculation in get_cycle_time_correction() is
> >   guarded so that it doesn't exceed the entry interval
> > - Some renames (macro, function)
> > - Moved commit message comments into code comments
> > - Removed an unnecessary NULL check
> > - Reworded commit messages

> > Changes in v2:
> > - Added 's64 cycle_time_correction' to 'struct sched_gate_list'.
> > - Removed sched_changed created in v1, since the new cycle_time_correction
> >   field can also serve to indicate the need for a schedule change.
> > - Added 'bool correction_active' to 'struct sched_entry' to represent
> >   the correction state from the entry's perspective and return the
> >   corrected interval value when active.
> > - Fixed the cycle time correction logic for the next entry in
> >   advance_sched()
> > - Fixed and implemented proper cycle time correction logic for the
> >   current entry in taprio_start_sched()
> >
> > v2 at:
> > https://lore.kernel.org/lkml/20231107112023.676016-1-faizal.abdul.rahim@xxxxxxxxxxxxxxx/
> > v1 at:
> > https://lore.kernel.org/lkml/20230530082541.495-1-muhammad.husaini.zulkifli@xxxxxxxxx/

> I'm sorry that I stopped responding on your v2. I realized the discussion
> reached a point where I couldn't figure out who is right without some
> testing. I wanted to write a selftest to highlight the expected correct
> behavior of the datapath during various schedule changes, and whether we
> could ever end up with a negative interval after the correction. However,
> writing that got quite complicated, and the effort ended there.
>
> How are you testing the behavior, and who reported the issues / what
> prompted the changes? Honestly, I'm not very confident in the changes
> we're pushing down the linux-stable pipe. They don't look all that
> obvious, so I still think that having selftests would help. If you don't
> have a testing rig already assembled, and you don't want to start one, I
> might want to give it a second try and cook something up myself.

> Something really simple like:
> - start schedule 1 with base-time A and cycle-time-extension B
> - start schedule 2 with base-time C
> - send one packet with isochron during the last cycle of schedule 1
>
> By varying the parameters, we could check if the schedule is correctly
> extended or truncated. We could configure the 2 schedules in such a way
> that "extending" would mean that isochron's gate (from schedule 1) is
> open (and thus, the packet will pass) and "truncating" would mean that
> the packet is scheduled according to schedule 2 (where isochron's gate
> will be always closed, so the packet will never pass).
>
> We could then alter the cycle-time-extension relative to the base-times,
> to force a truncation of 1, 2, 3 entries or more, and see that the
> behavior is always correct.

Hi Vladimir,

No worries, I truly appreciate the time you took to review and reply.

What prompted this in general is a project requirement to enable software QBV cycle time extension, so a validation team created test cases to properly validate cycle time extension. While working on that, I noticed the code doesn't handle truncation properly either, and since it's the same code area, I fixed both together.

Before sending each revision for upstream review, I normally run our test cases, which only validate cycle time extension. For truncation, I modified the test cases on my own and added logs to check that the negative cycle_time_correction value stays within the correct range. I probably should have mentioned sooner that I have tested this myself; sorry about that.

Example of the test I run for cycle time extension:
1) 2 boards connected back-to-back with i226 NICs. Board A is the sender, Board B the receiver
2) Time is synchronized between the 2 boards with phc2sys and ptp4l
3) Run GCL1 on Board A with cycle time extension enabled:
tc qdisc replace dev $INTERFACE parent root handle 100 taprio \
num_tc 4 \
map 3 2 1 0 3 3 3 3 3 3 3 3 3 3 3 3 \
queues 1@0 1@1 1@2 1@3 \
base-time 0 \
cycle-time-extension 1000000 \
sched-entry S 09 500000 \
sched-entry S 0a 500000 \
clockid CLOCK_TAI
4) Capture traffic with tcpdump on Board B
5) Send packets from Board A to Board B at a 200us interval via udp_tai
6) When packets reach Board B, trigger GCL2 on Board A:
CYCLETIME=1000000
APPLYTIME=1000000000 # 1s
CURRENT=$(date +%s%N)
BASE=$(( (CURRENT + APPLYTIME + (2*CYCLETIME)) - ((CURRENT + APPLYTIME) % CYCLETIME) + ((CYCLETIME*3)/5) ))
tc qdisc replace dev $INTERFACE parent root handle 100 taprio \
num_tc 4 \
map 3 2 1 0 3 3 3 3 3 3 3 3 3 3 3 3 \
queues 1@0 1@1 1@2 1@3 \
base-time $BASE \
cycle-time-extension 1000000 \
sched-entry S 0c 500000 \
sched-entry S 08 500000 \
clockid CLOCK_TAI
7) Analyze the tcpdump data on Board B using wireshark; you will observe that the packet receive pattern changes (see the notes right below for how the gate masks and the BASE arithmetic line up with the capture).
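
Two notes on the numbers above. First, the gate masks decode as follows (bit N of the hex mask opens the gate of traffic class N, and the "map 3 2 1 0 ..." argument sends priorities 0/1/2/3 to TC3/TC2/TC1/TC0). The pairing of ports with priorities (700N carrying priority N-1) is just how the test streams are numbered on our side, included to make the capture readable:

# GCL1: S 09 = 0b1001 -> TC0 + TC3 open -> prio 3 (Critical Applications, 7004)
#                                          and prio 0 (Best Effort, 7001)
#       S 0a = 0b1010 -> TC1 + TC3 open -> prio 2 (Excellent Effort, 7003)
#                                          and prio 0 (Best Effort, 7001)
# GCL2: S 0c = 0b1100 -> TC2 + TC3 open -> prio 1 (Background, 7002)
#                                          and prio 0 (Best Effort, 7001)
#       S 08 = 0b1000 -> TC3 only       -> prio 0 (Best Effort, 7001)

Second, the BASE arithmetic snaps the new base-time to a cycle boundary roughly two cycles after CURRENT + APPLYTIME, then offsets it by 3/5 of a cycle so that it deliberately lands mid-entry and exercises the extension path. A worked example with made-up numbers:

# Suppose CURRENT + APPLYTIME falls 300 us past a cycle boundary:
#   aligned = (CURRENT + APPLYTIME + 2*CYCLETIME) - ((CURRENT + APPLYTIME) % CYCLETIME)
#           = a cycle boundary 1.7 ms in the future
#   BASE    = aligned + (CYCLETIME*3)/5 = that boundary + 600 us
# BASE falls 600 us into a cycle, inside GCL1's second 500 us entry (S 0a),
# so the last old cycle cannot end cleanly on a boundary.

This is also why the "Extended" rows in the capture below are Excellent Effort packets: the entry that gets stretched up to the new base-time is GCL1's last one, S 0a.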

Note that I've hidden "Best Effort (default) 7001 → 7001" data from the wireshark log so that it's easier to see the pattern.

TIMESTAMP PRIORITY PORTS NOTES
1702896645.925014509 Critical Applications 7004 → 7004 GCL1
1702896645.925014893 Critical Applications 7004 → 7004 GCL1
1702896645.925514454 Excellent Effort 7003 → 7003 GCL1
1702896645.925514835 Excellent Effort 7003 → 7003 GCL1
1702896645.926014371 Critical Applications 7004 → 7004 GCL1
1702896645.926014755 Critical Applications 7004 → 7004 GCL1
1702896645.926514620 Excellent Effort 7003 → 7003 GCL1
1702896645.926515004 Excellent Effort 7003 → 7003 GCL1
1702896645.927014408 Critical Applications 7004 → 7004 GCL1
1702896645.927014792 Critical Applications 7004 → 7004 GCL1
1702896645.927514789 Excellent Effort 7003 → 7003 GCL1
1702896645.927515173 Excellent Effort 7003 → 7003 GCL1
1702896645.928168304 Excellent Effort 7003 → 7003 Extended
1702896645.928368780 Excellent Effort 7003 → 7003 Extended
1702896645.928569406 Excellent Effort 7003 → 7003 Extended
1702896645.929614835 Background 7002 → 7002 GCL2
1702896645.929615219 Background 7002 → 7002 GCL2
1702896645.930614643 Background 7002 → 7002 GCL2
1702896645.930615027 Background 7002 → 7002 GCL2
1702896645.931614604 Background 7002 → 7002 GCL2
1702896645.931614991 Background 7002 → 7002 GCL2

The extended packets only show up if the cycle_time and interval fields
are updated using cycle_time_correction. Without that patch, the extended packets are not received.


As for the negative truncation case, I just made the intervals quite long and experimented with the GCL2 base-time value so that it hits the "next entry" in advance_sched(). Then I checked my logs in get_cycle_time_correction() to see the truncation case and its values.
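
To illustrate the kind of range check I look for in those logs, here is a simplified numeric example (the exact variable names and guard live in the patch; this is just the arithmetic as I understand it):

# entry interval = 500000 ns
# the new AdminBaseTime lands 200000 ns after the entry opened, so the
# entry must be cut short by the remaining 300000 ns:
#   cycle_time_correction = -(500000 - 200000) = -300000 ns
# the guard keeps the negative correction within the entry interval:
#   -500000 < -300000 < 0  -> within range, OK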

Based on your description of the required test, I think my existing truncation test is not enough, but the extension test should be good, right?

Do let me know; I'm more than willing to run more tests for the truncation case as per your suggestion, and basically anything else that helps speed up the review of this patch series :) I've sketched below the kind of truncation test I have in mind.
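
Something along these lines, following your proposal (an untested sketch; the entry sizes and the 2/5-cycle offset are just example values to make the new base-time land mid-entry):

# GCL1: the test stream is prio 2 (-> TC1), whose gate is open only in
# the second 500 us entry (S 0a) of each 1 ms cycle.
tc qdisc replace dev $INTERFACE parent root handle 100 taprio \
num_tc 4 \
map 3 2 1 0 3 3 3 3 3 3 3 3 3 3 3 3 \
queues 1@0 1@1 1@2 1@3 \
base-time 0 \
cycle-time-extension $EXTENSION \
sched-entry S 09 500000 \
sched-entry S 0a 500000 \
clockid CLOCK_TAI

# GCL2: prio 2's gate is always closed (S 08 opens TC3/prio 0 only), and
# its base-time lands 2/5 of a cycle past a GCL1 cycle boundary.
CYCLETIME=1000000
APPLYTIME=1000000000 # 1s
CURRENT=$(date +%s%N)
BASE=$(( (CURRENT + APPLYTIME + (2*CYCLETIME)) - ((CURRENT + APPLYTIME) % CYCLETIME) + ((CYCLETIME*2)/5) ))
tc qdisc replace dev $INTERFACE parent root handle 100 taprio \
num_tc 4 \
map 3 2 1 0 3 3 3 3 3 3 3 3 3 3 3 3 \
queues 1@0 1@1 1@2 1@3 \
base-time $BASE \
cycle-time-extension 0 \
sched-entry S 08 500000 \
clockid CLOCK_TAI

# Send one prio-2 packet just after the last full GCL1 cycle ends (e.g.
# with isochron). With EXTENSION=1000000 the last S 0a entry is stretched
# up to $BASE and the packet should pass; with EXTENSION=0 the leftover
# 400 us runs a truncated S 09 entry instead, GCL2 then keeps prio 2
# closed, and the packet should never pass.

Varying that 2/5 offset against the cycle-time-extension should also let us force truncation of 1, 2 or more entries, as you described.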


I appreciate your suggestions and help a lot, thank you.