RE: [PATCH v2] net: phylink: add missing supported link modes for the fixed-link

From: Wei Fang

Date: Sun Nov 16 2025 - 22:23:25 EST


> This also seems like two fixes: a regression for the AUTONEG bit, and
> allowing pause to be set. So maybe this should be two patches?

As Russell explained in the thread, one patch is enough.

>
> I'm also surprised TCP is collapsing. This is not an unusual setup,
> e.g. a home wireless network feeding a cable modem. A high speed link
> feeding a lower speed link. What RTT is there when TCP gets into

The result below shows the RTT measured while the iperf TCP test was running:
root@imx943evk:~# ./tcping -I swp2 10.193.102.224 5201
TCPinging 10.193.102.224 on port 5201
Reply from 10.193.102.224 (10.193.102.224) on port 5201 TCP_conn=2 time=1.004 ms
Reply from 10.193.102.224 (10.193.102.224) on port 5201 TCP_conn=3 time=0.958 ms
Reply from 10.193.102.224 (10.193.102.224) on port 5201 TCP_conn=4 time=0.989 ms
Reply from 10.193.102.224 (10.193.102.224) on port 5201 TCP_conn=5 time=1.040 ms
Reply from 10.193.102.224 (10.193.102.224) on port 5201 TCP_conn=6 time=0.760 ms
Reply from 10.193.102.224 (10.193.102.224) on port 5201 TCP_conn=7 time=0.950 ms
Reply from 10.193.102.224 (10.193.102.224) on port 5201 TCP_conn=8 time=0.726 ms

After applying this patch, the RTT appears to be greater. I suspect
that before the patch, some iperf packets preceding the ping packet
were being dropped by the hardware, resulting in the smaller RTT.

root@imx943evk:~# ./tcping -I swp2 10.193.102.224 5201
TCPinging 10.193.102.224 on port 5201
Reply from 10.193.102.224 (10.193.102.224) on port 5201 TCP_conn=1 time=0.819 ms
Reply from 10.193.102.224 (10.193.102.224) on port 5201 TCP_conn=2 time=0.752 ms
Reply from 10.193.102.224 (10.193.102.224) on port 5201 TCP_conn=3 time=1.190 ms
Reply from 10.193.102.224 (10.193.102.224) on port 5201 TCP_conn=4 time=0.932 ms
Reply from 10.193.102.224 (10.193.102.224) on port 5201 TCP_conn=5 time=1.137 ms
Reply from 10.193.102.224 (10.193.102.224) on port 5201 TCP_conn=6 time=1.279 ms


> trouble? TCP should be backing off as soon as it sees packet loss, so
> reducing the bandwidth it tries to consume, and so emptying out the
> buffers. But if you have big buffers in the ENETC causing high
> latency, that might be an issue? Does ENETC have BQL? It is worth
> implementing, just to avoid bufferbloat problems.

No, the ENETC driver does not currently support BQL; we may add
support for it in the future.

>
> Andrew
>
> ---
> pw-bot: cr