Re: Re: [PATCH v2] net: mtk_sgmii: implement mtk_pcs_ops

From: Frank Wunderlich
Date: Tue Oct 25 2022 - 04:04:07 EST


> Sent: Monday, 24 October 2022 at 16:56
> From: "Russell King (Oracle)" <linux@xxxxxxxxxxxxxxx>
> On Mon, Oct 24, 2022 at 04:45:40PM +0200, Frank Wunderlich wrote:
> > Hi
> > > Sent: Monday, 24 October 2022 at 11:27
> > > From: "Russell King (Oracle)" <linux@xxxxxxxxxxxxxxx>
> >
> > > Here's the combined patch for where I would like mtk_sgmii to get to.
> > >
> > > It looks like this PCS is similar to what we know as pcs-lynx.c, but
> > > there do seem to be differences - the duplex bit for example appears
> > > to be inverted.
> > >
> > > Please confirm whether this still works for you, thanks.
> >
> > Basically the patch works, but I get some (1-50) retransmits with iperf3 in the first interval in tx mode (on the R3, without -R); the other 9 intervals are clean. Reverse mode is mostly clean.
> > I ran iperf3 multiple times, and every first interval has retransmits. The same happens for gmac0 (fixed-link 2500base-X).
> >
> > I noticed that you have changed the timer again to 10000000 for 1000/2500base-X... maybe use the default value here too, like the older code does?
>
> You obviously missed my explanation. I will instead quote the 802.3
> standard which covers 1000base-X:

Sorry, you're right, I remember you've already mentioned it.

> 37.3.1.4 Timers
>
> link_timer
> Timer used to ensure Auto-Negotiation protocol stability and
> register read/write by the management interface.
>
> Duration: 10 ms, tolerance +10 ms, –0 s.
>
> For SGMII, the situation is different. Here is what the SGMII
> specification says:
>
> The link_timer inside the Auto-Negotiation has been changed from 10
> msec to 1.6 msec to ensure a prompt update of the link status.
>
> So, 10ms is correct for 1000base-X, and 1.6ms correct for SGMII.
>
> However, feel free to check whether changing it solves that issue, but
> also check whether it could be some ARP related issue - remember, if
> two endpoints haven't communicated, they need to ARP to get the other
> end's ethernet addresses which adds extra latency, and may result in
> some packet loss in high packet queuing rate situations.

Tried with 1.6ms, same result (or even worse on 1000base-X). I guess the ARP cache entry should stay valid for ~5s?
So at least the second round, run directly after the first, should be clean as far as ARP is concerned.
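
In case it helps anyone else testing, below is a rough sketch of how I
understand the link timer programming now. Only the nanosecond values come
from your explanation (and the phylink_get_link_timer_ns() helper); the
register offset and the 16ns tick size are just my assumptions for
illustration, not necessarily what the actual patch does.

#include <linux/phy.h>
#include <linux/phylink.h>
#include <linux/regmap.h>

/* hypothetical offset and tick size, for illustration only;
 * with 16ns ticks, 1.6ms would come out as 100000 ticks
 */
#define EXAMPLE_PCS_LINK_TIMER		0x18
#define EXAMPLE_LINK_TIMER_TICK_NS	16

static int example_set_link_timer(struct regmap *regmap,
				  phy_interface_t interface)
{
	int link_timer_ns;

	/* phylink helper: 1600000ns for SGMII, 10000000ns for
	 * 1000base-X/2500base-X, -EINVAL for anything else
	 */
	link_timer_ns = phylink_get_link_timer_ns(interface);
	if (link_timer_ns < 0)
		return link_timer_ns;

	/* convert ns to the (assumed) register tick unit */
	return regmap_write(regmap, EXAMPLE_PCS_LINK_TIMER,
			    link_timer_ns / EXAMPLE_LINK_TIMER_TICK_NS);
}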

Apart from this little problem it works much better than the old code, so IMHO more
people should test it on different platforms.

regards Frank