Re: [PATCH 00/12] Add support for PSE port priority

From: Kyle Swenson
Date: Tue Oct 08 2024 - 20:20:54 EST


Hello Kory,

On Wed, Oct 02, 2024 at 06:14:11PM +0200, Kory Maincent wrote:
> From: Kory Maincent (Dent Project) <kory.maincent@xxxxxxxxxxx>
>
> This series brings support for port priority in the PSE subsystem.
> PSE controllers can set priorities to decide which ports should be
> turned off in case of special events like over-current.

First off, great work here. I've read through the patches in the series and
have a pretty good idea of what you're trying to achieve: use the PSE
controller's idea of "port priority" and expose it to userspace via ethtool.

I think this is probably sufficient, but I wanted to share my experience
supporting a system-level PSE power budget with PSE port priorities across
different PSE controllers through the same userspace interface, such that
userspace doesn't know or care about the underlying PSE controller.

Out of the three PSE controllers I'm aware of (Microchip's PD692x0, TI's
TPS2388x, and LTC's LTC4266), the PD692x0 definitely has the most advanced
configuration, supporting concepts like a system-level (well, manager-level)
power budget and powering off lower-priority ports when the total port power
consumption exceeds that budget.

When we experimented with this feature in our routers, we found that it
budgets against the dynamic power consumed by each port: literally, the sum of
port current * port voltage across all the ports. While this behavior
technically saves the system from resetting or worse, it causes a bit of a
problem: lower-priority ports can get powered off depending on the behavior
(power consumption) of unrelated devices.
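
To make that concrete, here's roughly how that shedding behaves as I
understand it (this is just an illustration with made-up names, not the
PD692x0's actual firmware logic):

/*
 * Illustration only: shed the lowest-priority powered port whenever the
 * sum of measured port power exceeds the configured budget.
 */
#include <stdbool.h>

struct port {
	bool powered;
	int priority;	/* lower value = higher priority */
	int mv;		/* measured port voltage, millivolts */
	int ma;		/* measured port current, milliamps */
};

static void shed_on_overbudget(struct port *ports, int nports, long long budget_mw)
{
	long long total_mw = 0;
	int victim = -1;
	int i;

	for (i = 0; i < nports; i++) {
		if (!ports[i].powered)
			continue;
		/* mV * mA / 1000 = mW */
		total_mw += (long long)ports[i].mv * ports[i].ma / 1000;
		/* remember the lowest-priority powered port */
		if (victim < 0 || ports[i].priority > ports[victim].priority)
			victim = i;
	}

	/* dynamic consumption crossed the budget: drop the victim */
	if (total_mw > budget_mw && victim >= 0)
		ports[victim].powered = false;
}

The trigger is the instantaneous measurement, which is what makes the
behavior below so confusing to an end user.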

As an example, let's say we've got 4 devices, all powered, and we're close to
the power budget. One of the devices starts consuming more power (perhaps its
modem just powered on), but stays within its class limit. Say this device
consumes enough power to exceed the configured power budget, causing the lowest
priority device to be powered off. This is the documented and intended
behavior of the PD692x0 chipset, but it makes for an unpleasant user experience
because it's not really clear why some device was powered down all of a sudden.
Was it because someone unplugged it? Or because the modem on the high-priority
device turned on? Or maybe that device had an overcurrent? It'd be impossible
to tell, and even worse, by the time someone is able to physically look at the
switch, the low-priority device might be back online (perhaps the modem on
the high-priority device powered off).

This behavior is unique to the PD692x0. I'm much less familiar with the
TPS2388x's idea of port priority, but it is very different from the PD692x0's.
Frankly, the behavior of the OSS pin is confusing, and since we don't use the
PSE controllers' idea of port priority, it was safe to ignore. Finally, the
LTC4266 has a "masked shutdown" ability where a predetermined set of ports is
shut down when a specific pin (MSD) is driven low. Like the TPS2388x's OSS pin,
we ignore this feature on the LTC4266.

If the end goal here is to have a device-independent idea of "port priority", I
think we need to add a level of indirection between the port-priority concept
and the actual PSE hardware. The indirection would let a system with multiple
(possibly even heterogeneous) PSE chips have a unified idea of port priority.
The way we've implemented this in our routers is by putting the PSE controllers
in "semi-auto" mode, where they continually detect and classify PDs (powered
devices) but do not power them until instructed to do so. The mechanism that
decides whether to power a particular port (for lack of a better term, the
"budgeting logic") uses the available system power budget (configured from
userspace), the relative port priorities (also configured from userspace), and
the class of a detected PD. The classification result is used to determine the
_maximum_ power a particular PD might draw, and that is the value that is
subtracted from the power budget.
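
For concreteness, here's a rough sketch of that budgeting logic (the names and
the class-to-power table are just illustrative of our approach, not a proposed
kernel API):

/*
 * Sketch: charge each detected PD the maximum power of its class and
 * power ports in priority order until the budget runs out.
 */
#include <stdbool.h>

/* Worst-case PSE-side power per 802.3af/at/bt class, in milliwatts */
static const int class_max_mw[] = {
	[0] = 15400, [1] = 4000,  [2] = 7000,  [3] = 15400,
	[4] = 30000, [5] = 45000, [6] = 60000, [7] = 75000, [8] = 90000,
};

struct pse_port {
	bool pd_detected;	/* semi-auto: detection/classification done */
	int pd_class;		/* classification result, 0-8 */
	int priority;		/* from userspace, lower value = higher priority */
	bool power_enabled;	/* decision output */
};

/* @ports is assumed to be sorted by ascending priority value */
static void budget_ports(struct pse_port *ports, int nports, int budget_mw)
{
	int remaining_mw = budget_mw;
	int i;

	for (i = 0; i < nports; i++) {
		int need_mw;

		if (!ports[i].pd_detected) {
			ports[i].power_enabled = false;
			continue;
		}

		need_mw = class_max_mw[ports[i].pd_class];
		/* power the port only if its worst-case draw still fits */
		ports[i].power_enabled = (need_mw <= remaining_mw);
		if (ports[i].power_enabled)
			remaining_mw -= need_mw;
	}
}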

Using the PD's classification and allocating it the maximum power for that
class lets a non-technical installer plug in all the PDs at the switch and
observe whether all the PDs are powered (or not). But the important part is
that, unless the port priorities or power budget are changed from userspace,
the set of powered devices won't change due to the dynamic power consumption
of the other devices.

I'm not sure what the right path is for the kernel, how this would look with
the regulator integration, or what the userspace API should look like (we used
sysfs, but that's probably not ideal for upstream). It's also not clear how
much of the budgeting logic should be in the kernel, if any. Regardless, I
hope sharing our experience is insightful and/or helpful. If not, feel free to
ignore it. In any case, you've got my

Reviewed-by: Kyle Swenson <kyle.swenson@xxxxxxxx>

for all the patches in the series.

Thanks,
Kyle Swenson