+ /*
+ * Kernel conventions require a per-LED led-max-microamp property.
+ * But the chip does not allow limiting individual LEDs,
+ * so we take the minimum of all subnodes.
Why minimum? Choose the maximum and reduce the max_brightness properties
of the sub-LEDs with a lesser led-max-microamp.
Hm. Is this really the correct way to handle it?
Assume you connect an LED which is specified for 10 mA peak current,
and another one for 20 mA peak current.
So you define led-max-microamp in the DT as 10 mA and 20 mA.
Firstly, a user can set the brightness only to 50% of LED_FULL, since it is
limited by the reduced max_brightness. And they will find that not all LEDs
have the same max_brightness: the first LED will have 127 and the second
one 255, for reasons that are not directly understandable.
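For illustration, here is where the 127 and 255 would come from. A minimal
standalone sketch, assuming the usual truncating integer scaling (the helper
name scale_max_brightness is made up for this example; LED_FULL is 255 in
the LED core):

#include <stdio.h>

#define LED_FULL 255	/* default max_brightness in the LED core */

/*
 * Hypothetical helper for the "choose maximum, scale the others" scheme:
 * the hardware limit is set to the largest led-max-microamp of all
 * sub-LEDs (20 mA here), and each sub-LED's max_brightness is scaled
 * down in proportion to its own led-max-microamp.
 */
static unsigned int scale_max_brightness(unsigned int led_ua,
					 unsigned int chip_ua)
{
	return LED_FULL * led_ua / chip_ua;	/* truncating division */
}

int main(void)
{
	printf("10 mA LED: max_brightness = %u\n",
	       scale_max_brightness(10000, 20000));	/* -> 127 */
	printf("20 mA LED: max_brightness = %u\n",
	       scale_max_brightness(20000, 20000));	/* -> 255 */
	return 0;
}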
This entangles "brightness" with "max-current", which are IMHO two
independent things.
Next, this will set the hardware limit to 20 mA. So there will be current
peaks of 20 mA for an LED whose DT developer thinks they have specified a
led-max-microamp of 10 mA. So you run the LED outside of its specs, although
the DT seems to say that it is inside and user-space thinks it is ok. This
will reduce the lifetime of the LEDs.
So either "led-max-microamp" is the wrong name for this property (a better
name might be "led-max-average-microamp"), or the whole logic is broken.
This is why we hesitate to hide (or even disable, because you can't set the
limit to 10 mA by DT) the chip's current-limiting feature behind such a
difficult-to-understand automatism.
Using the minimum of all led-max-microamp values keeps everything on the
safe side: some LEDs run with less current than specified, but never outside
of their spec. And all LEDs have the same max_brightness, which is IMHO
more intuitive for the user.
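A sketch of what that looks like in driver code, using the standard fwnode
property API (the function name and the surrounding driver context are
assumptions for this example):

#include <linux/property.h>
#include <linux/minmax.h>
#include <linux/limits.h>

/*
 * Walk all child nodes and take the smallest led-max-microamp as the
 * single current limit that gets programmed into the chip.
 */
static u32 chip_min_led_max_microamp(struct device *dev)
{
	struct fwnode_handle *child;
	u32 min_ua = U32_MAX, ua;

	device_for_each_child_node(dev, child) {
		if (!fwnode_property_read_u32(child, "led-max-microamp", &ua))
			min_ua = min(min_ua, ua);
	}

	return min_ua;	/* caller writes this into the chip's limit register */
}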
Our original proposal was to define led-max-microamp for the whole chip
instead of for individual LEDs, which is IMHO much easier to understand,
because it corresponds one-to-one with the data sheet.
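With a chip-level property, the lookup would reduce to a single call on the
chip's own node (a sketch of the proposal, not an existing binding):

/*
 * Proposed alternative: one led-max-microamp on the chip node itself,
 * matching the single limit register described in the data sheet.
 */
static int chip_read_max_microamp(struct device *dev, u32 *ua)
{
	return device_property_read_u32(dev, "led-max-microamp", ua);
}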