Hello Sean,
On Tue, Jun 29, 2021 at 02:01:31PM -0400, Sean Anderson wrote:
> On 6/29/21 4:31 AM, Uwe Kleine-König wrote:
> Hello Sean,
>
> On Mon, Jun 28, 2021 at 01:41:43PM -0400, Sean Anderson wrote:
>> On 6/28/21 1:20 PM, Uwe Kleine-König wrote:
>> > On Mon, Jun 28, 2021 at 12:35:19PM -0400, Sean Anderson wrote:
>> >> On 6/28/21 12:24 PM, Uwe Kleine-König wrote:
>> >> > On Mon, Jun 28, 2021 at 11:50:33AM -0400, Sean Anderson wrote:
>> >> > > On 6/27/21 2:19 PM, Uwe Kleine-König wrote:
>> >> > > > On Fri, Jun 25, 2021 at 01:46:26PM -0400, Sean Anderson wrote:
>> >> > > IMO, this is the best way to prevent surprising results in the API.
>> >> >
>> >> > I think it's not possible in practice to refuse "near" misses, and every
>> >> > definition of "near" is in some case ridiculous. Also, if you consider
>> >> > the pwm_round_state() case, you don't want to refuse any request, so you
>> >> > can tell as much as possible about your controller's capabilities. And
>> >> > then it's straightforward to let apply behave in the same way to keep
>> >> > complexity low.
>> >> >
>> >> > > The real issue here is that it is impossible to determine the correct
>> >> > > way to round the PWM a priori, and in particular, without considering
>> >> > > both duty_cycle and period. If a consumer requests very small
>> >> > > period/duty cycle which we cannot produce, how should it be rounded?
>> >> >
>> >> > Yeah, because there is no obviously right one, I picked one that is as
>> >> > wrong as the other possibilities but is easy to work with.
>> >> >
>> >> > > Should we just set TLR0=1 and TLR1=0 to give them 66% duty cycle with
>> >> > > the least period? Or should we try and increase the period to better
>> >> > > approximate the % duty cycle? And both of these decisions must be made
>> >> > > knowing both parameters. We cannot (for example) just always round up,
>> >> > > since we may produce a configuration with TLR0 == TLR1, which would
>> >> > > produce 0% duty cycle instead of whatever was requested. Rounding rate
>> >> > > will introduce significant complexity into the driver. Most of the time
>> >> > > if a consumer requests an invalid rate, it is due to misconfiguration
>> >> > > which is best solved by fixing the configuration.
>> >> >
>> >> > In the first step pick the biggest period not bigger than the requested
>> >> > one, and then pick the biggest duty cycle that is not bigger than the
>> >> > requested one and that can be set with the just-picked period. That is
>> >> > the behaviour all new drivers should implement. This is somewhat
>> >> > arbitrary, but after quite some thought it is the most sensible choice
>> >> > in my eyes.
>> >>
>> >> And if there are no periods smaller than the requested period?
>> >
>> > Then return -ERANGE.
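To illustrate what I mean, a driver-agnostic sketch of that policy could look
like this (the helper name and the hw_min_period/hw_max_period limits are made
up for the example, and rounding down to whole clock cycles is ignored):

static int example_round_down(const struct pwm_state *requested,
                              u64 hw_min_period, u64 hw_max_period,
                              u64 *period, u64 *duty_cycle)
{
        /* The hardware cannot produce any period not bigger than the request. */
        if (requested->period < hw_min_period)
                return -ERANGE;

        /* Biggest period not bigger than the requested one. */
        *period = min(requested->period, hw_max_period);

        /* Biggest duty cycle not bigger than the request and the chosen period. */
        *duty_cycle = min(requested->duty_cycle, *period);

        return 0;
}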
>>
>> Ok, so instead of
>>
>> if (cycles < 2 || cycles > priv->max + 2)
>>         return -ERANGE;
>>
>> you would prefer
>>
>> if (cycles < 2)
>>         return -ERANGE;
>> else if (cycles > priv->max + 2)
>>         cycles = priv->max;
>
> The actual calculation is a bit harder because it also has to handle
> TCSR_UDT = 0, but in principle yes; see below.
>
>> But if we do the above clamping for TLR0, then we have to recalculate
>> the duty cycle for TLR1. Which I guess means doing something like
>>
>> ret = xilinx_timer_tlr_period(priv, &tlr0, tcsr0, state->period);
>> if (ret)
>>         return ret;
>>
>> state->duty_cycle = mult_frac(state->duty_cycle,
>>                               xilinx_timer_get_period(priv, tlr0, tcsr0),
>>                               state->period);
>>
>> ret = xilinx_timer_tlr_period(priv, &tlr1, tcsr1, state->duty_cycle);
>> if (ret)
>>         return ret;
>
> No, you need something like:
>
> /*
> * The multiplication cannot overflow as both priv->max and
> * NSEC_PER_SEC fit into a u32.
> */
> max_period = div64_ul((u64)priv->max * NSEC_PER_SEC, clkrate);
>
> /* cap period to the maximal possible value */
> if (state->period > max_period)
>         period = max_period;
> else
>         period = state->period;
>
> /* cap duty_cycle to the maximal possible value */
> if (state->duty_cycle > max_period)
>         duty_cycle = max_period;
> else
>         duty_cycle = state->duty_cycle;
> These caps may increase the % duty cycle.
Correct.
For some use cases keeping the relative duty cycle might be better, for
others it might not. I'm still convinced that in general my solution
makes sense, is computationally cheaper and is easier to work with.
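For comparison, keeping the relative duty cycle when the period has to be
capped would look roughly like the following sketch (reusing the mult_frac()
idea from your snippet above; I still think the plain cap is preferable):

/* Alternative (not what I'm proposing): scale duty_cycle by the same
 * factor the period was capped by, so the relative duty cycle is
 * approximately preserved. */
if (state->period > max_period) {
        period = max_period;
        duty_cycle = mult_frac(state->duty_cycle, max_period, state->period);
} else {
        period = state->period;
        duty_cycle = state->duty_cycle;
}

That trades the simple cap for an extra 64-bit multiplication and division per
call, and it still silently changes the request, just along a different axis.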
> period_cycles = period * clkrate / NSEC_PER_SEC;
>
> if (period_cycles < 2)
>         return -ERANGE;
>
> duty_cycles = duty_cycle * clkrate / NSEC_PER_SEC;
>
> /*
> * The hardware cannot emit a 100% relative duty cycle, if
> * duty_cycle >= period_cycles is programmed the hardware emits
> * a 0% relative duty cycle.
> */
> if (duty_cycles == period_cycles)
>         duty_cycles = period_cycles - 1;
>
> /*
> * The hardware cannot emit a duty_cycle of one clk step, so
> * emit 0 instead.
> */
> if (duty_cycles < 2)
>         duty_cycles = period_cycles;
> Of course, the above may result in 100% duty cycle being rounded down to
> 0%. I feel like that is too big of a jump to ignore. Perhaps if we
> cannot return -ERANGE we should at least dev_warn.
You did it again. You picked one single case that you consider bad but
didn't provide a constructive way to make it better.
Assume there was already a pwm_round_state function (that returns the
state that pwm_apply_state would implement for a given request).
Consider a consumer that wants, say, a 50% relative duty cycle together
with a small period. So it might first call:
ret = pwm_round_state(pwm, { .period = 20, .duty_cycle = 20, ... }, &rounded_state)
to find out if .period = 20 can be implemented with the given PWM. If
this returns the rounded state as:
.period = 20
.duty_cycle = 0
this says quite a lot about the PWM if the driver implements my policy
(i.e. the driver can do 20 ns, but the biggest possible duty_cycle is only 0).
If however it returns -ERANGE, this means (assuming the driver implements
the policy I'm trying to convince you is the right one): the hardware
cannot implement 20 ns (or anything smaller), and so the next call
probably tries 40 ns.
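To make that concrete, a consumer looking for a small period with at least 50%
relative duty cycle could then do something like this (pwm_round_state()
doesn't exist yet, so both its signature and this loop are only a sketch of
the intended usage):

static int find_small_period(struct pwm_device *pwm, struct pwm_state *rounded)
{
        u64 period;
        int ret;

        for (period = 20; period <= 1000000; period *= 2) {
                struct pwm_state request = {
                        .period = period,
                        /* ask for the maximum to learn what is achievable */
                        .duty_cycle = period,
                        .enabled = true,
                };

                ret = pwm_round_state(pwm, &request, rounded);
                if (ret == -ERANGE)
                        /* this period (and anything smaller) cannot be done */
                        continue;
                if (ret)
                        return ret;

                /* good enough if at least 50% relative duty is possible */
                if (2 * rounded->duty_cycle >= rounded->period)
                        return 0;
        }

        return -ERANGE;
}

With the round-down policy, each -ERANGE unambiguously means "try a bigger
period", which is what makes such a search loop possible.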
With your suggested semantics, -ERANGE might mean:
 - The driver doesn't support .period = 20 ns.
   (Follow-up questions: What period should be tried next? 10 ns? 40 ns?
   What if this returns -ERANGE again?)
 - The driver supports .period = 20 ns, but the biggest possible
   duty_cycle is "too different from 20 ns to ignore".
   Then how should the search continue?
So: as soon as there is a pwm_round_state function, these cases can be
caught there, and then it's important that the drivers adhere to the
expected policy. Implementing this is a big task and, believe me, I have
already spent quite a few brain cycles on it. Once the core is extended
accordingly, I will be happy about each driver that already does the
right thing.
Best regards
Uwe