Re: [PATCH 5/7] RISC-V: arch/riscv/lib
From: Palmer Dabbelt
Date: Tue Jun 06 2017 - 00:56:39 EST
On Fri, 26 May 2017 02:06:58 PDT (-0700), Arnd Bergmann wrote:
> On Thu, May 25, 2017 at 3:59 AM, Palmer Dabbelt <palmer@xxxxxxxxxxx> wrote:
>> On Tue, 23 May 2017 04:19:42 PDT (-0700), Arnd Bergmann wrote:
>>> On Tue, May 23, 2017 at 2:41 AM, Palmer Dabbelt <palmer@xxxxxxxxxxx> wrote:
>
>>> Also, it would be good to replace the multiply+div64
>>> with a single multiplication here, see how x86 and arm do it
>>> (for the tsc/__timer_delay case).
>>
>> Makes sense. I think this should do it
>>
>> https://github.com/riscv/riscv-linux/commit/d397332f6ebff42f3ecb385e9cf3284fdeda6776
>>
>> but I'm finding this hard to test, as this path is only taken for sleeps of
>> at least 2ms. It seems at least in the right ballpark
>
> + if (usecs > MAX_UDELAY_US) {
> + __delay((u64)usecs * riscv_timebase / 1000000ULL);
> + return;
> + }
>
> You still do the 64-bit division here. What I meant is to completely
> avoid the division and use a multiply+shift.
The goal here was to avoid the error case ARM hits on overflow and instead
just delay for the requested time. The division only happens when the delay is
>= 2ms, so its cost is negligible compared to the delay itself.
The normal (short-delay) path contains no division.
I can copy ARM's error handling if you think that's better, but it seemed more
complicated than just computing the correct answer.
> Also, you don't need to base anything on HZ, as you do not rely
> on the delay calibration but always use a timer.
That makes sense; I just based this blindly on the ARM version. I'll see if
that lets me avoid unnecessary overflow for ndelay. If it doesn't, then I'd
prefer to keep exactly the same constraints ARM has, to avoid unexpected
behavior.