Re: PROPOSAL: dot-proc interface [was: /proc stuff]

From: Riley Williams (rhw@MemAlpha.cx)
Date: Tue Nov 13 2001 - 09:41:42 EST


Hi Jakob.

>>> Sure, implement arbitrary precision arithmetic in every single app
>>> out there using /proc....

>> Bullshit. Implement whatever arithmetic is right *for your problem*.
>> And notice when the value you get doesn't fit so you can tell the
>> user he needs a newer version. That's all.
>>
>> There's no reason whatsoever to care what data type the kernel used.

> So my program runs for two months and then aborts with an error
> because some counter just happened to no longer fit into whatever
> type I assumed it was ?
>
> Come on - you just can't code like that...

There are certain assumptions you can make about any given variable
without even seeing a specific value for it. For example:

 1. Does it make sense for the value to be negative? If not, use an
    unsigned variable.

    As an example, no system can validly have a negative uptime, as
    that would imply it hadn't yet started running. It is for this
    very reason that a supposedly Roman coin inscribed with the date
    "37 BC" was known to be counterfeit - who measures time from an
    event that hasn't yet happened?

 2. Does it make sense for the variable to report fractional values?
    If not, use integral variables.

    As an example, it makes no sense to have a fractional number of
    CPUs in a particular system - or, for that matter, for a given
    family to have the fabled 2.4 children!

 3. If fractional values do make sense, what accuracy is needed, and
    would it make sense to use scaled integers rather than reals?

    As an example, fractional values make sense for the current time,
    but the need for accuracy indicates that scaled integers are to
    be preferred over reals, with the integers recording time in
    units of whatever fraction of a second is deemed sufficiently
    accurate for the intended purpose whilst still giving a practical
    range that can be stored.

    To take this one step further, and push the next version of the
    Y2K problem as far into the future as possible whilst providing
    sufficient accuracy for most tasks nowadays, one could use a
    64-bit variable for the current time, but, rather than storing
    the number of seconds since the epoch in it, store the number
    of xths of a second instead.

    As an example of this, a signed 64-bit value that measures the
    number of 40 ns intervals from Jan 1 00:00:00 UTC 1970 onwards
    will roll over at Jan 29 15:31:14 UTC 13661 (an unsigned value
    would last twice as long again). This is over 45% further in
    the future than the Y10K rollover seen elsewhere...

                ( 13661 - 2000 )
                ---------------- * 100 % = 145.762 %
                ( 10000 - 2000 )

    With an interval of 40 ns one can accurately convert to seconds
    for backwards compatibility by simply dividing by 25,000,000.

 4. Is there any inherent limit on the range it can take? If not, use
    the largest available variables of the relevant type.

I've been doing this for 25 years now, and I've never regretted it.

Best wishes from Riley.

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/



This archive was generated by hypermail 2b29 : Thu Nov 15 2001 - 21:00:35 EST