> In <99Feb23.101214gmt.28674@worm.brammer.net> Andrew Logan <aml@bsl.co.uk> writes:
>
> > Hi all,
> >
> > I've found a worrying rounding error which is manifested by the
> > following simple test program.
>
> This is not at all linux-kernel related. It should not be posted here.
>
> > func( (long) d3 ) -> 2400
> > func( (long) (d1 * d2) ) -> 2399
> > func1( d1 * d2 ) -> 2400
>
> What you are seeing here is the result of the fact that 2.4 can't be
> represented exactly in the binary notation that floating point uses.
> This is basically the same as not being able to exactly represent 1/3
> in decimal notation (with a fixed number of digits).
>
> The other thing you are seeing is that x86 floating point processors
> hold "temporary" calculations in 80-bit format, while a double is only 64
> bits. This makes serious numerical analysis a real pain in the ass,
> and can make even simple programs like yours give surprising results.
>
> This is really the fault of both Intel and gcc. Intel didn't design
> the x86 to allow true 64-bit-only math and uses 80-bit "temporaries"
> as a default. Gcc leaves the 80 bits as the default, and there isn't any
> "easy" way of switching to Intel's "almost IEEE754 64-bit" mode. The
> only option that comes close to turning off this "feature" is the
> -ffloat-store option, which causes massive slowdowns.
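For the record, here is a minimal sketch of the kind of program that
shows the effect. The values d1 = 2.4, d2 = 1000.0 and the two helper
functions are my guesses, not Andrew's actual code:

#include <stdio.h>

/* Hypothetical reconstruction; the names and values are assumed. */
static long func(long n)    { return n; }        /* gets an already-converted long  */
static long func1(double d) { return (long) d; } /* the double argument is stored
                                                    to 64 bits before converting    */
int main(void)
{
    double d1 = 2.4;       /* 2.4 is not exact in binary; the nearest
                              double is a hair below 2.4                */
    double d2 = 1000.0;
    double d3 = d1 * d2;   /* rounded to 64 bits on the store: exactly 2400.0 */

    printf("%ld\n", func((long) d3));         /* 2400                      */
    printf("%ld\n", func((long) (d1 * d2)));  /* can be 2399: the 80-bit
                                                 temporary 2399.999... is
                                                 converted without the
                                                 intermediate store       */
    printf("%ld\n", func1(d1 * d2));          /* 2400                      */
    return 0;
}

Whether the middle line prints 2399 or 2400 depends on whether the
compiler keeps the product in an 80-bit register, which is exactly the
point being made above.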
>
From the Intel rag:
"The Intel 486 allows results to be calculated with either 64, 53, or
24 bits precision......" (sic)
"...The lower resolutions are required by the IEEE standard and are
provided to obtain compatibility with certain existing languages..."
(Like FORTRAN). Intel claims, and IEEE recognizes, compliance with the
IEEE standard when the processor is operated in its default mode, i.e.,
after a FINIT. There are rounding-control bits in its control register.
It can (1) round to nearest, (2) chop (round toward zero), (3) round
up, or (4) round down. The default, for IEEE compliance, is (1).
So it is _not_ a _fault_ of Intel. The "8087-family" was developed to
adhere strictly to the IEEE standards and, in fact, Intel helped write
those standards. If you have a chip that is not broken and you execute
FINIT, the chip will function according to those standards.
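Those precision- and rounding-control bits can also be set from C.
Here is a minimal sketch; I am assuming glibc's <fpu_control.h> on an
i386 box and that you want the 53-bit ("double") precision mode with
the default round-to-nearest:

#include <stdio.h>
#include <fpu_control.h>   /* glibc, i386 */

int main(void)
{
    fpu_control_t cw;
    double d1 = 2.4, d2 = 1000.0;

    _FPU_GETCW(cw);          /* read the FPU control word              */
    cw &= ~_FPU_EXTENDED;    /* clear the precision-control field      */
    cw |= _FPU_DOUBLE;       /* 53-bit significand; rounding control
                                stays at the default, round-to-nearest */
    _FPU_SETCW(cw);

    printf("%ld\n", (long) (d1 * d2));  /* should print 2400 even when the
                                           product never leaves a register */
    return 0;
}

Note that this only narrows the significand of results computed in the
registers; the exponent range stays that of the 80-bit format, so it is
still not bit-for-bit identical to a pure 64-bit implementation.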
Memory operands require 10 bytes to contain intermediate results, and
the internal register stack is 10 bytes wide. Whether or not the full
precision is used by the 'C' runtime library is implementation-defined.
The dynamic range of the 64-bit (extended) precision is 10^+/-4932.
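For comparison, <float.h> will show you the plain double range next to
the extended one. A minimal sketch, assuming an i386 box where long
double is the 80-bit extended format:

#include <stdio.h>
#include <float.h>

int main(void)
{
    /* DBL_* describe the 64-bit double; LDBL_* describe long double,
       which on i386 is the FPU's 80-bit extended format.             */
    printf("double:      %d digits, max %g\n",  DBL_DIG,  DBL_MAX);
    printf("long double: %d digits, max %Lg\n", LDBL_DIG, LDBL_MAX);
    return 0;
}

On such a box the second line should report 18 digits and a maximum
around 1.19e+4932, i.e. the 10^+/-4932 range above, against roughly
1.8e+308 for a plain double.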
To use this precision, you must declare your real numbers to be type
double; 'float' just means 'real', while 'double' means 64 bits. For
example, here is a little program and the code gcc generates for it:
#include <stdio.h>

double a = 1.000000000000000000000000000001; /* rounds to exactly 1.0
                                                 in a 64-bit double   */
int main(void)
{
    double b, c;

    b = a;
    c = a * 2.0;
    printf("%f\n", c);
}
.globl a
.data
.align 4
.type a,@object
.size a,8
a:
.long 0x0,0x3ff00000 # Two longwords allocated.
.section .rodata
.LC0:
.string "%f\n"
.text
.align 4
.globl main
.type main,@function
main:
pushl %ebp
movl %esp,%ebp
subl $24,%esp
movl a,%edx
movl %edx,-8(%ebp)
movl a+4,%edx
movl %edx,-4(%ebp)
fldl a          # load a onto the 80-bit FPU register stack
fstpl -24(%ebp) # Two longwords on stack for each variable
fldl -24(%ebp)  # reload the 64-bit temporary
faddl -24(%ebp) # a * 2.0 is done as a + a in the 80-bit register
fstpl -16(%ebp) # rounded back to 64 bits when c is stored
pushl -12(%ebp) # the two longwords of c, pushed for printf
pushl -16(%ebp)
pushl $.LC0
call printf
addl $12,%esp
.L1:
movl %ebp,%esp
popl %ebp
ret
.Lfe1:
.size main,.Lfe1-main
.ident "GCC: (GNU) 2.8.1"
Cheers,
Dick Johnson
***** FILE SYSTEM WAS MODIFIED *****
Penguin : Linux version 2.2.1 on an i686 machine (400.59 BogoMips).
Warning : It's hard to remain at the trailing edge of technology.
Wisdom : It's not a Y2K problem. It's a Y2Day problem.