I may risk the wrath of the Gods, but I am going to give you some
*very* good reasons why this behavior should be changed.
Interestingly enough, this thread has come up just as I am
wrestling with the very same problem.
viz: The divide-error exception raised by the x86 processor (which
the kernel turns, rather questionably, into a SIGFPE) is generated
*both* on division by zero and on division overflow. The latter
happens when you divide a 64-bit number by a 32-bit number and the
quotient does not fit in 32 bits.
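To make that concrete, here is roughly the operation involved, as a
sketch using GCC inline asm on i386 (div64_32 is just a name I made
up for illustration, not anything in the real code):

    #include <stdint.h>

    /* 64/32 -> 32 division with the raw i386 DIV instruction.
       DIV divides EDX:EAX by its operand; the CPU raises a divide
       error (#DE) if the divisor is zero OR if the quotient does
       not fit in 32 bits, and Linux delivers that as SIGFPE. */
    static inline uint32_t div64_32(uint64_t dividend, uint32_t divisor)
    {
        uint32_t quotient, remainder;
        __asm__ ("divl %4"
                 : "=a" (quotient), "=d" (remainder)
                 : "0" ((uint32_t) dividend),          /* low half in EAX  */
                   "1" ((uint32_t)(dividend >> 32)),   /* high half in EDX */
                   "r" (divisor));
        return quotient;
    }

    /* div64_32(0x100000000ULL, 1) would need a 33-bit quotient,
       so it traps exactly the same way div64_32(x, 0) does. */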
Checking for division by zero is easy, but checking for possible
overflow is quite difficult and expensive. Trust me.
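To be fair, for the purely unsigned case the check reduces to a
single compare, roughly the sketch below (udiv_would_trap is an
illustrative name). But the signed IDIV case is messier (signs plus
the INT_MIN corner), and either way you pay for an extra test and
branch in front of every divide in an inner loop.

    /* Unsigned 64/32 divide: the quotient fits in 32 bits exactly
       when the high half of the dividend is smaller than the
       divisor (and the divisor is nonzero). */
    static inline int udiv_would_trap(uint64_t dividend, uint32_t divisor)
    {
        return divisor == 0 || (uint32_t)(dividend >> 32) >= divisor;
    }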
The application in which this is happening is a renderer --
in other words, if the result of the divide is slightly wrong,
a pixel comes out slightly different (no big deal), but if
a SIGFPE is delivered, the program dies, which is really bad.
As I said, checking for overflow up front is way too time-consuming.
What the kernel really should do is trap the exception, set
EAX to INT_MAX, and continue with the next instruction.
This is exactly the same thing that should be done on division
by zero -- it's equivalent to the floating-point case of returning
Inf.
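In the meantime, the only portable userspace workaround I know of
looks roughly like this (a sketch only: saturating_div and
fpe_handler are illustrative names, it reuses div64_32 from the
sketch above, and since returning from a handler for a
hardware-generated SIGFPE is undefined, the handler longjmps to a
recovery point instead):

    #include <limits.h>
    #include <setjmp.h>
    #include <signal.h>

    static sigjmp_buf div_recover;

    static void fpe_handler(int sig)
    {
        (void) sig;
        /* Don't return into the faulting divide; jump back to the
           recovery point set up around it. */
        siglongjmp(div_recover, 1);
    }

    /* Saturating 64/32 divide: overflow and divide-by-zero yield
       INT_MAX instead of killing the process. */
    uint32_t saturating_div(uint64_t dividend, uint32_t divisor)
    {
        struct sigaction sa;
        sa.sa_handler = fpe_handler;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        sigaction(SIGFPE, &sa, NULL);

        if (sigsetjmp(div_recover, 1))
            return INT_MAX;                    /* trapped: saturate */

        return div64_32(dividend, divisor);    /* may raise SIGFPE */
    }

That gives exactly the saturating behavior I want, but at the cost
of a sigsetjmp around every divide, which is why I would much rather
the kernel did it.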
BTW, the same problem on the same processor under MS Windows
does *NOT* happen -- apparently Windows has sensible
integer-divide-exception handling.
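For comparison, on the MSVC side you can at least catch both cases
in-line with structured exception handling, roughly like the sketch
below (checked_div is an illustrative name, not anything in the
renderer):

    #include <windows.h>
    #include <limits.h>

    /* Map both integer divide-by-zero and divide overflow to
       INT_MAX using Win32 structured exception handling. */
    static int checked_div(int dividend, int divisor)
    {
        __try {
            return dividend / divisor;
        }
        __except (GetExceptionCode() == EXCEPTION_INT_DIVIDE_BY_ZERO ||
                  GetExceptionCode() == EXCEPTION_INT_OVERFLOW
                      ? EXCEPTION_EXECUTE_HANDLER
                      : EXCEPTION_CONTINUE_SEARCH) {
            return INT_MAX;
        }
    }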
ben
-- "... then the day came when the risk to remain tight in a bud was more painful than the risk it took to blossom." -- Anais Nin