The problem I have is that I want to test different versions of the
same simulation for performance. All the data is computed internally
and the output is identical across versions. The object code is about
80K stripped, but due to the data being generated the occupied memory
runs to about 4 MB. The execution time is on the order of 9 minutes,
with up to 30 seconds of variation. This is on a 486DX2/66 with a
256K L2 cache and 16 MB of 70 ns RAM.
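For concreteness, here is roughly the kind of harness I could use to
time a run from inside the program, so user, system, and wall-clock
time are separated (a minimal sketch only; run_simulation is just a
stand-in for the real simulation entry point):

    #include <stdio.h>
    #include <sys/times.h>
    #include <unistd.h>

    /* Stand-in for the real simulation (hypothetical busy work). */
    static void run_simulation(void)
    {
        volatile double x = 0.0;
        long i;
        for (i = 0; i < 50000000L; i++)
            x += (double)i * 0.5;
    }

    int main(void)
    {
        struct tms t0, t1;
        clock_t w0, w1;
        long hz = sysconf(_SC_CLK_TCK);  /* clock ticks per second */

        w0 = times(&t0);   /* times() also returns wall-clock ticks */
        run_simulation();
        w1 = times(&t1);

        printf("user %.2fs  sys %.2fs  wall %.2fs\n",
               (double)(t1.tms_utime - t0.tms_utime) / hz,
               (double)(t1.tms_stime - t0.tms_stime) / hz,
               (double)(w1 - w0) / hz);
        return 0;
    }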
I am guessing that one contributing factor is the load point of the
code, since that could change how the code overlaps itself in the
cache; a quick check for that is sketched below. Anything else is
beyond me. System time is on the order of 0.2 seconds as reported by
the time command. The variation shows up when running the identical
binary repeatedly, and it doesn't appear to follow any cyclic
pattern.
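As for the load-point guess, a quick sketch like the following could
show whether the code, data, and stack actually land at different
addresses from run to run (the names here are just for illustration):

    #include <stdio.h>

    int a_global;                /* lives in the data segment */

    int main(void)
    {
        int a_local;             /* lives on the stack */

        /* Print where text, data, and stack land on this run;
           compare the output across repeated runs. */
        printf("text  %p\n", (void *)main);
        printf("data  %p\n", (void *)&a_global);
        printf("stack %p\n", (void *)&a_local);
        return 0;
    }

If those addresses come out identical on every run, the load point
probably isn't what is moving.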
Any thoughts appreciated.
Hubert Bahr