Having already read this question, I'm reasonably certain that a given process using floating-point arithmetic with the same input (on the same hardware, compiled with the same compiler) should be deterministic. I'm looking at a case where this isn't true and trying to determine what could have caused it.
I've compiled an executable and I'm feeding it the exact same data, running on a single machine (non-multithreaded), but I'm getting errors of about 3.814697265625e-06. After some careful googling I found that this is exactly 1/4^9 = 1/2^18 = 1/262144, which is fairly close to the precision of a 32-bit floating point number (roughly 7 significant digits according to Wikipedia).
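As a side check that the error really is that power of two (the literal and 2^-18 are both exactly representable, so the comparison below is exact):

```cpp
#include <cfloat>
#include <cmath>
#include <cstdio>

int main() {
    // The observed error, and the power of two I believe it equals.
    double err = 3.814697265625e-06;
    printf("2^-18        = %.17g\n", std::pow(2.0, -18)); // 3.814697265625e-06
    printf("err == 2^-18 : %d\n", err == std::pow(2.0, -18)); // prints 1
    // For comparison, the machine epsilon of a 32-bit float is 2^-23.
    printf("FLT_EPSILON  = %.17g\n", (double)FLT_EPSILON); // ~1.19e-07
    return 0;
}
```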
My suspicion is that it has something to do with optimisations that have been applied to the code. I'm using the Intel C++ compiler and have set floating-point speculation to fast instead of safe or strict. Could this make a floating-point process non-deterministic? Are there other optimisations that could lead to this behaviour?
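For context, this is the sort of order sensitivity I have in mind with "fast" floating-point settings generally. It illustrates non-associativity, which I believe is controlled by options like -fp-model rather than -fp-speculation specifically; the icpc command lines in the trailing comment are my best recollection of the option spelling (on Windows I believe it's /Qfp-speculation:mode):

```cpp
// sum.cpp -- float addition is not associative, so anything that changes
// the order of these additions (e.g. vectorising the loop into partial
// sums) can change the rounded result.
#include <cstdio>
#include <vector>

int main() {
    std::vector<float> v(1000000, 0.1f);

    float serial = 0.0f;
    for (float x : v) serial += x;            // strict left-to-right order

    float partial[4] = {0, 0, 0, 0};          // 4-way partial sums, the kind
    for (size_t i = 0; i < v.size(); i += 4)  // of order a vectoriser might use
        for (int j = 0; j < 4; ++j)
            partial[j] += v[i + j];
    float reordered = (partial[0] + partial[1]) + (partial[2] + partial[3]);

    printf("serial    = %.9g\n", serial);     // the two results typically
    printf("reordered = %.9g\n", reordered);  // differ in the low digits
    return 0;
}

// Built roughly as:
//   icpc -O2 -fp-speculation=fast sum.cpp   # what I had
//   icpc -O2 -fp-speculation=safe sum.cpp   # for comparison
```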
EDIT: As per Pax's suggestion, I recompiled the code with floating-point speculation set to safe and I'm now getting stable results. This lets me sharpen the question: what does floating-point speculation actually do, and how can it cause the same binary (i.e. one compilation, multiple runs) to produce different results on the exact same input?
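To make the question concrete, this is the kind of construct I'm guessing "speculation" referses to: a floating-point operation that the compiler might execute before knowing whether its guarding branch is taken. This is purely my assumption, not something from the docs:

```cpp
// guarded.cpp -- an FP operation guarded by a branch. My (possibly wrong)
// understanding is that under -fp-speculation=fast the compiler may compute
// a/b unconditionally and select the result afterwards, which could change
// FP exception/flag behaviour, whereas safe only speculates when it can't.
#include <cstdio>

double f(double a, double b) {
    double r = 0.0;
    if (b != 0.0)      // guard against division by zero
        r = a / b;     // candidate for speculative execution
    return r;
}

int main() {
    printf("%g %g\n", f(1.0, 4.0), f(1.0, 0.0));
    return 0;
}
```

Even if that's right, I don't see how it would make one binary vary from run to run on identical input, which is the part I'd like explained.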
@Ben I'm compiling with Intel(R) C++ 11.0.061 [IA-32] and running on an Intel quad-core processor.