I'm porting my application from 32-bit to 64-bit. Currently the code compiles under both architectures, but the results differ. For various reasons, I'm using floats instead of doubles. I assume there is some implicit promotion from float to double happening on one machine and not the other. Is there a way to control for this, or are there specific gotchas I should be looking for?
edited to add:
32-bit platform:
gcc (GCC) 4.1.2 20070925 (Red Hat 4.1.2-33)
Dual-Core AMD Opteron(tm) Processor 2218 HE

64-bit platform:
gcc (Ubuntu 4.3.3-5ubuntu4) 4.3.3
Intel(R) Xeon(R) CPU
Applying the -mfpmath=387 flag helps somewhat: after one iteration of the algorithm the values are identical, but beyond that they fall out of sync again.
I should also add that my concern isn't that the results aren't identical; it's that porting to a 64-bit platform has uncovered a 32-bit dependency I was not aware of.