In my numerical simulation I have code similar to the following snippet:

double x;
do {
    x = /* some computation */;
} while (x <= 0.0);
/* some algorithm that requires x to be (precisely) larger than 0 */
With certain compilers (e.g. gcc) on certain platforms (e.g. Linux, x87 math) it is possible that x is computed in higher than double precision ("with excess precision"). (Update: When I talk of precision here, I mean precision /and/ range.) Under these circumstances it is conceivable that the comparison (x <= 0.0) returns false even though, the next time x is rounded down to double precision, it becomes 0. (And there is no guarantee that x isn't rounded down at an arbitrary point in time.)
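For what it's worth, one can at least detect at compile time whether excess precision is possible via the FLT_EVAL_METHOD macro from <cfloat> (C99 / C++11); a value of 2 means float and double arithmetic is evaluated in long double precision, which is the x87 case above. This is only a diagnostic, not a fix; a minimal sketch:

#include <cfloat>
#include <cstdio>

int main() {
    // FLT_EVAL_METHOD: 0 = operations evaluate in the type's own precision,
    // 1 = float/double evaluate as double, 2 = evaluate as long double (x87),
    // -1 = indeterminable.
    std::printf("FLT_EVAL_METHOD = %d\n", (int)FLT_EVAL_METHOD);
}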
Is there any way to perform this comparison that
- is portable,
- works in code that gets inlined,
- has no performance impact and
- doesn't exclude some arbitrary range (0, eps)?
I tried to use (x < std::numeric_limits<double>::denorm_min()) but that seemed to slow down the loop significantly when working with SSE2 math. (I know that denormals can slow down a computation, but I didn't expect them to be slower just to move around and compare.)
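For concreteness, the variant I tried looks roughly like this (only the loop condition changes; denorm_min() is the smallest positive subnormal double):

#include <limits>

double x;
do {
    x = /* some computation */;
} while (x < std::numeric_limits<double>::denorm_min());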
Update:
An alternative is to use volatile to force x into memory before the comparison, e.g. by writing

} while (*((volatile double*)&x) <= 0.0);

However, depending on the application and the optimizations applied by the compiler, this solution can introduce a noticeable overhead too.
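Wrapped in a small helper, the same idea might look like the sketch below (the name force_double is mine, purely for illustration); whether the compiler really spills the value to a 64-bit memory slot, and what that costs, depends on the target and on optimization settings:

// Force the value through a double-sized memory object so that any
// excess (x87) precision is discarded before it is used again.
inline double force_double(double v) {
    volatile double tmp = v;  // the store rounds v to double precision
    return tmp;               // the load reads back the rounded value
}

// usage in the loop:
// } while (force_double(x) <= 0.0);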
Update: The problem with any tolerance is that it is quite arbitrary, i.e. it depends on the specific application or context. I would prefer to just do the comparison without excess precision, so that I don't have to make additional assumptions or introduce arbitrary epsilons into the documentation of my library functions.