Is there any argument for using the numeric limits macros (e.g. INT64_MAX) over std::numeric_limits<T>? From what I understand, numeric_limits is in the C++ standard, whereas the macros come from C99 and are therefore non-standard C++.
The other answers mostly have correct information, but it seems that this needs updating for C++11.
In C++11, std::numeric_limits<T>::min(), std::numeric_limits<T>::max(), and std::numeric_limits<T>::lowest() are all declared constexpr, so they are usable in most of the same contexts as INT_MIN and company. The only exception I can think of is compile-time string processing using the # stringification token.
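For example, both forms can appear where the language requires a constant expression. A minimal sketch (the names table_size and table are just illustrative, not from the question):

    #include <cstdint>
    #include <limits>

    // Usable in a static_assert, which requires a constant expression.
    static_assert(std::numeric_limits<std::int64_t>::max() == INT64_MAX,
                  "numeric_limits<int64_t>::max() matches the C macro");

    // Usable as an array bound.
    constexpr int table_size = std::numeric_limits<unsigned char>::max() + 1;
    int table[table_size] = {};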
This means that numeric_limits can be used for case labels, template parameters, etc., and you get the benefit of using it in generic code (try using INT_MIN vs. LONG_MIN in template<typename T> T get_min(T t);).
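A minimal sketch of that generic use, assuming a C++11 compiler (get_min is just the hypothetical helper from the snippet above):

    #include <climits>
    #include <limits>

    // With the macros you would need a separate overload per type:
    // INT_MIN for int, LONG_MIN for long, and so on.
    template <typename T>
    constexpr T get_min(T)
    {
        return std::numeric_limits<T>::min();  // works for any arithmetic T
    }

    static_assert(get_min(0) == INT_MIN, "same value as the C macro for int");
    static_assert(get_min(0L) == LONG_MIN, "same value as the C macro for long");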
C++11 also brings a solution to the issue James Kanze talks about, by adding std::numeric_limits<T>::lowest(), which gives the lowest finite value for all types, unlike min(), which gives the lowest value for integer types but the smallest positive value for floating-point types.
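To illustrate the difference (the approximate values in the comments assume a typical IEEE-754 double):

    #include <limits>

    // For integer types, min() and lowest() agree.
    static_assert(std::numeric_limits<int>::min()
                  == std::numeric_limits<int>::lowest(), "same for int");

    // For floating-point types they do not: min() is the smallest positive
    // normalized value (~2.2e-308 for double), while lowest() is the most
    // negative finite value (~-1.8e+308 for double).
    static_assert(std::numeric_limits<double>::lowest()
                  < std::numeric_limits<double>::min(),
                  "lowest() < min() for double");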