-2147483648, for example, is not an integer literal; it's an expression consisting of a unary - operator applied to the literal 2147483648.
Prior to the new C++ 2011 standard, C++ didn't require the existence of any integer type bigger than 32 bits (C++2011 adds long long), so the literal 2147483648 is non-portable.
A decimal integer literal is of the first of the following types in which its value fits:
int
long int
long long int (new in C++ 2011)
Note that it's never of an unsigned type in standard C++. In the 1998 and 2003 versions of the C++ standard (which don't have long long int), a decimal integer literal that's too big to fit in long int results in undefined behavior. In C++2011, if a decimal integer literal doesn't fit in long long int, then the program is "ill-formed".
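As a rough illustration of that rule, here's a minimal C++11 sketch (my addition, not part of the original discussion) that asks the compiler what type it gives each literal; it assumes a conforming C++11 compiler with a 32-bit int:

#include <type_traits>

// 2147483647 fits in a 32-bit int, so that's its type.
static_assert(std::is_same<decltype(2147483647), int>::value,
              "2147483647 should be int");

// 2147483648 doesn't fit in a 32-bit int, so it becomes long
// (or long long if long is also 32 bits); it is never an unsigned type.
static_assert(std::is_same<decltype(2147483648), long>::value ||
              std::is_same<decltype(2147483648), long long>::value,
              "2147483648 should be a wider signed type");

int main() {}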
But gcc (at least as of release 4.6.1, the latest one I have) doesn't implement the C++2011 semantics. The literal 2147483648, which doesn't fit in a 32-bit long, is treated as unsigned long, at least on my 32-bit system. (That's fine for C++98 or C++2003; the behavior is undefined, so the compiler can do anything it likes.)
So given a typical 32-bit 2's-complement int type, this:
cout << -2147483647 << '\n';
takes the int value 2147483647, negates it, and prints the result, which matches the mathematical result you'd expect. But this:
cout << -2147483648 << '\n';
(when compiled with gcc 4.6.1) takes the long or unsigned long value 2147483648, negates it as an unsigned value (unsigned negation wraps modulo 2^32, and 2^32 - 2147483648 is 2147483648 again), and prints that.
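To see that wrap-around in isolation, here's a small sketch (my addition, using std::uint32_t to stand in for the 32-bit unsigned long on the system above):

#include <cstdint>
#include <iostream>

int main()
{
    std::uint32_t v = 2147483648u;  // same value as the unsigned long literal above
    // Unary minus on a 32-bit unsigned value computes 2^32 - v,
    // which for v == 2147483648 is 2147483648 again.
    std::cout << -v << "\n";        // prints 2147483648 (assuming the usual 32-bit int)
}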
As others have mentioned, you can use suffixes to force a particular type.
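For example, a sketch of that approach, relying on long long being at least 64 bits:

#include <iostream>

int main()
{
    // With the LL suffix, 2147483648 is a long long literal, so the unary
    // minus is applied to a 64-bit value and the expected result comes out.
    std::cout << -2147483648LL << "\n";   // prints -2147483648
}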
Here's a small program that you can use to show how your compiler treats literals:
#include <iostream>
#include <climits>
const char *type_of(int) { return "int"; }
const char *type_of(unsigned int) { return "unsigned int"; }
const char *type_of(long) { return "long"; }
const char *type_of(unsigned long) { return "unsigned long"; }
const char *type_of(long long) { return "long long"; }
const char *type_of(unsigned long long) { return "unsigned long long"; }
int main()
{
std::cout << "int: " << INT_MIN << " .. " << INT_MAX << "
";
std::cout << "long: " << LONG_MIN << " .. " << LONG_MAX << "
";
std::cout << "long long: " << LLONG_MIN << " .. " << LLONG_MAX << "
";
std::cout << "2147483647 is of type " << type_of(2147483647) << "
";
std::cout << "2147483648 is of type " << type_of(2147483648) << "
";
std::cout << "-2147483647 is of type " << type_of(-2147483647) << "
";
std::cout << "-2147483648 is of type " << type_of(-2147483648) << "
";
}
When I compile it, I get some warnings:
lits.cpp:18:5: warning: this decimal constant is unsigned only in ISO C90
lits.cpp:20:5: warning: this decimal constant is unsigned only in ISO C90
and the following output, even with gcc -std=c++0x:
int: -2147483648 .. 2147483647
long: -2147483648 .. 2147483647
long long: -9223372036854775808 .. 9223372036854775807
2147483647 is of type int
2147483648 is of type unsigned long
-2147483647 is of type int
-2147483648 is of type unsigned long
I get the same output with VS2010, at least with default settings.
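As a closing usage note (my addition): if the goal is simply to get the most negative int value, it's cleaner to avoid the out-of-range literal 2147483648 altogether:

#include <climits>
#include <iostream>

int main()
{
    std::cout << INT_MIN << "\n";           // the value itself, from <climits>
    std::cout << -2147483647 - 1 << "\n";   // expression form that never names 2147483648
}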