On your platform, they're all names for the same underlying data type. On other platforms, they aren't.
`int64_t` is required to be *exactly* 64 bits. On architectures with (for example) a 9-bit byte, it won't be available at all.
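If you need to guard against the exact-width type being missing, the standard gives you a hook: `INT64_MAX` is defined only when `int64_t` exists. A minimal sketch of that check:

```c
#include <stdint.h>

#ifdef INT64_MAX
/* int64_t exists: exactly 64 bits, no padding, two's complement */
typedef int64_t wide_int;
#else
/* fall back to the least-width type, which is always provided */
typedef int_least64_t wide_int;
#endif
```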
`int_least64_t` is the smallest data type with at least 64 bits. If `int64_t` is available, it will be used. But on (for example) a 9-bit-byte machine, this could be 72 bits.
`int_fast64_t` is the data type with at least 64 bits and the best arithmetic performance. It's there mainly for consistency with `int_fast8_t` and `int_fast16_t`, which on many machines will be 32 bits, not 8 or 16. In a few more years, there might be an architecture where 128-bit math is faster than 64-bit, but I don't think any exists today.
If you're porting an algorithm, you probably want to be using `int_fast32_t`, since it will hold any value your old 32-bit code can handle, but will be 64-bit if that's faster. If you're converting pointers to integers (why?) then use `intptr_t`.
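For the pointer case, the guarantee is that a valid object pointer converted to `intptr_t` and back compares equal to the original. A minimal sketch of that round-trip (note `intptr_t` is technically optional in the standard, though virtually universal in practice):

```c
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int x = 42;

    /* intptr_t is wide enough to hold a data pointer's value. */
    intptr_t bits = (intptr_t)&x;
    int *back = (int *)bits;

    printf("pointer as integer: %" PRIdPTR "\n", bits);
    printf("round-trip value:   %d\n", *back);  /* prints 42 */
    return 0;
}
```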