Sometimes it matters whether the target of a C compiler uses a two's complement representation for signed integers, and having the preprocessor detect this can be useful.
Since the standard requires the MAX/MIN macros from limits.h and stdint.h to be expressions that can be used in preprocessor conditionals, I think that
#include <limits.h>
#if INT_MIN + INT_MAX == -1
# define HAVE_TWOS_COMPLEMENT 1
#endif
does the trick, since one's complement and sign-and-magnitude architectures have symmetric value ranges for signed integers. The question is: am I missing something here, or is there a better way to make such a test in a compiler-agnostic way?
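For context, here is a minimal sketch of how such a macro might be consumed; the helper from_wire16 and its decode logic are illustrative, not any standard API:

#include <inttypes.h>
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

#if INT_MIN + INT_MAX == -1
# define HAVE_TWOS_COMPLEMENT 1
#endif

/* Decode a 16-bit two's complement value that arrives as raw bits. */
static int32_t from_wire16(uint16_t raw)
{
#ifdef HAVE_TWOS_COMPLEMENT
    /* Native representation matches the wire format; this conversion is
       implementation-defined but behaves as expected on two's complement
       targets. */
    return (int16_t)raw;
#else
    /* Portable decode: a set top bit means the value is raw - 2^16. */
    return (raw & 0x8000u) ? (int32_t)raw - 65536 : (int32_t)raw;
#endif
}

int main(void)
{
    printf("%" PRId32 "\n", from_wire16(0xFFFFu));  /* prints -1 */
    return 0;
}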
In two’s complement, −1 is encoded as 111...111.
In one’s complement, −1 is encoded as 111...110.
In sign-and-magnitude, −1 is encoded as 100...001.
Therefore, the following detects the encoding of the int type:
#if (-1 & 3) == 1
// The encoding is sign-and-magnitude.
#elif (-1 & 3) == 2
// The encoding is one’s complement.
#elif (-1 & 3) == 3
// The encoding is two’s complement.
#else
// Not possible in the C standard.
#endif
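As a usage sketch, the ladder can collapse into a single descriptive string; the macro name INT_ENCODING and the #error in the final branch are my additions:

#include <stdio.h>

#if (-1 & 3) == 1
# define INT_ENCODING "sign-and-magnitude"
#elif (-1 & 3) == 2
# define INT_ENCODING "one's complement"
#elif (-1 & 3) == 3
# define INT_ENCODING "two's complement"
#else
# error "Not possible in the C standard."
#endif

int main(void)
{
    printf("int encoding: %s\n", INT_ENCODING);
    return 0;
}

On any mainstream machine today this prints "two's complement"; the other branches matter only for exotic or historical targets.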
The test offered in the question, INT_MIN + INT_MAX == -1, is not reliable, because C 2018 6.2.6.2 2 permits “the value with sign bit 1 and all value bits zero” to be a trap representation. In that case INT_MIN is −(2^M − 1), where M is the number of value bits, and INT_MAX is 2^M − 1, so INT_MIN + INT_MAX is zero, not −1, and the test misclassifies a two's complement implementation.
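To make that concrete, here is a sketch with hypothetical limits for a 16-bit two's complement int whose most negative pattern traps (M = 15 value bits; the HYP_ names are invented for illustration):

/* Sign bit 1 with all value bits 0 is a trap representation, so the
   range is symmetric despite the two's complement encoding. */
#define HYP_INT_MAX 32767     /* 2^15 - 1 */
#define HYP_INT_MIN (-32767)  /* -(2^15 - 1); the pattern for -32768 traps */

/* The question's test sees 0 rather than -1 and so misses this
   two's complement implementation. */
_Static_assert(HYP_INT_MIN + HYP_INT_MAX == 0, "sum is 0, not -1");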