There are many questions asking about the difference between the short and int integer types in C++, but practically, when do you choose short over int?
A CPU works more efficiently when the data width matches the native CPU register width. This applies indirectly to .NET code as well. In most cases, using int in a loop is more efficient than using short.
short and int must be at least 16 bits, long must be at least 32 bits, and short may not be longer than int, which may not be longer than long. Typically, short is 16 bits, long is 32 bits, and int is either 16 or 32 bits.
Prefer int to char or short. We should always prefer int to char because C performs all operations on char using int. In all operations, such as passing a char to a function or performing arithmetic on it, the char is first converted to int, and after the operation completes, the result is converted back to char.
All of them store integers, but they consume different amounts of memory and have different ranges. For example, short int typically consumes 16 bits, long int 32 bits, and long long int 64 bits. The maximum value of a 16-bit short int is 32767, so if you need to store bigger values you need to use long int.
(See Eric's answer for more detailed explanation)
Notes:
- int is set to the 'natural size', the integer form that the hardware handles most efficiently
- in an array or in arithmetic operations, a short integer is converted into int, so this can introduce a hit on the speed of processing short integers
- short can conserve memory if it is narrower than int, which can be important when using a large array
- your program may use more memory on a 32-bit int system compared to a 16-bit int system

Conclusion:
Use int unless conserving memory is critical, or your program uses a lot of memory (e.g. many arrays). In that case, use short.

You choose short over int when:
Either:

- you want to reduce the memory footprint of the values you store, or
- you exchange binary data with other systems, in which case you should use neither int nor short, whose widths can vary based on platform (as you want a platform with a 32-bit short to be able to read a file written on a platform with a 16-bit short). Good candidates are the fixed-width types defined in stdint.h.

And:

- the values you store fit within the range of a short on your target platform (for a 16-bit short, this is -32768 to 32767, or 0 to 65535 for a 16-bit unsigned short), and
- the memory footprint is actually smaller for a short than for an int. The standard only guarantees that short is not larger than int, so implementations are allowed to use the same size for a short and for an int.

Note:
chars can also be used as arithmetic types. An answer to "When should I use char instead of short or int?" would read very similarly to this one, but with different numbers (-128 to 127 for an 8-bit signed char, 0 to 255 for an 8-bit unsigned char).
In reality, you likely don't actually want to use the short type specifically. If you want an integer of specific size, there are types defined in <cstdint> that should be preferred, as, for example, an int16_t will be 16 bits on every system, whereas you cannot guarantee the size of a short will be the same across all targets your code will be compiled for.