
The importance of using a 16-bit integer

Tags: types, memory

How seriously do developers think about using a 16-bit integer when writing code? I've been using 32-bit integers ever since I started programming, and I don't really think about using 16-bit ones.

It's so easy to declare a 32-bit int because it's the default in most languages.

What's the upside of using a 16-bit integer, apart from a little memory saved?


2 Answers

Now that we have cars, we don't walk or ride horses as much, but we still do walk and ride horses.

There is less need to use shorts these days. In a lot of situations the cost of disk space and availability of RAM mean that we no longer need to squeeze every last bit of storage out of computers as we did 20 years ago, so we can sacrifice a bit of storage efficiency in order to save on development/maintenance costs.

However, where large amounts of data are used, or we are working with systems with small memories (e.g. embedded controllers) or when we are transmitting data over networks, using 32 or 64 bits to represent a 16-bit value is just a waste of memory/bandwidth. It doesn't matter how much memory you have, wasting half or three quarters of it would just be stupid.
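For instance, here is a minimal C sketch of the "large amounts of data" point (the array size and value range are invented for illustration): if every value is known to fit in 0..65535, choosing uint16_t over uint32_t halves the footprint of a large buffer.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical: one million readings, each known to fit in 0..65535. */
    #define SAMPLES 1000000

    static uint16_t narrow[SAMPLES];  /* 2 bytes per element */
    static uint32_t wide[SAMPLES];    /* 4 bytes per element */

    int main(void)
    {
        printf("16-bit array: %zu bytes\n", sizeof narrow);  /* 2,000,000 */
        printf("32-bit array: %zu bytes\n", sizeof wide);    /* 4,000,000 */
        return 0;
    }

The same factor applies when those values cross a network or sit in a file: half the bytes per element, regardless of how much RAM the machine has.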


Jason Williams


Two cases where 16-bit types are genuinely required: APIs/interfaces whose formats define 16-bit fields (e.g. TCP/IP port numbers), and algorithms that require manipulation (e.g. rotation) of 16-bit values.
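A hedged C sketch of both cases (rotl16 is a made-up helper name, not a standard function): uint16_t is the natural type when the protocol itself defines a 16-bit field, and a rotation only wraps correctly if the value really is 16 bits wide.

    #include <stdint.h>
    #include <stdio.h>

    /* Rotate a 16-bit value left by n bits; bits shifted out the top
       re-enter at the bottom only because the type is exactly 16 bits. */
    static uint16_t rotl16(uint16_t v, unsigned n)
    {
        n &= 15;
        return (uint16_t)((v << n) | (v >> ((16 - n) & 15)));
    }

    int main(void)
    {
        uint16_t port = 8080;  /* TCP/UDP port numbers are 16-bit by definition */
        printf("port: %u\n", (unsigned)port);
        printf("0x1234 rotated left by 4: 0x%04X\n",
               (unsigned)rotl16(0x1234, 4));  /* prints 0x2341 */
        return 0;
    }

Doing the same rotation in a 32-bit variable would let the high bits drift past bit 15 instead of wrapping, which is why the width matters here and not just the memory cost.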


Ignacio Vazquez-Abrams


