I am trying to understand
256 bits in hexadecimal is 32 bytes, or 64 characters in the range 0-9 or A-F
How can a 32 bytes string be 64 characters in the range 0-9 or A-F?
What does 32 bytes mean?
I would assume that bits mean a digit 0 or 1, so 256 bits would be 256 digits of either 0 or 1.
I know that 1 byte equals 8 bits, so are 32 bytes 32 digits, each of either 0, 1, 2, 3, 4, 5, 6, or 7 (i.e. 8 different values)?
I do know a little about different bases (e.g. that binary has 0 and 1, decimal has 0-9, hexadecimal has 0-9 and A-F, etc.), but I still fail to understand why 256 bits in hexadecimal can be 32 bytes or 64 characters.
I know it's quite basic in computer science, so I have to read up on this, but can you give a brief explanation?
A single hexadecimal character represents 4 bits.
0 = 0000
1 = 0001
2 = 0010
3 = 0011
4 = 0100
5 = 0101
6 = 0110
7 = 0111
8 = 1000
9 = 1001
A = 1010
B = 1011
C = 1100
D = 1101
E = 1110
F = 1111
Two hexadecimal characters can represent a byte (8 bits).
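A quick sketch in Python (my choice of language here) of the mapping above, using `format()` to render the same byte value in binary and in hex:

```python
# Each hex digit covers 4 bits, so two hex digits cover one byte (8 bits).
byte_value = 0b01001010  # the bit pattern 01001010

print(format(byte_value, '08b'))  # binary, padded to 8 digits: 01001010
print(format(byte_value, '02X'))  # hex, padded to 2 digits: 4A
```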
How can a 32 bytes string be 64 characters in the range 0-9 or A-F?
Keep in mind that the hexadecimal representation is an EXTERNAL depiction of the bit settings. If a byte contains 01001010, we can say that it is 4A in hex. The characters 4A are not stored in the byte. It's like in mathematics, where we use the symbols "e" and "π" to represent numbers.
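To illustrate that point in Python: the stored value is just a number, and binary or hex are only notations for writing that same number down. The hex string exists only when you ask for a depiction:

```python
n = 0b01001010                 # what the byte holds: a number (74)
depiction = format(n, '02X')   # '4A' - a string produced only for display

print(n == 0x4A == 74)         # True: one value, three notations
print(type(n).__name__, type(depiction).__name__)  # int str
```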
What does 32 bytes mean?
1 Byte = 8 bits. 32 bytes = 256 bits.
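Putting the whole question together, here is a short Python sketch (using the standard `secrets` module to generate example data) showing how 32 bytes, 256 bits, and 64 hex characters are all the same quantity:

```python
import secrets

data = secrets.token_bytes(32)  # 32 random bytes = 256 bits
hex_string = data.hex()         # two hex characters per byte

print(len(data))        # 32  (bytes)
print(len(data) * 8)    # 256 (bits)
print(len(hex_string))  # 64  (hex characters)
```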