What is the best way to convert binary data to its integral representation?
Let's imagine that we have a buffer containing binary data obtained from an external source such as a socket connection or a binary file. The data is organised in a well defined format, and we know that the first four octets represent a single unsigned 32-bit integer (which could be the size of the following data). What would be the most efficient way to convert those octets to a usable type (such as std::uint32_t)?
Here is what I have tried so far:
#include <algorithm>
#include <array>
#include <cstdint>
#include <cstring>
#include <iostream>
int main()
{
    std::array<char, 4> buffer = { 0x01, 0x02, 0x03, 0x04 };
    std::uint32_t n = 0;

    // Assemble the value by shifting each octet into place (little-endian interpretation).
    n |= static_cast<std::uint32_t>(buffer[0]);
    n |= static_cast<std::uint32_t>(buffer[1]) << 8;
    n |= static_cast<std::uint32_t>(buffer[2]) << 16;
    n |= static_cast<std::uint32_t>(buffer[3]) << 24;
    std::cout << "Bit shifting: " << n << "\n";

    n = 0;
    // Copy the raw bytes directly into n (host byte order).
    std::memcpy(&n, buffer.data(), buffer.size());
    std::cout << "std::memcpy(): " << n << "\n";

    n = 0;
    // Copy the bytes into the object representation of n (host byte order).
    std::copy(buffer.begin(), buffer.end(), reinterpret_cast<char*>(&n));
    std::cout << "std::copy(): " << n << "\n";
}
On my system, the output of the above program is
Bit shifting: 67305985
std::memcpy(): 67305985
std::copy(): 67305985
You are essentially asking about endianness. While your program might work on one computer, it might not on another. Your output of 67305985 (0x04030201) shows that your machine stores the least significant byte first, i.e. it is little-endian. If the "well defined format" is network (big-endian) order, there is a standard set of functions/macros to convert between network order and the natural byte order of your specific machine, as sketched below.
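For example, here is a minimal sketch assuming a POSIX system, where ntohl() is declared in <arpa/inet.h> (on Windows it comes from <winsock2.h>). It copies the four octets into an integer and then converts from network order to the host's byte order:

#include <arpa/inet.h> // ntohl(); use <winsock2.h> on Windows
#include <array>
#include <cstdint>
#include <cstring>
#include <iostream>

int main()
{
    // The same four octets, now treated as a big-endian (network-order) value.
    std::array<unsigned char, 4> buffer = { 0x01, 0x02, 0x03, 0x04 };

    std::uint32_t n = 0;
    std::memcpy(&n, buffer.data(), buffer.size()); // copy the raw bytes into n
    n = ntohl(n);                                  // network order -> host order

    // Prints 16909060 (0x01020304) regardless of the host's endianness.
    std::cout << n << "\n";
}

With this approach the same source compiles and produces the same value on both little-endian and big-endian machines, as long as the sender wrote the value in network order.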