On MSVC, converting UTF-16 to UTF-32 is easy, with C++11's codecvt_utf16 locale facet. But in GCC (gcc (Debian 4.7.2-5) 4.7.2) this new feature apparently hasn't been implemented yet. Is there a way to perform such a conversion on Linux without iconv (preferably using the conversion tools of the standard library)?
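For reference, the facet-based conversion that works on MSVC looks something like this (a sketch: std::codecvt_utf16 converts between a byte stream and char32_t, and assumes big-endian input unless std::little_endian is passed):

#include <codecvt>
#include <locale>
#include <string>

std::u32string utf16_to_utf32(const std::u16string &s)
{
    // codecvt_utf16 reads the UTF-16 input as *bytes*; std::little_endian
    // matches the in-memory order of char16_t on x86.
    std::wstring_convert<
        std::codecvt_utf16<char32_t, 0x10ffff, std::little_endian>,
        char32_t> conv;
    const char *bytes = reinterpret_cast<const char *>(s.data());
    return conv.from_bytes(bytes, bytes + s.size() * sizeof(char16_t));
}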
Unicode defines three encoding forms: UTF-8, UTF-16 and UTF-32, named after the width of their code units. A code point is conventionally written as U+hhhh, where hhhh is its hexadecimal value.
UTF-8 requires 8, 16, 24 or 32 bits (one to four bytes) to encode a Unicode character, UTF-16 requires either 16 or 32 bits (one code unit or a surrogate pair), and UTF-32 always requires exactly 32 bits.
The names reflect the smallest unit of each form: in UTF-8 the smallest representation of a character is one byte, or eight bits; in UTF-16 it is two bytes; in UTF-32 every character occupies four bytes.
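For example, U+1F600 takes four bytes in UTF-8 (F0 9F 98 80), two 16-bit code units in UTF-16 (the surrogate pair D83D DE00), and a single 32-bit code unit in UTF-32 (0001F600).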
To convert UTF-8 to UTF-16, just call Utf32To16(Utf8To32(str)), and to convert UTF-16 to UTF-8, call Utf32To8(Utf16To32(str)).
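Utf8To32 and the other helpers aren't defined in this answer; as a sketch of the encoding direction (function name hypothetical, using the same UTF16/UTF32 convenience typedefs as the decoder below, and omitting validation of invalid code points such as surrogate values or anything above 0x10FFFF):

void convert_utf32_to_utf16(const UTF32 *input,
                            size_t input_size,
                            UTF16 *output)
{
    const UTF32 * const end = input + input_size;
    while (input < end) {
        const UTF32 uc = *input++;
        if (uc < 0x10000) {
            *output++ = (UTF16)uc;                     // BMP: one code unit
        } else {
            const UTF32 v = uc - 0x10000;              // 20 payload bits
            *output++ = (UTF16)(0xd800 + (v >> 10));   // high surrogate
            *output++ = (UTF16)(0xdc00 + (v & 0x3ff)); // low surrogate
        }
    }
}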
Decoding UTF-16 into UTF-32 is extremely easy.
You may want to detect the libc version you're using at compile time, and deploy your own conversion routine only if you detect a broken libc (one without the functions you need).
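A minimal sketch of such a compile-time switch, under the assumption that libstdc++ only ships a usable <codecvt> from GCC 5 onward (verify the cutoff against your actual toolchain; the macro name is made up for illustration):

#if defined(__GLIBCXX__) && defined(__GNUC__) && __GNUC__ < 5
// Assumed: this libstdc++ predates <codecvt>, so use the hand-rolled
// decoder below instead of the standard facet.
#define USE_FALLBACK_UTF16_DECODER 1
#endif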
Inputs:
- the source UTF-16 data (char16_t *, ushort *, or -- for convenience -- UTF16 *) and its size;
- the output UTF-32 buffer (char32_t *, uint *, or -- for convenience -- UTF32 *).
Code looks like:
void convert_utf16_to_utf32(const UTF16 *input,
                            size_t input_size,
                            UTF32 *output)
{
    const UTF16 * const end = input + input_size;
    while (input < end) {
        const UTF16 uc = *input++;
        if (!is_surrogate(uc)) {
            *output++ = uc;          // BMP character: copy through
        } else {
            if (is_high_surrogate(uc) && input < end && is_low_surrogate(*input))
                *output++ = surrogate_to_utf32(uc, *input++);
            else
                *output++ = 0xfffd;  // ERROR: unpaired surrogate
        }
    }
}
Error handling is kept minimal: on a malformed sequence the code above inserts a U+FFFD¹ into the stream and keeps going, but you might prefer to just bail out instead; really up to you. The auxiliary functions are trivial:
int is_surrogate(UTF16 uc) { return (uc - 0xd800u) < 2048u; }  // in [D800, E000)
int is_high_surrogate(UTF16 uc) { return (uc & 0xfffffc00) == 0xd800; }
int is_low_surrogate(UTF16 uc) { return (uc & 0xfffffc00) == 0xdc00; }

UTF32 surrogate_to_utf32(UTF16 high, UTF16 low) {
    // 0x35fdc00 folds together the two bias subtractions and the 0x10000
    // offset: (high - 0xd800) * 0x400 + (low - 0xdc00) + 0x10000.
    return (high << 10) + low - 0x35fdc00;
}
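For example (sizing note: each UTF-16 code unit yields at most one UTF-32 code unit, so an output buffer of input_size elements is always enough):

// "a" (U+0061) followed by U+1F600 encoded as the pair D83D DE00.
const UTF16 in[3] = { 0x0061, 0xd83d, 0xde00 };
UTF32 out[3] = { 0 };
convert_utf16_to_utf32(in, 3, out);
// out now holds { 0x61, 0x1f600 }: two code points from three code units.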
¹ Cf. the Unicode Standard, which recommends substituting U+FFFD REPLACEMENT CHARACTER for ill-formed sequences during conversion.
² Also, considering that the !is_surrogate(uc) branch is by far the most common (as is the non-error path in the second if), you might want to optimize it with __builtin_expect or similar.
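For instance, the hot paths could be hinted like this (a sketch using the GCC/Clang __builtin_expect builtin; the payoff should be measured, not assumed):

// Same loop body as above, with branch hints on the common paths.
if (__builtin_expect(!is_surrogate(uc), 1)) {
    *output++ = uc;
} else {
    if (__builtin_expect(is_high_surrogate(uc) && input < end
                         && is_low_surrogate(*input), 1))
        *output++ = surrogate_to_utf32(uc, *input++);
    else
        *output++ = 0xfffd;  // ERROR: unpaired surrogate
}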