Is there a fast way to convert a string of 8 ASCII decimal digits into a binary number?

Consider 8 digit characters, e.g. "12345678", given as a string. The string can be converted to a value in which every byte contains one digit, like this:

const char* const str = "12345678";
const char* const base = "00000000";
const uint64_t unpacked = *reinterpret_cast<const uint64_t*>(str)
    - *reinterpret_cast<const uint64_t*>(base);

Then unpacked will be 0x0807060504030201 on a little-endian system.
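
As a quick check, here is a minimal sketch of the unpacking (my addition; it uses memcpy instead of the reinterpret_cast, which formally violates strict aliasing, and it assumes a little-endian target):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    const char* str = "12345678";
    uint64_t unpacked;
    memcpy(&unpacked, str, 8);          /* well-defined, unlike the cast */
    unpacked -= 0x3030303030303030ULL;  /* subtract the eight '0' bytes */
    printf("0x%016llx\n", (unsigned long long)unpacked); /* prints 0x0807060504030201 */
    return 0;
}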

What is the fastest way to convert unpacked into the number 12345678, perhaps by multiplying it by some magic number, or by using SIMD up to AVX2?

UPDATE: the result 12345678 has to end up as a number stored in a 32-bit or 64-bit integer, not as a string.

asked Sep 16 '25 by Serge Rogatch

2 Answers

Multiplication in binary is just a series of shifts and adds, so a SWAR approach shouldn't be too hard to follow. For detailed walk-throughs see:

  • https://johnnylee-sde.github.io/Fast-numeric-string-to-int/
  • https://kholdstare.github.io/technical/2020/05/26/faster-integer-parsing.html
  • https://lemire.me/blog/2022/01/21/swar-explained-parsing-eight-digits/
  • http://0x80.pl/articles/simd-parsing-int-sequences.html
// http://govnokod.ru/13461
static inline
uint32_t parse_8digits_swar_classic (const char* str) {
    uint64_t v;

    memcpy(&v, str, 8);
    // Each step masks off unwanted bits, then merges pairs of adjacent
    // lanes. Every multiplier has the form (base * lane_width + 1), which
    // sums each lane with base times the preceding lane; the shift aligns
    // the merged lanes for the next step's mask.
    v = (v & 0x0F0F0F0F0F0F0F0F) * 2561 >> 8;            // 2561 = 10*256 + 1
    v = (v & 0x00FF00FF00FF00FF) * 6553601 >> 16;        // 6553601 = 100*65536 + 1
    v = (v & 0x0000FFFF0000FFFF) * 42949672960001 >> 32; // 42949672960001 = 10000*2^32 + 1
    return v;
}
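
To see what the three steps compute, here is a trace for the input "12345678" (my annotation; values as seen on a little-endian machine, shown after the following step's mask has cleared the in-between bits):

v after memcpy:        0x3837363534333231  (ASCII codes of '1'..'8')
after the first mask:  0x0807060504030201  (one digit per byte)
after step 1:          0x004E00380022000C  (16-bit lanes: 12, 34, 56, 78)
after step 2:          0x0000162E000004D2  (32-bit lanes: 1234, 5678)
after step 3:          low 32 bits = 0x00BC614E = 12345678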

// attempt to improve the latency (shorter dependency chain)
static inline
uint32_t parse_8digits_swar_aqrit (const char* str) {
    const uint64_t mask = 0x000000FF000000FF;
    uint64_t v, t;

    memcpy(&v, str, 8);
    // fold adjacent ASCII bytes together; the '0' bias and the resulting
    // carries are compensated by the additive constant below
    v = (v * 10) + (v >> 8);
    t = (v & mask) * 0x000F424000000064;          // = 1000000 * 2^32 + 100
    v = ((v >> 16) & mask) * 0x0000271000000001;  // = 10000 * 2^32 + 1
    // the magic additive constant cancels the ASCII bias, which is never
    // masked off in this variant
    v = (v + t + 0xFF0915C600000000ULL) >> 32;
    return v;
}

// SSSE3 needs fewer `shift & mask` operations...
static inline
uint32_t parse_8digits_simd_ssse3 (const char* str) {
    const __m128i mul1 = _mm_set_epi32(0, 0, 0x010A0A64, 0x14C814C8);
    const __m128i mul2 = _mm_set_epi32(0, 0, 0x0001000A, 0x00FA61A8);
    const __m128i mask = _mm_set1_epi8(0x0F);
    __m128i v;

    v = _mm_loadl_epi64((const __m128i*)(const void*)str);
    v = _mm_and_si128(v, mask);                           // strip the ASCII '0' bias
    v = _mm_madd_epi16(_mm_maddubs_epi16(mul1, v), mul2); // weighted sums of digit groups
    v = _mm_add_epi32(_mm_add_epi32(v, v), _mm_shuffle_epi32(v, 1)); // combine the two 32-bit partial sums
    return (uint32_t)_mm_cvtsi128_si32(v);
}
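
For completeness, a minimal harness for the three variants (my addition; it assumes all three functions sit in one translation unit and that SSSE3 code generation is enabled, e.g. gcc -O2 -mssse3):

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <tmmintrin.h>

/* parse_8digits_swar_classic, parse_8digits_swar_aqrit and
   parse_8digits_simd_ssse3 as defined above */

int main(void) {
    const char* s = "12345678";
    printf("%u %u %u\n",
           parse_8digits_swar_classic(s),
           parse_8digits_swar_aqrit(s),
           parse_8digits_simd_ssse3(s)); /* expected: 12345678 12345678 12345678 */
    return 0;
}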
answered Sep 17 '25 by aqrit


On an older x86-64 system without AVX2, this simple version, which gathers digits in tree fashion, is quite efficient: per my measurements, its performance is on par with a simple SWAR-based implementation. It does require a processor with ample instruction-level parallelism, however, as it compiles to about 50% more instructions than the SWAR-based code at full optimization.

/* convert a string of exactly eight digit characters into a 32-bit unsigned integer */
uint32_t string_to_number (const char * s)
{
    /* level 1: combine pairs of adjacent digit characters */
    uint32_t t0 = s[0] * 10 + s[1];
    uint32_t t1 = s[2] * 10 + s[3];
    uint32_t t2 = s[4] * 10 + s[5];
    uint32_t t3 = s[6] * 10 + s[7];
    /* level 2: combine two-digit groups; level 3: four-digit groups */
    uint32_t s0 = t0 * 100 + t1;
    uint32_t s1 = t2 * 100 + t3;
    uint32_t num = s0 * 10000 + s1;
    /* ASCII bias of the eight '0' characters, one term per place value */
    uint32_t corr =
        '0' * 10000000 +
        '0' * 1000000 +
        '0' * 100000 +
        '0' * 10000 +
        '0' * 1000 +
        '0' * 100 +
        '0' * 10 +
        '0' * 1;
    return num - corr;
}
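
Note that corr is a compile-time constant ('0' is 48, and 48 * 11111111 = 533333328), so an optimizing compiler folds the entire bias removal into a single subtraction; the spelled-out sum merely documents where the constant comes from.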
answered Sep 17 '25 by njuffa