

Fast UTF-8 validation with Range algorithm (NEON+SSE4+AVX2)

This is a brand-new algorithm that leverages SIMD for fast UTF-8 string validation. Both NEON (armv8a) and SSE4 versions are implemented. The AVX2 implementation was contributed by ioioioio.

Four UTF-8 validation methods are compared on both x86 and Arm platforms. Benchmark results show that the range-based algorithm is the best solution on Arm, and achieves the same performance as Lemire's approach on x86.

  • Range based algorithm
    • range-neon.c: NEON version
    • range-sse.c: SSE4 version
    • range-avx2.c: AVX2 version
    • range2-neon.c, range2-sse.c: Process two blocks in one iteration
  • Lemire's SIMD implementation
    • lemire-sse.c: SSE4 version
    • lemire-avx2.c: AVX2 version
    • lemire-neon.c: NEON porting
  • naive.c: Naive UTF-8 validation byte by byte
  • lookup.c: Lookup-table method

About the code

  • Run “make” to build. Built and tested with gcc-7.3.
  • Run “./utf8” to see all command line options.
  • Benchmark
    • Run “./utf8 bench” to benchmark all algorithms with the default test file.
    • Run “./utf8 bench size NUM” to benchmark a specified string size.
  • Run “./utf8 test” to test all algorithms with positive and negative test cases.
  • To benchmark or test a specific algorithm, run something like “./utf8 bench range”.

Benchmark result (MB/s)

Method

  1. Generate UTF-8 test buffer per test file or buffer size.
  2. Call validation sub-routines in a loop until 1G bytes are checked.
  3. Calculate the speed (MB/s) of validating UTF-8 strings.
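As a sketch, the measurement loop can be modeled like this (illustrative only; the real harness lives in main.c, and `all_ascii` is a made-up stand-in for the validators under test):

```c
#include <stdint.h>
#include <time.h>

/* Any of the validation sub-routines can be plugged in here. */
typedef int (*validate_fn)(const unsigned char *data, int len);

/* Trivial stand-in validator: accepts pure ASCII. */
static int all_ascii(const unsigned char *s, int len) {
    for (int i = 0; i < len; i++)
        if (s[i] >= 0x80) return 0;
    return 1;
}

/* Call the validator in a loop until `total_bytes` bytes are checked,
 * then report MB/s. The README's numbers use a fixed total of 1G bytes. */
static double bench_mb_per_s(validate_fn validate, const unsigned char *buf,
                             int len, int64_t total_bytes) {
    int64_t checked = 0;
    clock_t start = clock();
    while (checked < total_bytes) {
        validate(buf, len);
        checked += len;
    }
    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
    if (secs <= 0.0) secs = 1e-9;  /* guard against timer granularity */
    return ((double)checked / (1024.0 * 1024.0)) / secs;
}
```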

NEON(armv8a)

| Test case    | naive  | lookup | lemire  | range   | range2  |
|--------------|--------|--------|---------|---------|---------|
| UTF-demo.txt | 562.25 | 412.84 | 1198.50 | 1411.72 | 1579.85 |
| 32 bytes     | 651.55 | 441.70 | 891.38  | 1003.95 | 1043.58 |
| 33 bytes     | 660.00 | 446.78 | 588.77  | 1009.31 | 1048.12 |
| 129 bytes    | 771.89 | 402.55 | 938.07  | 1283.77 | 1401.76 |
| 1K bytes     | 811.92 | 411.58 | 1188.96 | 1398.15 | 1560.23 |
| 8K bytes     | 812.25 | 412.74 | 1198.90 | 1412.18 | 1580.65 |
| 64K bytes    | 817.35 | 412.24 | 1200.20 | 1415.11 | 1583.86 |
| 1M bytes     | 815.70 | 411.93 | 1200.93 | 1415.65 | 1585.40 |

SSE4(E5-2650)

| Test case    | naive   | lookup | lemire  | range   | range2  |
|--------------|---------|--------|---------|---------|---------|
| UTF-demo.txt | 753.70  | 310.41 | 3954.74 | 3945.60 | 3986.13 |
| 32 bytes     | 1135.76 | 364.07 | 2890.52 | 2351.81 | 2173.02 |
| 33 bytes     | 1161.85 | 376.29 | 1352.95 | 2239.55 | 2041.43 |
| 129 bytes    | 1161.22 | 322.47 | 2742.49 | 3315.33 | 3249.35 |
| 1K bytes     | 1310.95 | 310.72 | 3755.88 | 3781.23 | 3874.17 |
| 8K bytes     | 1348.32 | 307.93 | 3860.71 | 3922.81 | 3968.93 |
| 64K bytes    | 1301.34 | 308.39 | 3935.15 | 3973.50 | 3983.44 |
| 1M bytes     | 1279.78 | 309.06 | 3923.51 | 3953.00 | 3960.49 |

Range algorithm analysis

Basic idea:

  • Load 16 bytes
  • Leverage SIMD to calculate value range for each byte efficiently
  • Validate 16 bytes at once

UTF-8 coding format

http://www.unicode.org/versions/Unicode6.0.0/ch03.pdf, page 94

Table 3-7. Well-Formed UTF-8 Byte Sequences

| Code Points        | First Byte | Second Byte | Third Byte | Fourth Byte |
|--------------------|------------|-------------|------------|-------------|
| U+0000..U+007F     | 00..7F     |             |            |             |
| U+0080..U+07FF     | C2..DF     | 80..BF      |            |             |
| U+0800..U+0FFF     | E0         | A0..BF      | 80..BF     |             |
| U+1000..U+CFFF     | E1..EC     | 80..BF      | 80..BF     |             |
| U+D000..U+D7FF     | ED         | 80..9F      | 80..BF     |             |
| U+E000..U+FFFF     | EE..EF     | 80..BF      | 80..BF     |             |
| U+10000..U+3FFFF   | F0         | 90..BF      | 80..BF     | 80..BF      |
| U+40000..U+FFFFF   | F1..F3     | 80..BF      | 80..BF     | 80..BF      |
| U+100000..U+10FFFF | F4         | 80..8F      | 80..BF     | 80..BF      |

To summarise UTF-8 encoding:

  • Depending on the First Byte, one legal character can be 1, 2, 3 or 4 bytes long
    • For a First Byte within C0..DF, character length = 2
    • For a First Byte within E0..EF, character length = 3
    • For a First Byte within F0..F4, character length = 4
  • C0, C1, F5..FF are not allowed
  • Second, Third and Fourth Bytes must lie in 80..BF.
  • There are four special cases for the Second Byte (after E0, ED, F0 and F4), as shown in the table above.
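The rules above translate directly into a scalar validator. The sketch below (a simplified analogue of naive.c; `utf8_char_len` is an illustrative name, not the project's API) checks one character per Table 3-7:

```c
#include <stddef.h>

/* Check one character against Table 3-7; returns the sequence length
 * (1..4) on success, 0 on error. */
static int utf8_char_len(const unsigned char *s, size_t avail) {
    if (avail == 0) return 0;
    unsigned char b0 = s[0];
    if (b0 <= 0x7F) return 1;                  /* ASCII */
    int len, lo = 0x80, hi = 0xBF;             /* default continuation range */
    if (b0 >= 0xC2 && b0 <= 0xDF) {
        len = 2;
    } else if (b0 >= 0xE0 && b0 <= 0xEF) {
        len = 3;
        if (b0 == 0xE0) lo = 0xA0;             /* E0: Second Byte in A0..BF */
        else if (b0 == 0xED) hi = 0x9F;        /* ED: Second Byte in 80..9F */
    } else if (b0 >= 0xF0 && b0 <= 0xF4) {
        len = 4;
        if (b0 == 0xF0) lo = 0x90;             /* F0: Second Byte in 90..BF */
        else if (b0 == 0xF4) hi = 0x8F;        /* F4: Second Byte in 80..8F */
    } else {
        return 0;                              /* C0, C1, F5..FF, stray 80..BF */
    }
    if (avail < (size_t)len) return 0;         /* truncated sequence */
    if (s[1] < lo || s[1] > hi) return 0;      /* special-case Second Byte */
    for (int i = 2; i < len; i++)
        if (s[i] < 0x80 || s[i] > 0xBF) return 0;
    return len;
}
```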

Range table

The range table maps range indices 0 ~ 15 to the minimum and maximum values allowed. Our task is to observe the input string, find the pattern, set the correct range index for each byte, then validate the input string.

| Index        | Min | Max | Byte type                                           |
|--------------|-----|-----|-----------------------------------------------------|
| 0            | 00  | 7F  | First Byte, ASCII                                   |
| 1,2,3        | 80  | BF  | Second, Third, Fourth Bytes                         |
| 4            | A0  | BF  | Second Byte after E0                                |
| 5            | 80  | 9F  | Second Byte after ED                                |
| 6            | 90  | BF  | Second Byte after F0                                |
| 7            | 80  | 8F  | Second Byte after F4                                |
| 8            | C2  | F4  | First Byte, non-ASCII                               |
| 9..15 (NEON) | FF  | 00  | Illegal: unsigned char >= 255 && unsigned char <= 0 |
| 9..15 (SSE)  | 7F  | 80  | Illegal: signed char >= 127 && signed char <= -128  |
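Written out as data, the NEON variant of the table is simply two 16-entry min/max arrays (an illustrative scalar rendering, not the project's actual vectors):

```c
#include <stdbool.h>

/* The range table as two 16-entry arrays (NEON variant). Indices 9..15
 * use min FF / max 00, a range no unsigned byte can satisfy. */
static const unsigned char range_min[16] = {
    0x00, 0x80, 0x80, 0x80, 0xA0, 0x80, 0x90, 0x80,
    0xC2, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
};
static const unsigned char range_max[16] = {
    0x7F, 0xBF, 0xBF, 0xBF, 0xBF, 0x9F, 0xBF, 0x8F,
    0xF4, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
};

/* A byte is legal iff it lies inside the range selected by its index. */
static bool in_range(unsigned char byte, int range_index) {
    return byte >= range_min[range_index] && byte <= range_max[range_index];
}
```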

Calculate byte ranges (ignore special cases)

Ignoring the four special cases (E0, ED, F0, F4), how should we set the range index for each byte?

  • Set range index to 0 (00..7F) for all bytes by default
  • Find non-ASCII First Bytes (C0..FF) and set their range index to 8 (C2..F4)
  • For a First Byte within C0..DF, set the next byte's range index to 1 (80..BF)
  • For a First Byte within E0..EF, set the next two bytes' range indices to 2, 1 (80..BF) in sequence
  • For a First Byte within F0..FF, set the next three bytes' range indices to 3, 2, 1 (80..BF) in sequence

To implement the above operations efficiently with SIMD:

  • For 16 input bytes, use a lookup table to map C0..DF to 1, E0..EF to 2, F0..FF to 3 and others to 0. Save the result as first_len.
  • Map C0..FF to 8 to get the range indices for the First Byte.
  • Shift first_len by one byte to get the range indices for the Second Byte.
  • Saturating-subtract one from first_len (3->2, 2->1, 1->0, 0->0), then shift by two bytes, to get the range indices for the Third Byte.
  • Saturating-subtract two from first_len (3->1, 2->0, 1->0, 0->0), then shift by three bytes, to get the range indices for the Fourth Byte.
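These steps can be modeled per byte in scalar C. This sketch (illustrative names; the real code operates on SIMD vectors and carries state across blocks) assumes no previous data:

```c
/* Scalar model of the SIMD steps for one 16-byte block, assuming no
 * carry-in from a previous block. Fills out[i] with the range index. */
static void compute_range_indices(const unsigned char in[16],
                                  unsigned char out[16]) {
    unsigned char first_len[16];
    for (int i = 0; i < 16; i++) {
        unsigned char b = in[i];
        /* Lookup-table step: C0..DF -> 1, E0..EF -> 2, F0..FF -> 3 */
        first_len[i] = (b >= 0xF0) ? 3 : (b >= 0xE0) ? 2 : (b >= 0xC0) ? 1 : 0;
        /* First Byte step: C0..FF -> range index 8 */
        out[i] = (b >= 0xC0) ? 8 : 0;
    }
    for (int i = 0; i < 16; i++) {
        /* Shifted (and saturating-subtracted) copies of first_len */
        unsigned char second = (i >= 1) ? first_len[i - 1] : 0;
        unsigned char third  = (i >= 2 && first_len[i - 2] > 1)
                                   ? first_len[i - 2] - 1 : 0;
        unsigned char fourth = (i >= 3 && first_len[i - 3] > 2)
                                   ? first_len[i - 3] - 2 : 0;
        out[i] |= second | third | fourth;  /* OR of the four vectors */
    }
}
```

Running this on the example input F1 80 80 80 80 C2 80 80 reproduces the range indices 8 3 2 1 0 8 1 0 from the table below.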

Example(assume no previous data)

InputF180808080C28080...
first_len30000100...
First Byte80000800...
Second Byte03000010...
Third Byte00200000...
Fourth Byte00010000...
Range index83210810...
Range_index = First_Byte | Second_Byte | Third_Byte | Fourth_Byte

Error handling

  • C0, C1, F5..FF are not included in the range table and will always be detected.
  • An illegal 80..BF byte will have range index 0 (00..7F) and be detected.
  • Based on the First Byte, the corresponding Second, Third and Fourth Bytes get range indices 1/2/3, ensuring they lie in 80..BF.
  • If non-ASCII First Bytes overlap, the above algorithm sets the range index of the latter First Byte to 9, 10 or 11, which are illegal ranges. E.g., Input = F1 80 C2 90 --> Range index = 8 3 10 1, where 10 indicates an error. See the table below.

Overlapped non-ASCII First Byte

| Input       | F1 | 80 | C2 | 90 |
|-------------|----|----|----|----|
| first_len   | 3  | 0  | 1  | 0  |
| First Byte  | 8  | 0  | 8  | 0  |
| Second Byte | 0  | 3  | 0  | 1  |
| Third Byte  | 0  | 0  | 2  | 0  |
| Fourth Byte | 0  | 0  | 0  | 1  |
| Range index | 8  | 3  | 10 | 1  |

Adjust Second Byte range for special cases

Range index adjustment for four special cases

| First Byte | Second Byte | Before adjustment | Correct index | Adjustment |
|------------|-------------|-------------------|---------------|------------|
| E0         | A0..BF      | 2                 | 4             | 2          |
| ED         | 80..9F      | 2                 | 5             | 3          |
| F0         | 90..BF      | 3                 | 6             | 3          |
| F4         | 80..8F      | 3                 | 7             | 4          |

The range index adjustment can be reduced to the problem below:

Given 16 bytes, replace E0 with 2, ED with 3, F0 with 3, F4 with 4, others with 0.

A naive SIMD approach:

  1. Compare 16 bytes with E0 to get a mask for each byte (FF if equal, 00 otherwise)
  2. And the mask with 2 to get the adjustment for E0
  3. Repeat steps 1 and 2 for ED, F0, F4

At least eight operations are required for this naive approach.

Observing that the special bytes (E0, ED, F0, F4) are close to each other, we can do much better using a lookup table.

NEON

The NEON tbl instruction is very convenient for table lookup:

  • The table can be up to 16x4 bytes in size
  • It returns zero if the index is out of range

Leveraging these features, we can solve the problem with as few as two operations:

  • Pre-create a 16x2 lookup table, where table[0]=2, table[13]=3, table[16]=3, table[20]=4, table[others]=0.
  • Subtract E0 from the input bytes (E0 -> 0, ED -> 13, F0 -> 16, F4 -> 20).
  • Use the subtracted byte as an index into the lookup table to get the range adjustment directly.
    • For indices less than 32, we get zero or the required adjustment value per input byte
    • For out-of-bound indices, we get zero per tbl behaviour
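A scalar model of this two-operation trick (the real code uses the tbl instruction; this only mimics its out-of-range-returns-zero semantics):

```c
/* Scalar model of the NEON tbl adjustment: the 16x2 table covers indices
 * 0..31; out-of-range indices return 0, mirroring tbl semantics. */
static unsigned char neon_adjustment(unsigned char byte) {
    static const unsigned char table[32] = {
        [0]  = 2,  /* E0 - E0 = 0  -> adjustment +2 */
        [13] = 3,  /* ED - E0 = 13 -> adjustment +3 */
        [16] = 3,  /* F0 - E0 = 16 -> adjustment +3 */
        [20] = 4,  /* F4 - E0 = 20 -> adjustment +4 */
    };
    /* The subtraction wraps modulo 256, like the SIMD subtract. */
    unsigned int idx = (unsigned char)(byte - 0xE0);
    return idx < 32 ? table[idx] : 0;
}
```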

SSE

The SSE pshufb instruction is not as friendly as NEON tbl in this case:

  • The table can only be 16 bytes in size
  • Out-of-bound indices are handled this way:
    • If bit 7 of the index is 0, the lowest four bits are used as the index (e.g., index 0x73 returns the 3rd element)
    • If bit 7 of the index is 1, it returns 0 (e.g., index 0x83 returns 0)

We can still leverage these features to solve the problem in five operations:

  • Pre-create two tables:
    • table_df[1] = 2, table_df[14] = 3, table_df[others] = 0
    • table_ef[1] = 3, table_ef[5] = 4, table_ef[others] = 0
  • Subtract EF from the input bytes (E0 -> 241, ED -> 254, F0 -> 1, F4 -> 5) to get the temporary indices
  • Get the adjustment for E0, ED:
    • Saturating-subtract 240 from the temporary indices (E0 -> 1, ED -> 14, all values below 240 become 0)
    • Use the subtracted indices to look up table_df and get the correct adjustment
  • Get the adjustment for F0, F4:
    • Saturating-add 112 (0x70) to the temporary indices (F0 -> 0x71, F4 -> 0x75, all values above 16 become larger than 128, i.e. bit 7 set)
    • Use the added indices to look up table_ef and get the correct adjustment (indices 0x71, 0x75 return the 1st and 5th elements, per pshufb behaviour)
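The five operations can be modeled in scalar C by emulating pshufb's index rules (illustrative only; function names are made up):

```c
/* Emulate pshufb's per-byte rule: bit 7 set -> 0, else the low four
 * bits select an element. */
static unsigned char pshufb_model(const unsigned char table[16],
                                  unsigned char idx) {
    return (idx & 0x80) ? 0 : table[idx & 0x0F];
}

/* Scalar model of the five-operation SSE adjustment. */
static unsigned char sse_adjustment(unsigned char byte) {
    static const unsigned char table_df[16] = { [1] = 2, [14] = 3 };
    static const unsigned char table_ef[16] = { [1] = 3, [5] = 4 };
    /* E0 -> 241, ED -> 254, F0 -> 1, F4 -> 5 (wrapping subtract) */
    unsigned char tmp = (unsigned char)(byte - 0xEF);
    /* Saturating-subtract 240: E0 -> 1, ED -> 14, values below 240 -> 0 */
    unsigned char lo = (tmp >= 240) ? (unsigned char)(tmp - 240) : 0;
    /* Saturating-add 0x70: F0 -> 0x71, F4 -> 0x75, larger tmp saturates */
    unsigned char hi = (tmp > 0x8F) ? 0xFF : (unsigned char)(tmp + 0x70);
    return pshufb_model(table_df, lo) + pshufb_model(table_ef, hi);
}
```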

Error handling

  • For an overlapped non-ASCII First Byte, the range index before adjustment is 9, 10 or 11. After adjustment (adding 2, 3, 4 or 0), the range index will be between 9 and 15, which is still illegal in the range table, so the error will be detected.

Handling remaining bytes

For remaining input of less than 16 bytes, we fall back to the naive byte-by-byte approach to validate it, which is actually faster than SIMD processing.

  • Look back in the last 16-byte buffer to find the First Byte. At most three bytes need to be looked back; otherwise we either happen to be at a character boundary, or there are errors we have already detected.
  • Validate the string byte by byte starting from that First Byte.
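A loose sketch of the look-back step (illustrative; the actual logic lives in the tail handling of the C sources, and `find_tail_start` is a made-up name):

```c
#include <stddef.h>

/* Scan back at most three bytes from the end of the already-validated
 * region to find the last First Byte, so byte-by-byte validation of the
 * remaining input starts at a character boundary. */
static size_t find_tail_start(const unsigned char *data, size_t len) {
    size_t pos = len;
    for (int back = 0; back < 3 && pos > 0; back++) {
        unsigned char b = data[pos - 1];
        if (b >= 0xC0) { pos--; break; }     /* lead byte: char starts here */
        if (b >= 0x80) { pos--; continue; }  /* continuation: keep looking */
        break;                               /* ASCII: already at a boundary */
    }
    return pos;
}
```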

Tests

It's necessary to design test cases that cover as many corner cases as possible.

Positive cases

  1. Prepare correct characters
  2. Validate correct characters
  3. Validate long strings
    • Concatenate characters round-robin, starting from the first character, up to 1024 bytes
    • Validate the 1024-byte string
    • Shift 1 byte, validate the 1025-byte string
    • Shift 2 bytes, validate the 1026-byte string
    • ...
    • Shift 16 bytes, validate the 1040-byte string
  4. Repeat step 3 with the test buffer starting from the second character
  5. Repeat step 3 with the test buffer starting from the third character
  6. ...

Negative cases

  1. Prepare bad characters and bad strings
    • A bad character
    • A bad character crossing a 16-byte boundary
    • A bad character crossing the boundary between the last 16-byte block and the remaining bytes
  2. Test long strings
    • Prepare correct long strings, the same as in the positive cases
    • Append bad characters
    • Shift one byte for each iteration
    • Validate each shift

Code breakdown

The table below shows how a 16-byte input is processed step by step. See range-neon.c for the corresponding code.

Range based UTF-8 validation algorithm