
XNNPACK

XNNPACK is a highly optimized solution for neural network inference on ARM, x86, WebAssembly, and RISC-V platforms. XNNPACK is not intended for direct use by deep learning practitioners and researchers; instead it provides low-level performance primitives for accelerating high-level machine learning frameworks, such as TensorFlow Lite, TensorFlow.js, PyTorch, ONNX Runtime, ExecuTorch, and MediaPipe.

Supported Architectures

  • ARM64 on Android, iOS, macOS, Linux, and Windows
  • ARMv7 (with NEON) on Android
  • ARMv6 (with VFPv2) on Linux
  • x86 and x86-64 (up to AVX512) on Windows, Linux, macOS, Android, and iOS simulator
  • WebAssembly MVP
  • WebAssembly SIMD
  • WebAssembly Relaxed SIMD (experimental)
  • RISC-V (RV32GC and RV64GC)
  • Hexagon (with HVX)

Operator Coverage

XNNPACK implements the following neural network operators:

  • 2D Convolution (including grouped and depthwise)
  • 2D Deconvolution (AKA Transposed Convolution)
  • 2D Average Pooling
  • 2D Max Pooling
  • 2D ArgMax Pooling (Max Pooling + indices)
  • 2D Unpooling
  • 2D Bilinear Resize
  • 2D Depth-to-Space (AKA Pixel Shuffle)
  • Add (including broadcasting, two inputs only)
  • Subtract (including broadcasting)
  • Divide (including broadcasting)
  • Maximum (including broadcasting)
  • Minimum (including broadcasting)
  • Multiply (including broadcasting)
  • Squared Difference (including broadcasting)
  • Global Average Pooling
  • Channel Shuffle
  • Fully Connected
  • Abs (absolute value)
  • Bankers' Rounding (rounding to nearest, ties to even)
  • Ceiling (rounding to integer above)
  • Clamp (includes ReLU and ReLU6)
  • Convert (includes fixed-point and half-precision quantization and dequantization)
  • Copy
  • ELU
  • Floor (rounding to integer below)
  • HardSwish
  • Leaky ReLU
  • Negate
  • Sigmoid
  • Softmax
  • Square
  • Tanh
  • Transpose
  • Truncation (rounding to integer towards zero)
  • PReLU

All operators in XNNPACK support NHWC layout, but additionally allow a custom stride along the Channel dimension. Thus, operators can consume a subset of channels in the input tensor and produce a subset of channels in the output tensor, providing zero-cost Channel Split and Channel Concatenation operations.

Performance

Mobile phones

The table below presents single-threaded performance of the XNNPACK library on three generations of MobileNet models and three generations of Pixel phones.

Model                     Pixel, ms  Pixel 2, ms  Pixel 3a, ms
FP32 MobileNet v1 1.0X           82           86            88
FP32 MobileNet v2 1.0X           49           53            55
FP32 MobileNet v3 Large          39           42            44
FP32 MobileNet v3 Small          12           14            14

The following table presents multi-threaded (using as many threads as there are big cores) performance of the XNNPACK library on three generations of MobileNet models and three generations of Pixel phones.

Model                     Pixel, ms  Pixel 2, ms  Pixel 3a, ms
FP32 MobileNet v1 1.0X           43           27            46
FP32 MobileNet v2 1.0X           26           18            28
FP32 MobileNet v3 Large          22           16            24
FP32 MobileNet v3 Small           7            6             8

Benchmarked on March 27, 2020 with end2end_bench --benchmark_min_time=5 on an Android/ARM64 build with Android NDK r21 (bazel build -c opt --config android_arm64 :end2end_bench) and neural network models with randomized weights and inputs.

Raspberry Pi

The table below presents multi-threaded performance of the XNNPACK library on three generations of MobileNet models and several generations of Raspberry Pi boards.

Model                     RPi Zero W (BCM2835), ms  RPi 2 (BCM2836), ms  RPi 3+ (BCM2837B0), ms  RPi 4 (BCM2711), ms  RPi 4 (BCM2711, ARM64), ms
FP32 MobileNet v1 1.0X                        3919                  302                     114                   72                          77
FP32 MobileNet v2 1.0X                        1987                  191                      79                   41                          46
FP32 MobileNet v3 Large                       1658                  161                      67                   38                          40
FP32 MobileNet v3 Small                        474                   50                      22                   13                          15
INT8 MobileNet v1 1.0X                        2589                  128                      46                   29                          24
INT8 MobileNet v2 1.0X                        1495                   82                      30                   20                          17

Benchmarked on Feb 8, 2022 with end2end-bench --benchmark_min_time=5 on a Raspbian Buster build with CMake (./scripts/build-local.sh) and neural network models with randomized weights and inputs. INT8 inference was evaluated with a per-channel quantization schema.

Minimum build requirements

  • C11
  • C++17
  • Python 3
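Given those requirements, a typical native CMake build looks like the sketch below. The `XNNPACK_BUILD_TESTS` and `XNNPACK_BUILD_BENCHMARKS` option names are assumptions based on common XNNPACK CMake configuration, not guaranteed for every version; the repository's own ./scripts/build-local.sh wraps an equivalent invocation.

```shell
# Fetch the sources and configure an optimized out-of-tree build.
git clone https://github.com/google/XNNPACK.git
cd XNNPACK
cmake -S . -B build -DCMAKE_BUILD_TYPE=Release \
      -DXNNPACK_BUILD_TESTS=OFF -DXNNPACK_BUILD_BENCHMARKS=OFF
cmake --build build -j
```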

Publications

Ecosystem

Machine Learning Frameworks

Acknowledgements

XNNPACK is based on the QNNPACK library. However, the codebase has diverged significantly over time, and the XNNPACK API is no longer compatible with QNNPACK.