| commit | 4c2637da4b0a1df58bb30e8ef24b182fc3065fb7 | |
|---|---|---|
| author | Frank Barchard <fbarchard@google.com> | Mon Sep 30 23:21:55 2019 |
| committer | XNNPACK Team <xnnpack-github-robot@google.com> | Mon Sep 30 23:22:22 2019 |
| tree | e17e687010f8d7b68a022c61006bdbfd4ff005bb | |
| parent | bb4c18b433764db0a3bcc0baa0d7bc41f2df97ec | |
LD2R for loading clamp parameters

dwconv and a57 gemm kernels use a single LD2R instead of 2 LD1R instructions for loading clamp parameters into 2 vectors.

PiperOrigin-RevId: 272090356
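The effect of the change can be sketched with NEON intrinsics rather than the hand-written assembly the commit actually touches: `vld1q_dup_f32` lowers to an LD1R, while the AArch64-only `vld2q_dup_f32` lowers to a single LD2R that loads a contiguous {min, max} pair and broadcasts each element across its own vector. The struct and function names below are illustrative, not XNNPACK's.

```c
#include <arm_neon.h>  /* AArch64 only */

/* Hypothetical clamp parameters: min and max stored contiguously. */
struct clamp_params { float min; float max; };

/* Before: two LD1R loads, one per broadcast vector. */
static inline void load_clamp_ld1r(const struct clamp_params* p,
                                   float32x4_t* vmin, float32x4_t* vmax) {
  *vmin = vld1q_dup_f32(&p->min);  /* LD1R {v.4s}, [x] */
  *vmax = vld1q_dup_f32(&p->max);  /* LD1R {v.4s}, [x] */
}

/* After: one LD2R loads the {min, max} pair and broadcasts each
 * element to all lanes of its respective vector. */
static inline void load_clamp_ld2r(const struct clamp_params* p,
                                   float32x4_t* vmin, float32x4_t* vmax) {
  const float32x4x2_t v = vld2q_dup_f32(&p->min);  /* LD2R {v0.4s, v1.4s}, [x] */
  *vmin = v.val[0];
  *vmax = v.val[1];
}
```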
XNNPACK is a highly optimized library of floating-point neural network inference operators for ARM, WebAssembly, and x86 (SSE2 level) platforms. XNNPACK is not intended for direct use by deep learning practitioners or researchers; instead, it provides low-level performance primitives for accelerating high-level machine learning frameworks, such as MediaPipe, TensorFlow Lite, and TensorFlow.js.
XNNPACK implements a range of common neural network operators.
All operators in XNNPACK support NHWC layout, but additionally allow a custom stride along the Channel dimension. Thus, operators can consume a subset of channels in the input tensor, and produce a subset of channels in the output tensor, providing zero-cost Channel Split and Channel Concatenation operations.
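As a rough illustration (the struct and helper below are hypothetical, not part of the XNNPACK API), a channel stride larger than the visible channel count is what makes the split free: a sub-view only changes the base pointer and channel count, while indexing keeps stepping over the parent tensor's full pixel stride, so no data is copied.

```c
#include <stddef.h>
#include <stdio.h>

/* Hypothetical view of an NHWC tensor. `channel_stride` is the number of
 * floats between consecutive pixels; it may exceed `channels` when the
 * view covers only a subset of a larger tensor's channels. */
struct nhwc_view {
  const float* data;      /* pointer to channel 0 of pixel 0 */
  size_t pixels;          /* batch * height * width */
  size_t channels;        /* channels visible through this view */
  size_t channel_stride;  /* distance between pixels, in elements */
};

/* Zero-cost "Channel Split": a view of `count` channels starting at
 * `first`. Only the base pointer and channel count change; the stride
 * still steps over the parent tensor's full channel extent. */
static struct nhwc_view split_channels(struct nhwc_view t,
                                       size_t first, size_t count) {
  struct nhwc_view v = t;
  v.data = t.data + first;
  v.channels = count;
  return v;
}

int main(void) {
  /* Two pixels, four channels, densely packed (stride == channels). */
  const float tensor[2 * 4] = {0, 1, 2, 3, 10, 11, 12, 13};
  const struct nhwc_view whole = {tensor, 2, 4, 4};

  /* View of channels 2..3 only: an operator given this view (with the
   * parent's stride of 4) consumes just those channels of each pixel. */
  const struct nhwc_view half = split_channels(whole, 2, 2);
  for (size_t p = 0; p < half.pixels; p++)
    for (size_t c = 0; c < half.channels; c++)
      printf("pixel %zu channel %zu: %g\n",
             p, c, half.data[p * half.channel_stride + c]);
  return 0;
}
```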
XNNPACK is based on the QNNPACK library. However, unlike QNNPACK, XNNPACK focuses entirely on floating-point operators, and its API is no longer compatible with QNNPACK.