commit    105170f9eef9d12e93742e1ff5af355c8049bbea
author    Dillon Sharlet <dsharlet@google.com>            Tue Feb 25 17:32:26 2025
committer XNNPACK Team <xnnpack-github-robot@google.com>  Tue Feb 25 17:34:30 2025
tree      20fef017594e49a1f92fb2156e76bd08a5adb007
parent    4b857f39ce1ff8454f916e4e3cb0ab0f7f698c46
Add Tensor<T> helper class for tests

This CL does two things. First, it adds `Tensor<T>`, a helper class for the multi-dimensional arrays that are frequently used in our tests (which currently duplicate extent/stride computations and other buffer-related logic). Second, it modifies a few tests:

- Currently, subgraph tests compare subgraph results to operator results. This changes the tests to check the output directly, without running the operator code.
- Currently, subgraph tests run a single random variation, so getting good coverage requires running the test many times. This changes the subgraph tests to cover many more permutations in a single run.
- Currently, subgraph tests dig into the internal implementation details of subgraphs (e.g. checking `xnn_node_value` state). This makes sense in some cases (e.g. fusion tests), but it is both hard to be certain that this covers real usage, and brittle. IMO, tests should (as much as possible) verify the expected behavior via the APIs that are visible to the user of the thing they are testing. For the subgraph API, that means we should just make sure the subgraph works as expected.
- Currently, subgraph tests are very verbose. IMO this is a problem because it discourages writing tests. This CL adds bfloat16 test coverage for constant_pad (and enables that operation, which previously didn't work for no good reason) with just 3 marginal lines of code, whereas before it would have added several hundred lines of code (copy/paste + modifications).

This change required a few minor cleanups:

- `xnnpack::Buffer<T>` needs to be able to distinguish between "extra bytes" and real data.
- There is now some overlap between `RuntimeTester` and `SubgraphTester`. I think we should deprecate `RuntimeTester` and consolidate everything in `SubgraphTester`, because we can't return `RuntimeTester` from the base class `SubgraphTester` builder methods. This is a minor difficulty, but the reason to keep them separate seems minor too.

PiperOrigin-RevId: 730917832
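For context, here is a minimal sketch of what a `Tensor<T>` test helper of this kind might look like. The class name matches the CL, but the members and methods below are illustrative assumptions, not the actual implementation; the point is that extents, strides, and indexing arithmetic live in one place instead of being duplicated across tests.

```cpp
#include <cassert>
#include <cstddef>
#include <initializer_list>
#include <vector>

// Illustrative sketch only; the real Tensor<T> in XNNPACK's tests may differ.
template <typename T>
class Tensor {
 public:
  explicit Tensor(std::vector<size_t> extents)
      : extents_(std::move(extents)), strides_(extents_.size()) {
    // Compute dense row-major strides: the last dimension is contiguous.
    size_t stride = 1;
    for (size_t i = extents_.size(); i-- > 0;) {
      strides_[i] = stride;
      stride *= extents_[i];
    }
    data_.resize(stride);
  }

  // Element access via multi-dimensional indices.
  T& operator()(std::initializer_list<size_t> indices) {
    assert(indices.size() == extents_.size());
    size_t offset = 0, i = 0;
    for (size_t index : indices) {
      assert(index < extents_[i]);
      offset += index * strides_[i++];
    }
    return data_[offset];
  }

  size_t rank() const { return extents_.size(); }
  const std::vector<size_t>& extents() const { return extents_; }
  T* data() { return data_.data(); }

 private:
  std::vector<size_t> extents_;
  std::vector<size_t> strides_;
  std::vector<T> data_;
};

int main() {
  Tensor<float> t({2, 3, 4});  // e.g. a 2x3x4 test input
  t({1, 2, 3}) = 42.0f;
  assert(t({1, 2, 3}) == 42.0f);
}
```

A dense row-major layout is assumed here; a real helper could also carry custom strides, e.g. for the channel-stride cases described in the README below.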
XNNPACK is a highly optimized library of neural network inference operators for ARM, x86, WebAssembly, and RISC-V platforms. XNNPACK is not intended for direct use by deep learning practitioners and researchers; instead, it provides low-level performance primitives for accelerating high-level machine learning frameworks, such as TensorFlow Lite, TensorFlow.js, PyTorch, ONNX Runtime, and MediaPipe.
XNNPACK implements a broad set of neural network operators.
All operators in XNNPACK support the NHWC layout, but additionally allow a custom stride along the Channel dimension. Thus, operators can consume a subset of channels in the input tensor, and produce a subset of channels in the output tensor, providing zero-cost Channel Split and Channel Concatenation operations.
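As a conceptual illustration of why a custom channel stride makes these operations free (a plain C++ sketch, not the actual XNNPACK API; `NhwcChannelView` is a hypothetical name): an operator that steps between pixels by the underlying tensor's channel count can read or write any contiguous channel sub-range in place.

```cpp
#include <cstddef>
#include <vector>

// Conceptual sketch: a "view" of a channel sub-range of an NHWC tensor.
// channel_stride is the channel count of the underlying tensor, which may
// be larger than the number of channels this view exposes.
struct NhwcChannelView {
  float* base;            // points at the first channel of the slice
  size_t channels;        // channels visible through this view
  size_t channel_stride;  // channels in the underlying tensor

  float& at(size_t pixel, size_t channel) {
    // Consecutive pixels are channel_stride floats apart, so a slice of
    // channels can be processed without copying the tensor.
    return base[pixel * channel_stride + channel];
  }
};

int main() {
  const size_t pixels = 4, channels = 8;
  std::vector<float> nhwc(pixels * channels, 0.0f);

  // "Split" the 8 channels into two 4-channel views at zero cost:
  NhwcChannelView lo{nhwc.data(), 4, channels};
  NhwcChannelView hi{nhwc.data() + 4, 4, channels};

  lo.at(2, 1) = 1.0f;  // writes nhwc[2 * 8 + 1]
  hi.at(2, 1) = 2.0f;  // writes nhwc[2 * 8 + 5]
}
```

Channel Concatenation is the mirror image: two operators write disjoint channel ranges of the same output tensor, so no copy is needed.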
The table below presents single-threaded performance of the XNNPACK library on three generations of MobileNet models and three generations of Pixel phones.
Model | Pixel, ms | Pixel 2, ms | Pixel 3a, ms |
---|---|---|---|
FP32 MobileNet v1 1.0X | 82 | 86 | 88 |
FP32 MobileNet v2 1.0X | 49 | 53 | 55 |
FP32 MobileNet v3 Large | 39 | 42 | 44 |
FP32 MobileNet v3 Small | 12 | 14 | 14 |
The following table presents multi-threaded performance (using as many threads as there are big cores) of the XNNPACK library on three generations of MobileNet models and three generations of Pixel phones.
Model | Pixel, ms | Pixel 2, ms | Pixel 3a, ms |
---|---|---|---|
FP32 MobileNet v1 1.0X | 43 | 27 | 46 |
FP32 MobileNet v2 1.0X | 26 | 18 | 28 |
FP32 MobileNet v3 Large | 22 | 16 | 24 |
FP32 MobileNet v3 Small | 7 | 6 | 8 |
Benchmarked on March 27, 2020 with `end2end_bench --benchmark_min_time=5` on an Android/ARM64 build with Android NDK r21 (`bazel build -c opt --config android_arm64 :end2end_bench`) and neural network models with randomized weights and inputs.
The table below presents multi-threaded performance of the XNNPACK library on three generations of MobileNet models and four generations of Raspberry Pi boards.
Model | RPi Zero W (BCM2835), ms | RPi 2 (BCM2836), ms | RPi 3+ (BCM2837B0), ms | RPi 4 (BCM2711), ms | RPi 4 (BCM2711, ARM64), ms |
---|---|---|---|---|---|
FP32 MobileNet v1 1.0X | 3919 | 302 | 114 | 72 | 77 |
FP32 MobileNet v2 1.0X | 1987 | 191 | 79 | 41 | 46 |
FP32 MobileNet v3 Large | 1658 | 161 | 67 | 38 | 40 |
FP32 MobileNet v3 Small | 474 | 50 | 22 | 13 | 15 |
INT8 MobileNet v1 1.0X | 2589 | 128 | 46 | 29 | 24 |
INT8 MobileNet v2 1.0X | 1495 | 82 | 30 | 20 | 17 |
Benchmarked on Feb 8, 2022 with `end2end-bench --benchmark_min_time=5` on a Raspbian Buster build with CMake (`./scripts/build-local.sh`) and neural network models with randomized weights and inputs. INT8 inference was evaluated with a per-channel quantization schema.
XNNPACK is based on the QNNPACK library. However, the codebases have diverged significantly over time, and the XNNPACK API is no longer compatible with QNNPACK.