XLA

XLA (Accelerated Linear Algebra) is an open-source machine learning (ML) compiler for GPUs, CPUs, and ML accelerators.

The XLA compiler takes models from popular ML frameworks such as PyTorch, TensorFlow, and JAX, and optimizes them for high-performance execution across different hardware platforms including GPUs, CPUs, and ML accelerators.
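
As an illustrative sketch only (not part of this README, and assuming a working JAX install): `jax.jit` compiles a Python function through XLA, which is one way the frameworks above hand models to the compiler. The `predict`, `w`, and `x` names are hypothetical.

```python
import jax
import jax.numpy as jnp

# jit-compiled functions are lowered to XLA, which fuses and optimizes
# the operations for the available backend (CPU, GPU, accelerator).
@jax.jit
def predict(w, x):
    # A toy linear layer followed by a nonlinearity.
    return jnp.tanh(x @ w)

w = jnp.ones((4, 4))
x = jnp.ones((2, 4))
print(predict(w, x))  # The first call triggers XLA compilation.
```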

openxla.org is the project's website.

Get started

If you want to use XLA to compile your ML project, refer to the XLA documentation for your ML framework (PyTorch, TensorFlow, or JAX).

If you're not contributing code to the XLA compiler, you don't need to clone and build this repo. Everything here is intended for XLA contributors who want to develop the compiler and XLA integrators who want to debug or add support for ML frontends and hardware backends.

Contribute

If you'd like to contribute to XLA, review How to Contribute and then see the developer guide.

Contacts

  • For questions, contact the maintainers: maintainers at openxla.org

Resources

Code of Conduct

While under TensorFlow governance, all community spaces for SIG OpenXLA are subject to the TensorFlow Code of Conduct.