tensorflow/compiler/xla/runtime/README.md

# XLA Runtime

XLA runtime is a set of libraries that supports the execution of XLA programs compiled to native executables. It provides user-friendly APIs for calling compiled programs, takes care of passing arguments and returning results according to the expected ABI, implements support for asynchronous tasks, and defines the FFI through which compiled programs call into user-defined callbacks.

If you squint and look at XLA as a programming language like Objective-C, then the XLA runtime is somewhat similar to the Objective-C runtime: a runtime library that supports functionality we do not want to compile. For example, it provides facilities for launching asynchronous tasks in a thread pool, because we do not want to codegen directly on top of the pthreads library.