CrosML: Chrome OS Machine Learning Service

Summary

The Machine Learning (ML) Service provides a common runtime for evaluating machine learning models on device. The service wraps the TensorFlow Lite runtime and provides infrastructure for deployment of trained models. The TFLite runtime runs in a sandboxed process. Chromium communicates with ML Service via a Mojo interface.

How to use ML Service

You need to provide your trained models to ML Service by following these instructions. You can then load and use your model from Chromium using the client library provided at //chromeos/services/machine_learning/public/cpp/.
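
In code, the flow from Chromium is roughly: load the model, create a graph executor, then run inference. The sketch below assumes the ServiceConnection helper from the client library and the Model / GraphExecutor Mojo interfaces defined under mojom/; the exact method names (LoadBuiltinModel vs. an older LoadModel), callback signatures, and the TEST_MODEL id used here are assumptions that change across milestones, so consult the current headers and .mojom files rather than copying this verbatim.

```cpp
// Sketch only: load a built-in model, create a graph executor, run inference.
// Interface names and signatures are assumptions; consult the client library
// headers and the .mojom files for the real API.
#include <string>
#include <vector>

#include "base/bind.h"
#include "base/containers/flat_map.h"
#include "base/optional.h"
#include "chromeos/services/machine_learning/public/cpp/service_connection.h"
#include "chromeos/services/machine_learning/public/mojom/graph_executor.mojom.h"
#include "chromeos/services/machine_learning/public/mojom/machine_learning_service.mojom.h"
#include "chromeos/services/machine_learning/public/mojom/model.mojom.h"
#include "mojo/public/cpp/bindings/remote.h"

namespace mojom = ::chromeos::machine_learning::mojom;
using ::chromeos::machine_learning::ServiceConnection;

// The Remotes are passed in by the caller so they outlive the async
// callbacks; a real client would typically own them as class members.
void LoadAndRunModel(mojo::Remote<mojom::Model>& model,
                     mojo::Remote<mojom::GraphExecutor>& executor) {
  // 1. Ask ML Service to load a model. The daemon resolves the id to a
  //    model file via its metadata (model_metadata.cc).
  auto spec = mojom::BuiltinModelSpec::New(mojom::BuiltinModelId::TEST_MODEL);
  ServiceConnection::GetInstance()->LoadBuiltinModel(
      std::move(spec), model.BindNewPipeAndPassReceiver(),
      base::BindOnce([](mojom::LoadModelResult result) {
        // Handle LoadModelResult::OK or an error.
      }));

  // 2. Create a graph executor for the loaded model.
  model->CreateGraphExecutor(
      executor.BindNewPipeAndPassReceiver(),
      base::BindOnce([](mojom::CreateGraphExecutorResult result) {}));

  // 3. Run inference: named input tensors in, requested output names out.
  base::flat_map<std::string, mojom::TensorPtr> inputs;  // Fill with data.
  std::vector<std::string> output_names = {"output"};
  executor->Execute(
      std::move(inputs), std::move(output_names),
      base::BindOnce([](mojom::ExecuteResult result,
                        base::Optional<std::vector<mojom::TensorPtr>> outputs) {
        // On ExecuteResult::OK, |outputs| holds the output tensors.
      }));
}
```

The three calls correspond one-to-one to the LoadModel, CreateGraphExecutor and Execute request metrics listed below.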

Note: The sandboxed process hosting TFLite models is currently shared between all users of ML Service. If this isn't acceptable from a security perspective for your model, follow this bug about switching ML Service to a separate sandboxed process per loaded model.

Metrics

The following metrics are currently recorded by the daemon process in order to understand its resource costs in the wild:

  • MachineLearningService.MojoConnectionEvent: Success/failure of the D-Bus->Mojo bootstrap.
  • MachineLearningService.PrivateMemoryKb: Private (unshared) memory footprint every 5 minutes.
  • MachineLearningService.PeakPrivateMemoryKb: Peak value of MachineLearningService.PrivateMemoryKb per 24-hour period. Daemon code can also call ml::Metrics::UpdateCumulativeMetricsNow() at any time to take a peak-memory observation and catch short-lived memory usage spikes (see the sketch after this list).
  • MachineLearningService.CpuUsageMilliPercent: Fraction of total CPU resources consumed by the daemon every 5 minutes, in units of milli-percent (1/100,000).
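
The peak-memory hook mentioned above can be used along these lines. Only ml::Metrics::UpdateCumulativeMetricsNow() comes from the daemon (see metrics.h); the handler class, the metrics_ member, and the assumption that the method is called on a daemon-owned instance are invented for this illustration.

```cpp
// Illustration only: a hypothetical daemon-side call site that records a
// short-lived memory spike. ml::Metrics is the class declared in metrics.h;
// everything else here is an assumption made for the sketch.
#include "ml/metrics.h"

class SpikyRequestHandler {
 public:
  explicit SpikyRequestHandler(ml::Metrics* metrics) : metrics_(metrics) {}

  void HandleRequestWithLargeTemporaryBuffer() {
    // ... allocate and use a large temporary buffer ...

    // Take a peak-memory observation before the buffer is released, so the
    // spike is captured by MachineLearningService.PeakPrivateMemoryKb.
    metrics_->UpdateCumulativeMetricsNow();
  }

 private:
  ml::Metrics* const metrics_;  // Not owned.
};
```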

Additional metrics are recorded in order to understand the resource costs of each request:

  • MachineLearningService.|request|.Event: OK/ErrorType of the request.
  • MachineLearningService.|request|.PrivateMemoryDeltaKb: Private (unshared) memory delta caused by the request.
  • MachineLearningService.|request|.ElapsedTimeMicrosec: Time cost of the request.
  • MachineLearningService.|request|.CpuTimeMicrosec: CPU time usage of the request, normalized to a single CPU core; it can therefore exceed ElapsedTimeMicrosec on multi-core systems.

The |request| placeholder above can be one of the following (the sketch after this list shows how the full histogram names are composed):

  • LoadModel
  • CreateGraphExecutor
  • Execute (model inference)
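
To make the naming scheme concrete, here is how the per-request histogram names expand. The helper below is not part of the daemon; only the string layout is taken from the metric names listed above.

```cpp
// Illustration only: composing the per-request histogram names shown above.
#include <string>

std::string RequestHistogramName(const std::string& request,
                                 const std::string& suffix) {
  return "MachineLearningService." + request + "." + suffix;
}

// RequestHistogramName("LoadModel", "Event")
//     -> "MachineLearningService.LoadModel.Event"
// RequestHistogramName("Execute", "ElapsedTimeMicrosec")
//     -> "MachineLearningService.Execute.ElapsedTimeMicrosec"
```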

Design docs