Chrome OS Machine Learning Service

Summary

The Machine Learning (ML) Service provides a common runtime for evaluating machine learning models on device. The service wraps the TensorFlow Lite runtime and provides infrastructure for deployment of trained models. The TFLite runtime runs in a sandboxed process. Chromium communicates with ML Service via a Mojo interface.

How to use ML Service

First, make your trained model available to ML Service; then load and use the model from Chromium via the client library provided at //chromeos/services/machine_learning/public/cpp/. See this doc for more detailed instructions.
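The load-then-use flow can be sketched with a minimal, self-contained mock. The interfaces below (MachineLearningService, Model, GraphExecutor, and the "TEST_MODEL" name) are hypothetical simplifications written for illustration: the real API is asynchronous, callback-based Mojo, and lives under //chromeos/services/machine_learning/public/.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical, synchronous stand-ins for the asynchronous Mojo interfaces.
enum class LoadModelResult { OK, MODEL_SPEC_ERROR };
enum class ExecuteResult { OK, EXECUTION_ERROR };

class GraphExecutor {
 public:
  // One inference call; real tensors are structured, flattened floats here.
  ExecuteResult Execute(const std::vector<float>& input,
                        std::vector<float>* output) {
    output->assign(input.begin(), input.end());  // Identity placeholder model.
    return ExecuteResult::OK;
  }
};

class Model {
 public:
  bool CreateGraphExecutor(GraphExecutor* out) {
    *out = GraphExecutor();
    return true;
  }
};

class MachineLearningService {
 public:
  // Mirrors the LoadBuiltinModel step: look the model up by name.
  LoadModelResult LoadBuiltinModel(const std::string& name, Model* out) {
    if (name != "TEST_MODEL") return LoadModelResult::MODEL_SPEC_ERROR;
    *out = Model();
    return LoadModelResult::OK;
  }
};

// Demonstrates the three-step flow: load model, create executor, execute.
ExecuteResult RunOnce(const std::vector<float>& input,
                      std::vector<float>* output) {
  MachineLearningService service;
  Model model;
  if (service.LoadBuiltinModel("TEST_MODEL", &model) != LoadModelResult::OK)
    return ExecuteResult::EXECUTION_ERROR;
  GraphExecutor executor;
  model.CreateGraphExecutor(&executor);
  return executor.Execute(input, output);
}
```

In the real client library each of these three steps is a separate Mojo call whose result arrives in a callback; the synchronous shape above only illustrates the ordering.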

Optional: Later, if you find a need for it, you can add your model to the ML Service internals page.

Note: The sandboxed process hosting TFLite models is currently shared between all users of ML Service. If this isn't acceptable from a security perspective for your model, follow this bug about switching ML Service to having a separate sandboxed process per loaded model.

Metrics

The following metrics are currently recorded by the daemon process in order to understand its resource costs in the wild:

  • MachineLearningService.MojoConnectionEvent: Success/failure of the D-Bus->Mojo bootstrap.
  • MachineLearningService.TotalMemoryKb: Total (shared+unshared) memory footprint every 5 minutes.
  • MachineLearningService.PeakTotalMemoryKb: Peak value of MachineLearningService.TotalMemoryKb per 24 hour period. Daemon code can also call ml::Metrics::UpdateCumulativeMetricsNow() at any time to record a peak-memory observation, catching short-lived memory usage spikes.
  • MachineLearningService.CpuUsageMilliPercent: Fraction of total CPU resources consumed by the daemon every 5 minutes, in units of milli-percent (1/100,000).
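The milli-percent unit can be made concrete with a small sketch. The helper below is hypothetical (not the daemon's actual code) and just shows the conversion: 1 recorded unit = 1/100,000 of total CPU, so 15% CPU usage is recorded as 15,000.

```cpp
#include <cstdint>

// Hypothetical helper illustrating the unit used by
// MachineLearningService.CpuUsageMilliPercent.
// 1 milli-percent = 1/1000 of a percent = 1/100,000 of total CPU.
int64_t CpuUsageMilliPercent(int64_t cpu_time_usec, int64_t wall_time_usec) {
  if (wall_time_usec <= 0) return 0;
  // percent = 100 * cpu/wall; milli-percent = 1000 * percent.
  return (100 * 1000 * cpu_time_usec) / wall_time_usec;
}
```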

The following metrics are additionally recorded per request, in order to understand the resource costs of each request for a particular model:

  • MachineLearningService.|MetricsModelName|.|request|.Event: OK/ErrorType of the request.
  • MachineLearningService.|MetricsModelName|.|request|.TotalMemoryDeltaKb: Total (shared+unshared) memory delta caused by the request.
  • MachineLearningService.|MetricsModelName|.|request|.CpuTimeMicrosec: CPU time usage of the request, which is scaled to one CPU core, i.e. the units are CPU-core*microsec (10 CPU cores for 1 microsec = 1 CPU core for 10 microsec = recorded value of 10).

|MetricsModelName| is specified in the model's metadata for builtin models and is specified in |FlatBufferModelSpec| by the client for flatbuffer models. The above |request| can be one of the following:

  • LoadModelResult
  • CreateGraphExecutorResult
  • ExecuteResult (model inference)
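Putting the pieces together, a histogram name is the service prefix, the model name, the request, and a suffix joined by dots. The helper below is a hypothetical illustration of that composition (the real naming logic lives in request_metrics.h / request_metrics.cc), and "MyModel" is a made-up model name.

```cpp
#include <string>

// Hypothetical sketch of how the per-model, per-request histogram names
// described above are composed, e.g.
// "MachineLearningService.MyModel.ExecuteResult.Event".
std::string HistogramName(const std::string& metrics_model_name,
                          const std::string& request,
                          const std::string& suffix) {
  return "MachineLearningService." + metrics_model_name + "." + request +
         "." + suffix;
}
```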

The request name “LoadModelResult” is used regardless of whether the model is loaded by |LoadBuiltinModel| or by |LoadFlatBufferModel|. This is unambiguous because a particular model is only ever loaded by one of the two, never both.

There is also an enum histogram, “MachineLearningService.LoadModelResult”, which records a generic model specification error when a |LoadBuiltinModel| or |LoadFlatBufferModel| request names an unknown model.

Original design docs

Note that aspects of the design may have evolved since the original design docs were written.