
Performance tools

This directory contains a variety of command line tools that can be used to run benchmarks, interact with speed services, and manage performance waterfall configurations.

Note that you can also read the higher-level Chrome Speed documentation to learn more about the team organization and, in particular, the top-level view of How Chrome Measures Performance.


run_benchmark

This command runs benchmarks defined in the chromium repository, specifically in tools/perf/benchmarks. If you need it, documentation is available on how to run benchmarks locally and how to properly set up your device.
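For instance, a typical local invocation might look like the sketch below; the benchmark name and `--browser` value are illustrative placeholders, so check the tool's own help output for what your checkout actually supports:

```shell
# From the root of a chromium checkout. The benchmark name and
# --browser values below are illustrative placeholders.

# List the benchmarks available in this checkout.
tools/perf/list_benchmarks

# Run a single benchmark against a locally built browser.
tools/perf/run_benchmark speedometer2 --browser=release

# Or run it against an installed browser channel instead.
tools/perf/run_benchmark speedometer2 --browser=canary
```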


update_wpr

A helper script to automate various tasks related to updating the Web Page Recordings (WPR) for our benchmarks. It can help create new recordings from live websites, replay them to make sure they work, upload them to cloud storage, and finally send a CL for review with the new recordings.
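The record/replay/upload/review workflow it automates might look roughly as follows; note that the subcommand names and the story name in this sketch are hypothetical placeholders for the steps described above, not the script's actual interface:

```shell
# Hypothetical sketch of the workflow; subcommand and story names
# are placeholders, not the script's real interface.
tools/perf/update_wpr record my_story    # record from the live site
tools/perf/update_wpr replay my_story    # replay to verify it works
tools/perf/update_wpr upload my_story    # upload to cloud storage
tools/perf/update_wpr review my_story    # send a CL with the result
```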


pinpoint_cli

A command line interface to the pinpoint service. It allows creating new jobs, checking the status of jobs, and fetching their measurements as CSV files.
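Once measurements have been fetched as CSV, they can be post-processed with ordinary tools. A minimal sketch, assuming a hypothetical column layout of `job_id`, `metric`, `value` (the columns actually emitted may differ):

```python
import csv
import io
import statistics

# Hypothetical CSV content standing in for a pinpoint_cli fetch;
# the real column layout may differ.
raw = """job_id,metric,value
abc123,Total:duration,104.2
abc123,Total:duration,98.7
abc123,Total:duration,101.1
"""

rows = list(csv.DictReader(io.StringIO(raw)))
values = [float(r["value"]) for r in rows if r["metric"] == "Total:duration"]
print(f"n={len(values)} mean={statistics.mean(values):.1f}")
# → n=3 mean=101.3
```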


flakiness_cli

A command line interface to the flakiness dashboard.


soundwave

Fetches data from the Chrome Performance Dashboard and stores it locally in a SQLite database for further analysis and processing. It also allows defining studies, i.e. pre-sets of measurements a team is interested in tracking, and uploads them to cloud storage to visualize with the help of Data Studio. This currently backs the v8 and health dashboards.
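Because the data lands in a plain SQLite file, it can be inspected directly with any SQLite client. A self-contained sketch using an in-memory database with a hypothetical `timeseries` table (the actual schema is defined by the tool itself):

```python
import sqlite3

# Toy in-memory database mirroring a *hypothetical* soundwave-like
# schema; the real table and column names are defined by the tool.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE timeseries (test_path TEXT, revision INT, value REAL)")
con.executemany(
    "INSERT INTO timeseries VALUES (?, ?, ?)",
    [("v8/JetStream", 1000, 250.0),
     ("v8/JetStream", 1001, 248.5),
     ("v8/Octane", 1000, 30000.0)],
)

# Average value per test over the stored revisions.
for test_path, avg in con.execute(
        "SELECT test_path, AVG(value) FROM timeseries GROUP BY test_path"):
    print(test_path, round(avg, 2))
```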


pinboard

Allows scheduling daily pinpoint jobs to compare measurements with and without a patch applied. This is useful for teams developing a new feature behind a flag who want to track the effects on performance as development of their feature progresses. Processed data for relevant measurements is uploaded to cloud storage, where it can be read by Data Studio. This also backs data displayed on the v8 dashboard.
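The core comparison each daily job performs can be illustrated with a toy computation; the sample numbers below are made up and stand in for repeated runs of one measurement with and without the patch:

```python
import statistics

# Made-up samples of the same measurement with and without a patch.
without_patch = [102.0, 99.5, 101.0, 100.5]
with_patch = [97.0, 96.5, 98.0, 97.5]

base = statistics.mean(without_patch)
exp = statistics.mean(with_patch)
delta_pct = 100.0 * (exp - base) / base
print(f"without={base:.2f} with={exp:.2f} delta={delta_pct:+.2f}%")
# → without=100.75 with=97.25 delta=-3.47%
```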