If you just want Telemetry, cloning the catapult repository should be enough. If you also want the Chrome benchmarks built with Telemetry, get the latest Chromium checkout. If you're running on Mac OS X, you're all set! For Windows, Linux, Android, or ChromeOS, read on.
Some benchmarks require you to have pywin32. Be sure to install a build that matches the version and bitness of your Python installation.
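To check which bitness your Python has before picking a pywin32 build, a quick sanity check (the interpreter name `python3` is an assumption; use whatever command launches your Python):

```shell
# Print the pointer size in bits: 32 on 32-bit Python, 64 on 64-bit Python.
python3 -c "import struct; print(struct.calcsize('P') * 8)"
# Then install the matching pywin32 build, e.g. on Windows:
# python -m pip install pywin32
```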
Telemetry on Linux scans for attached Android devices with adb. The adb binary it includes is 32-bit, so on 64-bit machines you need to install the libstdc++6:i386 package.
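On Debian/Ubuntu-style systems (an assumption; adjust for your distribution), the 32-bit C++ runtime can be installed with:

```shell
# Enable the i386 architecture and install the 32-bit C++ runtime
# so the bundled 32-bit adb binary can run on a 64-bit machine.
sudo dpkg --add-architecture i386
sudo apt-get update
sudo apt-get install libstdc++6:i386
```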
Running on Android is supported with a Linux host. Windows and Mac OS X are not yet supported. There are also a few additional steps to set up:

1. Telemetry requires a rooted device. Run `adb root`. Sometimes you may also need to run `adb remount`.
2. If you are unable to install a "userdebug" build of Android, you can try running benchmarks with the `--compatibility-mode=dont-require-rooted-device` switch; however, this configuration may not be supported and you may run into errors.
3. To target a specific device, find its name with `adb devices` and pass it to Telemetry via `--device=<device_name>`.

For ChromeOS, see Running Telemetry on ChromeOS.
Telemetry benchmarks can be run with run_benchmark.
In the Chromium source tree, this is located at `src/tools/perf/run_benchmark`.
List the available benchmarks with `telemetry/run_benchmark list`.
Here's an example for running a particular benchmark:
src/tools/perf/run_benchmark blink_perf.css --browser=stable
To list available browsers, use:
src/tools/perf/run_benchmark --browser=list
For ease of use, you can use default system browsers on desktop:
src/tools/perf/run_benchmark blink_perf.css --browser=system
and on Android:
src/tools/perf/run_benchmark blink_perf.css --browser=android-system-chrome
If you're running Telemetry from within a Chromium checkout, the release and debug browsers are what's built in `out/Release` and `out/Debug`, respectively.
To run a specific browser executable:
src/tools/perf/run_benchmark blink_perf.css --browser=exact --browser-executable=/path/to/binary
To run on a Chromebook:
src/tools/perf/run_benchmark blink_perf.css --browser=cros-chrome --remote=[ip_address]
To see all options, run:
src/tools/perf/run_benchmark run --help
Use --pageset-repeat to run the benchmark repeatedly. For example:
src/tools/perf/run_benchmark blink_perf.css --pageset-repeat=30
Use `--run-abridged-story-set` to run a shortened version of a benchmark with a representative subset of its stories. (Note that some benchmarks do not have an abridged version yet; instructions for abridging a benchmark are here.) Example:
src/tools/perf/run_benchmark rendering.desktop --run-abridged-story-set
If you want to re-generate the HTML results and add a label, you can do this locally with the `--reset-results --results-label="foo"` flags:
src/tools/perf/run_benchmark blink_perf.css --reset-results --results-label="foo"
src/tools/perf/run_benchmark some_benchmark --browser-executable=path/to/version/1 --reset-results --results-label="Version 1"
src/tools/perf/run_benchmark some_benchmark --browser-executable=path/to/version/2 --results-label="Version 2"
The results will be written to the `results.html` file in the same location as the `run_benchmark` script.