Telemetry is the performance testing framework used by Chrome. It allows you to perform arbitrary actions on a set of web pages (or any Android application!) and report metrics about it. The framework abstracts:
- Launching a browser with arbitrary flags on any platform.
- Opening a tab and navigating to the page under test.
- Launching an Android application with intents through ADB.
- Fetching data via the Inspector timeline and traces.
- Using Web Page Replay to cache real-world websites so they don’t change when used in benchmarks.
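Web Page Replay's core idea — record responses from a live site once, then serve them back unchanged on every subsequent run — can be illustrated with a toy cache. This is a sketch of the concept only, not the real tool, which operates at the HTTP level; all names here are made up:

```python
class ToyReplayCache:
    """Record-once, replay-forever: a toy analogue of Web Page Replay."""
    def __init__(self, live_fetch):
        self._live_fetch = live_fetch
        self._recorded = {}

    def record(self, url):
        # Hit the live site exactly once and store the response.
        self._recorded[url] = self._live_fetch(url)

    def fetch(self, url):
        # During replay, never hit the live site: results are repeatable.
        return self._recorded[url]

# Simulate a live site whose content changes between visits:
visits = {'n': 0}
def flaky_live_fetch(url):
    visits['n'] += 1
    return 'content-v%d of %s' % (visits['n'], url)

cache = ToyReplayCache(flaky_live_fetch)
cache.record('https://example.com')
first = cache.fetch('https://example.com')
second = cache.fetch('https://example.com')
print(first == second)  # replayed responses are identical
```

Because the benchmark only ever sees the recorded response, the page under test cannot drift between runs.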
Design Principles
- Write one performance test that runs on major platforms - Windows, Mac, Linux, Chrome OS, and Android for both Chrome and ContentShell.
- Run on browser binaries, without a full Chromium checkout, and without having to build the browser yourself.
- Use Web Page Replay to get repeatable test results.
- Clean architecture for writing benchmarks that keeps measurements and use cases separate.
Telemetry is designed for measuring performance rather than checking correctness. If you want to check for correctness, browser tests are your friend.
If you are a Chromium developer looking to add a new Telemetry benchmark to
src/tools/perf/, please make sure to read our Benchmark Policy first.
Telemetry provides two major groups of functionality: test automation and data collection.
The test automation facilities of Telemetry provide Python wrappers for a number of different system concepts.
- Platforms use a variety of libraries & tools to abstract away OS-specific logic.
- Browser wraps Chrome's DevTools Remote Debugging Protocol to perform actions and extract information from the browser.
- Android App is a Python wrapper around
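The DevTools Remote Debugging Protocol that the Browser wrapper builds on is JSON-based: each command is an object carrying an id, a method name, and parameters. A minimal sketch of that framing (plain Python, not Telemetry's actual internals; the helper name is made up):

```python
import json

def devtools_command(msg_id, method, params=None):
    """Frame a DevTools protocol command as the JSON text that gets sent
    over the browser's remote debugging websocket."""
    return json.dumps(
        {'id': msg_id, 'method': method, 'params': params or {}},
        sort_keys=True)

# The kind of command used to navigate a tab to the page under test:
msg = devtools_command(1, 'Page.navigate', {'url': 'https://example.com'})
print(msg)
```

The browser replies with a JSON object echoing the same id, which is how responses are matched to commands.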
The Telemetry framework lives in
src/third_party/catapult/telemetry/, and performance benchmarks that use Telemetry live in
src/tools/perf/.
Telemetry offers a framework for collecting metrics that quantify the performance of automated actions in terms of benchmarks, measurements, and story sets.
- A benchmark combines a measurement with a story set, and optionally a set of browser options.
- We strongly discourage benchmark authors from using command-line flags to specify the behavior of benchmarks, since benchmarks should be cross-platform.
- Benchmarks are discovered and run by the benchmark runner, which is wrapped by scripts like
run_benchmark.
- A measurement (called
StoryTest in the code) is responsible for setting up and tearing down the testing platform, and for collecting metrics that quantify the application scenario under test.
- Measurements need to work with all story sets, to provide consistency and prevent benchmark rot.
- You probably don't need to override
StoryTest (see “Timeline Based Measurement” below). If you think you do, please talk to us.
- A story set is a set of stories together with a shared state that describes application-level configuration options.
- A metric describes how to collect data about the story run and compute results.
- New metrics should generally be timeline-based.
- Metrics can specify many different types of results, including numbers, histograms, traces, and failures.
- Timeline Based Measurement is a built-in
StoryTest that runs all available timeline-based metrics, and benchmarks that use it can filter relevant results.
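The separation described above — story sets declare what to exercise, measurements declare how to collect data, and a benchmark merely pairs the two — can be sketched with a toy model. These are not the real Telemetry classes (the actual names, signatures, and behavior differ); the fake load_time_ms metric exists only to make the example runnable:

```python
class StorySet:
    """Declares the scenarios (stories) to run; knows nothing about metrics."""
    def __init__(self, stories):
        self.stories = list(stories)

class Measurement:
    """Declares how to collect data for a story; written to work with any
    story set, which is what keeps benchmarks from rotting."""
    def measure(self, story):
        # A real StoryTest would drive the browser; here we fake a metric.
        return {'story': story, 'load_time_ms': len(story) * 10}

class Benchmark:
    """Pairs one measurement with one story set."""
    def __init__(self, measurement, story_set):
        self.measurement = measurement
        self.story_set = story_set

    def run(self):
        return [self.measurement.measure(s) for s in self.story_set.stories]

results = Benchmark(Measurement(), StorySet(['a.com', 'b.org'])).run()
print(results)
```

Because neither class references the other's internals, the same measurement can be reused across story sets and vice versa — the "clean architecture" the design principles call for.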
Contact Us or Follow Along
If you have questions, please email email@example.com.
You can keep up with Telemetry related discussions by joining the telemetry group.
Frequently Asked Questions
I get an error when I try to use recorded story sets.
The recordings are not included in the Chromium source tree. If you are a Google partner, run
gsutil config to authenticate, then try running the test again. If you don't have
gsutil installed on your machine, you can find it in
If you are not a Google partner, you can run on live sites with `--use-live-sites` or record your own story set archive.
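For example (a hypothetical invocation — the benchmark name and --browser value are placeholders, not taken from this document):

```shell
# run_benchmark lives under src/tools/perf/ in a Chromium checkout.
cd src/tools/perf
./run_benchmark some_benchmark --browser=system --use-live-sites
```

Note that live-site runs trade the repeatability of recorded archives for convenience, so results may vary between runs.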
I get mysterious errors about device_forwarder failing.
Your forwarder binary may be outdated. If you have built the forwarder in src/out, that one will be used; if there isn't anything there, Telemetry will default to downloading a pre-built binary. Try re-building the forwarder, or alternatively wiping the contents of
src/out/ and running
run_benchmark, which should download the latest binary.
I'm having problems with keychain prompts on Mac.
Make sure that your keychain is correctly configured.