Telemetry

Telemetry is the performance testing framework used by Chrome. It allows you to perform arbitrary actions on a set of web pages (or any Android application!) and report metrics about them. The framework abstracts:

  • Launching a browser with arbitrary flags on any platform.
  • Opening a tab and navigating to the page under test.
  • Launching an Android application with intents through ADB.
  • Fetching data via the Inspector timeline and traces.
  • Using Web Page Replay to cache real-world websites so they don’t change when used in benchmarks.

How to run the unit tests

Run catapult/telemetry/bin/run_tests --help and see the usage info at the top.

Running tests on ChromeOS?

See this page.

Design Principles

  • Write one performance test that runs on all major platforms (Windows, Mac, Linux, Chrome OS, and Android) for both Chrome and ContentShell.
  • Run on browser binaries, without a full Chromium checkout, and without having to build the browser yourself.
  • Use Web Page Replay to get repeatable test results.
  • Provide a clean architecture for writing benchmarks that keeps measurements and use cases separate.

Telemetry is designed for measuring performance rather than checking correctness. If you want to check for correctness, browser tests are your friend.

If you are a Chromium developer looking to add a new Telemetry benchmark to src/tools/perf/, please make sure to read our Benchmark Policy first.

Code Concepts

Telemetry provides two major groups of functionality: test automation and data collection.

Test Automation

The test automation facilities of Telemetry provide Python wrappers for a number of different system concepts.

  • Platforms use a variety of libraries and tools to abstract away OS-specific logic.
  • Browser wraps Chrome's DevTools Remote Debugging Protocol to perform actions in the browser and extract information from it (see the sketch below).
  • Android App is a Python wrapper around adb shell.

The Telemetry framework lives in src/third_party/catapult/telemetry/ and performance benchmarks that use Telemetry live in src/tools/perf/.
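To make the Browser wrapper concrete, here is a minimal story sketch whose interactions are issued through that DevTools layer. It is an illustration only, assuming current Telemetry APIs; the class name and URL are hypothetical.

    # A minimal story sketch (assumes current Telemetry APIs; the class name
    # and URL are placeholders). Each action_runner call below is dispatched
    # to the browser through Telemetry's Browser wrapper over DevTools.
    from telemetry.page import page as page_module


    class ExamplePage(page_module.Page):

      def __init__(self, page_set):
        super(ExamplePage, self).__init__(
            url='https://example.com', page_set=page_set, name='example')

      def RunPageInteractions(self, action_runner):
        # Wait until the document has finished loading.
        action_runner.WaitForJavaScriptCondition(
            'document.readyState === "complete"')
        # Drive a synthetic scroll gesture in the renderer.
        action_runner.ScrollPage()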

Data Collection

Telemetry offers a framework for collecting metrics that quantify the performance of automated actions in terms of benchmarks, measurements, and story sets; a short sketch that ties these pieces together follows the list below.

  • A benchmark combines a measurement together with a story set, and optionally a set of browser options.
    • We strongly discourage benchmark authors from using command-line flags to specify the behavior of benchmarks, since benchmarks should be cross-platform.
    • Benchmarks are discovered and run by the benchmark runner, which is wrapped by scripts like run_benchmark in tools/perf.
  • A measurement (called StoryTest in the code) is responsible for setting up and tearing down the testing platform, and for collecting metrics that quantify the application scenario under test.
    • Measurements need to work with all story sets, to provide consistency and prevent benchmark rot.
    • You probably don't need to override StoryTest (see “Timeline Based Measurement” below). If you think you do, please talk to us.
  • A story set is a set of stories together with a shared state that describes application-level configuration options.
  • A story is an application scenario and a set of actions to run in that scenario. In the typical Chromium use case, this will be a web page together with actions like scrolling, clicking, or executing JavaScript.
  • A metric describes how to collect data about the story run and compute results.
    • New metrics should generally be timeline-based.
    • Metrics can specify many different types of results, including numbers, histograms, traces, and failures.
  • Timeline Based Measurement is a built-in StoryTest that runs all available timeline-based metrics, and benchmarks that use it can filter relevant results.
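Putting these concepts together, the sketch below wires a story set into a benchmark that uses Timeline Based Measurement. It is a minimal illustration that assumes current Telemetry APIs; the benchmark name, URL, archive path, and metric name are hypothetical placeholders.

    # Sketch of a story set plus a benchmark using Timeline Based Measurement.
    # The class names, URL, archive path, and metric name are placeholders.
    from telemetry import benchmark
    from telemetry import story
    from telemetry.page import page as page_module
    from telemetry.web_perf import timeline_based_measurement


    class ExampleStorySet(story.StorySet):
      """Stories plus shared configuration (e.g. the WPR archive to use)."""

      def __init__(self):
        super(ExampleStorySet, self).__init__(
            archive_data_file='data/example.json',
            cloud_storage_bucket=story.PARTNER_BUCKET)
        self.AddStory(page_module.Page(
            'https://example.com', page_set=self, name='load_example'))


    class ExampleBenchmark(benchmark.Benchmark):
      """Combines the story set above with Timeline Based Measurement."""

      @classmethod
      def Name(cls):
        return 'example.timeline_benchmark'

      def CreateStorySet(self, options):
        return ExampleStorySet()

      def SetExtraBrowserOptions(self, options):
        # Browser flags are fine here; flags that change benchmark behavior
        # are discouraged (see above).
        options.AppendExtraBrowserArgs('--enable-gpu-benchmarking')

      def CreateCoreTimelineBasedMeasurementOptions(self):
        # Record a trace and run the listed timeline-based metrics over it.
        tbm_options = timeline_based_measurement.Options()
        tbm_options.SetTimelineBasedMetrics(['exampleMetric'])
        return tbm_options

Dropped into src/tools/perf/, a class like this is picked up by the benchmark runner and can be invoked through run_benchmark.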

Contact Us or Follow Along

If you have questions, please email telemetry@chromium.org.

You can keep up with Telemetry-related discussions by joining the telemetry group.

Frequently Asked Questions

I get an error when I try to use recorded story sets.

The recordings are not included in the Chromium source tree. If you are a Google partner, run gsutil config to authenticate, then try running the test again. If you don't have gsutil installed on your machine, you can find it in build/third_party/gsutil/gsutil.

If you are not a Google partner, you can run on live sites with --use-live-sites or record your own story set archive.

I get mysterious errors about device_forwarder failing.

Your forwarder binary may be outdated. If you have built the forwarder in src/out, that one will be used; if there isn't anything there, Telemetry will default to downloading a pre-built binary. Try rebuilding the forwarder, or alternatively wipe the contents of src/out/ and run run_benchmark, which should download the latest binary.

I'm having problems with keychain prompts on Mac.

Make sure that your keychain is correctly configured.