This document provides an overview of the benchmarks used to monitor Chrome’s graphics performance. It includes information on what benchmarks are available, how to run them, how to interpret their results, and how to add more tests to the benchmarks.
The Telemetry rendering benchmarks measure Chrome’s rendering performance in different scenarios.
There are currently two rendering benchmarks:
* `rendering.desktop`: a desktop-only benchmark that measures performance on both real-world websites and special cases (e.g. pages that are difficult to zoom).
* `rendering.mobile`: a mobile-only equivalent of `rendering.desktop`.

Note: Some pages are used for `rendering.desktop` but not `rendering.mobile`, and vice versa, because some pages are only meant to measure behavior on one platform (for instance, dragging on desktop). This is indicated with the `SUPPORTED_PLATFORMS` attribute in the page class.
These benchmarks are run on the Chromium Perf Waterfall, with results reported on the Chrome Performance Dashboard.
Rendering metrics are written in JavaScript. The full list of metrics and their meanings should be documented in the files where they are defined. The metrics include:

* `cpu_time_per_frame` and `tasks_per_frame`
* `mean_pixels_approximated`
* `queueing_durations`
First, set up your device by following the instructions here. You can then run Telemetry benchmarks locally using:

```
./tools/perf/run_benchmark <benchmark_name> --browser=<browser>
```

For `<benchmark_name>`, use either `rendering.desktop` or `rendering.mobile`.
As the pages in the rendering page sets were merged from a variety of previous page sets, they have corresponding tags. To run the benchmark only for pages with a certain tag, add this flag:

```
--story-tag-filter=<tag name>
```

For example, if the old benchmark was `smoothness.tough_scrolling_cases`, you would now use `--story-tag-filter=tough_scrolling` for the rendering benchmarks. A list of all rendering tags can be found here. You can also find out which tags a page uses by looking at the `TAGS` attribute of its class. Additionally, these same tags can be used to filter the metrics results in the generated `results.html` file.
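For instance, a page whose class sets `TAGS` as in the sketch below would be selected by `--story-tag-filter=tough_scrolling`. This is a hedged illustration only: the module paths and the `TOUGH_SCROLLING` constant are assumptions based on the tag name above, not verbatim from the source tree.

```python
# Assumed layout: RenderingStory in rendering_story.py and tag constants
# in story_tags.py, both under tools/perf/page_sets/rendering/.
from page_sets.rendering import rendering_story
from page_sets.rendering import story_tags


class ExampleScrollingPage(rendering_story.RenderingStory):
  # Hypothetical page; BASE_NAME and URL are placeholders.
  BASE_NAME = 'example_scrolling_page'
  URL = 'https://www.example.com'
  # A page tagged like this is picked up by --story-tag-filter=tough_scrolling.
  TAGS = [story_tags.TOUGH_SCROLLING]
```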
Other useful options for the command are:
* `--pageset-repeat [n]`: override the default number of repetitions.
* `--reset-results`: clear results from any previous benchmark runs in the `results.html` file.
* `--results-label [label]`: give meaningful names to your benchmark runs, to make it easier to compare them.

For more consistent results and to identify whether your change has resulted in a rendering regression, you can run the rendering benchmarks using a perf try job. In order to do this, you need to first upload a CL, which allows results to be generated with and without your patch.
If your changes have resulted in a regression in a metric that is monitored by perf alerts, you will be assigned a bug. It will contain information about the specific metric and by how much it regressed, as well as a Pinpoint link that will help you investigate further; for instance, you will be able to obtain traces from the try bot runs. This link contains detailed steps on how to deal with regressions. Rendering metrics use trace events logged under the `benchmark` and `toplevel` trace categories.
If you already have a trace and want to debug the metric computation part, you can just run the metric:

```
tracing/bin/run_metric <path-to-trace-file> renderingMetric
```
If you are specifically investigating a regression related to janks, this document may be useful.
New rendering pages should be added to the `./tools/perf/page_sets/rendering` folder:
Pages inherit from the `RenderingStory` class. If adding a group of new pages, create an abstract class with the following attributes:

* `ABSTRACT_STORY = True`
* `TAGS`: a list of tags, which can be added to story_tags.py if necessary
* `SUPPORTED_PLATFORMS` (optional): if the page should only be mobile or desktop

Child classes should specify these attributes:

* `BASE_NAME`: name of the page
* `URL`: URL of the page

All pages in the rendering benchmark need to use `RenderingSharedState` as the `shared_page_state_class`, since this has to be consistent across pages in a page set. Individual pages can also specify `extra_browser_args` in order to set specific flags.
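Expanding the earlier sketch, here is a minimal illustration of an abstract story group and one concrete page. It assumes the same illustrative module names as above; the tag constant and page names are hypothetical, and the existing pages in the folder are the authoritative reference.

```python
# Illustrative only: module paths, the tag constant, and page names are
# assumptions, not the exact contents of tools/perf/page_sets/rendering/.
from page_sets.rendering import rendering_story
from page_sets.rendering import story_tags


class ToughScrollingStoryBase(rendering_story.RenderingStory):
  """Hypothetical abstract base for a group of scrolling pages."""
  ABSTRACT_STORY = True
  TAGS = [story_tags.TOUGH_SCROLLING]  # add a new tag to story_tags.py if needed
  # SUPPORTED_PLATFORMS could be set here if the group is desktop- or
  # mobile-only; it is omitted in this sketch.


class ExampleScrollingPage(ToughScrollingStoryBase):
  """Hypothetical concrete page in the group."""
  BASE_NAME = 'example_scrolling_page'
  URL = 'https://www.example.com'
  # RenderingStory is expected to use RenderingSharedState as the
  # shared_page_state_class; pass extra_browser_args here if a page needs
  # specific browser flags.
```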
After adding the page, record it and upload it to cloud storage using these instructions.
This will modify the `data/rendering_desktop.json` or `data/rendering_mobile.json` files and generate `.sha1` files, which should be included in the CL.
If more pages need to be merged into the rendering page sets, please see this guide on how to do so.