Telemetry: Diagnosing Test Failures

If you're seeing a test failure on the bots and are unable to diagnose the issue from the test log, here are some steps to help debug the issue.

Reproducing the Issue

Determining how Telemetry was invoked

The command line used to invoke Telemetry can be found in the log for the failing build step. The output is very verbose, but if you search for `run_benchmark` you should be able to find it. Look for a command that resembles the following:

/tools/perf/run_benchmark -v --output-format=chartjson --upload-results blink_perf.shadow_dom --browser=reference --output-trace-tag=_ref
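If you have a Chromium checkout, you can usually reproduce the failure by running the same command from the source root with a browser you have available. A minimal sketch (copy the exact benchmark name and flags from your failing build's log rather than this example):

tools/perf/run_benchmark -v blink_perf.shadow_dom --browser=release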

Running a Windows VM

Instructions on setting up a Windows VM on a Linux host can be found here (Googlers only). For instructions on running Telemetry locally on Windows, see here.

Diagnosing on the trybots

Reproducing a failure locally is the most desirable option, both in terms of speed and ease of debugging. If the failure only occurs on an OS you don't have access to, or only reproduces on a specific bot, you may need to access that bot directly. Information on accessing a trybot remotely can be found here (internal only).

You can find the name of the trybot the test is failing on by looking at the “BuildSlave” section of the test run (build58-a1 in the screenshot below):

[Screenshot: the BuildSlave field of the test run results, showing build58-a1]

Another option is to use the performance trybots to try a patch with extra diagnostics.

Tips on Diagnosis

  • Telemetry prints local variables on test failure and will attempt to print a stack trace for the browser process if it crashes; this should be visible in the buildbot output.
  • If Telemetry is wedged, you can send it a SIGUSR1 signal (`kill -SIGUSR1 <pid>` on a POSIX system), which makes it print its current stack trace (search for InstallStackDumpOnSigusr1() to see the code behind this).
  • Consider adding `logging.info()` messages to the code to print diagnostic information (a sketch follows this list). The bots run Telemetry with the `-v` option, as in the command above, so these messages will be visible in the build output. They can be sent to the performance trybots or committed and reverted afterwards, but consider leaving in messages that might help diagnose similar issues in the future; if left in, beware of spamming the console.
  • As the benchmark runs, DevTools can be used to examine the state of the running page.
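
As an example, here is a minimal sketch of the kind of throwaway diagnostic logging described above, assuming a Telemetry tab object; the helper name and messages are placeholders, not part of Telemetry itself:

import logging

def _LogPageState(tab):
  # Hypothetical helper: dump a little page state while debugging.
  # The bots pass '-v', which enables INFO-level logging, so these
  # messages show up in the build output.
  logging.info('Current URL: %s', tab.EvaluateJavaScript('document.URL'))
  logging.info('Ready state: %s', tab.EvaluateJavaScript('document.readyState'))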

Useful Telemetry command line flags

  • `--browser=`: Change the browser used; `--browser=list` prints all the browsers that Telemetry can see, and passing a browser name runs that browser, e.g. `--browser=release`.
  • `--repeat=`: Repeats the test N times. Note that flaky tests might fail after repetition, e.g. `--repeat=5`.
  • `--story-filter=`: Only run pages from the pageset that match the given regex; useful for making test runs faster when a test only fails on a specific page, e.g. `--story-filter=flickr`.
  • `--story-filter-exclude=`: Inverse of the above.
  • `-v`, `-vv`: Change the log level.
  • `--show-stdout`: Show browser stdout.
  • `--extra-browser-args=`: Pass extra arguments when invoking the browser; you can find a useful list of Chrome's command-line arguments here. E.g. `--extra-browser-args="--enable-logging=stderr --v=2"`.
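
Putting several of these together, a local debugging run might look something like the following sketch (the benchmark, browser, and repeat count are illustrative; substitute the ones from your failing run):

tools/perf/run_benchmark blink_perf.shadow_dom --browser=release --repeat=5 -v --show-stdout --extra-browser-args="--enable-logging=stderr --v=2"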