The primary function of the web tests is as a regression test suite; this means that, while we care about whether a page is being rendered correctly, we care more about whether the page is being rendered the way we expect it to. In other words, we look more for changes in behavior than we do for correctness.
All web tests have “expected results”, or “baselines”, which may take one of several forms. Depending on its type, a test may produce one or more of: a text dump of its output (a .txt baseline), an image of the rendered page (a .png baseline), or an audio rendition (a .wav baseline).
For any of these types of tests, baselines are checked into the web_tests directory. The filename of a baseline is the same as that of the corresponding test, but the extension is replaced with -expected.{txt,png,wav} (depending on the type of test output). Baselines usually live alongside the tests they belong to, except when baselines vary by platform; read Web Test Baseline Fallback for more details.
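For example (the path is hypothetical), a test that produces text output would be laid out like this:

web_tests/foo/bar/test.html                                <- the test itself
web_tests/foo/bar/test-expected.txt                        <- its platform-independent baseline
web_tests/platform/<platform>/foo/bar/test-expected.txt    <- a platform-specific baseline, used only where results differ on that platform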
Lastly, we also support the concept of “reference tests”, which check that two pages are rendered identically (pixel-by-pixel). As long as the two tests' output matches, the tests pass. For more on reference tests, see Writing ref tests.
When the output doesn't match, there are two potential reasons for it:
* The new output is correct and the checked-in baseline is out of date, because the page's behavior changed intentionally.
* The new output is wrong, because the change introduced a bug.
In both cases, the convention is to check in a new baseline (aka rebaseline), even though that file may be codifying errors. This helps us maintain test coverage for all the other things the test is testing while we resolve the bug.
Bugs at crbug.com should track fixing incorrect behavior, not lines in TestExpectations. If a test is never supposed to pass (e.g. it's testing Windows-specific behavior, so can't ever pass on Linux/Mac), move it to the NeverFixTests file. That gets it out of the way of the rest of the project.
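For illustration only (the test name is hypothetical, and the tags and results each file accepts are listed in the # tags: / # results: comments at its top), a NeverFixTests entry for a Windows-only test might look roughly like:

crbug.com/12345 [ Linux Mac ] fast/html/windows-only-feature.html [ Skip ]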
There are some cases where you can't rebaseline and, unfortunately, we don't have a better solution than either:

1. Reverting the patch that caused the failure, or
2. Adding a line to the TestExpectations file to suppress the failure.

In these cases, reverting the patch is strongly preferred.
These are the cases where you can't rebaseline:
* The test is a reference test; reference tests compare two live pages, so there is no checked-in baseline to update.
* The test is flaky: it produces different output from run to run, so no single baseline can be correct.
Once you decide that a test is truly flaky, you can suppress it using the TestExpectations file, as described below. We do not generally expect Chromium sheriffs to spend time trying to address flakiness, though.
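For example, a flaky test could be suppressed with a line like the following (the test name and bug number are placeholders; the syntax is described in detail below). Listing both Pass and Failure tells the harness that either result is expected:

crbug.com/12345 fast/html/flaky-test.html [ Pass Failure ]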
Since baselines themselves are often platform-specific, updating baselines in general requires fetching new test results after running the test on multiple platforms.
The recommended way to rebaseline for a currently-in-progress CL is to use results from try jobs, via the command-line tool third_party/blink/tools/blink_tool.py rebaseline-cl: running blink_tool.py rebaseline-cl triggers try jobs on tryserver.blink, and running it again once those jobs have finished fetches the new baselines. This way, the new baselines can be reviewed along with the changes, which helps the reviewer verify that the new baselines are correct. It also means that there is no period of time when the web test results are ignored.
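A typical sequence, assuming the CL has already been uploaded, looks roughly like this:

third_party/blink/tools/blink_tool.py rebaseline-cl   # triggers try jobs on tryserver.blink
# ... wait for the try jobs to finish ...
third_party/blink/tools/blink_tool.py rebaseline-cl   # downloads the new baselines into your checkout

The downloaded baselines can then be committed as part of the CL and reviewed together with the code change.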
When a change will cause many tests to fail, the try jobs may exit early because the number of failures exceeds the limit, or they may time out because more time is needed for the retries. Rebaselining from such results is not recommended. The solution is to temporarily increase the number of shards in test_suite_exceptions.pyl in your CL, and to change the value back to its original setting before sending the CL to the CQ.
Which tests blink_tool.py rebaseline-cl tries to download new baselines for depends on its arguments:
* If you pass --only-changed-tests, only tests modified in the CL will be considered.
* You can pass --patchset=n to specify the patchset whose try job results are used. This is very useful when the CL has ‘trivial’ patchsets that are created e.g. by editing the CL description.
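For example (the patchset number here is just an illustration):

third_party/blink/tools/blink_tool.py rebaseline-cl --only-changed-tests
third_party/blink/tools/blink_tool.py rebaseline-cl --patchset=3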
Web test results.html, linked from the bot job result page, provides an alternative way to rebaseline tests for a particular platform: it generates a rebaseline command for the selected tests, and the new baselines are downloaded into third_party/blink/web_tests/platform/<platform>. The generated command also includes blink_tool.py optimize-baselines <tests>, which removes redundant baselines.
third_party/blink/tools/run_web_tests.py --reset-results foo/bar/test.html
If there are current expectation files for web_tests/foo/bar/test.html, the above command will overwrite the current baselines at their original locations with the actual results. The current baseline means the -expected.* file that the actual result is compared against when the test is run locally, i.e. the first file found in the baseline search path.
If there are no current baselines, the above command will create new baselines in the platform-independent directory, e.g. web_tests/foo/bar/test-expected.{txt,png}.
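As a sketch (assuming a Mac port and the hypothetical test foo/bar/test.html), the baseline search order looks like:

web_tests/platform/mac/foo/bar/test-expected.txt   <- first match in the search path; used and overwritten if present
web_tests/foo/bar/test-expected.txt                <- platform-independent fallback; created when no baseline exists yet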
When you rebaseline a test, make sure your commit description explains why the test is being re-baselined.
See Testing Runtime Flags for details about flag-specific expectations.
The Rebaseline Tool supports all flag-specific suites that run in CQ/CI. You may also rebaseline flag-specific results locally with:
third_party/blink/tools/run_web_tests.py --flag-specific=config --reset-results foo/bar/test.html
New baselines will be created in the flag-specific baselines directory, e.g. web_tests/flag-specific/config/foo/bar/test-expected.{txt,png}
Then you can commit the new baselines and upload the patch for review.
Sometimes it's difficult for reviewers to review the patch containing only new files. You can follow the steps below for easier review.
1. Copy existing baselines to the flag-specific baselines directory for the tests to be rebaselined:

   third_party/blink/tools/run_web_tests.py --flag-specific=config --copy-baselines foo/bar/test.html

   Then add the newly created baseline files, commit and upload the patch. Note that the above command won't copy baselines for passing tests.

2. Rebaseline the test locally:

   third_party/blink/tools/run_web_tests.py --flag-specific=config --reset-results foo/bar/test.html

   Commit the changes and upload the patch.

3. Request review of the CL and tell the reviewer to compare the patch sets uploaded in step 1 and step 2 to see the differences in the rebaselines.
Failing tests can be suppressed using the TestExpectations file and other related files. See the run_wpt_tests.py doc for information about WPT coverage for Chrome. It is also possible to handle tests that only fail when run with a particular flag being passed to content_shell; see web_tests/FlagExpectations/README.txt for more.
The file is not ordered. If you put new changes somewhere in the middle of the file, this will reduce the chance of merge conflicts when landing your patch.
The syntax of the file is roughly one expectation per line. An expectation can apply to either a directory of tests or a specific test. Lines prefixed with # are treated as comments, and blank lines are allowed as well.
The syntax of a line is roughly:
[ bugs ] [ "[" modifiers "]" ] test_name_or_directory [ "[" expectations "]" ]
* test_name_or_directory may name a specific test file, or a directory of tests followed by /*; in the latter case all tests under the directory will have the expectations, unless overridden by more specific expectation lines. The wildcard is intentionally only allowed at the end of test_name_or_directory, so that it is easy to reason about which test(s) a test expectation applies to.
* bugs may be of the form crbug.com/12345, code.google.com/p/v8/issues/detail?id=12345, or Bug(username).
* modifiers may be Fuchsia, Mac, Mac11, Mac11-arm64, Mac12, Mac12-arm64, Mac13, Mac13-arm64, Mac14, Mac14-arm64, Mac15, Mac15-arm64, Linux, Chrome, Win, Win10.20h2, Win11, iOS17-Simulator, and, optionally, Release or Debug. Check the # tags: ... comments at the top of each file to see which modifiers that file supports.
* Some modifiers are meta keywords: for example, Win represents Win10.20h2 and Win11. See the CONFIGURATION_SPECIFIER_MACROS dictionary in third_party/blink/tools/blinkpy/web_tests/port/base.py for the meta keywords and which modifiers they represent.
* expectations can be Crash, Failure, Pass, Slow, Skip, or Timeout. Some results don't make sense for some files; check the # results: ... comment at the top of each file to see what results that file supports. If multiple expectations are listed, the test is considered “flaky” and any of those results will be considered as expected.

For example:
crbug.com/12345 [ Win Debug ] fast/html/keygen.html [ Crash ]
which indicates that the “fast/html/keygen.html” test file is expected to crash when run in the Debug configuration on Windows, and the tracking bug for this crash is bug #12345 in the Chromium issue tracker. Note that the test will still be run, so that we can notice if it doesn't actually crash.
Assuming you're running a debug build on Mac 10.9, the following lines are equivalent (in terms of whether the test is performed and its expected outcome):
fast/html/keygen.html [ Skip ]
Bug(darin) [ Mac10.9 Debug ] fast/html/keygen.html [ Skip ]
Slow causes the test runner to give the test 5x the usual time limit to run. Slow lines go in the SlowTests file. A given line cannot have both Slow and Timeout.
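For example, a hypothetical SlowTests entry might look like:

crbug.com/12345 fast/html/expensive-test.html [ Slow ]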
Also, when parsing the file, we use two rules to figure out if an expectation line applies to the current run:
* If the configuration parameters (modifiers) of the line don't match the configuration of the current run, the line is ignored.
* Lines that match a test name more specifically take precedence over lines that match less specifically.
If a virtual test has no explicit expectations (following the rules above), it inherits its expectations from the base (nonvirtual) test.
For example, if you had the following lines in your file, and you were running a debug build on Mac10.10:

crbug.com/12345 [ Mac10.10 ] fast/html [ Failure ]
crbug.com/12345 [ Mac10.10 ] fast/html/keygen.html [ Pass ]
crbug.com/12345 [ Win11 ] fast/forms/submit.html [ Failure ]
crbug.com/12345 fast/html/section-element.html [ Failure Crash ]
You would expect:

* fast/html/article-element.html to fail with a text diff (since it is in the fast/html directory).
* fast/html/keygen.html to pass (since there is an exact match on the test name).
* fast/forms/submit.html to pass (since the configuration parameters don't match).
* fast/html/section-element.html to either crash or produce a text (or image and text) failure, but not time out or pass.
* virtual/foo/fast/html/article-element.html to fail with a text diff; the virtual test inherits its expectation from the first line.

Test expectations can also apply to all tests under a directory (specified with a name ending with /*). A more specific expectation can override a less specific expectation. For example:
crbug.com/12345 virtual/composite-after-paint/* [ Skip ]
crbug.com/12345 virtual/composite-after-paint/compositing/backface-visibility/* [ Pass ]
crbug.com/12345 virtual/composite-after-paint/compositing/backface-visibility/test.html [ Failure ]
You can verify that any changes you've made to an expectations file are correct by running:
third_party/blink/tools/lint_test_expectations.py
which will cycle through all of the possible combinations of configurations looking for problems.