Testing is an essential component of software development in Chromium. It ensures Chrome behaves as we expect, and it is critical for finding bugs and regressions at an early stage.
This document gives a high-level overview of testing in Chromium, including what types of tests we have, the purpose of each test type, what tests are needed for new features, etc.
There are several different types of tests in Chromium, serving different purposes. Some types of tests run on multiple platforms, while others are specific to one platform.
- gtest is Google's C++ test framework, which helps you write better C++ tests in Chromium. gtest is the test framework for unit tests in Chromium, and browser tests are built on top of it.
- Browser Tests are built on top of gtest and are used to write integration tests and e2e tests in Chromium.
- Web Tests (formerly known as “Layout Tests” or “LayoutTests”) are used by Blink to test many components, including but not limited to layout and rendering. In general, web tests involve loading pages in a test renderer.
- Robolectric is built on top of JUnit 4. It emulates Android APIs so that tests can run on the host machine instead of on devices / emulators.
- Instrumentation Tests are JUnit tests that run on devices / emulators.
- EarlGrey is the integration testing framework used by Chromium for iOS.
- Telemetry is the performance testing framework used by Chromium. It allows you to perform arbitrary actions on a set of web pages and report metrics about it.
- Fuzzer Tests are used to uncover potential security and stability problems in Chromium.
- Tast is a test framework for system integration tests on Chrome OS.
The following table shows which types of tests work on which platforms.
*(Table: test types vs. platforms, including a Web Tests (HTML, JS) row.)*
### Browser Tests Note

Only a subset of browser tests is enabled on Android:

Other browser tests are not supported on Android yet. crbug/611756 tracks the effort to enable them on Android.
### Web Tests Note

Web Tests used to be enabled on Android K, but they are disabled on Android now; see this thread for more context.
- All tests in Chromium that run on the CQ and main waterfall should be hermetic and stable.
- Add unit tests along with the code in the same changelist instead of leaving tests for later; most likely, no one will add them later.
- Write enough unit tests to have good code coverage, since they are fast and stable.
- Don't enable tests with external dependencies on CQ and main waterfall, e.g. tests against live sites. It is fine to check in those tests, but only run them on your own bots.
- Eventually, all tests should implement the Test Executable API command line interface.
## What tests are needed for new features
- Unit Tests are needed no matter where the code for your feature lives. It is best practice to add unit tests in the same changelist that adds or updates code; check out Code Coverage in Gerrit for instructions on how to see code coverage in Gerrit.
- Browser Tests are recommended for integration tests and e2e tests. It is great if you add browser tests to cover the major use cases of your feature, even with some mocking.
- Web Tests are required if you plan to launch new W3C APIs in Chrome.
- Instrumentation Tests are recommended for features on Android: if your feature is supported on Android, write instrumentation tests for integration or e2e testing.
- EarlGrey Tests are recommended for iOS only.
- Telemetry benchmarks or stories are needed if existing ones can't cover the performance of your feature. Either add a new story that reuses existing metrics, or add a new benchmark for your feature. Talk to the benchmarking team before you start adding Telemetry benchmarks or stories.
- Fuzzer Tests are recommended if your feature adds user-facing APIs in Chromium; write fuzzer tests to detect security issues.
Right now, code coverage is the only way we have to measure test coverage. The following are the recommended thresholds for the different code coverage levels:
- Level 1 (improving): >0%
- Level 2 (acceptable): 60%
- Level 3 (commendable): 75%
- Level 4 (exemplary): 90%
Go to the code coverage dashboard to check the code coverage for your project.
## How to write new tests
TODO: add the link to the instruction about how to enable new tests in CQ and main waterfall
## How to run tests

### Run tests locally

#### Run gtest locally
Before you can run a gtest, you need to build the appropriate launcher target that contains your test, such as `blink_unittests`:

```
autoninja -C out/Default blink_unittests
```

To run specific tests, rather than all tests in a launcher, pass `--gtest_filter=` with a pattern. The simplest pattern is the full name of a test (`SuiteOrFixtureName.TestName`), but you can use wildcards. See `--help` for more ways to select and run tests.
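As a sketch of the filter syntax: `--gtest_filter` takes a `:`-separated list of glob patterns, where `*` matches any substring and `?` matches any single character. The test names and the `out/Default/blink_unittests` path below are hypothetical examples; since shell glob patterns match the same way, the selection logic can be illustrated without a Chromium checkout:

```shell
# Example invocations (assuming out/Default/blink_unittests was built):
#   out/Default/blink_unittests --gtest_filter=WebFrameTest.PrintingBasic
#   out/Default/blink_unittests --gtest_filter='WebFrameTest.*'
#
# gtest wildcards ('*' and '?') match like shell globs, so this loop
# shows which hypothetical test names 'WebFrameTest.*' would select:
for name in WebFrameTest.PrintingBasic WebFrameTest.Scroll DocumentTest.Title; do
  case "$name" in
    WebFrameTest.*) echo "selected: $name" ;;   # runs under this filter
    *)              echo "skipped:  $name" ;;   # filtered out
  esac
done
```

Running the loop prints `selected:` for both `WebFrameTest` names and `skipped:` for `DocumentTest.Title`, matching what the real `--gtest_filter='WebFrameTest.*'` would execute.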
### Run tests remotely (on Swarming)
TODO: add the link to the instruction about how to run tests on Swarming.
## How to debug tests
## How to deal with flaky tests
Go to LUCI Analysis to find reports about flaky tests in your projects.
If you cannot fix a flaky test in a short timeframe, disable it first to reduce development pain for others, and then fix it later. “How do I disable a flaky test” has instructions on how to disable a flaky test.
Tests are not configured to upload metrics, such as UMA, UKM or crash reports.