Chromium uses source-based code coverage for clang-compiled languages such as C++. This documentation explains how to use Clang’s source-based coverage features in general.
In this document, we first introduce the code coverage infrastructure that continuously generates code coverage information for the whole codebase and for specific CLs in Gerrit. For the latter, refer to code_coverage_in_gerrit.md. We then present a script that can be used to locally generate code coverage reports with one command, and finally we provide a description of the process of producing these reports.
There are 3 layers in the system:
The first layer consists of the LUCI builders that generate the coverage data. There are two types of builders:
The code coverage CI builders periodically build all the test targets and fuzzer targets for a given platform, instrument all available source files, and then save the coverage data to a dedicated storage bucket.
The code coverage CQ builders instrument only the files changed by a given CL. More information about per-CL coverage is available in code_coverage_in_gerrit.md.
The second layer in the system consists of an AppEngine application that consumes the coverage data from the builders above, structures it and stores it in cloud datastore. It then serves the information to the clients below.
In the last layer we currently have two clients that consume the service:
The coverage dashboard front end is hosted in the same application as the service above. It shows the full-code coverage reports with links to the builds that generated them, as well as per-directory and per-component aggregation, and can be drilled down to the single line of code level of detail.
Refer to the following screenshots:
- See the coverage breakdown by directories (the default landing page).
- Use the view dropdown menu to switch between directory and component views.
- Click on a particular source file in one of the views above to see its line-by-line coverage breakdown, which is useful for identifying uncovered lines.
- Click on "Previous Reports" to check out the coverage history of the project. The list of historical coverage reports is in reverse chronological order.
The other client supported at the moment is the gerrit plugin for code coverage.
The coverage script automates the process described below and provides a one-stop service to generate code coverage reports locally in just one command.
This script is currently supported on Linux, Mac, iOS and ChromeOS platforms.
Here is an example usage:

```shell
$ gn gen out/coverage \
    --args="use_clang_coverage=true is_component_build=false dcheck_always_on=true is_debug=false"
$ python tools/code_coverage/coverage.py \
    crypto_unittests url_unittests \
    -b out/coverage -o out/report \
    -c 'out/coverage/crypto_unittests' \
    -c 'out/coverage/url_unittests --gtest_filter=URLParser.PathURL' \
    -f url/ -f crypto/
```
The command above builds the `crypto_unittests` and `url_unittests` targets and then runs each of them with the command and arguments specified by its `-c` flag. For `url_unittests`, it only runs the test `URLParser.PathURL`. The coverage report is filtered to include only files and sub-directories under the `url/` and `crypto/` directories (the `-f` flags).
Aside from automating the process, this script provides visualization features to view code coverage breakdown by directories and by components, similar to the views in the coverage dashboard above.
This section presents the workflow of generating code coverage reports, using two unit test targets in the Chromium repo as an example: `crypto_unittests` and `url_unittests`. The following diagram shows a step-by-step overview of the process.
Generating code coverage reports requires the `llvm-profdata` and `llvm-cov` tools. Currently, these two tools are not part of Chromium's Clang bundle; the coverage script downloads and updates them automatically, or you can download the tools manually (tools link).
In Chromium, to compile code with coverage enabled, one needs to add the `use_clang_coverage=true` and `is_debug=false` GN flags to the args.gn file in the build output directory. Under the hood, they ensure that the `-fprofile-instr-generate` and `-fcoverage-mapping` flags are passed to the compiler.

```shell
$ gn gen out/coverage \
    --args='use_clang_coverage=true is_component_build=false is_debug=false'
$ gclient runhooks
$ autoninja -C out/coverage crypto_unittests url_unittests
```
The next step is to run the instrumented binaries. When the program exits, it writes a raw profile for each process. Because Chromium runs tests in multiple processes, the number of processes spawned can reach a few hundred, which would produce a few hundred gigabytes of raw profiles. To limit the number of raw profiles, the `%Nm` pattern in the `LLVM_PROFILE_FILE` environment variable is used to run tests in multi-process mode, where `N` is the number of raw profiles. With `N = 4`, the total size of the raw profiles is limited to a few gigabytes.

```shell
$ export LLVM_PROFILE_FILE="out/report/crypto_unittests.%4m.profraw"
$ ./out/coverage/crypto_unittests
$ ls out/report/
crypto_unittests.3657994905831792357_0.profraw
...
crypto_unittests.3657994905831792357_3.profraw
```
Raw profiles must be indexed before generating code coverage reports. This is done with the `merge` command of the `llvm-profdata` tool, which merges multiple raw profiles (.profraw) and indexes them to create a single indexed profile (.profdata). At this point, all the raw profiles can be thrown away because their information is already contained in the indexed profile.

```shell
$ llvm-profdata merge -o out/report/coverage.profdata \
    out/report/crypto_unittests.3657994905831792357_0.profraw \
    ...
    out/report/crypto_unittests.3657994905831792357_3.profraw \
    out/report/url_unittests.714228855822523802_0.profraw \
    ...
    out/report/url_unittests.714228855822523802_3.profraw
$ ls out/report/coverage.profdata
out/report/coverage.profdata
```
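To sanity-check the merged result, `llvm-profdata show` prints summary statistics (total functions, maximum counts, and so on) for an indexed profile; a quick sketch using the same output path as above:

```shell
# Verify the merge succeeded by inspecting the indexed profile's summary.
$ llvm-profdata show out/report/coverage.profdata
```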
llvm-cov is used to render code coverage reports. There are different report generation modes, and all of them require the indexed profile, all built target binaries, and all exercised source files as input.
For example, the following command can be used to generate a per-file, line-by-line code coverage report:

```shell
$ llvm-cov show -output-dir=out/report -format=html \
    -instr-profile=out/report/coverage.profdata \
    -object=out/coverage/url_unittests \
    out/coverage/crypto_unittests
```
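Besides the HTML `show` mode, the `report` mode of llvm-cov renders a plain-text per-file summary from the same inputs, which is handy for a quick look in the terminal:

```shell
# Same indexed profile and binaries as above, summary output only.
$ llvm-cov report -instr-profile=out/report/coverage.profdata \
    -object=out/coverage/url_unittests \
    out/coverage/crypto_unittests
```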
For more information on how to use llvm-cov, please refer to the guide.
For breakage reports and feature requests, please file a bug.
For questions and general discussions, please join the code-coverage group.
Yes, code coverage instrumentation works with both component and non-component builds. A component build is usually faster to compile, but can be up to several times slower to run with code coverage instrumentation. For more information, see crbug.com/831939.
Usually this is not a critical issue, but in general we aim not to have any warnings. Please check the list of known issues, and if there is a similar bug, leave a comment with the command you ran, the output you got, and the Chromium revision you used. Otherwise, please file a new bug providing the same information.
If a crash of any type occurs (e.g. Segmentation Fault or ASan error), the crashing process might not dump the coverage information necessary to generate a code coverage report. For single-process applications (e.g. fuzz targets), this means that no coverage may be reported at all. For multi-process applications, the report might be incomplete. It is important to fix the crash first. If this happens only in the coverage-instrumented build, please file a bug.
If a crash is caused by CHECK or DCHECK, the coverage dump will still be written on the disk (crrev.com/c/1172932). However, if a crashing process calls the standard assert directly or through a custom wrapper, the dump will not be written (see How do crashes affect code coverage?).
Yes, with some important caveats. It is possible to build the `chrome` target with code coverage instrumentation enabled. However, there are some inconveniences involved.
For more information, please see crbug.com/834781.
There can be two possible scenarios:
The code for the service and dashboard currently lives alongside Findit at this location because of significant shared logic.
The code used by the bots that generate the coverage data lives (among other places) in the code coverage recipe module.
There are several reasons why coverage reports can be incomplete or incorrect: