This page describes in detail how the GPU bots are set up, which files affect their configuration, and how to both modify their behavior and add new bots.
Chromium's GPU bots, compared to the majority of the project's test machines, are physical pieces of hardware. When end users run the Chrome browser, they are almost surely running it on a physical piece of hardware with a real graphics processor. There are some portions of the code base which simply cannot be exercised by running the browser in a virtual machine, or on a software implementation of the underlying graphics libraries. The GPU bots were developed and deployed in order to cover these code paths, and avoid regressions that are otherwise inevitable in a project the size of the Chromium browser.
The GPU bots run on the chromium.gpu and chromium.gpu.fyi waterfalls, and on various tryservers, as described in Using the GPU Bots.
The vast majority of the hardware for the bots lives in the Chrome-GPU Swarming pool. The waterfall bots are simply virtual machines which spawn Swarming tasks with the appropriate tags to get them to run on the desired GPU and operating system type. So, for example, the Win10 Release (NVIDIA) bot is actually a virtual machine which spawns all of its jobs with the Swarming parameters:
{ "gpu": "10de:1cb3-23.21.13.8816", "os": "Windows-10", "pool": "Chrome-GPU" }
Since the GPUs in the Swarming pool are mostly homogeneous, this is sufficient to target the pool of Windows 10-like NVIDIA machines. (There are a few Windows 7-like NVIDIA bots in the pool, which necessitates the OS specifier.)
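For illustration, a manually triggered Swarming task can be pointed at the same machines by passing these dimensions to the Swarming client. The following is only a sketch; the exact flags depend on the client version, and the isolated hash and trailing arguments are placeholders:
./src/tools/swarming_client/swarming.py trigger -S https://chromium-swarm.appspot.com -I https://isolateserver.appspot.com -d pool Chrome-GPU -d os Windows-10 -d gpu 10de:1cb3-23.21.13.8816 -s [HASH] -- [any additional args for the isolate]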
Details about the bots can be found on chromium-swarm.appspot.com and by using src/tools/swarming_client/swarming.py, for example swarming.py bots. If you are authenticated with @google.com credentials you will be able to make queries of the bots and see, for example, which GPUs are available.
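For example, a query like the following (a sketch; the exact flag names depend on the Swarming client version) lists just the bots in the GPU pool:
./src/tools/swarming_client/swarming.py bots -S https://chromium-swarm.appspot.com -d pool Chrome-GPU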
The waterfall bots run tests on a single GPU type in order to make it easier to see regressions or flakiness that affect only a certain type of GPU.
The tryservers which include GPU tests, like win_chromium_rel_ng, on the other hand run tests on more than one GPU type. As of this writing, the Windows tryservers run tests on NVIDIA and AMD GPUs; the Mac tryservers run tests on Intel and NVIDIA GPUs. The way these tryservers' tests are specified is simply by mirroring how one or more waterfall bots work. This is an inherent property of the chromium_trybot recipe, which was designed to eliminate differences in behavior between the tryservers and waterfall bots. Since the tryservers mirror waterfall bots, if the waterfall bot is working, the tryserver must almost inherently be working as well.
There are a few one-off GPU configurations on the waterfall where the tests are run locally on physical hardware rather than via Swarming. There are a couple of reasons to continue to support running tests on a specific machine: it might be too expensive to deploy the required multiple copies of said hardware, or the configuration might not be reliable enough to begin scaling it up.
Adding a new test step to the bots requires that the test run via an isolate. Isolates describe both the binary and data dependencies of an executable, and are the underpinning of how the Swarming system works. See the LUCI wiki for background on Isolates and Swarming.
template("test")
template in src/testing/test.gni
. See test("gl_tests")
in src/gpu/BUILD.gn
for an example. For a more complex example which invokes a series of scripts which finally launches the browser, see src/chrome/telemetry_gpu_test.isolate
.src/testing/buildbot/gn_isolate_map.pyl
that refers to your target. Find a similar target to yours in order to determine the type
. The type is referenced in src/tools/mb/mb_config.pyl
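As a sketch, an entry for the gl_tests target mentioned above might look roughly like the following; the type value here is a guess, so copy the real one from a similar existing target as described in step 2:
"gl_tests": {
  "label": "//gpu:gl_tests",
  "type": "windowed_test_launcher",
},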
At this point you can build and upload your isolate to the isolate server.
See Isolated Testing for SWEs for the most up-to-date instructions. The instructions below are a copy which shows how to run an isolate that's been uploaded to the isolate server on your local machine rather than on Swarming.
If you're cd'd into src/:

1. Run ./tools/mb/mb.py isolate //out/Release [target name]
   For example: ./tools/mb/mb.py isolate //out/Release angle_end2end_tests
2. Run python tools/swarming_client/isolate.py batcharchive -I https://isolateserver.appspot.com out/Release/[target name].isolated.gen.json
   For example: python tools/swarming_client/isolate.py batcharchive -I https://isolateserver.appspot.com out/Release/angle_end2end_tests.isolated.gen.json
   This uploads the isolate and prints the hash to use as [HASH] in the next step.
3. Run python tools/swarming_client/run_isolated.py -I https://isolateserver.appspot.com -s [HASH] -- [any additional args for the isolate]
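For example, continuing with angle_end2end_tests, step 3 might look like the following; the hash is made up, and the gtest filter is just an example of an additional argument:
python tools/swarming_client/run_isolated.py -I https://isolateserver.appspot.com -s 5b34c07b0938875a5222f53e4ad2e9e97a985bc4 -- --gtest_filter=[tests to run]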
See the section below on isolate server credentials.
See Adding new steps to the GPU bots for details on this process.
In the tools/build workspace:
* scripts/slave/recipe_modules/chromium_tests/:
  * linux_chromium_rel_ng, mac_chromium_rel_ng, and win_chromium_rel_ng, which run against every Chromium CL, and which mirror the behavior of bots on the chromium.gpu waterfall.
  * linux_optional_gpu_tests_rel, mac_optional_gpu_tests_rel, and win_optional_gpu_tests_rel, which are triggered manually and run some tests which can't be run on the regular Chromium try servers, mainly due to lack of hardware capacity.

In the chromium/src workspace:
* src/tools/mb/mb_config.pyl, which defines the build configurations the bots use.
* src/content/test/gpu/generate_buildbot_json.py, which generates chromium.gpu.json and chromium.gpu.fyi.json. It defines on which GPUs various tests run.

In the infradata/config workspace (Google internal only, sorry):

* The Chrome-GPU Swarming pool, which contains most of the specialized hardware: as of this writing, the Windows and Linux NVIDIA bots, the Windows AMD bots, and the MacBook Pros with NVIDIA and AMD GPUs. New GPU hardware should be added to this pool.

This section describes various common scenarios that might arise when maintaining the GPU bots, and how they'd be addressed.
This is described in Adding new tests to the GPU bots.
The first decision point when adding a new GPU bot is whether it is a one-off piece of hardware, or one which is expected to be scaled up at some point. If it's a one-off piece of hardware, it can be added to the chromium.gpu.fyi waterfall as a non-swarmed test machine. If it's expected to be scaled up at some point, the hardware should be added to the Swarming pool. These two scenarios are described in more detail below.
To add a one-off, non-swarmed test machine to the chromium.gpu.fyi waterfall:

1. Add the new machine to masters/master.chromium.gpu.fyi/builders.pyl.
2. Add the new machine to scripts/slave/recipe_modules/chromium_tests/chromium_gpu_fyi.py. Set the enable_swarming property to False.
3. Retrain the recipe expectations (scripts/slave/recipes.py --use-bootstrap test train) and add the newly created JSON file(s) corresponding to the new machines to your CL.
4. Add the new machine to src/content/test/gpu/generate_buildbot_json.py. Make sure to set the swarming property to False (see the sketch after this list).
5. Add the new machine's build configuration to src/tools/mb/mb_config.pyl.
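The following is only a sketch of what step 4's change might look like; the tester name is hypothetical, and the surrounding fields should be copied from a similar non-swarmed entry already in generate_buildbot_json.py:
'New One-Off Release (CoolGPUType)': {
  # Run this tester's tests directly on the attached hardware, not via Swarming.
  'swarming': False,
  # ... other fields copied from a comparable non-swarmed tester ...
},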
When deploying a new GPU configuration, it should be added to the chromium.gpu.fyi waterfall first. The chromium.gpu waterfall should be reserved for those GPUs which are tested on the commit queue. (Some of the bots violate this rule – namely, the Debug bots – though we should strive to eliminate these differences.) Once the new configuration is ready to be fully deployed on tryservers, bots can be added to the chromium.gpu waterfall, and the tryservers changed to mirror them.
In order to add Release and Debug waterfall bots for a new configuration, experience has shown that at least 4 physical machines are needed in the swarming pool. The reason is that the tests all run in parallel on the Swarming cluster, so the load induced on the swarming bots is higher than it would be for a non-swarmed bot that executes its tests serially.
With these prerequisites, these are the steps to add a new swarmed bot. (Actually, a pair of bots: Release and Debug.)
1. Run src/tools/swarming_client/swarming.py bots to determine the PCI IDs of the GPUs in the bots. (These instructions will need to be updated for Android bots which don't have PCI buses.)
2. Add the new machines to the Chrome-GPU Swarming pool via configs/chromium-swarm/bots.cfg in the infradata/config workspace.
3. Add the new Release and Debug bots to master.chromium.gpu.fyi/builders.pyl.
4. Add the new bots to chromium_gpu_fyi.py. Make sure to set the enable_swarming and serialize_tests properties to True. Double-check the parent_buildername property for each. It must match the Release/Debug flavor of the builder.
5. Retrain the recipe expectations (scripts/slave/recipes.py --use-bootstrap test train) and add the newly created JSON file(s) corresponding to the new machines to your CL.
6. Add the new machines to src/content/test/gpu/generate_buildbot_json.py. Set the swarming property to True for both the Release and Debug bots. Note that the os dimension for the Win7 bots had to be changed from win to Windows-2008ServerR2-SP1 (the Win7-like flavor running in our data center). Similarly, the Win8 bots had to have a very precise OS description (Windows-2012ServerR2-SP0). See the dimensions example after this list.
7. Add the new bots' build configurations to src/tools/mb/mb_config.pyl.
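For illustration, the Swarming dimensions targeted by one of the new Win7 bots would then look something like the following, with the GPU PCI ID taken from step 1 (the value below is a placeholder):
{ "gpu": "[vendor ID:device ID from step 1]", "os": "Windows-2008ServerR2-SP1", "pool": "Chrome-GPU" }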
Let's say that you want to cause the win_chromium_rel_ng try bot to run tests on CoolNewGPUType in addition to the types it currently runs (as of this writing, NVIDIA and AMD). To do this:
1. Deploy Release and Debug testers for the new GPU type on the chromium.gpu waterfall, and update tests/masters_recipes_test.py for these new testers since they aren't yet covered by try bots and are going on a non-FYI waterfall. Make sure these run green for a day or two before proceeding.
2. Add the new testers to win_chromium_rel_ng's bot_ids list in scripts/slave/recipe_modules/chromium_tests/trybots.py. Rerun scripts/slave/recipes.py --use-bootstrap test train.
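As a rough sketch of step 2, the new entry in the bot_ids list might look like the following; the key names and the builder name are modeled on existing entries and should be double-checked against trybots.py, and the tester name is hypothetical:
'win_chromium_rel_ng': {
  'bot_ids': [
    # ... existing mirrored waterfall bots ...
    {
      'mastername': 'chromium.gpu',
      'buildername': 'GPU Win Builder',
      'tester': 'Win7 Release (CoolNewGPUType)',
    },
  ],
},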
The “optional” GPU try bots are a concession to the reality that there are some long-running GPU test suites that simply cannot run against every Chromium CL. They run some additional tests that are usually run only on the chromium.gpu.fyi waterfall. Some of these tests, like the WebGL 2.0 conformance suite, are intended to be run on the normal try bots once hardware capacity is available. Some are not intended to ever run on the normal try bots.

The optional try bots are a little different because they mirror waterfall bots that don't actually exist. The waterfall bots' specifications exist only to tell the optional try bots which tests to run.
Let's say that you want to add a new such optional try bot on Windows. Call it win_new_optional_tests_rel, for example. Now, if you wanted to just add this GPU type to the existing win_optional_gpu_tests_rel try bot, you'd just follow the instructions above (How to start running tests on a new GPU type on an existing try bot). The steps below describe how to spin up an entire new optional try bot.
1. Make sure there is enough hardware capacity for the new configuration. (The existing win_optional_gpu_tests_rel can serve as a reference point.)
2. Update masters/master.tryserver.chromium.win's master.cfg and slaves.cfg to add the new tryserver. Follow the pattern for the existing win_optional_gpu_tests_rel tryserver. Namely, add the new entry to master.cfg, and add the new tryserver to the optional_builders list in slaves.cfg.
3. Update chromium_gpu_fyi.py to add the new “Optional Win7 Release (CoolNewGPUType)” entry.
4. Update trybots.py to add the new win_new_optional_tests_rel try bot, mirroring “Optional Win7 Release (CoolNewGPUType)”.

Once the new try bot is up, you can run it against a CL with: git cl try -m tryserver.chromium.win -b win_new_optional_tests_rel
Let's say that you want to roll out an update to the graphics drivers on one of the configurations like the Win7 NVIDIA bots. The responsible way to do this is to run the new driver on one of the waterfalls for a day or two to make sure the tests are running reliably green before rolling out the driver update everywhere.
Note that with recent improvements to Swarming, in particular this RFE and others, this process is no longer strictly necessary – it's possible to target Swarming jobs at a particular driver version. If generate_buildbot_json.py were improved to be more specific about the driver version on the various bots, then the machines with the new drivers could simply be added to the Swarming pool, and this process could be a lot simpler. Patches welcome. :)
Working with the GPU bots requires credentials to various services: the isolate server, the swarming server, and cloud storage.
To upload and download isolates you must first authenticate to the isolate server. From a Chromium checkout, run:
./src/tools/swarming_client/auth.py login --service=https://isolateserver.appspot.com
This will open a web browser to complete the authentication flow. A @google.com email address is required in order to properly authenticate.
To test your authentication, find a hash for a recent isolate. Consult the instructions on Running Binaries from the Bots Locally to find a random hash from a target like gl_tests
. Then run the following:
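A sketch of such a download command, assuming the isolateserver.py client in the same directory (the -f flag takes the hash and a destination file name; flags may vary between client versions):
python ./src/tools/swarming_client/isolateserver.py download -I https://isolateserver.appspot.com -f [HASH] delete_me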
If authentication succeeded, this will silently download a file called delete_me
into the current working directory. If it failed, the script will report multiple authentication errors. In this case, use the following command to log out and then try again:
./src/tools/swarming_client/auth.py logout --service=https://isolateserver.appspot.com
The swarming server uses the same auth.py
script as the isolate server. You will need to authenticate if you want to manually download the results of previous swarming jobs, trigger your own jobs, or run swarming.py reproduce
to re-run a remote job on your local workstation. Follow the instructions above, replacing the service with https://chromium-swarm.appspot.com
.
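Concretely, that is:
./src/tools/swarming_client/auth.py login --service=https://chromium-swarm.appspot.com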
Authentication to Google Cloud Storage is needed for a couple of reasons: uploading pixel test results to the cloud, and potentially uploading and downloading builds as well, at least in Debug mode. Use the copy of gsutil in depot_tools/third_party/gsutil/gsutil
, and follow the Google Cloud Storage instructions to authenticate. You must use your @google.com email address and be a member of the Chrome GPU team in order to receive read-write access to the appropriate cloud storage buckets. Roughly:
gsutil config
At this point you should be able to write to the cloud storage bucket.
Navigate to https://console.developers.google.com/storage/chromium-gpu-archive to view the contents of the cloud storage bucket.
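A quick way to sanity-check access from the command line (assuming you have at least read access to the bucket):
gsutil ls gs://chromium-gpu-archive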