GPU Bots & Pixel Wrangling


GPU Pixel Wrangling

GPU Pixel Wrangling is the process of keeping various GPU bots green. On the GPU bots, tests run on physical hardware with real GPUs, not in VMs like the majority of the bots on the Chromium waterfall.

GPU Bots

Waterfalls

The waterfalls work much like any other; see the Tour of the Chromium Buildbot Waterfall for a more detailed explanation of how this is laid out. Our configurations are more fine-grained than most because the GPU matters, not just the OS and Release vs. Debug. Hence we have Windows Nvidia Release bots, Mac Intel Debug bots, and so on. The waterfalls we're interested in are:

  • Chromium GPU [http://build.chromium.org/p/chromium.gpu/waterfall?reload=120] [console view]
    • Various operating systems, configurations, GPUs, etc.
  • The GPU tryservers
    • These bots run try jobs from “git cl try” or the Rietveld UI.
    • The GPU pixel wrangler needs to check that these bots are online, are running jobs correctly, and aren't overloaded.
    • Of course, try jobs may occasionally fail due to bad patches. This is normal.
    • See the section below on making sure the try servers are in good health.
    • Chromium Try Flakes dashboard: [http://chromium-try-flakes.appspot.com/]
  • To a lesser degree: the Chromium GPU FYI waterfall [http://build.chromium.org/p/chromium.gpu.fyi/waterfall?reload=120][console view]
    • These bots run less-standard configurations like Windows 8, Linux with Intel GPUs, etc.
    • There are some longstanding failures on this waterfall: the Linux Intel and Linux AMD bots have been red for a long time, and you don't need to worry about them. Try to keep the other bots on this waterfall green, and don't let them stay red for long periods of time (12-24 hours).
    • These bots build with top-of-tree ANGLE rather than the DEPS version. This means that the tree can go red, with no Chromium commit to blame, if commits to the ANGLE repository break Chromium. To determine whether a different ANGLE revision was used between two builds, compare the got_angle_revision buildbot property on the GPU builders or parent_got_angle_revision on the testers. This revision can be used to do a git log in the third_party/angle repository (see the sketch after this list).
    • Ignore the Win7 Audio and Linux Audio bots. They are not maintained by the Chrome GPU team and are being decommissioned.
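
For the ANGLE case above, a minimal sketch of listing the commits in a suspect range is shown below. It assumes a local Chromium checkout, and the two revisions are placeholders to be copied from the got_angle_revision (or parent_got_angle_revision) properties of the last green and first red builds:

    #!/usr/bin/env python
    # Sketch: list the ANGLE commits between the revisions used by two builds.
    # Run from the root of a Chromium checkout; third_party/angle is the ANGLE
    # repository pulled in via DEPS. The two revisions below are placeholders.
    import subprocess

    GOOD_REV = '<got_angle_revision of the last green build>'
    BAD_REV = '<got_angle_revision of the first red build>'

    subprocess.check_call(
        ['git', 'log', '--oneline', '%s..%s' % (GOOD_REV, BAD_REV)],
        cwd='third_party/angle')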

check_gpu_bots.py Script

As an alternative to constantly watching the various waterfall pages listed above, check out the check_gpu_bots.py script, which can be configured to run periodically and email you if something needs to be looked into.
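
The script's own options are the authoritative way to set this up; as a rough, hypothetical illustration only (the script location, interval, addresses, and mail relay below are all placeholders, and the real script may support this natively), a periodic wrapper could look like:

    #!/usr/bin/env python
    # Hypothetical wrapper: run check_gpu_bots.py periodically and mail its
    # output. Everything configurable here is a placeholder; prefer the
    # script's built-in options if they cover your needs.
    import smtplib
    import subprocess
    import time
    from email.mime.text import MIMEText

    CHECK_SCRIPT = 'check_gpu_bots.py'  # adjust to its location in your checkout
    INTERVAL_SECONDS = 30 * 60          # placeholder: every 30 minutes
    FROM_ADDR = TO_ADDR = 'you@example.com'  # placeholder address

    while True:
        output = subprocess.check_output(['python', CHECK_SCRIPT])
        if output.strip():  # assume non-empty output means something needs a look
            msg = MIMEText(output)
            msg['Subject'] = 'GPU bots need attention'
            msg['From'] = FROM_ADDR
            msg['To'] = TO_ADDR
            relay = smtplib.SMTP('localhost')  # assumes a local mail relay
            relay.sendmail(FROM_ADDR, [TO_ADDR], msg.as_string())
            relay.quit()
        time.sleep(INTERVAL_SECONDS)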

Test Suites

The bots run several test suites. The majority of them have been migrated to the Telemetry harness and run within the full browser, in order to better test the code that actually ships. As of this writing, the tests included the following (a sketch for running the Gtest-style suites locally appears after the lists):

  • Tests using the Telemetry harness:
    • The WebGL conformance tests: src/content/test/gpu/gpu_tests/webgl_conformance.py
    • A Google Maps test: src/content/test/gpu/gpu_tests/maps.py
    • Context loss tests: src/content/test/gpu/gpu_tests/context_lost.py
    • GPU process launch tests: src/content/test/gpu/gpu_tests/gpu_process.py
    • Hardware acceleration validation tests: src/content/test/gpu/gpu_tests/hardware_accelerated_feature.py
    • GPU memory consumption tests: src/content/test/gpu/gpu_tests/memory.py
    • Pixel tests validating the end-to-end rendering pipeline: src/content/test/gpu/gpu_tests/pixel.py
  • content_gl_tests: see src/content/content_tests.gypi
  • gles2_conform_test (requires internal sources): see src/gpu/gles2_conform_support/gles2_conform_test.gyp
  • gl_tests: see src/gpu/gpu.gyp
  • angle_unittests: see src/gpu/gpu.gyp

Additionally, the Release bots run:

  • tab_capture_performance_tests: see performance_browser_tests in src/chrome/chrome_tests.gypi and src/chrome/browser/extensions/api/tab_capture/tab_capture_performancetest.cc
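
For the Gtest-style suites above (content_gl_tests, gl_tests, angle_unittests, and so on), local reproduction is usually just building and running the target. A minimal sketch, in which the output directory, target, and test filter are placeholders:

    #!/usr/bin/env python
    # Sketch: build and run one of the Gtest-style GPU suites locally.
    # The output directory, target, and --gtest_filter value are placeholders.
    import subprocess

    OUT_DIR = 'out/Release'  # placeholder build directory
    TARGET = 'gl_tests'      # could also be angle_unittests, content_gl_tests, ...

    subprocess.check_call(['ninja', '-C', OUT_DIR, TARGET])
    subprocess.check_call(['%s/%s' % (OUT_DIR, TARGET),
                           '--gtest_filter=*Texture*'])  # omit the filter to run everything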

More Details

More details about the bots' setup can be found on the GPU Testing page.

Wrangling

Prerequisites

  1. Ideally a wrangler should be a Chromium committer. If you're on the GPU pixel wrangling rotation, there will be an email notifying you of the upcoming shift, and a calendar appointment.
    • If you aren't a committer, don't panic. It's still best for everyone on the team to become acquainted with the procedures of maintaining the GPU bots.
    • In this case you'll upload CLs to Rietveld to perform reverts (optionally using the new “Revert” button in the UI), and might consider using TBR= to speed through trivial and urgent CLs. In general, try to send all CLs through the commit queue.
    • Contact bajones, kbr, vmiura, zmo, or another member of the Chrome GPU team who's already a committer for help landing patches or reverts during your shift.
  2. Apply for access to the bots.

How to Keep the Bots Green

  1. Watch for redness on the tree.
    1. The bots are expected to be green all the time. Flakiness on these bots is neither expected nor acceptable.
    2. If a bot goes consistently red, it's necessary to figure out whether a recent CL caused it, or whether it's a problem with the bot or infrastructure.
    3. If it looks like a problem with the bot (deep problems like failing to check out the sources, the isolate server failing, etc.) notify the Chromium troopers and file a P1 bug with labels: infra>labs, infra>troopers and internals>gpu>testing. See the general tree sheriffing page for more details.
    4. Otherwise, examine the builds just before and after the redness was introduced and compare their revision lists. Unfortunately, you'll need to construct your regression URL manually: use this URL and replace “[rev1]” and “[rev2]” in the “range=[rev1]:[rev2]” query parameter (a small sketch of this substitution appears after this list).
    5. File a bug capturing the regression range and excerpts of any associated logs. Regressions should be marked P1. CC engineers who you think may be able to help triage the issue. Keep in mind that the logs on the bots expire after a few days, so make sure to add copies of relevant logs to the bug report.
    6. Use the Hotlist=PixelWrangler label to mark bugs that require the pixel wrangler's attention, so it's easy to find relevant bugs when handing off shifts.
    7. Study the regression range carefully. Use drover to revert any CLs which break the GPU bots. In the revert message, provide a clear description of what broke, links to failing builds, and excerpts of the failure logs, because the build logs expire after a few days.
  2. Make sure the bots are running jobs.
    1. Keep an eye on the console views of the various bots.
    2. Make sure the bots are all actively processing jobs. If they go offline for a long period of time, the “summary bubble” at the top may still be green, but the column in the console view will be gray.
    3. Email the Chromium troopers if you find a bot that's not processing jobs.
  3. Make sure the GPU try servers are in good health.
    1. The GPU try servers are no longer distinct bots on a separate waterfall, but instead run as part of the regular tryjobs on the Chromium waterfalls. The GPU tests run as part of the following tryservers' jobs:
      1. linux_chromium_rel_ng on the tryserver.chromium.linux waterfall
      2. mac_chromium_rel_ng on the tryserver.chromium.mac waterfall
      3. win_chromium_rel_ng on the tryserver.chromium.win waterfall
    2. The best tool to use to quickly find flakiness on the tryservers is the new Chromium Try Flakes tool. Look for the names of GPU tests (like maps_pixel_test) as well as the test machines (e.g. mac_chromium_rel_ng). If you see a flaky test, file a bug like this one. Also look for compile flakes that may indicate that a bot needs to be clobbered. Contact the Chromium sheriffs or troopers if so.
    3. The Swarming Server Stats tool provides an overview of the health of these bots. Use the “gpu” drop-down to go through the supported GPU types and select the resulting dimension corresponding to one of the bots. (The Windows and Linux bots use the same GPU; it's best to examine them independently.) Check the activity on these bots to ensure the number of pending jobs seems reasonable according to historical levels. Sign in with an @google.com account in order to examine individual bots' history and see the successes, failures and durations of tests on the bot.
    4. For more in-depth detail, examine the specific bots above. See if there are any pervasive build or test failures. Note that test failures are expected on these bots: individuals' patches may fail to apply, fail to compile, or break various tests. Look specifically for patterns in the failures. It isn't necessary to spend a lot of time investigating each individual failure. (Use the “Show: 200” link at the bottom of the page to see more history.)
    5. If the same set of tests are failing repeatedly, look at the individual runs. Examine the swarming results and see whether they're all running on the same machine. If they are, something might be wrong with the hardware. Use the Swarming Server Stats tool to drill down into the specific builder.
    6. If you see the same test failing in a flaky manner across multiple machines and multiple CLs, it's crucial to investigate why it's happening. crbug.com/395914 was a recent example of an innocent-looking Blink change which made it through the commit queue and introduced widespread flakiness in a range of GPU tests. The failures were also most visible on the try servers as opposed to the main waterfalls.
    7. Use Chrome Monitor to see if any of the tryservers seem to be falling far behind (hundreds of jobs queued up). If so, email the Chromium troopers for help. Try to correlate the data with the swarming server stats to see whether the GPU tryservers have fallen behind.
      1. chrome-monitor for linux_chromium_rel_ng
      2. chrome-monitor for mac_chromium_rel_ng
      3. chrome-monitor for win_chromium_rel_ng
      4. chrome-monitor for linux_blink_rel
      5. chrome-monitor for mac_blink_rel
      6. chrome-monitor for win_blink_rel
  4. Check if any pixel test failures are actual failures or need to be rebaselined.
    1. For a given build failing the pixel tests, click the “stdio” link of the “pixel” step.
    2. The output will contain a link of the form http://chromium-browser-gpu-tests.commondatastorage.googleapis.com/view_test_results.html?242523_Linux_Release_Intel__telemetry
    3. Visit the link to see whether the generated or reference images look incorrect.
    4. All of the reference images for all of the bots are stored in Google Cloud Storage at https://storage.cloud.google.com/chromium-gpu-archive. They are under the folder “reference-images” and are indexed by version number, OS, GPU vendor, GPU device, and whether or not antialiasing is enabled in that configuration. You can download the reference images individually to examine them in detail.
  5. Rebaseline pixel test reference images if necessary.
    1. Follow the instructions on the GPU testing page.
    2. Alternatively, if absolutely necessary, you can use the Chrome Internal GPU Pixel Wrangling Instructions to delete just the broken reference images for a particular configuration.
  6. Update Telemetry-based test expectations if necessary.
    1. Most of the GPU tests are run inside a full Chromium browser, launched by Telemetry, rather than a Gtest harness. The tests and their expectations are contained in src/content/test/gpu/gpu_tests/. See, for example, webgl_conformance_expectations.py, gpu_process_expectations.py and pixel_expectations.py.
    2. See the header of the file for the list of modifiers used to specify a bot configuration. It is possible to specify the OS (down to a specific version, say, Windows 7 or Mountain Lion), the GPU vendor (NVIDIA/AMD/Intel), and a specific GPU device (an illustrative expectations entry appears after this list).
    3. The key is to maintain the highest possible coverage: if you have to disable a test, disable it only on the specific configurations on which it fails. Note that it is not possible to distinguish between Debug and Release configurations.
    4. Marking tests as failing or skipped, which suppresses flaky failures, should only be a last resort. It is really only necessary to suppress failures that show up on the GPU tryservers, since failing tests no longer close the Chromium tree.
    5. Please read the section on stamping out flakiness for motivation on how important it is to eliminate flakiness rather than hiding it.
  7. For the remaining Gtest-style tests, use the DISABLED_ modifier to suppress any failures if necessary.
  8. (Rarely) Update the version of the WebGL conformance tests. See below.
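
A small sketch of the URL substitution described in step 1.4 above; the base URL is the one linked from that step and is deliberately left as a placeholder here, as are the revisions:

    #!/usr/bin/env python
    # Sketch for step 1.4: substitute the revisions on either side of the
    # failure into the "range=[rev1]:[rev2]" query parameter. BASE_URL stands
    # in for the URL linked from that step and is not reproduced here.
    BASE_URL = '<URL from step 1.4>'
    rev1 = '<last known-good revision>'
    rev2 = '<first known-bad revision>'

    print('%s?range=%s:%s' % (BASE_URL, rev1, rev2))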
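
For step 6, the expectation files are plain Python, and entries typically live inside the expectations class's SetExpectations method. The exact helper names and condition tags can vary between files, so treat the following as an illustrative sketch (the test names, conditions, and bug numbers are made up) and copy the style of existing entries in the file you are editing:

    # Illustrative only -- mimic the existing entries in the expectations file
    # you are editing (e.g. webgl_conformance_expectations.py); these lines go
    # inside its SetExpectations method, and the names and bugs below are made up.

    # Mark a single WebGL conformance test as failing, but only on Windows 7
    # machines with a specific NVIDIA device, and link the tracking bug.
    self.Fail('conformance/textures/misc/texture-size.html',
              ['win7', ('nvidia', 0x104a)], bug=123456)

    # Skip a pixel test on all Mac AMD configurations.
    self.Skip('Pixel.Canvas2DRedBox', ['mac', 'amd'], bug=123457)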

When Bots Misbehave (SSHing into a bot)

  1. See the Chrome Internal GPU Pixel Wrangling Instructions for information on SSHing into the GPU bots.

Reproducing WebGL conformance test failures locally

  1. From the buildbot build output page, click on the failed shard to get to the swarming task page. Scroll to the bottom of the left panel for a command to run the task locally. This will automatically download the build and any other inputs needed.
  2. Alternatively, to run the test on a local build, pass the arguments “--browser=exact --browser-executable=/path/to/binary” to content/test/gpu/run_gpu_integration_test.py. Also see the telemetry documentation.
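
Putting option 2 together, a local invocation might look like the sketch below; the suite name and the browser binary path are assumptions to adjust for your checkout and platform:

    #!/usr/bin/env python
    # Sketch: run the WebGL conformance suite against a locally built browser.
    # The suite name ('webgl_conformance') and the executable path are
    # assumptions; the --browser flags come from the step above.
    import subprocess

    subprocess.check_call([
        'python', 'content/test/gpu/run_gpu_integration_test.py',
        'webgl_conformance',
        '--browser=exact',
        '--browser-executable=out/Release/chrome',  # placeholder path to your build
    ])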

Updating the WebGL Conformance Tests

  1. Use the src/tools/roll_webgl_conformance.py script to create a roll CL.
  2. The script will automatically start a CQ dry run of the roll, including some currently-optional tests like the WebGL 2.0 conformance suite.
  3. If any of the try jobs fail, update the WebGL conformance suite expectations to suppress failures as necessary. File bugs about the need for these suppressions so they can be removed in the future.
  4. Watch the GPU bots on the chromium.gpu and chromium.gpu.fyi waterfalls. There are more OS and GPU combinations on those waterfalls than the tryservers can reasonably cover, so an update to the WebGL conformance suite is likely to fail on one or more bots. Follow up the roll with any needed updates to the test expectations.

Extending the GPU Pixel Wrangling Rotation

See the Chrome Internal GPU Pixel Wrangling Instructions for information on extending the rotation.