The thesis on Sheriffing.
Sheriffs serve three main purposes:
Make sure tip-of-tree (ToT) breakages are identified and addressed in a timely fashion.
Manually watch over our build system in ways automation doesn't/can't do.
Give developers a chance to learn a little more about how the build works and breaks.
In exchange for the above, sheriffs are granted Code Review +2 access to the Chrome OS tree. Sheriffing is an important responsibility, and fulfilling it is what grants you CR+2 access.
There are many CQ runs going at the same time; those are the concern of the CL authors. Frequently (the following diagram says 10 minutes, but the interval changes over time; see CrOS LUCI Scheduler for the current interval), a builder called Annealing takes a snapshot of the tree and uprevs packages. Then, the postsubmit builder takes the most recent of these (the following diagram says every three hours, but it now runs continuously) and runs builds and hardware tests. There are also snapshot builds launched at a higher frequency, which may show tip-of-tree build breakages more quickly since they are launched and complete at a more rapid cadence.

As a sheriff, your primary concern is the status of the tip-of-tree as shown by the postsubmit/snapshot builds and tracked in Sheriff-o-Matic (see below). By fixing the tip-of-tree, you unblock CQ runs that are running from the same broken state of the tree.
As Chrome OS transitions to the 12-sheriff rotation (US sheriffs will transition starting in April 2021), the secondary sheriff rotation will be deprecated, and tip-of-tree release build health will be managed by the primary sheriffs as well.
This diagram attempts to capture this at an abstract level:
To add calendar entries to your own @google.com calendar, follow directions at go/oncall2calendar.
Gardeners are added and removed from a Ganpati group (Gardeners group, Shadow Gardeners group) by managers.
To swap rotations, all you need to do is manually edit the g3 oncall file and put the username of the person taking your shift where yours occurs. These direct links will open an editor for you to make the change immediately:
At the beginning of your stint as Sheriff, please perform the following tasks:
At the end of your stint as Sheriff, please attend the following week's handoff meeting.
In the diagram above, there's a purple arrow that proceeds off to the right. This represents the progress of the manifest-internal repository over time (as repos in the Chrome OS forest progress). The Annealing builder makes snapshots of this progress, and CQ and postsubmit builders run with one of these snapshots as their base commit. Postsubmit always uses the most recent snapshot. A CQ run uses one of the most recent snapshots as its base, but then also cherry-picks the CLs under test into itself. In both CQ and postsubmit, the whole build is controlled by a parent builder called the Orchestrator. This builder is responsible for starting child builders to produce OS test images and then, later, scheduling hardware and VM testing. This is represented by this diagram:
In the diagram above, the Orchestrator creates the child builders that build all OS images. Some of those builders analyze the patched-in changes (in CQ only) and decide that they do not have the change in their Portage graph; these are the Irrelevant Builds. The remaining Relevant Builds upload their finished OS test images to Google Storage. Their build process looks something like this simplified diagram:
When the Orchestrator notices that all child builds have completed, it schedules all hardware and VM testing at once. The Orchestrator will wait up to 20 hours for hardware test capacity in the lab to become available; in most cases, the wait isn't this long.
Finally, when everything is done, the exit status (success/failure) of the Orchestrator is based on whether or not the builds, HW tests, and VM tests all succeeded.
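If you want to browse the test-image artifacts that the relevant child builders upload to Google Storage, gsutil works from the command line. This is only a sketch: the bucket below is the usual one for Chrome OS image artifacts, but the exact per-builder path varies, so treat the links on each builder's build page as the source of truth.

$ gsutil ls gs://chromeos-image-archive/
# Drill down from there into the directory linked from the child builder's build page.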
CQ Orchestrator runs will only rerun those parts of the build that didn't succeed last time. Thus, if a particular builder was broken and the breakage is fixed in ToT (for example, by the sheriff reverting the offending change), the next time the user runs their change through the CQ, the Orchestrator will skip everything that succeeded and only run the failed bits.
One consequence is that if some HW tests keep failing because of a tree breakage, the CQ will only retry the test, without rebuilding the image. To force rebuilding some images, follow these instructions: Force CQ to rerun builds.
When Sheriffs encounter build failures on the public Chromium OS builder, they should follow this process:
Wrong! When the tree is green, it's a great time to start investigating and fixing all the niggling things that make it hard to sheriff the tree.
If a bug in Chrome does not get caught by the PUpr CQ run, you should first engage with the Chrome gardener. They are responsible for helping to find and fix or revert the Chrome change that caused the problem.
If the Chrome bug is serious enough to be causing failures on the canaries or the CQ, you should work with the gardener to revert the Chrome uprev:
$BUG
If you've found a commit that broke the build, you can revert it using these steps:
Add the line “Exempt-From-Owner-Approval:” to your commit message along with a brief explanation of why you are bypassing OWNERS. Your CL will still require a Code-Review +2 vote from someone other than yourself: Ask for a reviewer in CrOS Oncall if you don't have someone handy.
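For reference, the tail of such a revert's commit message might look something like this (a sketch; the bug reference and wording are purely illustrative):

This broke the ToT postsubmit builders; reverting to unblock the tree.

BUG=b:123456
Exempt-From-Owner-Approval: sheriff revert to fix a ToT build breakage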
There are several common reasons why the VMTests fail. First pull up the stdio link for the VMTest stage, and then check for each of the possibilities below.
Once you've got the VMTest stage's stdio output loaded, search for ‘Total PASS’. This will get you to the Autotest test summary. You'll see something like
Total PASS: 29/33 (87%)
Assuming the number is less than 100%, there was a failure in one of the autotests. Scroll backwards from ‘Total PASS’ to identify the specific test (or tests) that failed. You'll see something like this:
/tmp/cbuildbotXXXXXX/test_harness/all/SimpleTestUpdateAndVerify/<...>/login_CryptohomeMounted [ FAILED ]
/tmp/cbuildbotXXXXXX/test_harness/all/SimpleTestUpdateAndVerify/<...>/login_CryptohomeMounted
  FAIL: Unhandled JSONInterfaceError: Automation call {'username': 'performancetestaccount@gmail.com', 'password': 'perfsmurf', 'command': 'Login'} received empty response. Perhaps the browser crashed.
In this case, Chrome failed to log in for one of three reasons: 1) it could not find a network, 2) it could not get online, 3) it could not show the WebUI login prompt. Look for the Chrome log in /var/log/chrome/chrome, or find someone who works on UI.
(If you're annoyed by the long string before the test name, please consider working on crbug.com/313971 when you're gardening.)
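If you've downloaded the VMTest stage's output to your workstation, a couple of greps will jump you to the same spots described above (a minimal sketch; the local filename is just an assumption):

$ grep -n 'Total PASS' vmtest_stdio.log
$ grep -nE 'FAILED|FAIL:' vmtest_stdio.log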
Sometimes, all the tests will pass, but one or more processes crashed during the test. Not all crashes are failures, as some tests are intended to test the crash system. However, if a problematic crash is detected, the VMTest stdio output will have something like this:
Crashes detected during testing:
----------------------------------------------------------
chrome sig 11
  login_CryptohomeMounted
If there is a crash, proceed to the next section, “How to find test results/crash reports”.
The x86-generic-asan and amd64-generic-asan builders instrument some programs (e.g. Chrome) with code to detect memory access errors. When an error is detected, ASAN prints error information, and terminates the process. Similarly to crashes, it is possible for all tests to pass even though a process terminated.
If Chrome triggers an ASAN error report, you'll see the message “Asan crash occurred. See asan_logs in Artifacts”. As suggested in that message, you should download “asan_logs”. See the next section, “How to find test results/crash reports” for details on how to download those logs.
Note: in addition to Chrome, several system daemons (e.g. shill) are built with ASAN instrumentation. However, we don't yet bubble up those errors in the test report. See crbug.com/314678 if you're interested in fixing that.
The test framework needs to log in to the VM in order to do things like execute tests and download log files. Sometimes this fails, and in these cases we have no logs to work from, so we need the VM disk image instead.
You'll know that you're in this case if you see messages like this:
Connection timed out during banner exchange
Connection timed out during banner exchange
Failed to connect to virtual machine, retrying ...
When this happens, look in the build report for “vm_disk” and “vm_image” links. These should be right after the “stdio” link. For example, if you're looking at the build report for “lumpy nightly chrome PFQ Build #3977”:
Download the disk and memory images, and then resume the VM using kvm on your workstation.
$ tar --use-compress-program=pbzip2 -xf \
    failed_SimpleTestUpdateAndVerify_1_update_chromiumos_qemu_disk.bin.8Fet3d.tar
$ tar --use-compress-program=pbzip2 -xf \
    failed_SimpleTestUpdateAndVerify_1_update_chromiumos_qemu_mem.bin.TgS3dn.tar
$ cros_start_vm \
    --image_path=chromiumos_qemu_disk.bin.8Fet3d \
    --mem_path=chromiumos_qemu_mem.bin.TgS3dn
You should now have a VM which has resumed at exactly the point where the test framework determined that it could not connect.
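At that point you can usually SSH into the resumed VM to poke around. The sketch below assumes the commonly used local SSH forwarding port (9222) and a test image; check the cros_start_vm output for the actual port on your machine:

$ ssh -o StrictHostKeyChecking=no -p 9222 root@localhost
# If 9222 doesn't respond, use the port printed by cros_start_vm;
# test images accept the standard Chrome OS test SSH keys.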
Note that, at this time, we don't have an easy way to mount the VM filesystem without booting it. If you're interested in improving that, please see crbug.com/313484.
For more information about troubleshooting VMs, see how to run Chrome OS image under VMs.
The complete results from VMTest runs are available on Google Storage, by clicking the [ Artifacts ] link in-line on the waterfall display in the report section:
From there, you should see a file named chrome.*.dmp.txt that contains the crash log. Example
If you see a stack trace here, search for issues with a similar call stack and add the google storage link, or file a new issue.
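If you've instead downloaded the artifacts to a local directory, a quick find will locate the crash logs (a sketch; run it from wherever you saved the artifacts):

$ find . -name 'chrome.*.dmp.txt'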
Normally, you should never need to extract stack traces manually, because they will be included in the Artifacts link, as described above. However, if you need to, here's how:
minidump_stackwalk [filename].dmp debug/breakpad > stack.txt 2>/dev/null
If you successfully retrieve a stack trace, search for issues with a similar call stack and add the Google Storage link, or file a new issue. Note that in addition to breakpad dmp files, the test_results.tgz also has raw Linux core files. These can be loaded into gdb and can often produce better stack traces than minidump_stackwalk (e.g., expanding all inlined frames).
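For the core-file route, a minimal gdb session looks something like this. It's only a sketch: the chroot build path, board, and core filename are assumptions, and for non-x86 boards you may need the board's cross-gdb rather than the host gdb:

$ tar -xzf test_results.tgz
$ gdb /build/${BOARD}/opt/google/chrome/chrome path/to/core.chrome.12345
(gdb) bt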
Probably nothing. Most of the time, when a child builder is purple, that just indicates that it is already tracked by the CI oncaller. Try pinging the CI oncaller on go/cros-oncall-weekly.
This test searches through all ELF binaries on the image and identifies binaries that have not been compiled with the correct hardened flags.
To find out what test is failing and how, look at the *.DEBUG log in your autotest directory. Do a grep -A10 FAILED *.DEBUG. You will find something like this:
05/08 09:23:33 DEBUG|platform_T:0083| Test Executable Stack 2 failures, 1 in allowlist, 1 in filtered, 0 new passes
FAILED: /opt/google/chrome/pepper/libnetflixplugin2.so
05/08 09:23:33 ERROR|platform_T:0250| Test Executable Stack 1 failures
FAILED: /path/to/binary
This means that the test called “Executable Stack” reported 2 failures, there is one entry in the allowlist for this test, and after filtering the failures through the allowlist, one failing file remains. The name of that file is /path/to/binary.
The “new passes” indicate files that are in the allowlist but passed this time.
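To double-check a binary reported by the Executable Stack check by hand, you can inspect its GNU_STACK program header (a sketch; point it at the file from the failure log):

$ readelf -lW /path/to/binary | grep GNU_STACK
# A flags column of "RW" is fine; "RWE" means the stack is executable.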
To find the owner who wrote this test, do a git blame on this file: https://chromium.googlesource.com/chromiumos/third_party/autotest/+blame/HEAD/client/site_tests/platform_ToolchainOptions/platform_ToolchainOptions.py and grep for the test name (“Executable Stack” in this case).
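If you already have a local checkout, the same lookup can be done on the command line (a sketch; the path below is the usual location of autotest in a Chrome OS checkout):

$ cd ~/chromiumos/src/third_party/autotest/files
$ git blame client/site_tests/platform_ToolchainOptions/platform_ToolchainOptions.py \
    | grep "Executable Stack"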
Find the change that added the new binary that fails the test, or changed compiler options for a package such that the test now fails, and revert it. File an issue on the author with the failure log, and CC the owner of the test (found by git blame above).
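To help pinpoint the offending change, git log on the relevant overlay or package directory shows when compiler flags or a new binary were introduced (a sketch; the overlay path and package name are illustrative):

$ cd ~/chromiumos/src/third_party/chromiumos-overlay
$ git log -p -- chromeos-base/some-package/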
Visit go/arc++docs and see the Contact Information section.
When you see an error like:
NotEnoughDutsError: Not enough DUTs for board: <board>, pool: <pool>; required: 4, found: 3, suite: au, build: <build>
Will return from run_suite with status: INFRA_FAILURE
Contact the on-duty Deputy to balance the pools. Sheriffs are responsible for ensuring that there aren't bad changes that continue to take out DUTs; Deputies are responsible for DUT allocation.
This test validates all processes running as root against an allowlist of processes that can run as root. It also verifies that a baseline list of processes are running with the correct set of permissions (no privilege escalation).
When you see failures in this test, file a bug and try to identify the owners of the processes that are running as root. They must fix their code so that it is properly sandboxed. Sometimes this test can fail due to test ordering (since some tests may not clean up after themselves nicely); contact the test owners for help in that case.
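When triaging which owner to file against, it can help to list what is actually running as root on the DUT or a local test image (a sketch; run on the device):

$ ps -eo user,pid,comm | awk '$1 == "root"'
# If the device's ps doesn't support -eo, `ps aux` piped through the same awk filter works too.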
DO NOT mark this test as informational. It enforces an important security property of Chrome OS.
Other handy links to information: