Buildbot Testing Configuration Files

The files in this directory control how tests are run on the Chromium buildbots. In addition to specifying what tests run on which builders, they also specify special arguments and constraints for the tests.

Adding a new test suite?

The bar for adding new test suites is high. New test suites add link time on the builders and require sending additional binaries to the swarming bots. This is especially onerous for large suites such as browser_tests (more than 300 MB as of this writing). Unless there is a compelling reason to have a standalone suite, include your tests in existing test suites. For example, all InProcessBrowserTests should be in browser_tests, and any unit tests in components should be in components_unittests.

A tour of the directory

  • <master_name>.json -- buildbot configuration json files. These are used to configure what tests are run on what builders, in addition to specifying builder-specific arguments and parameters. They are now autogenerated, mainly using the generate_buildbot_json tool in this directory.
  • generate_buildbot_json.py -- generates most of the buildbot json files in this directory, based on data contained in the waterfalls.pyl, test_suites.pyl, and test_suite_exceptions.pyl files.
  • waterfalls.pyl -- describes the bots on the various waterfalls, and which test suites they run. By design, this file can only refer (by name) to test suites that are defined in test_suites.pyl.
  • test_suites.pyl -- describes the test suites that are referred to by waterfalls.pyl. A test suite describes groups of tests that are run on one or more bots.
  • test_suite_exceptions.pyl -- describes exceptions to the test suites, for example excluding a particular test from running on one bot. The goal is to have very few or no exceptions, which is why this information is factored into a separate file.
  • gn_isolate_map.pyl -- maps Ninja build target names to GN labels. Allows for the overrides needed to get certain test targets to work with GN (and to run properly when isolated).
  • trybot_analyze_config.json -- used to provide exclusions to the analyze step on trybots.
  • filters/ -- filters out tests that shouldn't be run in a particular mode.
  • timeouts.py -- calculates acceptable timeouts for tests by analyzing their execution on swarming.
  • manage.py -- makes sure the buildbot configuration json is in a standardized format.

How the files are consumed

Buildbot configuration json

Logic in the Chromium recipe looks up each builder for each master, and test generators in chromium_tests/steps.py parse the data. For example, as of a6e11220, generate_gtest parses any entry in a builder's 'gtest_tests' entry.

Making changes

All of the JSON files in this directory are autogenerated. The “how to use” section below describes the main tool, generate_buildbot_json.py, which manages most of the waterfalls. It's no longer possible to hand-edit the JSON files; presubmit checks forbid doing so.

Note that trybots mirror regular waterfall bots, with the mapping defined in trybots.py. This means that, as of 81fcc4bc, if you want to edit linux_android_rel_ng, you actually need to edit Android Tests.

Trying the changes on trybots

You should be able to try build changes that affect the trybots directly (for example, adding a test to linux_android_rel_ng should show up immediately in your tryjob). Non-trybot changes have to be landed manually :(.

Capacity considerations when editing the configuration files

When adding tests or bumping timeouts, care must be taken to ensure the infrastructure has capacity to handle the extra load. This is especially true for the established Chromium CQ builders, as they operate under strict execution requirements. Make sure to get a resource owner or a member of Chrome Browser Core EngProd to sign off that there is both builder and swarmed test shard capacity available.

In particular, pay attention to the capacity of the builder which compiles and then triggers and collects swarming task shards. If you're adding a new test suite to a bot, and you know that the suite adds one hour of testing time to the swarming shards, and you know that you have enough swarmed capacity to handle that extra hour, that's a good start. But if that test also happens to run in shards which take 10 minutes longer than any other shards on that bot, then the top-level builder will also take 10 minutes longer to run -- or 20 minutes longer if there are failures and retries. Ensure that the builder pool has enough capacity to handle that increase as well.

How to use the generate_buildbot_json tool

Test suites

Basic test suites

The test_suites.pyl file describes groups of tests that run on bots -- both waterfalls and trybots. In order to specify that a test like base_unittests runs on a bot, it must be put inside a test suite. This organization helps enforce sharing of test suites among multiple bots.

An example of a simple test suite:

'basic_chromium_gtests': {
  'base_unittests': {},
}

If a bot in waterfalls.pyl refers to the test suite basic_chromium_gtests, then that bot will run base_unittests.

The test's name is usually both the build target and the name under which the test appears in the steps that the bot runs. However, this can be overridden using dictionary arguments like test and isolate_name; see below.

The dictionary following the test's name can contain multiple entries that affect how the test runs. Generally speaking, these are copied verbatim into the generated JSON file. Commonly used arguments include the following; a combined sketch appears after the type-specific arguments below:

  • args: an array of command line arguments for the test.

  • ci_only: a boolean (True|False) indicating whether the test should run only post-submit on the continuous integration (CI) builders, instead of running both post-submit and on any matching pre-submit / CQ / try builders. This flag should be set rarely, usually only temporarily to manage capacity concerns during an outage.

  • swarming: a dictionary of Swarming parameters. Note that these will be applied to every bot that refers to this test suite. It is often more useful to specify the Swarming dimensions at the bot level, in waterfalls.pyl. More on this below.

    • can_use_on_swarming_builders: if set to False, disables running this test on Swarming on any bot.

    • idempotent: if set to False, prevents Swarming from reusing the results of a previous, identical run of the same test. See task deduplication for more info.

  • experiment_percentage: an integer indicating that the test should be run as an experiment in the given percentage of builds. Tests running as experiments will not cause the containing builds to fail. Values should be in [0, 100] and will be clamped accordingly.

  • android_swarming: Swarming parameters to be applied only on Android bots. (This feature was added mainly to match the original handwritten JSON files, and further use is discouraged. Ideally it should be removed.)

Arguments specific to GTest-based tests:

  • test: the target to build and run, if different from the test's name. This allows the same test to be run multiple times on the same bot with different command line arguments or Swarming dimensions, for example.

Arguments specific to isolated script tests:

  • isolate_name: the target to build and run, if different from the test's name.

There are other arguments specific to other test types (script tests, JUnit tests); consult the generator script and test_suites.pyl for more details and examples.
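
For instance, a suite combining several of these arguments, including the GTest-specific test override, might look like the following sketch (the suite name, step names, flags, and shard counts are hypothetical, not taken from the real test_suites.pyl):

# Hypothetical basic suite illustrating common arguments.
'example_gtests': {
  'base_unittests': {
    'args': [
      '--some-flag',
    ],
    'swarming': {
      'shards': 2,
    },
  },
  # Runs the same binary again under a different step name,
  # using the GTest-specific 'test' key.
  'base_unittests_variant': {
    'test': 'base_unittests',
    'args': [
      '--another-flag',
    ],
  },
}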

Compound test suites

Composition test suites

One level of grouping of test suites is composition test suites. A composition test suite is an array whose contents must all be names of individual test suites. Composition test suites may not refer to other composition or matrix compound test suites. This restriction is by design. First, adding multiple levels of indirection would make it more difficult to figure out which bots run which tests. Second, having only one minimal grouping construct motivates authors to simplify the configurations of tests on the bots and reduce the number of test suites.

An example of a composition test suite:

'common_gtests': {
  'base_unittests': {},
},

'linux_specific_gtests': {
  'x11_unittests': {},
},

# Composition test suite
'linux_gtests': [
  'common_gtests',
  'linux_specific_gtests',
],

A bot referring to linux_gtests will run both base_unittests and x11_unittests.

Matrix compound test suites

Another level of grouping of basic test suites is the matrix compound test suite. A matrix compound test suite is a dictionary mapping references to basic test suites (keys) to configurations (values). Matrix compound test suites have the same restrictions as composition test suites: they cannot reference other composition or matrix test suites. Configurations defined for a basic test suite within a matrix test suite are applied to every test in the referenced basic test suite. 'variants' is the only configuration key supported in matrix compound suites at this time. A referenced basic suite may also omit 'variants' entirely, so if you need a compound suite in which some of the basic suites have variants and others do not, define a matrix compound test suite.

Variants

'variants' is a top-level group introduced in matrix compound suites that allows targeting a test against multiple variants. Each variant supports args, mixins, and swarming definitions. When variants are defined, args, mixins, and swarming are not also specified at the same level.

Args, mixins, and swarming configurations that are defined by both the test suite and variants are merged together. Args and mixins are lists, and thus are appended together. Swarming configurations follow the same merge process: dimension sets are merged via the existing dictionary merge behavior, and other keys are appended.

identifier is a required key for each variant and is used to make the test name unique. Each test generated from the resulting .json file is identified uniquely by name, so the identifier is appended to the test name in the format 'test_name' + '_' + 'identifier'.

For example, iOS requires running a test suite against multiple devices. If we have the following basic test suite:

'ios_eg2_tests': {
  'basic_unittests': {
    'args': [
      '--some-arg',
    ]
  }
}

and a matrix compound suite with this variants definition:

'matrix_compound_test': {
  'ios_eg2_tests': {
    'variants': [
      {
        'args': [
          '--platform',
          'iPhone X',
          '--version',
          '13.3'
        ],
        'identifier': 'iPhone_X_13.3',
      },
      {
        'identifier': 'device_iPhone_X_13.3',
        'swarming': {
          'dimension_sets': [
            {
              'os': 'iOS-iPhone10,3'
            }
          ]
        }
      }
    ]
  }
}

we can expect the following output:

{
  'args': [
    '--some-arg',
    '--platform',
    'iPhone X',
    '--version',
    '13.3'
  ],
  'merge': {
    'args': [],
    'script': 'some/merge/script.py',
  },
  'name': 'basic_unittests_iPhone_X_13.3',
  'test': 'basic_unittests'
},
{
  'args': [
    '--some-arg'
  ],
  'merge': {
    'args': [],
    'script': 'some/merge/script.py',
  },
  'name': 'basic_unittests_device_iPhone_X_13.3',
  'swarming': {
    'dimension_sets': [
      {
        'os': 'iOS-iPhone10,3'
      }
    ]
  },
  'test': 'basic_unittests'
}

Due to limitations of the merging algorithm, merging dimension sets fails when the matrix test suite defines more dimension sets than the basic test suite. On failure, the user is notified of an error merging the list key dimension_sets.
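
For example (a hypothetical sketch; the suite names and dimension values are made up), the following combination would fail, because the variant supplies two dimension sets while the basic suite supplies only one:

# Basic suite: a single dimension set.
'foo_tests': {
  'bar_unittests': {
    'swarming': {
      'dimension_sets': [
        {'os': 'Ubuntu-18.04'},
      ],
    },
  },
},

# Matrix compound suite: the variant defines two dimension sets,
# more than the basic suite's one, so generation fails with an
# error merging the list key 'dimension_sets'.
'matrix_foo': {
  'foo_tests': {
    'variants': [
      {
        'identifier': 'multi_gpu',
        'swarming': {
          'dimension_sets': [
            {'gpu': '10de:1cb3'},
            {'gpu': '8086:5912'},
          ],
        },
      },
    ],
  },
},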

Waterfalls

waterfalls.pyl describes the waterfalls, the bots on those waterfalls, and the test suites which those bots run.

A bot can specify a swarming dictionary including dimension_sets. These parameters are applied to all tests that are run on this bot. Since most bots run their tests on Swarming, this is one of the mechanisms that dramatically reduces redundancy compared to maintaining the JSON files by hand.

A waterfall is a dictionary containing the following:

  • name: the waterfall's name, for example 'chromium.win'.
  • machines: a dictionary mapping machine names to dictionaries containing bot descriptions.

Each bot's description is a dictionary containing the following; a combined sketch follows this list:

  • additional_compile_targets: if specified, an array of compile targets to build in addition to those for all of the tests that will run on this bot.

  • test_suites: a dictionary optionally containing any of the following kinds of tests. Each value is a string referring to either a basic or a composition test suite from test_suites.pyl.

    • gtest_tests: GTest-based tests (or other kinds of tests that emulate the GTest-based API), which can be run either locally or under Swarming.
    • isolated_scripts: Isolated script tests. These are bundled into an isolate, invoke a wrapper script from src/testing/scripts as their top-level entry point, and are used to adapt to multiple kinds of test harnesses. These must implement the Test Executable API and can also be run either locally or under Swarming.
    • junit_tests: (Android-specific) JUnit tests. These are not run under Swarming.
    • scripts: Legacy script tests living in src/testing/scripts. These also are not (and usually cannot be) run under Swarming. These types of tests are strongly discouraged.
  • swarming: a dictionary specifying Swarming parameters to be applied to all tests that run on the bot.

  • os_type: the type of OS this bot tests. Currently the only useful value is 'android', which enables emission of certain Android-specific entries into the JSON files.

  • skip_cipd_packages: (Android-specific) when True, disables emission of the 'cipd_packages' Swarming dictionary entry. Not commonly used; further use is discouraged.

  • skip_merge_script: (Android-specific) when True, disables emission of the 'merge' script key. Not commonly used; further use is discouraged.

  • skip_output_links: (Android-specific) when True, disables emission of the 'output_links' Swarming dictionary entry. Not commonly used; further use is discouraged.

  • use_swarming: can be set to False to disable Swarming on a bot.
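
Putting these keys together, a waterfall entry might look like the following sketch (the waterfall, bot, and suite names are hypothetical; real entries live in waterfalls.pyl):

{
  'name': 'chromium.example',
  'machines': {
    'Example Linux Tester': {
      # Applied to every test that runs on this bot.
      'swarming': {
        'dimension_sets': [
          {
            'os': 'Ubuntu-18.04',
            'cpu': 'x86-64',
          },
        ],
      },
      'test_suites': {
        # Both values name suites defined in test_suites.pyl.
        'gtest_tests': 'example_gtests',
        'isolated_scripts': 'example_isolated_scripts',
      },
    },
  },
},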

Test suite exceptions

test_suite_exceptions.pyl contains specific exceptions to the general rules about which tests run on which bots described in test_suites.pyl and waterfalls.pyl.

In general, the design should be to have no exceptions. Roughly speaking, all bots should be treated identically, and ideally, the same set of tests should run on each. In practice, of course, this is not possible.

The test suite exceptions can only be used to remove tests from a bot, modify how a test is run on a bot, or remove keys from a test's specification on a bot. The exceptions cannot be used to add a test to a bot. This restriction is by design, and helps prevent shortcuts in the design of test suites that would make the test descriptions unmaintainable. (The number of exceptions needed to describe Chromium's waterfalls in their previous hand-maintained state has already gotten out of hand, and a concerted effort should be made to eliminate them wherever possible.)

The exceptions file supports the following options per test; a sketch combining remove_from and modifications follows the replacements example below:

  • remove_from: a list of bot names on which this test should not run. Currently, bots on different waterfalls that have the same name can be disambiguated by appending the waterfall's name: for example, Nougat Phone Tester chromium.android.

  • modifications: a dictionary mapping a bot's name to a dictionary of modifications that should be merged into the test's specification on that bot. This can be used to add additional command line arguments, Swarming parameters, etc.

  • replacements: a dictionary mapping bot names to dictionaries of field names to dictionaries of key/value pairs to replace. If the given value is None, the key is simply removed. For example:

    'foo_tests': {
      'Foo Tester': {
        'args': {
          '--some-flag': None,
          '--another-flag': 'some-value',
        },
      },
    }
    

    would remove --some-flag and replace whatever value --another-flag was set to with some-value. Note that passing None only works if the flag being removed either has no value or is in the --key=value format. It does not work if the key and value are two separate entries in the args list.
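
A sketch of the other two options (the bot and test names are hypothetical):

'foo_tests': {
  # Don't generate this test for these bots at all.
  'remove_from': [
    'Example Linux Tester',
    # Disambiguated by waterfall name, as described above.
    'Example Tester chromium.example',
  ],
  # Merged into the test's specification on this bot only.
  'modifications': {
    'Example Mac Tester': {
      'args': [
        '--another-flag',
      ],
      'swarming': {
        'shards': 4,
      },
    },
  },
},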

Order of application of test changes

A test's final JSON description comes from the following, in order; a worked sketch follows this list:

  • The dictionary specified in test_suites.pyl. This is used as the starting point for the test's description on all bots.

  • The specific bot's description in waterfalls.pyl. This dictionary is merged into the test's dictionary. For example, the bot's Swarming parameters will override those specified for the test.

  • Any exceptions specified per-bot in test_suite_exceptions.pyl. For example, any additional command line arguments will be merged in here. Any Swarming dictionary entries specified here will override both those specified in test_suites.pyl and waterfalls.pyl.
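
As a worked sketch of that ordering (all names are hypothetical): suppose test_suites.pyl gives a test a base argument, waterfalls.pyl gives the bot Swarming dimensions, and test_suite_exceptions.pyl adds one more argument on that bot:

# 1. test_suites.pyl: the starting point for the test's description.
'foo_unittests': {
  'args': ['--base-arg'],
},

# 2. waterfalls.pyl: the bot's Swarming parameters are merged in.
'Example Tester': {
  'swarming': {
    'dimension_sets': [{'os': 'Ubuntu-18.04'}],
  },
  'test_suites': {'gtest_tests': 'suite_containing_foo'},
},

# 3. test_suite_exceptions.pyl: per-bot modifications are merged last.
'foo_unittests': {
  'modifications': {
    'Example Tester': {
      'args': ['--extra-arg'],
    },
  },
},

# Resulting generated entry for 'Example Tester' (roughly):
# {
#   'args': ['--base-arg', '--extra-arg'],
#   'swarming': {'dimension_sets': [{'os': 'Ubuntu-18.04'}]},
#   'test': 'foo_unittests'
# }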

Tips when making changes to the bot and test descriptions

In general, the only specialization of test suites that should be necessary is per operating system. If you add a new test to the bots and find yourself adding lots of exceptions to exclude the test from all bots of one particular type (like Android, Chrome OS, etc.), here are options to consider:

  • Look for a different test suite to add it to -- such as one that runs everywhere except on that OS type.

  • Add a new test suite that runs on all of the OS types where your new test should run, and add that test suite to the composition test suites referenced by the appropriate bots.

  • Split one of the existing test suites into two, and add the newly created test suite (including your new test) to all of the bots except those which should not run the new test.

If adding a new waterfall, or a new bot to a waterfall, please avoid adding new test suites. Instead, refer to one of the existing ones that is most similar to the new bot(s) you are adding. There should be no need to continue over-specializing the test suites.

If you see an opportunity to reduce redundancy or simplify test descriptions, please consider making a contribution to the generate_buildbot_json script or the data files. Some examples might include:

  • Automatically doubling the number of shards on Debug bots, by describing to the tool which bots are debug bots. This could eliminate the need for a lot of exceptions.

  • Specifying a single hard_timeout per bot, and eliminating all per-test timeouts from test_suites.pyl and test_suite_exceptions.pyl.

  • Merging some test suites. When the generator tool was written, the handwritten JSON files were replicated essentially exactly. There are many opportunities to simplify the configuration of which tests run on which bots. For example, there's no reason why the top-of-tree Clang bots should run more tests than the bots on other waterfalls running the same OS.

dpranke, jbudorick or kbr will be glad to review any improvements you make to the tools. Thanks in advance for contributing!