
# recipes

This directory contains scripts that run on LUCI builders. These scripts take the form of recipes, a command-execution DSL. The recipes in this repo live in the `recipes/` and `recipe_modules/` subdirectories.

## Quick How-To-Modify-Recipes

  1. Make your changes to the recipes or recipe_modules that you need to modify.

  2. Run the simulator: `./recipes.py test train`.

    To retrain the expectations for a single recipe, run `./recipes.py test train --filter=<recipe-name>`.

    This runs through all the simulation inputs generated by the recipes' `GenTests` methods, executing the recipe code for each one and using the simulation inputs to mock the output of each step. The list of steps that the recipe would have run, given that simulation's inputs, is recorded as a JSON file in the recipe's `<recipe>.expected` folder.

  3. Upload the recipe changes as well as the new expectation files.

  4. Review the expectation file diffs to make sure they're actually what you intended. If your CL affects many expectation files, it's STRONGLY recommended to keep the CL as small as possible so that it's easy to correlate the code change with the expectation changes.
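In shell terms, the retraining loop above looks like this (`<recipe-name>` is a placeholder for whichever recipe you changed):

```shell
# Retrain all expectation files after editing recipes/ or recipe_modules/.
./recipes.py test train

# Or retrain just the expectations for one recipe.
./recipes.py test train --filter=<recipe-name>
```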

The recipe simulator insists on 100% line coverage of the recipes. If it complains about uncovered lines, add a new simulation case to the `GenTests` method of the recipe in question. Give it a name that describes the case you're trying to cover, and give it step output values that lead to that case. For example, 'report_bad_targets_on_compile_failure' might have the compile step exit with a return code of 1 and emit a JSON file containing the bad target names. You instruct the test in `GenTests` to do this, and when the simulator runs it, it uses your supplied return code and JSON file to simulate that step having those outputs.
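The `GenTests` pattern can be sketched with a toy harness. The real recipe engine supplies the `api` object; the `FakeApi` class below is a hypothetical stand-in whose method names mirror the engine's `api.test` and `api.step_data` but whose behavior is simplified for illustration, and the mocked step values are made up:

```python
# Hypothetical stand-in for the recipe engine's test API; only the
# method names mirror the real thing, the behavior is simplified.
class FakeApi:
    def test(self, name, *step_data):
        # Bundle a named simulation case with its mocked step outputs.
        return {"name": name, "steps": dict(step_data)}

    def step_data(self, step_name, retcode=0, output_json=None):
        # Mock one step's outputs: its return code and any JSON it emits.
        return (step_name, {"retcode": retcode, "output_json": output_json})


def GenTests(api):
    # Cover the path where compile fails and reports bad targets: the
    # simulator will feed these values to the recipe in place of
    # actually running the compile step.
    yield api.test(
        "report_bad_targets_on_compile_failure",
        api.step_data(
            "compile",
            retcode=1,
            output_json={"failures": ["base_unittests"]},
        ),
    )


cases = list(GenTests(FakeApi()))
```

When the simulator executes this case, the recipe code sees the compile step "fail" with the supplied return code and JSON, so the error-reporting path runs and gets counted as covered.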

Failing to train the expectations will cause the CQ to reject the patch: the CQ runs the simulation tests in test mode, which checks that the expectation files match the current recipe code but doesn't emit new JSON files when it finds a discrepancy.
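The difference between train mode and test mode can be modeled in a few lines. This is a toy illustration of the behavior described above, not the recipe engine's actual code; the step list is invented:

```python
import json
import pathlib
import tempfile


def train(expectation_file, steps):
    # Train mode: (re)write the expectation JSON to match the recipe.
    expectation_file.write_text(json.dumps(steps, indent=2))


def check(expectation_file, steps):
    # Test mode (what the CQ runs): compare, but never rewrite.
    recorded = json.loads(expectation_file.read_text())
    return recorded == steps


steps = [{"name": "compile", "cmd": ["ninja", "-C", "out/Release"]}]
with tempfile.TemporaryDirectory() as d:
    expectation = pathlib.Path(d) / "basic.json"
    train(expectation, steps)
    assert check(expectation, steps)       # expectations are up to date
    steps.append({"name": "upload", "cmd": ["upload"]})
    assert not check(expectation, steps)   # recipe changed; needs retraining
```

The CQ only ever runs `check`; it's up to you to run the train step locally and upload the resulting JSON with your CL.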

## Recipes documentation

The recipe code in this directory (modules in `recipe_modules/` and recipes in `recipes/`) has some documentation in the standard Python docstring style, which you can read directly in the files. The recipe system can also generate documentation for a set of recipes: running `./recipes.py doc` from this directory generates a `README.recipes.md` file, which may be easier to browse than the source files themselves.