Repo documentation for recipe_engine

Table of Contents

Recipe Modules

  • archive
  • assertions
  • buildbucket — API for interacting with the buildbucket service.
  • cipd — API for interacting with CIPD.
  • commit_position
  • context — The context module provides APIs for manipulating a few pieces of ‘ambient’ data that affect how steps are run.
  • cq
  • file — File manipulation (read/write/delete/glob) methods.
  • generator_script — A simplistic method for running steps generated by an external script.
  • isolated
  • json — Methods for producing and consuming JSON.
  • led
  • path — All functions related to manipulating paths in recipes.
  • platform — Mockable system platform identity functions.
  • properties — Provides access to the recipes input properties.
  • python — Provides methods for running python scripts correctly.
  • random — Allows randomness in recipes.
  • raw_io — Provides objects for reading and writing raw data to and from steps.
  • runtime
  • scheduler — API for interacting with the LUCI Scheduler service.
  • service_account — API for getting OAuth2 access tokens for LUCI tasks or private keys.
  • source_manifest
  • step — Step is the primary API for running steps (external programs, scripts, etc.).
  • swarming
  • tempfile — Simplistic temporary directory manager (deprecated).
  • time — Allows mockable access to the current time.
  • tricium — API for Tricium analyzers to use.
  • url — Methods for interacting with HTTP(s) URLs.
  • uuid — Allows test-repeatable access to a random UUID.

Recipes

Recipe Modules

recipe_modules / archive

DEPS: json, path, platform, python, step

class ArchiveApi(RecipeApi):

Provides steps to manipulate archive files (tar, zip, etc.).

def extract(self, step_name, archive_file, output, mode='safe', include_files=()):

Step to uncompress |archive_file| into |output| directory.

Archive will be unpacked to |output| so that root of an archive is in |output|, i.e. archive.tar/file.txt will become |output|/file.txt.

Step will FAIL if |output| already exists.

Args:

  • step_name (str): display name of a step.
  • archive_file (Path): path to an archive file to uncompress, MUST exist.
  • output (Path): path to a directory to unpack to, MUST NOT exist.
  • mode (str): Must be either ‘safe’ or ‘unsafe’. In safe mode, if the archive attempts to extract files which would escape the extraction output location, the extraction fails with a StepException that carries a StepException.archive_skipped_files member listing the skipped files; all other files are extracted normally. In ‘unsafe’ mode, tarfiles containing paths escaping |output| are extracted as-is.
  • include_files (List[str]) - A list of globs matching files within the archive; any file not matching one of these globs is skipped. If omitted (the default), all files are extracted. Globs are matched with the fnmatch module, so a file named “filename” in the archive matches the glob “file*”. All paths seen by the matcher are converted to posix style (forward slashes).
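
For illustration, a minimal call as it might appear inside RunSteps; the step name, archive path, output directory and glob below are hypothetical:

api.archive.extract(
    'extract third_party',
    api.path['start_dir'].join('third_party.tar.gz'),
    api.path['start_dir'].join('third_party'),
    mode='safe',
    include_files=['include/*'])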

def package(self, root):

Returns Package object that can be used to compress a set of files.

Usage:

# Archive root/file and root/directory/**
(api.archive.package(root).
    with_file(root.join('file')).
    with_dir(root.join('directory')).
    archive('archive step', output, 'tbz'))

# Archive root/**
zip_path = (
  api.archive.package(root).
  archive('archive step', api.path['start_dir'].join('output.zip'))
)

Args:

  • root: a directory that will become the root of the package; all files added to the archive must be Paths under this directory. If no files or directories are added with with_file or with_dir, the entire root directory is packaged.

Returns: Package object.

recipe_modules / assertions

class AssertionsApi(RecipeApi):

Provides access to the assertion methods of the python unittest module.

Asserting non-step aspects of code (return values, non-step side effects) is expressed more naturally by making assertions within the RunSteps function of the test recipe. This api provides access to the assertion methods of unittest.TestCase to be used within test recipes.

All non-deprecated assertion methods of unittest.TestCase can be used.

An enhancement to the assertion methods is that if a custom msg is used, values for the non-msg arguments can be substituted into the message using named substitution with the format method of strings, e.g. api.assertions.assertEqual(0, 1, ‘{first} should be {second}’) will raise an AssertionError with the message: ‘0 should be 1’.

The attributes longMessage and maxDiff are supported and have the same behavior as the unittest module.

Example (.../recipe_modules/my_module/tests/foo.py):

DEPS = [
    'my_module',
    'recipe_engine/assertions',
    'recipe_engine/properties',
    'recipe_engine/runtime',
]

def RunSteps(api):
  # Behavior of foo depends on whether build is experimental
  value = api.my_module.foo()
  expected_value = api.properties.get('expected_value')
  api.assertions.assertEqual(value, expected_value)

def GenTests(api):
  yield (
      api.test('basic')
      + api.properties(expected_value='normal value')
  )

  yield (
      api.test('experimental')
      + api.properties(expected_value='experimental value')
      + api.properties(is_luci=True, is_experimental=True)
  )

recipe_modules / buildbucket

DEPS: json, path, platform, properties, raw_io, runtime, step, uuid

API for interacting with the buildbucket service.

Requires buildbucket command in $PATH: https://godoc.org/go.chromium.org/luci/buildbucket/client/cmd/buildbucket

url_title_fn parameter used in this module is a function that accepts a build_pb2.Build and returns a link title. If it returns None, the link is not reported. Default link title is build id.

class BuildbucketApi(RecipeApi):

A module for interacting with buildbucket.

@property
def bucket_v1(self):

Returns bucket name in v1 format.

Mostly useful for scheduling new builds using V1 API.

@property
def build(self):

Returns current build as a buildbucket.v2.Build protobuf message.

For value format, see Build message in build.proto.

DO NOT MODIFY the returned value. Do not implement conditional logic on returned tags; they are for indexing. Use returned build.input instead.

Pure Buildbot support: to simplify transition to buildbucket, returns a message even if the current build is not a buildbucket build. Provides as much information as possible. Some fields may be left empty, violating the rules described in the .proto files. If the current build is not a buildbucket build, returned build.id is 0.

@property
def build_id(self):

DEPRECATED, use build.id instead.

@property
def build_input(self):

DEPRECATED, use build.input instead.

def build_url(self, host=None, build_id=None):

Returns url to a build. Defaults to current build.

@property
def builder_cache_path(self):

Path to the builder cache directory.

Such a directory can be used to cache builder-specific data. It remains on the bot from build to build. See “Builder cache” in https://chromium.googlesource.com/infra/luci/luci-go/+/master/buildbucket/proto/project_config.proto

@property
def builder_id(self):

Deprecated. Use build.builder instead.

@property
def builder_name(self):

Returns builder name. Shortcut for .build.builder.builder.

def cancel_build(self, build_id, **kwargs):

def collect_build(self, build_id, mirror_status=False, **kwargs):

Shorthand for collect_builds below, but for a single build only.

Args:

  • build_id: Integer ID of the build to wait for.
  • mirror_status: Set step status to build status.

Returns: The Build message for the ended build.

def collect_builds(self, build_ids, interval=None, timeout=None, step_name=None, raise_if_unsuccessful=False):

Waits for a set of builds to end and returns their details.

Args:

  • build_ids: List of build IDs to wait for.
  • interval: Delay (in secs) between requests while waiting for build to end. Defaults to 1m.
  • timeout: Maximum time to wait for builds to end. Defaults to 1h.
  • step_name: Custom name for the generated step.
  • raise_if_unsuccessful: if any build being collected did not succeed, raise an exception.

Returns: A map from integer build IDs to the corresponding Build for all specified builds.
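
A sketch of waiting on two previously scheduled builds; the build IDs here are hypothetical and would normally come from schedule():

results = api.buildbucket.collect_builds(
    [8922054662172514000, 8922054662172514001],
    raise_if_unsuccessful=True)
statuses = {build_id: build.status for build_id, build in results.items()}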

def get(self, build_id, url_title_fn=None, step_name=None):

Gets a build.

Args:

  • build_id: a buildbucket build ID.
  • url_title_fn: generates build URL title. See module docstring.
  • step_name: name for this step.

Returns: A build_pb2.Build.

def get_build(self, build_id, **kwargs):

DEPRECATED. Use get().

def get_multi(self, build_ids, url_title_fn=None, step_name=None):

Gets multiple builds.

Args:

  • build_ids: a list of build IDs.
  • url_title_fn: generates build URL title. See module docstring.
  • step_name: name for this step.

Returns: A dict {build_id: build_pb2.Build}.

@property
def gitiles_commit(self):

Returns input gitiles commit. Shortcut for .build.input.gitiles_commit.

For value format, see GitilesCommit message.

Never returns None, but sub-fields may be empty.

@host.setter
def host(self, value):

def is_critical(self, build=None):

Returns True if the build is critical. Build defaults to the current one.

@property
def properties(self):

DEPRECATED, use build attribute instead.

def put(self, builds, **kwargs):

Puts a batch of builds.

DEPRECATED. Use schedule() instead.

Args:

  • builds (list): A list of dicts, where keys are:
    • ‘bucket’: (required) name of the bucket for the request.
    • ‘parameters’ (dict): (required) arbitrary json-able parameters that a build system would be able to interpret.
    • ‘experimental’: (optional) a bool indicating whether build is experimental. If not provided, the value will be determined by whether the currently running build is experimental.
    • ‘tags’: (optional) a dict(str->str) of tags for the build. These will be added to those generated by this method and override them if appropriate. If you need to remove a tag set by default, set its value to None (for example, tags={'buildset': None} will ensure build is triggered without buildset tag).

Returns: A step whose .stdout property contains the response object as returned by buildbucket.

def run(self, schedule_build_requests, collect_interval=None, timeout=None, url_title_fn=None, step_name=None, raise_if_unsuccessful=False):

Runs builds and returns results.

A shortcut for schedule() and collect_builds(). See their docstrings.

Returns: A list of completed Builds in the same order as schedule_build_requests.

def schedule(self, schedule_build_requests, url_title_fn=None, step_name=None):

Schedules a batch of builds.

Example:

    req = api.buildbucket.schedule_request(builder='linux')
    api.buildbucket.schedule([req])

Hint: when scheduling builds for CQ, let CQ know about them:

    api.cq.record_triggered_builds(*api.buildbucket.schedule([req1, req2]))

Args:

  • schedule_build_requests: a list of buildbucket.v2.ScheduleBuildRequest protobuf messages. Create one by calling schedule_request method.
  • url_title_fn: generates a build URL title. See module docstring.
  • step_name: name for this step.

Returns: A list of Build messages in the same order as requests.

Raises: InfraFailure if any of the requests fail.

def schedule_request(self, builder, project=None, bucket=None, properties=None, experimental=None, gitiles_commit=None, gerrit_changes=None, tags=None, inherit_buildsets=True, dimensions=None, priority=None, critical=None):

Creates a new ScheduleBuildRequest message with reasonable defaults.

This is a convenient function to create a ScheduleBuildRequest message.

Among args, messages can be passed as dicts of the same structure.

Example:

request = api.buildbucket.schedule_request(
    builder='linux',
    tags=api.buildbucket.tags(a='b'),
)
build = api.buildbucket.schedule([request])[0]

Args:

  • builder (str): name of the destination builder.
  • project (str): project containing the destination builder. Defaults to the project of the current build.
  • bucket (str): bucket containing the destination builder. Defaults to the bucket of the current build.
  • properties (dict): input properties for the new build.
  • experimental: whether the build is allowed to affect prod. If not None, must be common_pb2.Trinary or bool. Defaults to the value of the current build. Read more about the [experimental field](https://cs.chromium.org/chromium/infra/go/src/go.chromium.org/luci/buildbucket/proto/build.proto?q="bool experimental").
  • gitiles_commit (common_pb2.GitilesCommit): input commit. Defaults to the input commit of the current build. Read more about gitiles_commit.
  • gerrit_changes (list or common_pb2.GerritChange): list of input CLs. Defaults to gerrit changes of the current build. Read more about gerrit_changes.
  • tags (list or common_pb2.StringPair): tags for the new build.
  • inherit_buildsets (bool): if True (default), the returned request will include buildset tags from the current build.
  • dimensions (list of common_pb2.RequestedDimension): override dimensions defined on the server.
  • priority (int): Swarming task priority. The lower the more important. Valid values are [20..255]. Defaults to the value of the current build.
  • critical: whether the build status should not be used to assess correctness of the commit/CL. Defaults to .build.critical. See also Build.critical in https://chromium.googlesource.com/infra/luci/luci-go/+/master/buildbucket/proto/build.proto

def search(self, predicate, limit=None, url_title_fn=None, step_name=None):

Searches for builds.

Example: find all builds of the current CL.

from PB.go.chromium.org.luci.buildbucket.proto import rpc as rpc_pb2

related_builds = api.buildbucket.search(rpc_pb2.BuildPredicate(
  gerrit_changes=list(api.buildbucket.build.input.gerrit_changes),
))

Args:

  • predicate: a rpc_pb2.BuildPredicate object or a list thereof. If a list, the predicates are connected with logical OR.
  • limit: max number of builds to return. Defaults to 1000.
  • url_title_fn: generates a build URL title. See module docstring.

Returns: A list of builds ordered newest-to-oldest.

def set_buildbucket_host(self, host):

DEPRECATED. Use host property.

def set_output_gitiles_commit(self, gitiles_commit):

Sets buildbucket.v2.Build.output.gitiles_commit field.

This tells other systems consuming the build what version of the code was actually used in this build, and where this build sits relative to other builds of the same builder.

Args:

  • gitiles_commit(buildbucket.common_pb2.GitilesCommit): the commit that was actually checked out. Must have host, project and id. ID must match r'^[0-9a-f]{40}$' (git revision). If position is present, the build can be ordered along commits. Position requires ref. Ref, if not empty, must start with refs/.

Can be called at most once per build.
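
A sketch of reporting the checked-out revision, assuming the common.proto messages are importable via the same PB path pattern shown in the search() example above; all field values here are hypothetical:

from PB.go.chromium.org.luci.buildbucket.proto import common as common_pb2

api.buildbucket.set_output_gitiles_commit(common_pb2.GitilesCommit(
    host='chromium.googlesource.com',
    project='chromium/src',
    ref='refs/heads/master',
    id='a' * 40,  # the 40-character git revision that was checked out
    position=123456,
))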

def tags(self, **tags):

Alias for tags in util.py. See doc there.

@property
def tags_for_child_build(self):

A dict of tags (key -> value) derived from current (parent) build for a child build.

def use_service_account_key(self, key_path):

Tells this module to start using given service account key for auth.

Otherwise the module is using the default account (when running on LUCI or locally), or no auth at all (when running on Buildbot).

Exists mostly to support Buildbot environment. Recipe for LUCI environment should not use this.

Args:

  • key_path (str): a path to JSON file with service account credentials.

recipe_modules / cipd

DEPS: json, path, platform, properties, python, raw_io, service_account, step

API for interacting with CIPD.

Depends on ‘cipd’ binary available in PATH: https://godoc.org/go.chromium.org/luci/cipd/client/cmd/cipd

class CIPDApi(RecipeApi):

CIPDApi provides basic support for CIPD.

This assumes that cipd (or cipd.exe or cipd.bat on windows) has been installed somewhere in $PATH.

def acl_check(self, pkg_path, reader=True, writer=False, owner=False):

Checks whether the caller has the given roles in a package.

Args:

  • pkg_path (str) - The package subpath.
  • reader (bool) - Check for READER role.
  • writer (bool) - Check for WRITER role.
  • owner (bool) - Check for OWNER role.

Returns True if the caller has the given roles, False otherwise.

def build(self, input_dir, output_package, package_name, compression_level=None, install_mode=None, preserve_mtime=False, preserve_writable=False):

Builds, but does not upload, a cipd package from a directory.

Args:

  • input_dir (Path) - The directory to build the package from.
  • output_package (Path) - The file to write the package to.
  • package_name (str) - The name of the cipd package as it would appear when uploaded to the cipd package server.
  • compression_level (None|[0-9]) - Deflate compression level. If None, defaults to 5 (0 - disable, 1 - best speed, 9 - best compression).
  • install_mode (None|‘copy’|‘symlink’) - The mechanism that the cipd client should use when installing this package. If None, defaults to the platform default (‘copy’ on windows, ‘symlink’ on everything else).
  • preserve_mtime (bool) - Preserve file's modification time.
  • preserve_writable (bool) - Preserve file's writable permission bit.

Returns the CIPDApi.Pin instance.

def build_from_pkg(self, pkg_def, output_package, compression_level=None):

Builds a package based on a PackageDefinition object.

Args:

  • pkg_def (PackageDefinition) - The description of the package we want to create.
  • output_package (Path) - The file to write the package to.
  • compression_level (None|[0-9]) - Deflate compression level. If None, defaults to 5 (0 - disable, 1 - best speed, 9 - best compression).

Returns the CIPDApi.Pin instance.
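
A minimal sketch, assuming the module exposes the PackageDefinition helper as api.cipd.PackageDefinition with add_file/add_dir methods; the package name and paths below are hypothetical:

root = api.path['start_dir'].join('out')
pkg = api.cipd.PackageDefinition('example/package', root)
pkg.add_file(root.join('bin', 'tool'))
pkg.add_dir(root.join('resources'))
pin = api.cipd.build_from_pkg(pkg, api.path['start_dir'].join('tool.cipd'))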

def build_from_yaml(self, pkg_def, output_package, pkg_vars=None, compression_level=None):

Builds a package based on an on-disk YAML package definition file.

Args:

  • pkg_def (Path) - The path to the yaml file.
  • output_package (Path) - The file to write the package to.
  • pkg_vars (dict[str]str) - A map of var name -> value to use for vars referenced in package definition file.
  • compression_level (None|[0-9]) - Deflate compression level. If None, defaults to 5 (0 - disable, 1 - best speed, 9 - best compression).

Returns the CIPDApi.Pin instance.

def create_from_pkg(self, pkg_def, refs=None, tags=None, compression_level=None):

Builds and uploads a package based on a PackageDefinition object.

This builds and uploads the package in one step.

Args:

  • pkg_def (PackageDefinition) - The description of the package we want to create.
  • refs (list[str]) - A list of ref names to set for the package instance.
  • tags (dict[str]str) - A map of tag name -> value to set for the package instance.
  • compression_level (None|[0-9]) - Deflate compression level. If None, defaults to 5 (0 - disable, 1 - best speed, 9 - best compression).

Returns the CIPDApi.Pin instance.

def create_from_yaml(self, pkg_def, refs=None, tags=None, pkg_vars=None, compression_level=None):

Builds and uploads a package based on an on-disk YAML package definition file.

This builds and uploads the package in one step.

Args:

  • pkg_def (Path) - The path to the yaml file.
  • refs (list[str]) - A list of ref names to set for the package instance.
  • tags (dict[str]str) - A map of tag name -> value to set for the package instance.
  • pkg_vars (dict[str]str) - A map of var name -> value to use for vars referenced in package definition file.
  • compression_level (None|[0-9]) - Deflate compression level. If None, defaults to 5 (0 - disable, 1 - best speed, 9 - best compression).

Returns the CIPDApi.Pin instance.

def describe(self, package_name, version, test_data_refs=None, test_data_tags=None):

Returns information about a package instance given its version: who uploaded the instance and when, and a list of attached tags.

Args:

  • package_name (str) - The name of the cipd package.
  • version (str) - The package version to point the ref to.
  • test_data_refs (seq[str]) - The list of refs for this call to return by default when in test mode.
  • test_data_tags (seq[str]) - The list of tags (in ‘name:val’ form) for this call to return by default when in test mode.

Returns the CIPDApi.Description instance describing the package.

def ensure(self, root, ensure_file):

Ensures that packages are installed in a given root dir.

Args:

  • root (Path) - Path to installation site root directory.
  • ensure_file (EnsureFile) - List of packages to install.

Returns the map of subdirectories to CIPDApi.Pin instances.
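
A minimal sketch, assuming the module exposes an EnsureFile helper as api.cipd.EnsureFile (as suggested by the ensure_file argument); the package name and version here are hypothetical:

ensure_file = api.cipd.EnsureFile()
ensure_file.add_package('example/tools/my_tool/${platform}', 'latest')
pins = api.cipd.ensure(api.path['cache'].join('my_tool_root'), ensure_file)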

@property
def executable(self):

def pkg_deploy(self, root, package_file):

Deploys the specified package to root.

ADVANCED METHOD: You shouldn't need this unless you're doing advanced things with CIPD. Typically you should use the ensure method here to fetch+install packages to the disk.

Args:

  • package_file (Path) - Path to a package file to install.
  • root (Path) - Path to a CIPD root.

Returns a Pin for the deployed package.

def pkg_fetch(self, destination, package_name, version):

Downloads the specified package to destination.

ADVANCED METHOD: You shouldn't need this unless you're doing advanced things with CIPD. Typically you should use the ensure method here to fetch+install packages to the disk.

Args:

  • destination (Path) - Path to a file location which will be (over)written with the package file contents.
  • package_name (str) - The package name (or pattern with e.g. ${platform})
  • version (str) - The CIPD version to fetch

Returns a Pin for the downloaded package.

def register(self, package_name, package_path, refs=(), tags=None):

Uploads and registers package instance in the package repository.

Args:

  • package_name (str) - The name of the cipd package.
  • package_path (Path) - The path to package instance file.
  • refs (seq[str]) - A list of ref names to set for the package instance.
  • tags (dict[str]basestring) - A map of tag name -> value to set for the package instance.

Returns: The CIPDApi.Pin instance.

def search(self, package_name, tag):

Searches for package instances by tag, optionally constrained by package name.

Args:

  • package_name (str) - The name of the cipd package.
  • tag (str) - The cipd package tag.

Returns the list of CIPDApi.Pin instances.

def set_ref(self, package_name, version, refs):

Moves a ref to point to a given version.

Args:

  • package_name (str) - The name of the cipd package.
  • version (str) - The package version to point the ref to.
  • refs (list[str]) - A list of ref names to set for the package instance.

Returns the CIPDApi.Pin instance.

@contextlib.contextmanager
def set_service_account(self, service_account):

Temporarily sets the service account used for authentication to CIPD.

Implemented as a context manager to prevent one part of a recipe from overwriting another part's specified service account.

Args:

  • service_account(service_account.api.ServiceAccount): Service account to use for authentication.

def set_tag(self, package_name, version, tags):

Tags package of a specific version.

Args:

  • package_name (str) - The name of the cipd package.
  • version (str) - The package version to resolve. Could also be itself a tag or ref.
  • tags (dict[str]str) - A map of tag name -> value to set for the package instance.

Returns the CIPDApi.Pin instance.

recipe_modules / commit_position

class CommitPositionApi(RecipeApi):

Recipe module providing commit position parsing and formatting.

@classmethod
def format(cls, ref, revision_number):

Returns a commit position string.

ref must start with ‘refs/’.

@classmethod
def parse(cls, value):

Returns (ref, revision_number) tuple.
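
For illustration; the exact formatting is assumed to follow the conventional Cr-Commit-Position style (ref@{#number}):

cp = api.commit_position.format('refs/heads/master', 123456)
# cp is assumed to be 'refs/heads/master@{#123456}'
ref, number = api.commit_position.parse(cp)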

recipe_modules / context

DEPS: path

The context module provides APIs for manipulating a few pieces of ‘ambient’ data that affect how steps are run.

The pieces of information which can be modified are:

  • cwd - The current working directory.
  • env - The environment variables.
  • infra_step - Whether or not failures should be treated as infrastructure failures vs. normal failures.
  • namespace - A nesting namespace for all steps.
  • name_prefix - A prefix for all step names (within the current namespace).

The values here are all scoped using Python's with statement; there's no mechanism to make an open-ended adjustment to these values (i.e. there's no way to change the cwd permanently for a recipe, except by surrounding the entire recipe with a with statement). This is done to avoid the surprises that typically arise with things like os.environ or os.chdir in a normal python program.

Example:

with api.context(cwd=api.path['start_dir'].join('subdir')):
  # this step is run inside of the subdir directory.
  api.step("cat subdir/foo", ['cat', './foo'])

class ContextApi(RecipeApi):

@contextmanager
def __call__(self, cwd=None, env_prefixes=None, env_suffixes=None, env=None, infra_steps=None, name_prefix=None, namespace=None):

Allows adjustment of multiple context values in a single call.

Args:

  • cwd (Path) - the current working directory to use for all steps. To ‘reset’ to the original cwd at the time recipes started, pass api.path['start_dir'].
  • env_prefixes (dict) - Environmental variable prefix augmentations. See below for more info.
  • env_suffixes (dict) - Environmental variable suffix augmentations. See below for more info.
  • env (dict) - Environmental variable overrides. See below for more info.
  • infra_steps (bool) - if steps in this context should be considered infrastructure steps. On failure, these will raise InfraFailure exceptions instead of StepFailure exceptions.
  • namespace (basestring) - Nest steps under this additional namespace. Resets the name_prefix.
  • name_prefix (basestring) - A string to prepend to the names of all steps and sub-namespaces within the current namespace. If there's already a name_prefix defined in the context, this appends to it.

Name prefixes and namespaces:

Example:

with api.context(name_prefix='cool '):
  # has name 'cool something'
  api.step('something', ['echo', 'something'])

  with api.context(namespace='world', name_prefix='hot '):
    # has name 'cool world|hot other'
    api.step('other', ['echo', 'other'])

    with api.context(name_prefix='tamale '):
      # has name 'cool world|hot tamale yowza'
      api.step('yowza', ['echo', 'yowza'])

  with api.context(namespace='ocean'):
    # has name 'cool ocean|mild'
    api.step('other', ['echo', 'mild'])

Environmental Variable Overrides:

Env is a mapping of environment variable name to the value you want that environment variable to have. The value is one of:

  • None, indicating that the environment variable should be removed from the environment when the step runs.
  • A string value. Note that string values will be %-formatted with the current value of the environment at the time the step runs. This means that you can have a value like: “/path/to/my/stuff:%(PATH)s” Which, at the time the step executes, will inject the current value of $PATH.

“env_prefixes” and “env_suffixes” map environment variable names to lists of Paths or strings that get prefixed (or suffixed) to their respective environment variables, delimited with the system's path separator. This can be used to add entries to environment variables such as “PATH” and “PYTHONPATH”. If prefixes are specified and a value is also defined in “env”, the value will be installed as the last path component if it is not empty.
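
A short sketch combining env and env_prefixes; the variable name, tool and prepended directory are hypothetical:

tools_dir = api.path['start_dir'].join('my_tools')
with api.context(env={'FOO': 'bar'},
                 env_prefixes={'PATH': [tools_dir]}):
  # PATH now starts with my_tools; FOO is set to 'bar'.
  api.step('env-dependent step', ['my_tool'])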

Look at the examples in “examples/” for examples of context module usage.

@property
def cwd(self):

Returns the current working directory that steps will run in.

Returns (Path|None) - The current working directory. A value of None is equivalent to api.path[‘start_dir’], though only occurs if no cwd has been set (e.g. in the outermost context of RunSteps).

@property
def env(self):

Returns modifications to the environment.

By default this is empty; there's no facility to observe the program's startup environment. If you want to pass data to the recipe, it should be done with properties.

Returns (dict) - The env-key -> value mapping of current environment modifications.

@property
def env_prefixes(self):

Returns Path prefix modifications to the environment.

This will return a mapping of environment key to Path tuple for Path prefixes registered with the environment.

Returns (dict) - The env-key -> value(Path) mapping of current environment prefix modifications.

@property
def env_suffixes(self):

Returns Path suffix modifications to the environment.

This will return a mapping of environment key to Path tuple for Path suffixes registered with the environment.

Returns (dict) - The env-key -> value(Path) mapping of current environment suffix modifications.

@property
def infra_step(self):

Returns the current value of the infra_step setting.

Returns (bool) - True iff steps are currently considered infra steps.

@property
def namespace(self):

Gets the current namespace.

Returns (Tuple[str]) - The current step namespace plus name prefix for nesting.

def record_step_name(self, name):

Records a step name in the current namespace.

Args:

  • name (str) - The name of the step we want to run in the current context.

Returns Tuple[str] of the step name_tokens that should ACTUALLY run.

Side-effect: Updates global tracking state for this step name.

recipe_modules / cq

DEPS: properties, step

class CQApi(RecipeApi):

This module provides recipe API of LUCI CQ, aka pre-commit testing system.

More information about CQ: https://chromium.googlesource.com/infra/luci/luci-go/+/master/cq

def initialize(self):

def record_triggered_build_ids(self, *build_ids):

Adds given Buildbucket build ids to the list of triggered builds for CQ to wait on corresponding build completion later.

Must be called after some step.

Args:

  • build_id (int or string): Buildbucket build id.

def record_triggered_builds(self, *builds):

Adds given Buildbucket builds to the list of triggered builds for CQ to wait on corresponding build completion later.

Must be called after some step.

Expected usage:

  api.cq.record_triggered_builds(*api.buildbucket.schedule([req1, req2]))

Args:

  • Build objects, typically returned by api.buildbucket.schedule.

@property
def state(self):

CQ state pertaining to this recipe execution.

@property
def triggered_build_ids(self):

Returns recorded Buildbucket build ids as a list of integers.

recipe_modules / file

DEPS: json, path, python, raw_io, step

File manipulation (read/write/delete/glob) methods.

class FileApi(RecipeApi):

def copy(self, name, source, dest):

Copies a file (including mode bits) from source to destination on the local filesystem.

Behaves identically to shutil.copy.

Args:

  • name (str) - The name of the step.
  • source (Path|Placeholder) - The path to the file you want to copy.
  • dest (Path|Placeholder) - The path to the destination file name. If this path exists and is a directory, the basename of source will be appended to derive a path to a destination file.

Raises file.Error

def copytree(self, name, source, dest, symlinks=False):

Recursively copies a directory tree.

Behaves identically to shutil.copytree. dest must not exist.

Args:

  • name (str) - The name of the step.
  • source (Path) - The path of the directory to copy.
  • dest (Path) - The place where you want the recursive copy to show up. This must not already exist.
  • symlinks (bool) - Preserve symlinks. No effect on Windows.

Raises file.Error

def ensure_directory(self, name, dest, mode=511):

Ensures that dest exists and is a directory.

Args:

  • name (str) - The name of the step.
  • dest (Path) - The directory to ensure.
  • mode (int) - The mode to use if the directory doesn't exist. This method does not ensure the mode if the directory already exists (if you need that behaviour, file a bug).

Raises file.Error if the path exists but is not a directory.

def filesizes(self, name, files, test_data=None):

Returns list of filesizes for the given files.

Args:

  • name (str) - The name of the step.
  • files (list[Path]) - Paths to files.

Returns list[int], size of each file in bytes.

def flatten_single_directories(self, name, path):

Flattens singular directories, starting at path.

Example:

$ mkdir -p dir/which_has/some/singular/subdirs/
$ touch dir/which_has/some/singular/subdirs/with
$ touch dir/which_has/some/singular/subdirs/files
$ flatten_single_directories(dir)
$ ls dir
with
files

This can be useful when you just want the ‘meat’ of a very sparse directory structure. For example, some tarballs like foo-1.2.tar.gz extract all their contents into a subdirectory foo-1.2/.

Using this function would essentially move all the actual contents of the extracted archive up to the top level directory, removing the need to e.g. hard-code/find the subfolder name after extraction (not all archives are even named after the subfolder they extract to).

Args:

  • name (str) - The name of the step.
  • path (Path|str) - The absolute path to begin flattening.

Raises file.Error

def glob_paths(self, name, source, pattern, test_data=()):

Performs glob expansion on pattern.

glob rules for pattern follow the same syntax as for the python glob stdlib module.

Args:

  • name (str) - The name of the step.
  • source (Path) - The directory whose contents should be globbed.
  • pattern (str) - The glob pattern to apply under source.
  • test_data (iterable[str]) - Some default data for this step to return when running under simulation. This should be the list of file items found in this directory.

Returns list[Path] - All paths found.

Raises file.Error.
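
For example (the directory, pattern and simulation data here are hypothetical):

headers = api.file.glob_paths(
    'find headers', api.path['start_dir'].join('src'), '*.h',
    test_data=['foo.h', 'bar.h'])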

def listdir(self, name, source, test_data=()):

List all files inside a directory.

Args:

  • name (str) - The name of the step.
  • source (Path) - The directory to list.
  • test_data (iterable[str]) - Some default data for this step to return when running under simulation. This should be the list of file items found in this directory.

Returns list[Path]

Raises file.Error.

def move(self, name, source, dest):

Moves a file or directory.

Behaves identically to shutil.move.

Args:

  • name (str) - The name of the step.
  • source (Path) - The path of the item to move.
  • dest (Path) - The new name of the item.

Raises file.Error

def read_json(self, name, source, test_data=''):

Reads a file as UTF-8 encoded json.

Args:

  • name (str) - The name of the step.
  • source (Path) - The path of the file to read.
  • test_data (object) - Some default json serializable data for this step to return when running under simulation.

Returns (object) - The content of the file.

Raise file.Error

def read_raw(self, name, source, test_data=''):

Reads a file as raw data.

Args:

  • name (str) - The name of the step.
  • source (Path) - The path of the file to read.
  • test_data (str) - Some default data for this step to return when running under simulation.

Returns (str) - The unencoded (binary) contents of the file.

Raises file.Error

def read_text(self, name, source, test_data=''):

Reads a file as UTF-8 encoded text.

Args:

  • name (str) - The name of the step.
  • source (Path) - The path of the file to read.
  • test_data (str) - Some default data for this step to return when running under simulation.

Returns (str) - The content of the file.

Raises file.Error

def remove(self, name, source):

Remove a file.

Does not raise Error if the file doesn't exist.

Args:

  • name (str) - The name of the step.
  • source (Path) - The file to remove.

Raises file.Error.

def rmcontents(self, name, source):

Similar to rmtree, but removes only contents not the directory.

This is useful e.g. when removing contents of current working directory. Deleting current working directory makes all further getcwd calls fail until chdir is called. chdir would be tricky in recipes, so we provide a call that doesn't delete the directory itself.

Args:

  • name (str) - The name of the step.
  • source (Path) - The directory whose contents should be removed.

Raises file.Error.

def rmglob(self, name, source, pattern):

Removes all entries in source matching the glob pattern.

Args:

  • name (str) - The name of the step.
  • source (Path) - The directory whose contents should be filtered and removed.
  • pattern (str) - The glob pattern to apply under source. Anything matching this pattern will be removed.

Raises file.Error.

def rmtree(self, name, source):

Recursively removes a directory.

This uses a native python implementation on Linux/Mac, and uses rd on Windows, to avoid issues w.r.t. path lengths and read-only attributes. If the directory is gone already, this returns without error.

Args:

  • name (str) - The name of the step.
  • source (Path) - The directory to remove.

Raises file.Error.

def symlink(self, name, source, linkname):

Creates a symlink on the local filesystem.

Behaves identically to os.symlink.

Args:

  • name (str) - The name of the step.
  • source (Path|Placeholder) - The path to link from.
  • linkname (Path|Placeholder) - The destination to link to.

Raises file.Error

def symlink_tree(self, root):

Creates a SymlinkTree, given a root directory.

Args:

  • root (Path): root of a tree of symlinks.

def truncate(self, name, path, size_mb=100):

Creates an empty file with path and size_mb on the local filesystem.

Args:

  • name (str) - The name of the step.
  • path (Path|str) - The absolute path to create.
  • size_mb (int) - The size of the file in megabytes. Defaults to 100

Raises file.Error

def write_json(self, name, dest, data):

Write the given json serializable data to dest.

Args:

  • name (str) - The name of the step.
  • dest (Path) - The path of the file to write.
  • data (object) - Json serializable data to write.

Raises file.Error.

def write_raw(self, name, dest, data):

Write the given data to dest.

Args:

  • name (str) - The name of the step.
  • dest (Path) - The path of the file to write.
  • data (str) - The data to write.

Raises file.Error.

def write_text(self, name, dest, text_data):

Write the given UTF-8 encoded text_data to dest.

Args:

  • name (str) - The name of the step.
  • dest (Path) - The path of the file to write.
  • text_data (str) - The UTF-8 encoded data to write.

Raises file.Error.

recipe_modules / generator_script

DEPS: context, json, path, python, step

A simplistic method for running steps generated by an external script.

This module was created before there was a way to put recipes directly into another repo. It is not recommended to use this, and it will be removed in the near future.

class GeneratorScriptApi(RecipeApi):

def __call__(self, path_to_script, *args):

Run a script and generate the steps emitted by that script.

The script will be invoked with --output-json /path/to/file.json. The script is expected to exit 0 and write steps into that file. Once the script outputs all of the steps to that file, the recipe will read the steps from that file and execute them in order. Any *args specified will be additionally passed to the script.

The step data is formatted as a list of JSON objects; a short sketch follows the key list below. Each object corresponds to one step, and contains the following keys:

  • name: the name of this step.
  • cmd: a list of strings that indicate the command to run (e.g. argv)
  • env: a {key:value} dictionary of the environment variables to override. every value is formatted with the current environment with the python % operator, so a value of “%(PATH)s:/some/other/path” would resolve to the current PATH value, concatenated with “:/some/other/path”
  • cwd: an absolute path to the current working directory for this script.
  • always_run: a bool which indicates that this step should run, even if some previous step failed.
  • outputs_presentation_json: a bool which indicates that this step will emit a presentation json file. If this is True, the cmd will be extended with a --presentation-json /path/to/file.json. This file will be used to update the step's presentation on the build status page. The file will be expected to contain a single json object, with any of the following keys:
    • logs: {logname: [lines]} specifies one or more auxiliary logs.
    • links: {link_name: link_content} to add extra links to the step.
    • step_summary_text: A string to set as the step summary.
    • step_text: A string to set as the step text.
    • properties: {prop: value} build_properties to add to the build status page. Note that these are write-only: The only way to read them is via the status page. There is intentionally no mechanism to read them back from inside of the recipes.
  • allow_subannotations: allow this step to emit legacy buildbot subannotations. If you don't know what this is, you shouldn't use it. If you know what it is, you also shouldn't use it.
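
A sketch of the two halves, assuming a hypothetical generator script at [START_DIR]/gen_steps.py:

# In the recipe:
api.generator_script(api.path['start_dir'].join('gen_steps.py'), '--fast')

# gen_steps.py would then write a step list like this to the --output-json path:
import json
import sys

steps = [
    {'name': 'compile', 'cmd': ['ninja', '-C', 'out']},
    {'name': 'cleanup', 'cmd': ['rm', '-rf', 'out/tmp'], 'always_run': True},
]
with open(sys.argv[sys.argv.index('--output-json') + 1], 'w') as f:
  json.dump(steps, f)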

recipe_modules / isolated

DEPS: cipd, context, json, path, properties, raw_io, runtime, step

class IsolatedApi(RecipeApi):

API for interacting with isolated.

The isolated client implements a tar-like scatter-gather mechanism for archiving files. The tool's source lives at http://go.chromium.org/luci/client/cmd/isolated.

This module will deploy the client to [CACHE]/isolated_client/; users should add this path to the named cache for their builder.

def download(self, step_name, isolated_hash, output_dir, isolate_server=None):

Downloads an isolated tree from an isolate server.

Args:

  • step_name (str): name of the step.
  • isolated_hash (str): the hash of an isolated tree.
  • output_dir (Path): path to an output directory. If it does not exist, it will be created; if it already exists, conflicting files will be overwritten and non-conflicting files already in the directory will be ignored.
  • isolate_server (str|None): an isolate server to download from; if None, the module's default server will be used instead.

def initialize(self):

@property
def isolate_server(self):

Returns the associated isolate server.

def isolated(self, root_dir):

Returns an Isolated object that can be used to archive a set of files and directories, relative to a given root directory.

Args:

  • root_dir (Path): directory relative to which files and directories will be isolated.

@property
def namespace(self):

Returns the associated namespace.

@contextlib.contextmanager
def on_path(self):

This context manager ensures the go isolated client is available on $PATH.

Example:

with api.isolated.on_path():
  # do your steps which require the isolated binary on path

recipe_modules / json

DEPS: properties, python, raw_io

Methods for producing and consuming JSON.

class JsonApi(RecipeApi):

@returns_placeholder
def input(self, data):

A placeholder which will expand to a file path containing the given data.

def is_serializable(self, obj):

Returns True if the object is JSON-serializable.

@staticmethod
def loads(data, **kwargs):

Works like json.loads, but strips out unicode objects (replacing them with utf8-encoded str objects).

@returns_placeholder
def output(self, add_json_log=True, name=None, leak_to=None):

A placeholder which will expand to ‘/tmp/file’.

If leak_to is provided, it must be a Path object. This path will be used in place of a random temporary file, and the file will not be deleted at the end of the step.

Args:

  • add_json_log (True|False|‘on_failure’) - Log a copy of the output json to a step link named name. If this is ‘on_failure’, only create this log when the step has a non-SUCCESS status.
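
For example, with a hypothetical tool that writes JSON to a path given on its command line:

result = api.step(
    'read metadata', ['my_tool', '--output-json', api.json.output()])
data = result.json.output  # the parsed JSON the step wrote to the placeholder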

def read(self, name, path, add_json_log=True, output_name=None, **kwargs):

Returns a step that reads a JSON file.

This method is deprecated. Use file.read_json instead.

recipe_modules / led

DEPS: cipd, json, path, service_account, step

class LedApi(RecipeApi):

Interface to the led tool.

“led” stands for LUCI editor. It allows users to debug and modify LUCI jobs. It can be used to modify many aspects of a LUCI build, most commonly including the recipes used.

The main interface this module provides is a direct call to the led binary:

led_result = api.led(
    'get-builder', ['luci.chromium.try:chromium_presubmit'])
final_data = led_result.then('edit-recipe-bundle').result

See the led binary for full documentation of commands.

def __call__(self, *cmd):

Runs led with the given arguments. Wraps result in a LedResult.

recipe_modules / path

DEPS: platform, properties

All functions related to manipulating paths in recipes.

Recipes handle paths a bit differently than python does. All path manipulation in recipes revolves around Path objects. These objects store a base path (always absolute), plus a list of components to join with it. New paths can be derived by calling the .join method with additional components.

In this way, all paths in Recipes are absolute, and are constructed from a small collection of anchor points. The built-in anchor points are:

  • api.path['start_dir'] - This is the directory that the recipe started in. It's similar to cwd, except that it's constant.
  • api.path['cache'] - This directory is provided by whatever's running the recipe. Files and directories created under here /may/ be evicted in between runs of the recipe (i.e. to relieve disk pressure).
  • api.path['cleanup'] - This directory is provided by whatever's running the recipe. Files and directories created under here /are guaranteed/ to be evicted in between runs of the recipe. Additionally, this directory is guaranteed to be empty when the recipe starts.
  • api.path['tmp_base'] - This directory is the system-configured temp dir. This is a weaker form of ‘cleanup’, and its use should be avoided. This may be removed in the future (or converted to an alias of ‘cleanup’).
  • api.path['checkout'] - This directory is set by various ‘checkout’ modules in recipes. It was originally intended to make recipes easier to read and make code somewhat generic or homogenous, but this was a mistake. New code should avoid ‘checkout’, and instead just explicitly pass paths around. This path may be removed in the future.

There are other anchor points which can be defined (e.g. by the depot_tools/infra_paths module). Refer to those modules for additional documentation.
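
For example, deriving paths from the built-in anchor points (the directory names below are hypothetical):

src_dir = api.path['start_dir'].join('src')
build_dir = src_dir.join('out', 'Release')
scratch = api.path.mkdtemp('build-')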

class PathApi(RecipeApi):

def __getitem__(self, name):

Gets the base path named name. See module docstring for more information.

def abs_to_path(self, abs_string_path):

Converts an absolute path string abs_string_path to a real Path object, using the most appropriate known base path.

  • abs_string_path MUST be an absolute path
  • abs_string_path MUST be rooted in one of the configured base paths known to the path module.

This method will find the longest match in all the following:

  • module resource paths
  • recipe resource paths
  • repo paths
  • dynamic_paths
  • base_paths

Example:

# assume [START_DIR] == "/basis/dir/for/recipe"
api.path.abs_to_path("/basis/dir/for/recipe/some/other/dir") ->
  Path("[START_DIR]/some/other/dir")

Raises a ValueError if the preconditions are not met; otherwise returns the Path object.

def assert_absolute(self, path):

Raises AssertionError if the given path is not an absolute path.

Args:

  • path (Path|str) - The path to check.

def get(self, name, default=None):

Gets the base path named name. See module docstring for more information.

def get_config_defaults(self):

Internal recipe implementation function.

def initialize(self):

Internal recipe implementation function.

def mkdtemp(self, prefix=tempfile.template):

Makes a new temporary directory, returns Path to it.

Args:

  • prefix (str) - a tempfile template for the directory name (defaults to “tmp”).

Returns a Path to the new directory.

def mkstemp(self, prefix=tempfile.template):

Makes a new temporary file, returns Path to it.

Args:

  • prefix (str) - a tempfile template for the file name (defaults to “tmp”).

Returns a Path to the new file. Unlike tempfile.mkstemp, the file's file descriptor is closed.

def mock_add_paths(self, path):

For testing purposes, mark that |path| exists.

def mock_copy_paths(self, source, dest):

For testing purposes, copy |source| to |dest|.

def mock_remove_paths(self, path, filt=(lambda p: True)):

For testing purposes, assert that |path| doesn't exist.

Args:

  • path (str|Path) - The path to remove.
  • filt (func[str] bool) - Called for every candidate path. Return True to remove this path.

recipe_modules / platform

Mockable system platform identity functions.

class PlatformApi(RecipeApi):

Provides host-platform-detection properties.

Mocks:

  • name (str): A value equivalent to something that might be returned by sys.platform.
  • bits (int): Either 32 or 64.

@property
def arch(self):

Returns the current CPU architecture.

TODO: This is currently always hard-coded to ‘intel’... Apparently no one has actually needed this function?

@property
def bits(self):

Returns the bitness of the userland for the current system (either 32 or 64 bit).

TODO: If anyone needs to query for the kernel bitness, another accessor should be added.

@property
def cpu_count(self):

The number of CPU cores, according to multiprocessing.cpu_count().

@property
def is_linux(self):

Returns True iff the recipe is running on Linux.

@property
def is_mac(self):

Returns True iff the recipe is running on OS X.

@property
def is_win(self):

Returns True iff the recipe is running on Windows.

@property
def name(self):

Returns the current platform name which will be in:

  • win
  • mac
  • linux
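
For example, selecting a platform-specific executable name (the tool name here is hypothetical):

exe_suffix = '.exe' if api.platform.is_win else ''
tool = api.path['start_dir'].join('my_tool' + exe_suffix)
api.step('run my_tool', [tool])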

@staticmethod
def normalize_platform_name(plat):

One of python's sys.platform values -> ‘win’, ‘linux’ or ‘mac’.

recipe_modules / properties

Provides access to the recipes input properties.

Every recipe is run with a JSON object called “properties”. These contain all inputs to the recipe. Some common examples would be properties like “revision”, which the build scheduler sets to tell a recipe to build/test a certain revision.

The properties that affect a particular recipe are defined by the recipe itself, and this module provides access to them.

Recipe properties are read-only; the values obtained via this API reflect the values provided to the recipe engine at the beginning of execution. There is intentionally no API to write property values (lest they become a kind of random-access global variable).

class PropertiesApi(RecipeApiPlain, collections.Mapping):

PropertiesApi implements all the standard Mapping functions, so you can use it like a read-only dict.
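
For example (the property names here are hypothetical):

revision = api.properties.get('revision', 'HEAD')
target = api.properties['target']  # raises KeyError if 'target' is missing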

def legacy(self):

DEPRECATED: Returns a set of properties, possibly used by legacy scripts.

This excludes any recipe module-specific properties (i.e. those beginning with $).

Instead of passing all of the properties as a blob, please consider passing specific arguments to scripts that need them. Doing this makes it much easier to debug and diagnose which scripts use which properties.

def thaw(self):

Returns a read-write copy of all of the properties.

recipe_modules / python

DEPS: context, raw_io, step

Provides methods for running python scripts correctly.

This includes support for vpython, and knows how to specify parameters correctly for bots (e.g. ensuring that python is working on Windows, passing the unbuffered flag, etc.)

class PythonApi(RecipeApi):

def __call__(self, name, script, args=None, unbuffered=True, venv=None, **kwargs):

Return a step to run a python script with arguments.

TODO: We should just use a single “args” list. Having “script” separate but required/first leads to weird things like:

(... script='-m', args=['module'])

Args:

  • name (str): The name of the step.
  • script (str or Path): The Path of the script to run, or the first command-line argument to pass to Python.
  • args (list or None): If not None, additional arguments to pass to the Python command.
  • unbuffered (bool): If True, run Python in unbuffered mode.
  • venv (None or False or True or Path): If True, run the script through “vpython”. This will, by default, probe the target script for a configured VirtualEnv and, failing that, use an empty VirtualEnv. If a Path, this is a path to an explicit “vpython” VirtualEnv spec file to use. If False or None (default), the script will be run through the standard Python interpreter.
  • kwargs: Additional keyword arguments to forward to “step”.

Returns (step_data.StepData) - The StepData object as returned by api.step.
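
For example, running a hypothetical script through vpython:

api.python(
    'run checks',
    api.path['start_dir'].join('src', 'run_checks.py'),
    args=['--verbose'],
    venv=True)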

def failing_step(self, name, text, as_log=None):

Runs a failing step (exits 1).

def infra_failing_step(self, name, text, as_log=None):

Runs an infra-failing step (exits 1).

def inline(self, name, program, add_python_log=True, **kwargs):

Run an inline python program as a step.

Program is output to a temp file and run when this step executes.

Args:

  • name (str) - The name of the step
  • program (str) - The literal python program text. This will be dumped to a file and run like python /path/to/file.py
  • add_python_log (bool) - Whether to add a ‘python.inline’ link on this step on the build page. If true, the link will point to a log with a copy of program.

Returns (step_data.StepData) - The StepData object as returned by api.step.

def result_step(self, name, text, retcode, as_log=None, **kwargs):

Runs a no-op step that exits with a specified return code.

The recipe engine will raise an exception when seeing a return code != 0.

def succeeding_step(self, name, text, as_log=None):

Runs a succeeding step (exits 0).

recipe_modules / random

Allows randomness in recipes.

This module sets up an internal instance of ‘random.Random’. In tests, this is seeded with 1234, or a seed of your choosing (using the test_api's seed() method)

All members of random.Random are exposed via this API with getattr.

NOTE: This is based on the python random module, and so all caveats which apply there also apply to this (i.e. don't use it for anything resembling crypto).

Example:

def RunSteps(api):
  my_list = range(100)
  api.random.shuffle(my_list)
  # my_list is now random!

class RandomApi(RecipeApi):

def __getattr__(self, name):

Access a member of random.Random.

recipe_modules / raw_io

Provides objects for reading and writing raw data to and from steps.

class RawIOApi(RecipeApi):

@returns_placeholder
@staticmethod
def input(data, suffix='', name=None):

Returns a Placeholder for use as a step argument.

This placeholder can be used to pass data to steps. The recipe engine will dump the ‘data’ into a file, and pass the filename to the command line argument.

data MUST be of type ‘str’ (not basestring, not unicode).

If ‘suffix’ is not '', it will be used when the engine calls tempfile.mkstemp.

See examples/full.py for usage example.

@returns_placeholder
@staticmethod
def input_text(data, suffix='', name=None):

Returns a Placeholder for use as a step argument.

data MUST be of type ‘str’ (not basestring, not unicode). The str is expected to have valid utf-8 data in it.

Similar to input(), but ensures that ‘data’ is valid utf-8 text. Any non-utf-8 characters will be replaced with �.

@returns_placeholder
@staticmethod
def output(suffix='', leak_to=None, name=None, add_output_log=False):

Returns a Placeholder for use as a step argument, or for std{out,err}.

If ‘leak_to’ is None, the placeholder is backed by a temporary file with a suffix ‘suffix’. The file is deleted when the step finishes.

If ‘leak_to’ is not None, then it should be a Path and placeholder redirects IO to a file at that path. Once step finishes, the file is NOT deleted (i.e. it's ‘leaking’). ‘suffix’ is ignored in that case.

Args:

  • add_output_log (True|False|‘on_failure’) - Log a copy of the output to a step link named name. If this is ‘on_failure’, only create this log when the step has a non-SUCCESS status.

@returns_placeholder
@staticmethod
def output_dir(suffix='', leak_to=None, name=None):

Returns a directory Placeholder for use as a step argument.

If ‘leak_to’ is None, the placeholder is backed by a temporary dir with a suffix ‘suffix’. The dir is deleted when the step finishes.

If ‘leak_to’ is not None, then it should be a Path and placeholder redirects IO to a dir at that path. Once step finishes, the dir is NOT deleted (i.e. it's ‘leaking’). ‘suffix’ is ignored in that case.

@returns_placeholder
@staticmethod
def output_text(suffix='', leak_to=None, name=None, add_output_log=False):

Returns a Placeholder for use as a step argument, or for std{out,err}.

Similar to output(), but uses an OutputTextPlaceholder, which expects utf-8 encoded text. Similar to input(), but tries to decode the resulting data as utf-8 text, replacing any decoding errors with �.

Args:

  • add_output_log (True|False|‘on_failure’) - Log a copy of the output to a step link named name. If this is ‘on_failure’, only create this log when the step has a non-SUCCESS status.
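
For example, capturing a step's stdout as text:

step_result = api.step(
    'git rev-parse', ['git', 'rev-parse', 'HEAD'],
    stdout=api.raw_io.output_text(add_output_log=True))
revision = step_result.stdout.strip()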

recipe_modules / runtime

DEPS: properties

class RuntimeApi(RecipeApi):

This module assists in experimenting with production recipes.

For example, when migrating builders from Buildbot to pure LUCI stack.

@property
def is_experimental(self):

True if this recipe is currently running in experimental mode.

Typical usage is to modify steps which produce external side-effects so that non-production runs of the recipe do not affect production data.

Examples:

  • Uploading to an alternate google storage file name when in non-prod mode
  • Appending a ‘non-production’ tag to external RPCs
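
A minimal sketch of the first pattern (the tool and bucket names are purely illustrative):

def RunSteps(api):
  bucket = 'example-prod-bucket'
  if api.runtime.is_experimental:
    # Keep experimental runs away from production data.
    bucket = 'example-experimental-bucket'
  # 'upload_tool' stands in for whatever uploader the recipe actually uses.
  api.step('upload', ['upload_tool', '--bucket', bucket, 'out.zip'])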

@property
def is_luci(self):

True if this recipe is currently running on LUCI stack.

Should be used only during migration from Buildbot to LUCI stack.

recipe_modules / scheduler

DEPS: buildbucket, json, platform, properties, raw_io, runtime, step, time

API for interacting with the LUCI Scheduler service.

Depends on the ‘prpc’ binary being available in $PATH: https://godoc.org/go.chromium.org/luci/grpc/cmd/prpc

Documentation for the scheduler API is at https://chromium.googlesource.com/infra/luci/luci-go/+/master/scheduler/api/scheduler/v1/scheduler.proto

RPCExplorer is available at https://luci-scheduler.appspot.com/rpcexplorer/services/scheduler.Scheduler

class SchedulerApi(RecipeApi):

A module for interacting with LUCI Scheduler service.

def emit_trigger(self, trigger, project, jobs, step_name=None):

Emits trigger to one or more jobs of a given project.

Args:

  • trigger (Trigger): defines payload to trigger jobs with.
  • project (str): name of the project in LUCI Config service, which is used by LUCI Scheduler instance. See https://luci-config.appspot.com/.
  • jobs (iterable of str): job names per LUCI Scheduler config for the given project. These typically are the same as builder names.

def emit_triggers(self, trigger_project_jobs, timestamp_usec=None, step_name=None):

Emits a batch of triggers spanning one or more projects.

Up to date documentation is at https://chromium.googlesource.com/infra/luci/luci-go/+/master/scheduler/api/scheduler/v1/scheduler.proto

Args:

  • trigger_project_jobs (iterable of tuples(trigger, project, jobs)): each tuple corresponds to parameters of the emit_trigger API above.
  • timestamp_usec (int): unix timestamp in microseconds. Useful for idempotency of calls if your recipe is doing its own retries. https://chromium.googlesource.com/infra/luci/luci-go/+/master/scheduler/api/scheduler/v1/triggers.proto
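
A sketch of a batched emit; BuildbucketTrigger is assumed here as the payload class (see examples/emit_triggers for the exact classes and constructor arguments), and the project and builder names are illustrative:

def RunSteps(api):
  # BuildbucketTrigger and its arguments are assumptions; consult
  # examples/emit_triggers for the real payload construction.
  trigger = api.scheduler.BuildbucketTrigger(properties={'revision': 'deadbeef'})
  api.scheduler.emit_triggers(
      [(trigger, 'example-project', ['example-builder-a', 'example-builder-b'])],
      step_name='emit triggers')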

@property
def host(self):

Returns the backend hostname used by this module.

def set_host(self, host):

Changes the backend hostname used by this module.

Args:

  • host (str): server host (e.g. ‘luci-scheduler.appspot.com’).

@property
def triggers(self):

Returns a list of triggers that triggered the current build.

A trigger is an instance of triggers_pb2.Trigger.

recipe_modules / service_account

DEPS: path, platform, raw_io, step

API for getting OAuth2 access tokens for LUCI tasks or private keys.

This is a thin wrapper over the luci-auth go executable ( https://godoc.org/go.chromium.org/luci/auth/client/cmd/luci-auth).

Depends on luci-auth being available in PATH.

class ServiceAccountApi(RecipeApi):

def default(self):

Returns an account associated with the task.

On LUCI, this is the default account exposed through the LUCI_CONTEXT[“local_auth”] protocol. When running locally, this is the account the user logged in with via the “luci-auth login ...” command prior to running the recipe.
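
A minimal sketch of fetching a token for the task account; the get_access_token() method on the returned account object is assumed from examples/full:

def RunSteps(api):
  account = api.service_account.default()
  # get_access_token() is assumed here; it returns an OAuth2 access token
  # for the task's account.
  token = account.get_access_token()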

def from_credentials_json(self, key_path):

Returns a service account based on a JSON credentials file.

This is the file generated by Cloud Console when creating a service account key. It contains the private key inside.

Args:

  • key_path (str|Path): object pointing to a service account JSON key.

recipe_modules / source_manifest

class SourceManfiestApi(RecipeApi):

def set_json_manifest(self, name, data):

Uploads a source manifest with the given name.

NOTE: Due to current implementation restrictions, this method may only be called after some step has been run from the recipe. Calling this before running any steps is invalid and will fail. We hope to lift this restriction sometime after we don't need to support buildbot any more.

TODO(iannucci): remove this restriction.

Args:

  • name (str) - the name of the manifest. These names must be valid LogDog stream names, and must be unique within a recipe run. e.g.
    • “main_checkout”
    • “bisect/deadbeef”
  • data (dict) - the JSONPB representation of the source_manifest.proto Manifest message.
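
A minimal sketch honoring the restriction above; the manifest contents are a placeholder, since the real fields come from source_manifest.proto:

def RunSteps(api):
  # At least one step must run before set_json_manifest (see NOTE above).
  api.step('init', ['echo', 'setup'])
  # Placeholder manifest; the real fields are defined by source_manifest.proto.
  manifest_data = {}
  api.source_manifest.set_json_manifest('main_checkout', manifest_data)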

recipe_modules / step

DEPS: context, path

Step is the primary API for running steps (external programs, scripts, etc.).

class StepApi(RecipeApiPlain):

@property
def InfraFailure(self):

InfraFailure is a subclass of StepFailure, and will translate to a purple build.

This exception is raised from steps which are marked as infra_steps when they fail.

@property
def StepFailure(self):

This is the base Exception class for all step failures.

It can be manually raised from recipe code to cause the build to turn red.

Usage:

  • raise api.StepFailure("some reason")
  • except api.StepFailure:

@property
def StepTimeout(self):

StepTimeout is a subclass of StepFailure and is raised when a step times out.

@property
def StepWarning(self):

StepWarning is a subclass of StepFailure, and will translate to a yellow build.

@recipe_api.composite_step
def __call__(self, name, cmd, ok_ret=(0,), infra_step=False, wrapper=(), timeout=None, allow_subannotations=None, trigger_specs=None, stdout=None, stderr=None, stdin=None, step_test_data=None):

Returns a step dictionary which is compatible with annotator.py.

Args:

  • name (string): The name of this step.
  • cmd (list of strings): in the style of subprocess.Popen or None to create a no-op fake step.
  • ok_ret (tuple or set of ints, ‘any’, ‘all’): allowed return codes. Any unexpected return codes will cause an exception to be thrown. If you pass in the value ‘any’ or ‘all’, the engine will allow any return code to be returned. Defaults to {0}.
  • infra_step: Whether or not this is an infrastructure step. If an infrastructure step fails, it will be placed in an EXCEPTION state and InfraFailure will be raised.
  • wrapper: If supplied, a command to prepend to the executed step as a command wrapper.
  • timeout: If supplied, the recipe engine will kill the step after the specified number of seconds.
  • allow_subannotations (bool): if True, lets the step emit its own annotations. NOTE: Enabling this can cause some buggy behavior. Please strongly consider using step_result.presentation instead. If you have questions, please contact infra-dev@chromium.org.
  • trigger_specs: a list of trigger specifications
  • stdout: Placeholder to put step stdout into. If used, stdout won't appear in the annotator's stdout (and |allow_subannotations| is ignored).
  • stderr: Placeholder to put step stderr into. If used, stderr won't appear in the annotator's stderr.
  • stdin: Placeholder to read step stdin from.
  • step_test_data (func -> recipe_test_api.StepTestData): A factory which returns a StepTestData object that will be used as the default test data for this step. The recipe author can override/augment this object in the GenTests function.

Returns a step_data.StepData for the running step.
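
For example (the command and names are illustrative), a step that tolerates return code 1 and is killed after ten minutes:

def RunSteps(api):
  result = api.step(
      'compile', ['make', 'all'],
      ok_ret=(0, 1),
      timeout=10 * 60)
  result.presentation.step_text = 'build finished'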

@property
def active_result(self):

The currently active (open) result from the last step that was run. This is a step_data.StepData object.

Allows you to do things like:

try:
  api.step('run test', [..., api.json.output()])
finally:
  result = api.step.active_result
  if result.json.output:
    new_step_text = result.json.output['step_text']
    api.step.active_result.presentation.step_text = new_step_text

This will update the step_text of the test, even if the test fails. Without this api, the above code would look like:

try:
  result = api.step('run test', [..., api.json.output()])
except api.StepFailure as f:
  result = f.result
  raise
finally:
  if result.json.output:
    new_step_text = result.json.output['step_text']
    api.step.active_result.presentation.step_text = new_step_text

@property
def defer_results(self):

See recipe_api.py for docs.

@contextlib.contextmanager
def nest(self, name):

Nest allows you to nest steps hierarchically on the build UI.

Calling

with api.step.nest(<name>):
  ...

will generate a dummy step with the provided name. All other steps run within this with statement will be hidden from the UI by default under this dummy step in a collapsible hierarchy. Nested blocks can also nest within each other.

recipe_modules / swarming

DEPS: cipd, context, isolated, json, path, properties, raw_io, runtime, step

class SwarmingApi(RecipeApi):

API for interacting with swarming.

The tool's source lives at http://go.chromium.org/luci/client/cmd/swarming.

This module will deploy the client to [CACHE]/swarming_client/; users should add this path to the named cache for their builder.

def collect(self, name, tasks, output_dir=None, timeout=None):

Waits on a set of Swarming tasks.

Args:

  • name (str): The name of the step.
  • tasks (list(str|TaskRequestMetadata)): A list of ids or metadata objects corresponding to the tasks to wait on.
  • output_dir (Path|None): Where to download the tasks' isolated outputs. If set to None, they will not be downloaded; otherwise, a given task's outputs will be downloaded to output_dir//.
  • timeout (str|None): The duration for which to wait on the tasks to finish. If set to None, there will be no timeout; otherwise, timeout follows the format described by https://golang.org/pkg/time/#ParseDuration.

Returns: A list of TaskResult objects.

def initialize(self):

@contextlib.contextmanager
def on_path(self):

This context manager ensures the go swarming client is available on $PATH.

Example:

with api.swarming.on_path():
  # do your steps which require the swarming binary on path

def task_request(self):

Creates a new TaskRequest object.

See documentation for TaskRequest/TaskSlice to see how to build this up into a full task.

Once your TaskRequest is complete, you can pass it to trigger in order to have it start running on the swarming server.

def trigger(self, step_name, requests):

Triggers a set of Swarming tasks.

Args:

  • step_name (str): The name of the step.
  • requests (seq[TaskRequest]): A sequence of task request objects representing the tasks we want to trigger.

Returns: A list of TaskRequestMetadata objects.
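
A sketch of the trigger/collect flow. The TaskRequest builder methods used below (with_name, with_slice, with_command, with_dimensions) are assumptions based on this module's examples and may differ; consult the TaskRequest documentation for the exact API.

def RunSteps(api):
  # Build a request; the builder-method names here are assumptions.
  request = api.swarming.task_request()
  request = (request.
      with_name('hello-task').
      with_slice(0, request[0].
          with_command(['echo', 'hello']).
          with_dimensions(pool='example.pool')))

  metadata = api.swarming.trigger('trigger tasks', [request])
  results = api.swarming.collect(
      'collect tasks', metadata,
      output_dir=api.path['cleanup'], timeout='30m')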

@contextlib.contextmanager
def with_server(self, server):

This context sets the server for Swarming calls.

Example:

with api.swarming.with_server('new-swarming-server.com'):
  # perform swarming calls

Args:

  • server (str): The swarming server to call within context.

recipe_modules / tempfile

DEPS: file, path

Simplistic temporary directory manager (deprecated).

class TempfileApi(RecipeApi):

@contextlib.contextmanager
def temp_dir(self, prefix):

This makes a temporary directory which lives for the scope of the with statement.

Example:

with api.tempfile.temp_dir("some_prefix") as path:
  # use path
# path is deleted here.

recipe_modules / time

DEPS: python

Allows mockable access to the current time.

class TimeApi(RecipeApi):

def ms_since_epoch(self):

Returns current timestamp as an int number of milliseconds since epoch.

def sleep(self, secs):

Suspends execution for |secs| (float) seconds. Does nothing in testing.

If secs > 60 (sleeping for longer than one minute), a step is run to perform the sleep, so that anyone looking at the build can see what the recipe is doing.

def time(self):

Return current timestamp as a float number of seconds since epoch.

def utcnow(self):

Return current UTC time as a datetime.datetime.
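
For example, timing a section of a recipe and pausing in between (the 90-second sleep surfaces as its own step, per sleep() above):

def RunSteps(api):
  start_ms = api.time.ms_since_epoch()
  api.time.sleep(90)  # Longer than 60s, so this runs as a visible step.
  elapsed_ms = api.time.ms_since_epoch() - start_ms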

recipe_modules / tricium

DEPS: json, properties, step

API for Tricium analyzers to use.

class TriciumApi(RecipeApi):

TriciumApi provides basic support for Tricium.

def __init__(self, repository, ref, paths, **kwargs):

Sets up the API.

This assumes that the input is a Tricium GitFileDetails object, and the output is a Tricium Results object (see https://chromium.googlesource.com/infra/infra/+/master/go/src/infra/tricium/api/v1/data.proto for details and definitions).

def add_comment(self, category, message, path, url='', start_line=0, end_line=0, start_char=0, end_char=0, suggestions=None):

@property
def paths(self):

@property
def ref(self):

@property
def repository(self):

def write_comments(self, dump=False):
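
A minimal analyzer sketch based on the properties and methods above; the comment category and message are illustrative:

def RunSteps(api):
  # Attach one illustrative comment per analyzed file, then emit the results.
  for path in api.tricium.paths:
    api.tricium.add_comment(
        'Example/Category', 'An illustrative analyzer message.', path,
        start_line=1, end_line=1)
  api.tricium.write_comments()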

recipe_modules / url

DEPS: context, json, path, python, raw_io

Methods for interacting with HTTP(s) URLs.

class UrlApi(RecipeApi):

def get_file(self, url, path, step_name=None, headers=None, transient_retry=True, strip_prefix=None, timeout=None):

GETs data at the given URL and writes it to the file at |path|.

Args:

  • url: URL to request.
  • path (Path): the Path where the content will be written.
  • step_name: optional step name, ‘fetch ’ by default.
  • headers: a {header_name: value} dictionary for HTTP headers.
  • transient_retry (bool or int): Determines how transient HTTP errors (>500) will be retried. If True (default), errors will be retried up to 10 times. If False, no transient retries will occur. If an integer is supplied, this is the number of transient retries to perform. All retries have exponential backoff applied.
  • strip_prefix (str or None): If not None, this prefix must be present at the beginning of the response, and will be stripped from the resulting content (e.g., GERRIT_JSON_PREFIX).
  • timeout: Timeout (see step.call).

Returns (UrlApi.Response) - Response with “path” as its “output” value.

Raises:

  • HTTPError, InfraHTTPError: if the request failed.
  • ValueError: If the request was invalid.

def get_json(self, url, step_name=None, headers=None, transient_retry=True, strip_prefix=None, log=False, timeout=None, default_test_data=None):

GETs data at the given URL and parses it as JSON.

Args:

  • url: URL to request.
  • step_name: optional step name, ‘fetch ’ by default.
  • headers: a {header_name: value} dictionary for HTTP headers.
  • transient_retry (bool or int): Determines how transient HTTP errors (>500) will be retried. If True (default), errors will be retried up to 10 times. If False, no transient retries will occur. If an integer is supplied, this is the number of transient retries to perform. All retries have exponential backoff applied.
  • strip_prefix (str or None): If not None, this prefix must be present at the beginning of the response, and will be stripped from the resulting content (e.g., GERRIT_JSON_PREFIX).
  • log (bool): If True, emit the JSON content as a log.
  • timeout: Timeout (see step.call).
  • default_test_data (jsonish): If provided, use this as the unmarshalled JSON result when testing if no overriding data is available.

Returns (UrlApi.Response) - Response with the JSON as its “output” value.

Raises:

  • HTTPError, InfraHTTPError: if the request failed.
  • ValueError: If the request was invalid.
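
A sketch of a JSON fetch (the URL is illustrative); the parsed result is available as the response's output:

def RunSteps(api):
  response = api.url.get_json(
      'https://example.com/status.json',
      step_name='fetch status',
      log=True,
      default_test_data={'ok': True})
  status = response.output  # the unmarshalled JSON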

def get_text(self, url, step_name=None, headers=None, transient_retry=True, timeout=None, default_test_data=None):

GETs text data at the given URL.

Args:

  • url: URL to request.
  • step_name: optional step name, ‘fetch ’ by default.
  • headers: a {header_name: value} dictionary for HTTP headers.
  • transient_retry (bool or int): Determines how transient HTTP errors (>500) will be retried. If True (default), errors will be retried up to 10 times. If False, no transient retries will occur. If an integer is supplied, this is the number of transient retries to perform. All retries have exponential backoff applied.
  • timeout: Timeout (see step.call).
  • default_test_data (str): If provided, use this as the text output when testing if no overriding data is available.

Returns (UrlApi.Response) - Response with the content as its output value.

Raises:

  • HTTPError, InfraHTTPError: if the request failed.
  • ValueError: If the request was invalid.

def join(self, *parts):

Constructs a URL path from composite parts.

Args:

  • parts (str...): Strings to concatenate. Any leading or trailing slashes will be stripped from intermediate strings to ensure that they join together. Trailing slashes will not be stripped from the last part.

def validate_url(self, v):

Validates that “v” is a valid URL.

A valid URL has a scheme and netloc, and must begin with HTTP or HTTPS.

Args:

  • v (str): The URL to validate.

Returns (bool) - True if the URL is considered secure, False if not.

Raises:

  • ValueError: if “v” is not valid.
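
A sketch combining join() and validate_url() (the URL pieces are illustrative):

def RunSteps(api):
  full_url = api.url.join('https://example.com/', '/api/', 'v1/items')
  # Raises ValueError if the URL is malformed; returns whether it is
  # considered secure.
  is_secure = api.url.validate_url(full_url)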

recipe_modules / uuid

Allows test-repeatable access to a random UUID.

class UuidApi(RecipeApi):

def random(self):

Returns a random UUID string.

Recipes

recipes / archive:examples/full

DEPS: archive, context, file, json, path, platform, raw_io, step

def RunSteps(api):

recipes / assertions:tests/assert-raises

DEPS: assertions, properties, step

def RunSteps(api):

recipes / assertions:tests/assertions

DEPS: assertions, properties, step

def RunSteps(api):

recipes / assertions:tests/attribute_error

DEPS: assertions, properties, step

def RunSteps(api):

recipes / assertions:tests/long_message

DEPS: assertions, step

def RunSteps(api):

recipes / assertions:tests/max_diff

DEPS: assertions, properties, step

def RunSteps(api):

recipes / buildbucket:examples/full

DEPS: buildbucket, platform, properties, raw_io, step

This file is a recipe demonstrating the buildbucket recipe module.

def RunSteps(api):

recipes / buildbucket:run/multi

DEPS: buildbucket, properties

Launches multiple builds at the same revision.

def RunSteps(api, build_requests, collect_builds):

recipes / buildbucket:tests/build

DEPS: buildbucket, properties, step

def RunSteps(api):

recipes / buildbucket:tests/collect

DEPS: buildbucket, properties

def RunSteps(api):

recipes / buildbucket:tests/get

DEPS: buildbucket, json, step

def RunSteps(api):

recipes / buildbucket:tests/put

DEPS: buildbucket, properties, runtime

def RunSteps(api):

recipes / buildbucket:tests/schedule

DEPS: buildbucket, properties, runtime, step

def RunSteps(api):

recipes / buildbucket:tests/search

DEPS: buildbucket, json, properties, runtime, step

def RunSteps(api):

recipes / cipd:examples/full

DEPS: cipd, json, path, platform, properties, service_account, step

def RunSteps(api, use_pkg, pkg_files, pkg_dirs, pkg_vars, ver_files, install_mode, refs, tags):

recipes / commit_position:examples/full

DEPS: commit_position, step

def RunSteps(api):

recipes / context:examples/full

DEPS: context, path, raw_io, step

def RunSteps(api):

recipes / context:tests/cwd

DEPS: context, path, step

def RunSteps(api):

recipes / context:tests/env

DEPS: context, path, raw_io, step

def RunSteps(api):

recipes / context:tests/infra_step

DEPS: context, path, step

def RunSteps(api):

recipes / cq:tests/triggered_build_ids

DEPS: buildbucket, cq, step

def RunSteps(api):

recipes / cq:tests/type_of_run

DEPS: cq, properties, step

def RunSteps(api):

recipes / engine_tests/bad_subprocess

DEPS: python

Tests that daemons that hang on to STDOUT can't cause the engine to hang.

def RunSteps(api):

recipes / engine_tests/comprehensive_ui

DEPS: python, raw_io, step

A fast-running recipe which comprehensively covers all StepPresentation features available in the recipe engine.

def RunSteps(api):

def named_step(api, name):

recipes / engine_tests/config_operations

DEPS: json, step

Tests that recipes can modify configuration options in various ways.

def BaseConfig(**_kwargs):

def DumpRecipeEngineTestConfig(api, config):

def RunSteps(api):

@config_ctx()
def test1(c):

@config_ctx(includes=[‘test2a’])
def test2(c):

@config_ctx()
def test2a(c):

recipes / engine_tests/expect_exception

DEPS: step

Tests that step_data can accept multiple specs at once.

def RunSteps(api):

recipes / engine_tests/functools_partial

DEPS: step

Engine shouldn't explode when step_test_data gets functools.partial.

This is a regression test for a bug caused by this revision: http://src.chromium.org/viewvc/chrome?revision=298072&view=revision

When this recipe is run (by run_test.py), the _print_step code is exercised.

def RunSteps(api):

recipes / engine_tests/missing_start_dir

DEPS: path, step

Tests that deleting the current working directory doesn't immediately fail.

def RunSteps(api):

recipes / engine_tests/module_injection_site

DEPS: path, step

This test serves to demonstrate that the ModuleInjectionSite object on recipe modules (i.e. the .m) also contains a reference to the module which owns it.

This was implemented to aid in refactoring some recipes (crbug.com/782142).

def RunSteps(api):

recipes / engine_tests/multi_test_data

DEPS: raw_io, step

Tests that step_data can accept multiple specs at once.

def RunSteps(api):

recipes / engine_tests/multiple_placeholders

DEPS: assertions, json, step

Tests error checking around multiple placeholders in a single step.

def RunSteps(api):

recipes / engine_tests/nonexistent_command

DEPS: step

def RunSteps(api):

recipes / engine_tests/proto_properties

DEPS: assertions, properties

def RunSteps(api, properties, env_props):

recipes / engine_tests/recipe_paths

DEPS: path, python, step

Tests that recipes have access to names, resources and their repo.

def RunSteps(api):

recipes / engine_tests/sort_properties

DEPS: step

Tests that step presentation properties can be ordered.

def RunSteps(api):

recipes / engine_tests/step_stack_exhaustion

DEPS: step

Tests that placeholders can't wreck the world by exhausting the step stack.

def RunSteps(api):

recipes / engine_tests/undeclared_method

DEPS: properties, python, step

def RunSteps(api, from_recipe, attribute, module):

recipes / engine_tests/unicode

DEPS: properties, step

def RunSteps(api):

recipes / engine_tests/whitelist_steps

DEPS: context, properties, step

Tests that step_data can accept multiple specs at once.

def RunSteps(api, fakeit):

recipes / file:examples/copy

DEPS: file, json, path

def RunSteps(api):

recipes / file:examples/copytree

DEPS: file, path

def RunSteps(api):

recipes / file:examples/error

DEPS: file, path

def RunSteps(api):

recipes / file:examples/flatten_single_directories

DEPS: file, path

def RunSteps(api):

recipes / file:examples/glob

DEPS: file, json, path

def RunSteps(api):

recipes / file:examples/handle_json_file

DEPS: file, path

def RunSteps(api):

recipes / file:examples/listdir

DEPS: file, path

def RunSteps(api):

recipes / file:examples/raw_copy

DEPS: file, json, path

def RunSteps(api):

recipes / file:examples/symlink

DEPS: file, json, path

def RunSteps(api):

recipes / file:examples/truncate

DEPS: file, path

def RunSteps(api):

recipes / generator_script:examples/full

DEPS: generator_script, json, path, properties, step

def RunSteps(api, script_name):

recipes / isolated:examples/full

DEPS: file, isolated, json, path, runtime, step

def RunSteps(api):

recipes / json:examples/full

DEPS: json, path, properties, python, raw_io, step

def RunSteps(api):

recipes / json:tests/add_json_log

DEPS: json, step

def RunSteps(api):

recipes / led:tests/full

DEPS: json, led, step

def RunSteps(api):

recipes / path:examples/full

DEPS: path, platform, properties, step

def RunSteps(api):

recipes / platform:examples/full

DEPS: platform, step

def RunSteps(api):

recipes / properties:examples/full

DEPS: properties, step

def RunSteps(api, props, env_props):

recipes / python:examples/full

DEPS: path, python, raw_io, step

Launches the repo bundler.

def RunSteps(api):

recipes / python:tests/infra_failing_step

DEPS: python, step

Tests for api.python.infra_failing_step.

def RunSteps(api):

recipes / random:tests/full

DEPS: random, step

def RunSteps(api):

recipes / raw_io:examples/full

DEPS: path, properties, python, raw_io, step

def RunSteps(api):

recipes / runtime:tests/full

DEPS: runtime, step

def RunSteps(api):

recipes / scheduler:examples/emit_triggers

DEPS: json, properties, runtime, scheduler, time

This file is a recipe demonstrating emitting triggers to LUCI Scheduler.

def RunSteps(api):

recipes / scheduler:examples/host

DEPS: scheduler, step

This file is a recipe demonstrating reading/mocking scheduler host.

def RunSteps(api):

recipes / scheduler:examples/triggers

DEPS: scheduler, step

This file is a recipe demonstrating reading triggers of the current build.

def RunSteps(api):

recipes / service_account:examples/full

DEPS: path, platform, properties, raw_io, service_account

def RunSteps(api, key_path, scopes):

recipes / source_manifest:examples/simple

DEPS: python, source_manifest

def RunSteps(api):

recipes / step:examples/full

DEPS: context, json, path, properties, step

def RunSteps(api, bad_return, access_invalid_data, access_deep_invalid_data, assign_extra_junk, timeout):

recipes / step:tests/active_result

DEPS: step

def RunSteps(api):

recipes / step:tests/defer

DEPS: step

def RunSteps(api):

recipes / step:tests/inject_paths

DEPS: context, path, properties, step

def RunSteps(api):

recipes / step:tests/nested

DEPS: context, step

def RunSteps(api):

recipes / step:tests/stdio

DEPS: raw_io, step

def RunSteps(api):

recipes / step:tests/step_call_args

DEPS: step

def RunSteps(api):

recipes / step:tests/subannotations

DEPS: step

def RunSteps(api):

recipes / step:tests/timeout

DEPS: properties, step

def RunSteps(api, timeout):

recipes / step:tests/trigger

DEPS: properties, step

def RunSteps(api, command):

recipes / swarming:examples/full

DEPS: cipd, path, runtime, step, swarming

def RunSteps(api):

recipes / tempfile:examples/full

DEPS: tempfile

def RunSteps(api):

recipes / time:examples/full

DEPS: step, time

def RunSteps(api):

recipes / tricium:examples/full

DEPS: properties, tricium

def RunSteps(api):

recipes / url:examples/full

DEPS: context, path, step, url

def RunSteps(api):

recipes / url:tests/join

DEPS: step, url

def RunSteps(api):

recipes / url:tests/validate_url

DEPS: properties, step, url

def RunSteps(api):

recipes / uuid:examples/full

DEPS: step, uuid

def RunSteps(api):