DEPS: json, path, platform, step
Provides steps to manipulate archive files (tar, zip, etc.).
— def extract(self, step_name: str, archive_file: (config_types.Path | str), output: (config_types.Path | str), mode: str='safe', include_files: Sequence[str]=(), archive_type: (str | None)=None):
Step to uncompress |archive_file| into |output| directory.
Archive will be unpacked to |output| so that root of an archive is in |output|, i.e. archive.tar/file.txt will become |output|/file.txt.
Step will FAIL if |output| already exists.
Args:
* mode: if 'safe' and the archive contains paths which would escape the output location, the extraction will fail (raise StepException, which contains a member StepException.archive_skipped_files; all other files will be extracted normally). If 'unsafe', then tarfiles containing paths escaping output will be extracted as-is.
* include_files: globs matched with the fnmatch module. If a file "filename" in the archive exists, include_files with "file*" will match it. All paths for the matcher are converted to posix style (forward slash).
— def package(self, root: config_types.Path):
Returns Package object that can be used to compress a set of files.
Usage:
# Archive root/file and root/directory/**
(api.archive.package(root).
    with_file(root / 'file').
    with_dir(root / 'directory').
    archive('archive step', output, 'tbz'))

# Archive root/**
zip_path = (
    api.archive.package(root).
    archive('archive step', api.path.start_dir / 'output.zip')
)
Args:
Returns: Package object.
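The include_files globs described under extract follow fnmatch semantics; a quick stdlib illustration (not the module's own code):

```python
# fnmatch-style matching as used by include_files: posix-style paths,
# shell-style globs. Plain-Python illustration only.
import fnmatch

paths = ['filename', 'dir/other.txt']
matched = [p for p in paths if fnmatch.fnmatch(p, 'file*')]
# 'filename' matches 'file*'; 'dir/other.txt' does not.
```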
Provides access to the assertion methods of the python unittest module.
Asserting non-step aspects of code (return values, non-step side effects) is expressed more naturally by making assertions within the RunSteps function of the test recipe. This api provides access to the assertion methods of unittest.TestCase to be used within test recipes.
All non-deprecated assertion methods of unittest.TestCase can be used.
An enhancement to the assertion methods is that if a custom msg is used, values for the non-msg arguments can be substituted into the message using named substitution with the format method of strings, e.g. self.assertEqual(0, 1, '{first} should be {second}') will raise an AssertionError with the message: '0 should be 1'.
The attributes longMessage and maxDiff are supported and have the same behavior as the unittest module.
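The named-substitution behavior can be seen with plain str.format (stdlib illustration, using the values from the example above):

```python
# How named substitution renders a custom msg: the non-msg argument
# values are substituted into the {first}/{second} placeholders.
msg = '{first} should be {second}'
rendered = msg.format(first=0, second=1)
# rendered is '0 should be 1'
```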
Example (.../recipe_modules/my_module/tests/foo.py):
DEPS = [
    'my_module',
    'recipe_engine/assertions',
    'recipe_engine/properties',
    'recipe_engine/runtime',
]

def RunSteps(api):
  '''Behavior of foo depends on whether build is experimental'''
  value = api.my_module.foo()
  expected_value = api.properties.get('expected_value')
  api.assertions.assertEqual(value, expected_value)

def GenTests(api):
  yield (
      api.test('basic') +
      api.properties(expected_value='normal value')
  )
  yield (
      api.test('experimental') +
      api.properties(expected_value='experimental value') +
      api.runtime(is_experimental=True)
  )
DEPS: cipd, path, properties, step
API for interacting with Provenance server using the broker tool.
@property
— def bcid_reporter_path(self):
Returns the path to the broker binary.
When the property is accessed the first time, the latest stable, released broker will be installed using cipd.
— def report_cipd(self, digest, pkg, iid, server_url=None):
Reports cipd digest to local provenance server.
This is used to report a produced artifact's hash and metadata to provenance; it is used to generate provenance.
Args:
— def report_gcs(self, digest, guri, server_url=None):
Reports gcs digest to local provenance server.
This is used to report a produced artifact's hash and metadata to provenance; it is used to generate provenance.
Args:
— def report_sbom(self, digest, guri, sbom_subject, server_url=None):
Reports SBOM gcs digest to local provenance server.
This is used to report the SBOM metadata to provenance, along with the hash of the artifact it represents. It is also used to generate provenance.
Args:
— def report_stage(self, stage, server_url=None):
Reports task stage to local provenance server.
Args:
DEPS: json, path, platform, raw_io, resultdb, runtime, step, uuid, warning
API for interacting with the buildbucket service.
Requires the buildbucket command in $PATH: https://godoc.org/go.chromium.org/luci/buildbucket/client/cmd/buildbucket
The url_title_fn parameter used in this module is a function that accepts a build_pb2.Build and returns a link title. If it returns None, the link is not reported. The default link title is the build ID.
A module for interacting with buildbucket.
— def add_tags_to_current_build(self, tags):
Adds arbitrary tags during the runtime of a build.
Args:
@property
— def backend_hostname(self):
Returns the backend hostname for the build. If it is a legacy swarming build, then the swarming hostname will be returned.
@property
— def backend_task_dimensions(self):
Returns the task dimensions used by the task for the build.
— def backend_task_dimensions_from_build(self, build=None):
Returns the task dimensions for the provided build. If no build is provided, then self.build will be used.
@property
— def backend_task_id(self):
Returns the task id of the task for the build.
— def backend_task_id_from_build(self, build=None):
Returns the task id of the task for the provided build. If no build is provided, then self.build will be used.
@property
— def bucket_v1(self):
Returns bucket name in v1 format.
Mostly useful for scheduling new builds using v1 API.
@property
— def build(self):
Returns the current build as a buildbucket.v2.Build protobuf message.
For the value format, see the Build message in build.proto.
DO NOT MODIFY the returned value. Do not implement conditional logic on returned tags; they are for indexing. Use the returned build.input instead.
Pure Buildbot support: to simplify transition to buildbucket, returns a message even if the current build is not a buildbucket build. Provides as much information as possible. Some fields may be left empty, violating the rules described in the .proto files. If the current build is not a buildbucket build, the returned build.id is 0.
@property
— def build_id(self):
@property
— def build_input(self):
— def build_url(self, host=None, build_id=None):
Returns url to a build. Defaults to current build.
@property
— def builder_cache_path(self):
Path to the builder cache directory.
Such a directory can be used to cache builder-specific data. It remains on the bot from build to build. See "Builder cache" in https://chromium.googlesource.com/infra/luci/luci-go/+/main/buildbucket/proto/project_config.proto
@property
— def builder_full_name(self):
Returns the full builder name: {project}/{bucket}/{builder}.
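The composition of the full name can be sketched as plain string formatting (hypothetical helper, not the module's code):

```python
# Builds the {project}/{bucket}/{builder} full name described above.
def builder_full_name(project: str, bucket: str, builder: str) -> str:
    return '%s/%s/%s' % (project, bucket, builder)

# e.g. builder_full_name('chromium', 'ci', 'linux-rel')
#   -> 'chromium/ci/linux-rel'
```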
@property
— def builder_id(self):
@property
— def builder_name(self):
Returns the builder name. Shortcut for .build.builder.builder.
@property
— def builder_realm(self):
Returns the LUCI realm name of the current build.
Raises InfraFailure if the build proto doesn't have project or bucket set. This can happen in tests that don't properly mock the build proto.
— def cancel_build(self, build_id, reason=' ', step_name=None):
Cancel the build associated with the provided build ID.
Args:
* build_id (int|str): a buildbucket build ID. It should be either an integer (e.g. 123456789) or the numeric value in string format (e.g. '123456789').
* reason (str): reason for canceling the given build. Can't be None or empty. Markdown is supported.

Returns: None if the build is successfully canceled. Otherwise, an InfraFailure will be raised.
— def collect_build(self, build_id, **kwargs):
Shorthand for collect_builds below, but for a single build only.
Args:
Returns: Build for the ended build.
— def collect_builds(self, build_ids, interval=None, timeout=None, step_name=None, raise_if_unsuccessful=False, url_title_fn=None, mirror_status=False, fields=DEFAULT_FIELDS, cost=None, eager=False):
Waits for a set of builds to end and returns their details.
Args:
* build_ids: list of build IDs to wait for.
* interval: delay (in secs) between requests while waiting for builds to end. Defaults to 1m.
* timeout: maximum time to wait for builds to end. Defaults to 1h.
* step_name: custom name for the generated step.
* raise_if_unsuccessful: if any build being collected did not succeed, raise an exception.
* url_title_fn: generates build URL title. See module docstring.
* mirror_status: mark the step as failed/infra-failed if any of the builds did not succeed. Ignored if raise_if_unsuccessful is True.
* fields: a list of fields to include in the response, names relative to build_pb2.Build (e.g. ["tags", "infra.swarming"]).
* cost: a step.ResourceCost to override for the underlying bb invocation. If not specified, the recipe_engine's default values for ResourceCost will be used.
* eager: whether to stop upon getting the first ended build.

Returns: A map from integer build IDs to the corresponding Build for all specified builds.
— def get(self, build_id, url_title_fn=None, step_name=None, fields=DEFAULT_FIELDS, test_data=None):
Gets a build.
Args:
* build_id: a buildbucket build ID.
* url_title_fn: generates build URL title. See module docstring.
* step_name: name for this step.
* fields: a list of fields to include in the response, names relative to build_pb2.Build (e.g. ["tags", "infra.swarming"]).
* test_data: a build_pb2.Build for use in testing.

Returns: A build_pb2.Build.
— def get_multi(self, build_ids, url_title_fn=None, step_name=None, fields=DEFAULT_FIELDS, test_data=None):
Gets multiple builds.
Args:
* build_ids: a list of build IDs.
* url_title_fn: generates build URL title. See module docstring.
* step_name: name for this step.
* fields: a list of fields to include in the response, names relative to build_pb2.Build (e.g. ["tags", "infra.swarming"]).
* test_data: a sequence of build_pb2.Build objects for use in testing.

Returns: A dict {build_id: build_pb2.Build}.
@property
— def gitiles_commit(self):
Returns the input gitiles commit. Shortcut for .build.input.gitiles_commit.
For the value format, see the GitilesCommit message.
Never returns None, but sub-fields may be empty.
— def hide_current_build_in_gerrit(self):
Hides the build in the UI.
@host.setter
— def host(self, value):
— def is_critical(self, build=None):
Returns True if the build is critical. Build defaults to the current one.
— def list_builders(self, project, bucket, step_name=None):
Lists configured builders in a bucket.
Args:
Returns: A list of builder names, excluding the project and bucket (e.g. 'betty-pi-arc-release-main').
— def run(self, schedule_build_requests, collect_interval=None, timeout=None, url_title_fn=None, step_name=None, raise_if_unsuccessful=False, eager=False):
Runs builds and returns results.
A shortcut for schedule() and collect_builds(). See their docstrings.
Returns: A list of completed Builds in the same order as schedule_build_requests.
— def schedule(self, schedule_build_requests, url_title_fn=None, step_name=None, include_sub_invs=True):
Schedules a batch of builds.
Example:
req = api.buildbucket.schedule_request(builder='linux')
api.buildbucket.schedule([req])
Hint: when scheduling builds for CQ, let CQ know about them:
api.cv.record_triggered_builds(*api.buildbucket.schedule([req1, req2]))
Args:
* schedule_build_requests: a list of buildbucket.v2.ScheduleBuildRequest protobuf messages. Create one by calling the schedule_request method.

Returns: A list of Build messages in the same order as the requests.

Raises: InfraFailure if any of the requests fail.
— def schedule_request(self, builder, project=INHERIT, bucket=INHERIT, properties=None, experimental=INHERIT, experiments=None, gitiles_commit=INHERIT, gerrit_changes=INHERIT, tags=None, inherit_buildsets=True, swarming_parent_run_id=None, dimensions=None, priority=INHERIT, critical=INHERIT, exe_cipd_version=None, fields=DEFAULT_FIELDS, can_outlive_parent=None, as_shadow_if_parent_is_led=False):
Creates a new ScheduleBuildRequest message with reasonable defaults.
This is a convenience function to create a ScheduleBuildRequest message.
Among args, messages can be passed as dicts of the same structure.
Example:
request = api.buildbucket.schedule_request(
    builder='linux',
    tags=api.buildbucket.tags(a='b'),
)
build = api.buildbucket.schedule([request])[0]
Args:
* experimental: see the [experimental field](https://cs.chromium.org/chromium/infra/go/src/go.chromium.org/luci/buildbucket/proto/build.proto?q="bool experimental").
* gitiles_commit: defaults to the current build's .gitiles_commit.
* gerrit_changes: defaults to the current build's .gerrit_changes.
* inherit_buildsets: if True (default), the returned request will include buildset tags from the current build.
* swarming_parent_run_id: defaults to None, meaning no association. If passed, must be a valid swarming run id (specific execution of a task) for the swarming instance on which the build will execute. Typically, you'd want to set it to api.swarming.task_id. Read more about parent_run_id.
* priority: a value in [20..255]. Defaults to the value of the current build. Pass None to use the priority of the destination builder.
* exe_cipd_version: pass None to use the server-configured one.
* fields: a list of fields to include in the response, names relative to build_pb2.Build (e.g. ["tags", "infra.swarming"]).
* can_outlive_parent: if luci.buildbucket.manage_parent_child_relationship is not in the current build's experiments, can_outlive_parent is always True.
* as_shadow_if_parent_is_led: schedules the build in the bucket shadow. This is because Buildbucket needs the 'original' bucket to find the Builder config to generate the child build, so it can then put it in the 'shadow' bucket.

— def search(self, predicate, limit=None, url_title_fn=None, report_build=True, step_name=None, fields=DEFAULT_FIELDS, timeout=None, test_data=None):
Searches builds with one predicate.
Example: find all builds of the current CL.
from PB.go.chromium.org.luci.buildbucket.proto import rpc as builds_service_pb2
related_builds = api.buildbucket.search(builds_service_pb2.BuildPredicate(
    gerrit_changes=list(api.buildbucket.build.input.gerrit_changes),
))
Underneath it calls bb batch to perform the search, which should have better performance and memory usage than bb ls: we can get the batch response as a whole, take advantage of the proto recipe for direct encoding/decoding, and use the limit as the page_size in SearchBuildsRequest.
— def search_with_multiple_predicates(self, predicate, limit=None, url_title_fn=None, report_build=True, step_name=None, fields=DEFAULT_FIELDS, timeout=None, test_data=None):
Searches for builds with multiple predicates.
Example: find all builds with one tag OR another.
from PB.go.chromium.org.luci.buildbucket.proto import rpc as builds_service_pb2
related_builds = api.buildbucket.search_with_multiple_predicates([
    builds_service_pb2.BuildPredicate(
        tags=['one.tag'],
    ),
    builds_service_pb2.BuildPredicate(
        tags=['another.tag'],
    ),
])
Unlike search(), it still calls bb ls to keep the overall limit working.
Args:
* predicate: a list of builds_service_pb2.BuildPredicate objects. The predicates are connected with logical OR.
* fields: a list of fields to include in the response, names relative to build_pb2.Build (e.g. ["tags", "infra.swarming"]).

Returns: A list of builds ordered newest-to-oldest.
— def set_buildbucket_host(self, host):
— def set_output_gitiles_commit(self, gitiles_commit):
Sets the buildbucket.v2.Build.output.gitiles_commit field.
This will tell other systems, consuming the build, what version of the code was actually used in this build and what is the position of this build relative to other builds of the same builder.
Args:
* gitiles_commit: the commit's ref must start with refs/.

Can be called at most once per build.
@property
— def shadowed_bucket(self):
@property
— def swarming_bot_dimensions(self):
Returns the swarming bot dimensions for the build.
— def swarming_bot_dimensions_from_build(self, build=None):
Returns the swarming bot dimensions for the provided build. If no build is provided, then self.build will be used.
@property
— def swarming_parent_run_id(self):
Returns the parent_run_id (swarming specific) used in the task.
@property
— def swarming_priority(self):
Returns the priority (swarming specific) of the task.
@property
— def swarming_task_service_account(self):
Returns the swarming specific service account used in the task.
@staticmethod
— def tags(**tags):
Alias for tags in util.py. See doc there.
— def use_service_account_key(self, key_path):
Tells this module to start using given service account key for auth.
Otherwise the module is using the default account (when running on LUCI or locally), or no auth at all (when running on Buildbot).
Exists mostly to support Buildbot environment. Recipe for LUCI environment should not use this.
Args:
@contextmanager
— def with_host(self, host):
Sets the buildbucket host while in context, then reverts it.
DEPS: cipd, context, file, json, path, raw_io, runtime, step
API for interacting with cas client.
A module for interacting with cas client.
— def archive(self, step_name, root, *paths, log_level='info', **kwargs):
Archives given paths to a cas server.
Args:
Returns: digest (str): digest of uploaded root directory.
— def download(self, step_name, digest, output_dir):
Downloads a directory tree from a cas server.
Args:
@property
— def instance(self):
— def viewer_url(self, digest):
Returns the URL of the cas viewer.
@contextlib.contextmanager
— def with_instance(self, instance):
Sets the CAS instance while in context, then reverts it.
Simple API for handling CAS inputs to a recipe.
Recipes sometimes need files as part of their execution which don't live in source control (for example, they're generated elsewhere but tested in the recipe). In that case, there needs to be an easy way to give these files as an input to a recipe, so that the recipe can use them somehow. This module makes this easy.
This module has input properties which contain a list of CAS inputs to download. These can easily be downloaded to disk with the 'download_caches' method, and subsequently used by a recipe in whatever relevant manner.
A module for downloading CAS inputs to a recipe.
— def download_caches(self, output_dir, caches=None):
Downloads RBE-CAS caches and puts them in a given directory.
Args:
* output_dir: the output directory to download the caches to. If you're unsure of what directory to use, self.m.path.start_dir is a directory the recipe engine sets up for you that you can use.
* caches: a CasCache proto message containing the caches which should be downloaded. See properties.proto for the message definition. If unset, it uses the caches in this recipe module's properties.

Returns: The output directory as a Path object which contains all the cache data.
@property
— def input_caches(self):
Recipe API for LUCI Change Verifier.
LUCI Change Verifier is the pre-commit verification service that will replace CQ daemon. See: https://chromium.googlesource.com/infra/luci/luci-go/+/HEAD/cv
This recipe module depends on the prpc binary being available in $PATH: https://godoc.org/go.chromium.org/luci/grpc/cmd/prpc
This recipe module depends on an experimental API provided by LUCI CV and may be subject to change in the future. Please reach out to the LUCI team first if you want to use this recipe module; file a ticket at: https://bugs.chromium.org/p/chromium/issues/entry?components=Infra%3ELUCI%3EBuildService%3EPresubmit%3ECV
This module provides recipe API of LUCI Change Verifier.
— def search_runs(self, project, cls=None, limit=None, step_name=None, dev=False):
Searches for Runs.
Args:
Returns: A list of CV Runs ordered newest to oldest that match the given criteria.
DEPS: context, file, futures, json, path, platform, properties, raw_io, step, url
API for interacting with CIPD.
Depends on the 'cipd' binary available in $PATH: https://godoc.org/go.chromium.org/luci/cipd/client/cmd/cipd
CIPDApi provides basic support for CIPD.
This assumes that cipd (or cipd.exe or cipd.bat on Windows) has been installed somewhere in $PATH.
Attributes:
— def acl_check(self, pkg_path, reader=True, writer=False, owner=False):
Checks whether the caller has the given roles in a package.
Args:
Returns True if the caller has given roles, False otherwise.
— def add_instance_link(self, step_result):
— def build(self, input_dir, output_package, package_name, compression_level: (CompressionLevel | None)=None, install_mode: (InstallMode | None)=None, preserve_mtime: bool=False, preserve_writable: bool=False):
Builds, but does not upload, a cipd package from a directory.
Args:
Returns the CIPDApi.Pin instance.
— def build_from_pkg(self, pkg_def, output_package, compression_level: (CompressionLevel | None)=None):
Builds a package based on a PackageDefinition object.
Args:
Returns the CIPDApi.Pin instance.
— def build_from_yaml(self, pkg_def, output_package, pkg_vars=None, compression_level: (CompressionLevel | None)=None):
Builds a package based on on-disk YAML package definition file.
Args:
Returns the CIPDApi.Pin instance.
@contextlib.contextmanager
— def cache_dir(self, directory):
Sets the cache dir to use with CIPD by setting the $CIPD_CACHE_DIR environment variable.
If directory is None, no cache directory will be used.
— def create_from_pkg(self, pkg_def, refs=None, tags=None, metadata=None, compression_level: (CompressionLevel | None)=None, verification_timeout=None):
Builds and uploads a package based on a PackageDefinition object.
This builds and uploads the package in one step.
Args:
Returns the CIPDApi.Pin instance.
— def create_from_yaml(self, pkg_def, refs=None, tags=None, metadata=None, pkg_vars=None, compression_level: (CompressionLevel | None)=None, verification_timeout=None):
Builds and uploads a package based on on-disk YAML package definition file.
This builds and uploads the package in one step.
Args:
Returns the CIPDApi.Pin instance.
— def describe(self, package_name, version, test_data_refs=None, test_data_tags=None):
Returns information about a package instance given its version: who uploaded the instance and when, and a list of attached tags.
Args:
Returns the CIPDApi.Description instance describing the package.
— def ensure(self, root, ensure_file, name='ensure_installed'):
Ensures that packages are installed in a given root dir.
Args:
Returns the map of subdirectories to CIPDApi.Pin instances.
— def ensure_file_resolve(self, ensure_file, name='cipd ensure-file-resolve'):
Resolves versions of all packages for all verified platforms in an ensure file.
Args:
— def ensure_tool(self, package: str, version: str, executable_path: str=None):
Downloads an executable from CIPD.
Given a package named "name/of/some_exe/${platform}" and version "someversion", this will install the package at the directory "[START_DIR]/cipd_tool/name/of/some_exe/someversion". It will then return the absolute path to the executable within that directory.
This operation is idempotent, and will only run steps to download the package if it hasn't already been installed in the same build.
Args:
Returns a Path to the executable.
Future-safe; multiple concurrent calls for the same (package, version) will block on a single ensure step.
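The install-path layout described above can be sketched as follows; 'start_dir' stands in for [START_DIR] and the '${platform}' suffix is dropped from the directory name (a hypothetical sketch, not the module's actual code):

```python
# Derives the install directory used by ensure_tool for a package and
# version, per the "[START_DIR]/cipd_tool/<name>/<version>" layout.
import posixpath

def tool_dir(start_dir: str, package: str, version: str) -> str:
    name = package.replace('/${platform}', '')  # strip platform suffix
    return posixpath.join(start_dir, 'cipd_tool', name, version)
```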
@property
— def executable(self):
— def instances(self, package_name, limit=None):
Lists instances of a package, most recently uploaded first.
Args:
Returns the list of CIPDApi.Instance instance.
— def pkg_deploy(self, root, package_file):
Deploys the specified package to root.
ADVANCED METHOD: You shouldn't need this unless you're doing advanced things with CIPD. Typically you should use the ensure method here to fetch+install packages to the disk.
Args:
Returns a Pin for the deployed package.
— def pkg_fetch(self, destination, package_name, version):
Downloads the specified package to destination.
ADVANCED METHOD: You shouldn't need this unless you're doing advanced things with CIPD. Typically you should use the ensure method here to fetch+install packages to the disk.
Args:
Returns a Pin for the downloaded package.
— def register(self, package_name, package_path, refs=None, tags=None, metadata=None, verification_timeout=None):
Uploads and registers package instance in the package repository.
Args:
Returns: The CIPDApi.Pin instance.
— def search(self, package_name, tag, test_instances=None):
Searches for package instances by tag, optionally constrained by package name.
Args:
* test_instances: a number of testing IDs; generates instance_id_%d instances and returns pins for those.

Returns the list of CIPDApi.Pin instances.
— def set_metadata(self, package_name, version, metadata):
Attaches metadata to a package instance.
Args:
Returns the CIPDApi.Pin instance.
— def set_ref(self, package_name, version, refs):
Moves a ref to point to a given version.
Args:
Returns the CIPDApi.Pin instance.
— def set_tag(self, package_name, version, tags):
Tags package of a specific version.
Args:
Returns the CIPDApi.Pin instance.
Recipe module providing commit position parsing and formatting.
@classmethod
— def format(cls, ref, revision_number):
Returns a commit position string.
ref must start with 'refs/'.
@classmethod
— def parse(cls, value):
Returns (ref, revision_number) tuple.
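The format/parse pair can be sketched in plain Python, assuming the conventional 'ref@{#number}' commit position notation (e.g. 'refs/heads/main@{#42}'); these are hypothetical helpers, not the module's implementation:

```python
# Round-trippable commit position formatting and parsing.
import re

_POSITION_RE = re.compile(r'^(refs/.+)@\{#(\d+)\}$')

def format_position(ref: str, revision_number: int) -> str:
    assert ref.startswith('refs/'), ref  # ref must start with 'refs/'
    return '%s@{#%d}' % (ref, revision_number)

def parse_position(value: str):
    m = _POSITION_RE.match(value)
    if not m:
        raise ValueError('invalid commit position: %r' % value)
    return m.group(1), int(m.group(2))
```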
The context module provides APIs for manipulating a few pieces of 'ambient' data that affect how steps are run.
The pieces of information which can be modified are:
The values here are all scoped using Python's with statement; there's no mechanism to make an open-ended adjustment to these values (i.e. there's no way to change the cwd permanently for a recipe, except by surrounding the entire recipe with a with statement). This is done to avoid the surprises that typically arise with things like os.environ or os.chdir in a normal Python program.
Example:
with api.context(cwd=api.path.start_dir / 'subdir'):
  # this step is run inside of the subdir directory.
  api.step("cat subdir/foo", ['cat', './foo'])
@contextmanager
— def __call__(self, cwd=None, env_prefixes=None, env_suffixes=None, env=None, infra_steps=None, luciexe=None, realm=None, deadline=None):
Allows adjustment of multiple context values in a single call.
Args:
* cwd: defaults to api.path.start_dir.
* luciexe: overrides the cache_dir for all launched LUCI Executables (via api.step.sub_build(...)).
* deadline: applies when there is a timeout set.

Environmental Variable Overrides:
Env is a mapping of environment variable name to the value you want that environment variable to have. The value is one of:
"env_prefix" and "env_suffix" are lists of Paths or strings that get prefixed (or suffixed) to their respective environment variables, delimited with the system's path separator. This can be used to add entries to environment variables such as "PATH" and "PYTHONPATH". If prefixes are specified and a value is also defined in "env", the value will be installed as the last path component if it is not empty.
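The prefix/suffix composition described above can be sketched in plain Python (an illustrative helper, not the module's internals):

```python
# Joins prefixes, the existing value, and suffixes with the system
# path separator; the existing value lands after the prefixes.
import os

def compose(prefixes, current, suffixes):
    parts = list(prefixes)
    if current:
        parts.append(current)
    parts.extend(suffixes)
    return os.pathsep.join(parts)
```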
Look at the examples in “examples/” for examples of context module usage.
@property
— def cwd(self):
Returns the current working directory that steps will run in.
Returns (Path|None) - The current working directory. A value of None is equivalent to api.path.start_dir, though only occurs if no cwd has been set (e.g. in the outermost context of RunSteps).
@property
— def deadline(self):
Returns the current value (sections_pb2.Deadline) of the deadline section in the current LUCI_CONTEXT. Returns {grace_period: 30} if deadline is not defined, per the LUCI_CONTEXT spec.
@property
— def env(self):
Returns modifications to the environment.
By default this is empty. If you want to observe the program's startup environment, see ENV_PROPERTIES in https://chromium.googlesource.com/infra/luci/recipes-py/+/refs/heads/main/doc/user_guide.md#properties-and-env_properties
Returns (dict) - The env-key -> value mapping of current environment modifications.
@property
— def env_prefixes(self):
Returns Path prefix modifications to the environment.
This will return a mapping of environment key to Path tuple for Path prefixes registered with the environment.
Returns (dict) - The env-key -> value(Path) mapping of current environment prefix modifications.
@property
— def env_suffixes(self):
Returns Path suffix modifications to the environment.
This will return a mapping of environment key to Path tuple for Path suffixes registered with the environment.
Returns (dict) - The env-key -> value(Path) mapping of current environment suffix modifications.
@property
— def infra_step(self):
Returns the current value of the infra_step setting.
Returns (bool) - True iff steps are currently considered infra steps.
— def initialize(self):
@property
— def luci_context(self):
Returns the currently tracked LUCI_CONTEXT sections as a dict of proto messages.
Only contains luciexe, realm, resultdb and deadline.
@property
— def luciexe(self):
Returns the current value (sections_pb2.LUCIExe) of luciexe section in the current LUCI_CONTEXT. Returns None if luciexe is not defined.
@property
— def realm(self):
Returns the LUCI realm of the current context.
May return None if the task is not running in the realm-aware mode. This is a transitional period. Eventually all tasks will be associated with realms.
@property
— def resultdb_invocation_name(self):
Returns the ResultDB invocation name of the current context.
Returns None if resultdb is not defined.
DEPS: cv, properties, warning
Wrapper for CV API.
This module is a thin wrapper of the cv module.
— def initialize(self):
Applies non-default cq module properties to the cv module.
DEPS: buildbucket, properties, step
Recipe API for LUCI CV, the pre-commit testing system.
This module provides recipe API of LUCI CV, a pre-commit testing system.
@property
— def active(self):
Returns whether CQ is active for this build.
— def allow_reuse_for(self, *modes):
Instructs CQ that this build can be reused in a future Run if and only if its mode is in the provided modes.
Overwrites all previously set values.
@property
— def allowed_reuse_modes(self):
@property
— def attempt_key(self):
Returns a string that is unique for a CV attempt.
The same attempt_key will be used for all builds within an attempt.
Raises: CQInactive if CQ is not active for this build.
@property
— def cl_group_key(self):
Returns a string that is unique for a current set of Gerrit change patchsets (or, equivalently, buildsets).
The same cl_group_key will be used if another Attempt is made for the same set of changes at a different time.
Raises: CQInactive if CQ is not active for this build.
@property
— def cl_owners(self):
Returns string(s) of the owner's email addresses used for the patchset.
Usually CLs only have one owner, but more than one is possible so a list will be returned.
Raises: CQInactive if CQ is not active for this build.
@property
— def do_not_retry_build(self):
@property
— def equivalent_cl_group_key(self):
Returns a string that is unique for a given set of Gerrit changes disregarding trivial patchset differences.
For example, when a new "trivial" patchset is uploaded, then the cl_group_key will change but the equivalent_cl_group_key will stay the same.
Raises: CQInactive if CQ is not active for this build.
@property
— def experimental(self):
Returns whether this build is triggered for a CQ experimental builder.
See the Builder.experiment_percentage doc in the CQ config.
Raises: CQInactive if CQ is not active for this build.
— def initialize(self):
@property
— def ordered_gerrit_changes(self):
Returns list[bb_common_pb2.GerritChange] in order in which CLs should be applied or submitted.
Raises: CQInactive if CQ is not active for this build.
@property
— def owner_is_googler(self):
Returns whether the Run/Attempt owner is a Googler.
DO NOT USE: this is a temporary workaround for crbug.com/1259887 that is supposed to be used by builders in Chrome project only. Raises: CQInactive if CQ is not active for this build. ValueError if the builder is not in Chrome project.
@property
— def props_for_child_build(self):
Returns properties dict meant to be passed to child builds.
These will preserve the CQ context of the current build in the about-to-be-triggered child build.
properties = {'foo': bar, 'protolike': proto_message}
properties.update(api.cv.props_for_child_build)
req = api.buildbucket.schedule_request(
    builder='child',
    gerrit_changes=list(api.buildbucket.build.input.gerrit_changes),
    properties=properties)
child_builds = api.buildbucket.schedule([req])
api.cv.record_triggered_builds(*child_builds)
The contents of the returned dict should be treated as an opaque blob; it may be changed without notice.
— def record_triggered_build_ids(self, *build_ids):
Adds the given Buildbucket build IDs to the list of triggered build IDs.
Must be called after some step.
Args:
— def record_triggered_builds(self, *builds):
Adds IDs of given Buildbucket builds to the list of triggered build IDs.
Must be called after some step.
Expected usage:
api.cv.record_triggered_builds(*api.buildbucket.schedule([req1, req2]))
Args:
* builds: Build objects, typically returned by api.buildbucket.schedule.

@property
— def run_mode(self):
Returns the mode(str) of the CQ Run that triggers this build.
Raises: CQInactive if CQ is not active for this build.
— def set_do_not_retry_build(self):
Instruct CQ to not retry this build.
This mechanism is used to reduce duration of CQ attempt and save testing capacity if retrying will likely return an identical result.
@property
— def top_level(self):
Returns whether CQ triggered this build directly.
Can be spoofed. DO NOT USE FOR SECURITY CHECKS.
Raises: CQInactive if CQ is not active for this build.
@property
— def triggered_build_ids(self):
Returns recorded Buildbucket build IDs as a list of integers.
Runs a function but defers the result until a later time.
Exceptions caught by api.defer() will show in MILO as they occur, but the exception won't propagate until api.defer.collect() or DeferredResult.result() is called.
For StepFailures and InfraFailures, MILO already includes the failure output. For other exceptions, api.defer() will add a step showing the exception and continue.
If exceptions were caught and saved in DeferredResults, api.defer.collect() will raise an ExceptionGroup containing all deferred exceptions. ExceptionGroups containing specific kinds of exceptions can be handled using the “except*” syntax (for more details see https://docs.python.org/3/tutorial/errors.html#raising-and-handling-multiple-unrelated-exceptions).
If there are no failures, api.defer.collect() returns a Sequence of the return values of the functions passed into api.defer().
— def __call__(self, func: Callable[(..., T)], *args, **kwargs):
Calls func(*args, **kwargs) but catches all exceptions.
Returns a DeferredResult. If the call returns a value, the DeferredResult contains that value. If the call raises an exception, the DeferredResult contains that exception.
The DeferredResult is expected to be passed into api.defer.collect(), but DeferredResult.result() does similar processing.
— def collect(self, results: Sequence[DeferredResult], step_name: (str | None)=None):
Raise any exceptions in the given list of DeferredResults.
If there are no exceptions, do nothing. If there are one or more exceptions, re-raise the most severe of them.
Args: results: Results to check. step_name: Name for step including traceback logs if there are failures. If None, don't include a step with traceback logs.
@contextlib.contextmanager
— def context(self, collect_step_name: (str | None)=None):
Creates a context that tracks deferred calls.
Usage:
with api.defer.context() as defer:
  defer(api.step, ...)
  defer(api.step, ...)
  ...
DEPS: json, path, proto, raw_io, step
File manipulation (read/write/delete/glob) methods.
— def chmod(self, name, path, mode):
Set the access mode for a file or directory.
Args:
Raises: file.Error
— def compute_hash(self, name, paths, base_path, test_data=''):
Computes hash of contents of a directory/file.
This function will compute the hash by including the following info for each file:
Each of these components is separated by a newline character. For example, for a file named "hello" with contents "world", the hash would be computed over:
5
hello
5
world
Args:
start_dir of a recipe execution can be used.
Returns (str): Hex-encoded hash of the directory/file content.
Raises: file.Error, and ValueError if the passed paths input is not str or Path.
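The scheme above can be sketched as follows. This is an illustrative standalone version, not the module's implementation, and the choice of sha256 as the digest is an assumption:

```python
# Hedged sketch of the length/value-per-component hashing scheme
# described above: for each file, feed the length and value of its
# name and of its contents to the digest, each component followed by
# a newline separator. sha256 is an illustrative assumption.
import hashlib

def sketch_hash(files):
    # files: list of (name, contents) pairs.
    h = hashlib.sha256()
    for name, contents in files:
        for component in (str(len(name)), name,
                          str(len(contents)), contents):
            h.update(component.encode('utf-8'))
            h.update(b'\n')
    return h.hexdigest()

digest = sketch_hash([('hello', 'world')])
```

Encoding lengths alongside values makes the serialization unambiguous, so distinct inputs cannot collapse to the same byte stream.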
— def copy(self, name, source, dest):
Copies a file (including mode bits) from source to destination on the local filesystem.
Behaves identically to shutil.copy.
Args:
source will be appended to derive a path to a destination file.
Raises: file.Error
— def copytree(self, name, source, dest, symlinks=False):
Recursively copies a directory tree.
Behaves identically to shutil.copytree. dest must not exist.
Args:
Raises: file.Error
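Because copy and copytree behave identically to their shutil counterparts, their semantics can be previewed directly with the stdlib, e.g.:

```python
# Preview of copytree semantics via shutil: the destination must not
# already exist, and the whole tree (including file contents) is copied.
import pathlib, shutil, tempfile

base = pathlib.Path(tempfile.mkdtemp())
src = base / 'src'
(src / 'sub').mkdir(parents=True)
(src / 'sub' / 'file.txt').write_text('hello')

dest = base / 'dest'          # must not exist yet
shutil.copytree(src, dest)

copied = (dest / 'sub' / 'file.txt').read_text()
```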
— def ensure_directory(self, name, dest, mode=511):
Ensures that dest exists and is a directory.
Args:
Raises: file.Error if the path exists but is not a directory.
— def file_hash(self, file_path, test_data=''):
Computes hash of contents of a single file.
Args:
Returns (str): Hex encoded hash of file content.
Raises: file.Error and ValueError if passed paths input is not str or Path.
— def filesizes(self, name, files, test_data=None):
Returns list of filesizes for the given files.
Args:
Returns list[int], size of each file in bytes.
— def flatten_single_directories(self, name, path):
Flattens singular directories, starting at path.
Example:
$ mkdir -p dir/which_has/some/singular/subdirs/
$ touch dir/which_has/some/singular/subdirs/with
$ touch dir/which_has/some/singular/subdirs/files
$ flatten_single_directories(dir)
$ ls dir
with files
This can be useful when you just want the ‘meat’ of a very sparse directory structure. For example, some tarballs like foo-1.2.tar.gz extract all their contents into a subdirectory foo-1.2/.
Using this function would essentially move all the actual contents of the extracted archive up to the top level directory, removing the need to e.g. hard-code/find the subfolder name after extraction (not all archives are even named after the subfolder they extract to).
Args:
Raises: file.Error
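A hedged standalone sketch of the flattening behavior (the real step is implemented inside the file module): while a directory contains exactly one entry and that entry is a directory, hoist its contents up one level.

```python
# Illustrative stand-in for flatten_single_directories: repeatedly
# hoist the contents of a singular subdirectory up to the top level.
import pathlib, shutil, tempfile

def flatten_single_directories(path):
    path = pathlib.Path(path)
    while True:
        entries = list(path.iterdir())
        if len(entries) != 1 or not entries[0].is_dir():
            return
        only = entries[0]
        # Move the singular subdir aside first so a child with the
        # same name as the subdir cannot collide with it.
        tmp = path / ('.flatten-%s' % only.name)
        only.rename(tmp)
        for child in tmp.iterdir():
            shutil.move(str(child), str(path / child.name))
        tmp.rmdir()

root = pathlib.Path(tempfile.mkdtemp())
deep = root / 'which_has' / 'some' / 'singular' / 'subdirs'
deep.mkdir(parents=True)
(deep / 'with').touch()
(deep / 'files').touch()
flatten_single_directories(root)
```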
— def glob_paths(self, name, source, pattern, include_hidden=False, test_data=()):
Performs glob expansion on pattern.
Glob rules for pattern follow the same syntax as the stdlib glob module with recursive=True.
e.g. 'a/**/*.py'
  a/b/foo.py                  => MATCH
  a/b/c/foo.py                => MATCH
  a/foo.py                    => MATCH
  a/b/c/d/e/f/g/h/i/j/foo.py  => MATCH
  other/foo.py                => NO MATCH
Args:
Returns (list[Path]): All paths found.
Raises: file.Error.
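Since the matching rules follow the stdlib glob module with recursive=True, the MATCH/NO MATCH table above can be reproduced directly:

```python
# Reproduce the 'a/**/*.py' matching table with stdlib glob: with
# recursive=True, '**' matches any number of intermediate directories,
# including none.
import glob, os, pathlib, tempfile

root = pathlib.Path(tempfile.mkdtemp())
for rel in ('a/foo.py', 'a/b/foo.py', 'a/b/c/foo.py', 'other/foo.py'):
    p = root / rel
    p.parent.mkdir(parents=True, exist_ok=True)
    p.touch()

matches = glob.glob(str(root / 'a' / '**' / '*.py'), recursive=True)
rel_matches = sorted(
    os.path.relpath(m, root).replace(os.sep, '/') for m in matches)
```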
— def listdir(self, name, source, recursive=False, test_data=(), include_log=True):
Lists all files inside a directory.
If the source dir contains non-unicode file or dir names, the corresponding bad characters will be replaced with a “?” mark.
Args:
source. Doesn't follow symlinks. Very slow for large directories.
Returns list[Path].
Raises: file.Error.
— def move(self, name, source, dest):
Moves a file or directory.
Behaves identically to shutil.move.
Args:
Raises: file.Error
— def read_json(self, name, source, test_data='', include_log=True):
Reads a file as UTF-8 encoded json.
Args:
Returns (object): The content of the file.
Raises: file.Error
— def read_proto(self, name, source, msg_class, codec, test_proto=None, include_log=True, encoding_kwargs=None):
Reads a file into a proto message.
Args:
— def read_raw(self, name, source, test_data=''):
Reads a file as raw data.
Args:
Returns (str): The unencoded (binary) contents of the file.
Raises: file.Error
— def read_text(self, name, source, test_data='', include_log=True):
Reads a file as UTF-8 encoded text.
Args:
Returns (str): The content of the file.
Raises: file.Error
— def remove(self, name, source):
Removes a file.
Does not raise Error if the file doesn't exist.
Args:
Raises: file.Error.
— def rmcontents(self, name, source):
Similar to rmtree, but removes only the contents, not the directory itself.
This is useful e.g. when removing contents of current working directory. Deleting current working directory makes all further getcwd calls fail until chdir is called. chdir would be tricky in recipes, so we provide a call that doesn't delete the directory itself.
Args:
Raises: file.Error.
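A minimal sketch of the rmcontents idea, assuming plain stdlib file operations (the recipe step itself may differ in details such as symlink handling):

```python
# Delete everything inside a directory while keeping the directory
# itself, so a process whose cwd is that directory keeps a valid
# working directory (the motivation described above).
import pathlib, shutil, tempfile

def rmcontents(path):
    for entry in pathlib.Path(path).iterdir():
        if entry.is_dir() and not entry.is_symlink():
            shutil.rmtree(entry)
        else:
            entry.unlink()

d = pathlib.Path(tempfile.mkdtemp())
(d / 'file.txt').write_text('x')
(d / 'sub').mkdir()
(d / 'sub' / 'nested.txt').write_text('y')
rmcontents(d)
```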
— def rmglob(self, name, source, pattern, recursive=True, include_hidden=True):
Removes all entries in source matching the glob pattern.
Glob rules for pattern follow the same syntax as the stdlib glob module with recursive=True.
e.g. 'a/**/*.py'
  a/b/foo.py                  => MATCH
  a/b/c/foo.py                => MATCH
  a/foo.py                    => MATCH
  a/b/c/d/e/f/g/h/i/j/foo.py  => MATCH
  other/foo.py                => NO MATCH
Args:
pattern: The glob pattern to apply under source. Anything matching this pattern will be removed.
recursive: TODO: Remove this option. Use ** syntax instead.
include_hidden: TODO: Set to False by default to be consistent with file.glob.
Raises: file.Error.
— def rmtree(self, name, source):
Recursively removes a directory.
This uses native Python on Linux/Mac, and uses rd on Windows, to avoid issues w.r.t. path lengths and read-only attributes. If the directory is gone already, this returns without error.
Args:
Raises: file.Error.
— def symlink(self, name, source, linkname):
Creates a symlink on the local filesystem.
Behaves identically to os.symlink.
Args:
Raises: file.Error
— def symlink_tree(self, root):
Creates a SymlinkTree, given a root directory.
Args:
— def truncate(self, name, path, size_mb=100):
Creates an empty file of size size_mb at path on the local filesystem.
Args:
Raises: file.Error
— def write_json(self, name, dest, data, indent=None, include_log=True, sort_keys=True):
Write the given JSON-serializable data to dest.
Args:
data. See api.json.input().
Raises: file.Error.
— def write_proto(self, name, dest, proto_msg, codec, include_log=True, encoding_kwargs=None):
Writes the given proto message to dest.
Args:
— def write_raw(self, name, dest, data):
Write the given data to dest.
Args:
Raises: file.Error.
— def write_text(self, name, dest, text_data, include_log=True):
Write the given UTF-8 encoded text_data to dest.
Args:
Raises: file.Error.
Implements in-recipe concurrency via green threads.
Provides access to the Recipe concurrency primitives.
@staticmethod
— def iwait(futures, timeout=None, count=None):
Iteratively yield up to count Futures as they become done.
This is analogous to gevent.iwait.
Usage:
for future in api.futures.iwait(futures):
  # consume future
If you are not planning to consume the entire iwait iterator, you can avoid the resource leak by doing, for example:
with api.futures.iwait(a, b, c) as iter:
  for future in iter:
    if future is a:
      break
You might want to use iwait over wait if you want to process a group of Futures in the order in which they complete. Compare:
for task in iwait(swarming_tasks):
  # task is done, do something with it
vs
while swarming_tasks:
  task = wait(swarming_tasks, count=1)[0]  # some task is done
  swarming_tasks.remove(task)
  # do something with it
Args:
Yields futures in the order in which they complete until we hit the timeout or count. May also be used with a context manager to avoid leaking resources if you don't plan on consuming the entire iterable.
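A stdlib analogy for the iwait-vs-wait distinction: concurrent.futures.as_completed yields futures one at a time in completion order, much like iwait, while wait-style APIs block for a batch. Recipe futures are greenlet-based rather than thread-based, so this only illustrates the ordering idea:

```python
# Futures finish out of submission order; as_completed (like iwait)
# hands them back in completion order.
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def job(delay):
    time.sleep(delay)
    return delay

with ThreadPoolExecutor() as pool:
    futures = [pool.submit(job, d) for d in (0.2, 0.05, 0.1)]
    completion_order = [f.result() for f in as_completed(futures)]
```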
— def make_bounded_semaphore(self, value=1):
Returns a gevent.BoundedSemaphore with depth value.
This can be used as a context-manager to create concurrency-limited sections like:
def worker(api, sem, i):
  with api.step.nest('worker %d' % i):
    with sem:
      api.step('one at a time', ...)
    api.step('unrestricted concurrency', ...)

sem = api.futures.make_bounded_semaphore()
for i in range(100):
  api.futures.spawn(worker, api, sem, i)
— def make_channel(self):
Returns a single-slot communication device for passing data and control between concurrent functions.
This is useful for running ‘background helper’ type concurrent processes.
See ./tests/background_helper.py for an example of how to use a Channel correctly.
It is VERY RARE to need to use a Channel. You should avoid using this unless you carefully consider and avoid the possibility of introducing deadlocks.
@escape_all_warnings
— def spawn(self, func, *args, **kwargs):
Prepares a Future to run func(*args, **kwargs) concurrently.
Any steps executed in func will only have manipulable StepPresentation within the scope of the executed function.
Because this will spawn a greenlet on the same OS thread (and not, for example, a different OS thread or process), func can easily be an inner function, closure, lambda, etc. In particular, func, args and kwargs do not need to be pickle-able.
This function does NOT switch to the greenlet (you'll have to block on a future/step for that to happen). In particular, this means that the following pattern is safe:
# self._my_future check + spawn + assignment is atomic because
# no switch points occur.
if not self._my_future:
  self._my_future = api.futures.spawn(func)
Kwargs:
See Future.name for more information.
Returns a Future of func's result.
@escape_all_warnings
— def spawn_immediate(self, func, *args, **kwargs):
Returns a Future to the concurrently running func(*args, **kwargs).
This is like spawn, except that it IMMEDIATELY switches to the new Greenlet. You may want to use this if you want to e.g. launch a background step and then another step which waits for the daemon.
Kwargs:
See Future.name for more information.
Returns a Future of func's result.
@staticmethod
— def wait(futures, timeout=None, count=None):
Blocks until count futures are done (or timeout occurs), then returns the list of done futures.
This is analogous to gevent.wait.
Args:
Returns the list of done Futures, in the order in which they were done.
DEPS: context, json, path, step
A simple method for running steps generated by an external script.
— def __call__(self, path_to_script, *args, checkout_dir=None, **_):
Run a script and generate the steps emitted by that script.
The script will be invoked with --output-json /path/to/file.json. The script is expected to exit 0 and write steps into that file. Once the script outputs all of the steps to that file, the recipe will read the steps from that file and execute them in order.
Any *args specified will be additionally passed to the script.
If path_to_script ends with .py, it will be run with vpython3.
The step data is formatted as a list of JSON objects. Each object corresponds to one step, and contains the following keys:
--presentation-json /path/to/file.json. This file will be used to update the step's presentation on the build status page. The file will be expected to contain a single JSON object, with any of the following keys:
DEPS: cipd, context, path, platform
@contextlib.contextmanager
— def __call__(self, version, path=None, cache=None):
Installs a Golang SDK and activates it in the environment.
Installs it under the given path, defaulting to [CACHE]/golang. Various cache directories used by Go are placed under cache, defaulting to [CACHE]/gocache.
version will be used to construct the CIPD package version for packages under https://chrome-infra-packages.appspot.com/p/infra/3pp/tools/go/.
To reuse the Go SDK deployment and caches across builds, declare the corresponding named caches in Buildbucket configs. E.g. when using defaults:
luci.builder(
    ...
    caches = [
        swarming.cache("golang"),
        swarming.cache("gocache"),
    ],
)
Note: CGO is disabled on Windows currently, since Windows doesn't have a C compiler available by default.
Args:
version (str): the Go version to install (e.g. 1.16.10).
Methods for producing and consuming JSON.
@staticmethod
— def dumps(*args, **kwargs):
Works like json.dumps.
By default this sorts dictionary keys (see discussion in input()), but you can pass sort_keys=False to override this behavior.
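The key-sorting default can be seen with the stdlib json module directly:

```python
# json.dumps with and without key sorting, mirroring the wrapper's
# default of sort_keys=True described above.
import json

data = {'b': 1, 'a': 2}
sorted_out = json.dumps(data, sort_keys=True)
unsorted_out = json.dumps(data, sort_keys=False)
```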
@returns_placeholder
— def input(self, data, sort_keys=True):
A placeholder which will expand to a file path containing data.
By default this sorts dictionaries in data to make this output deterministic. In python3, dictionary insertion order is preserved per-spec, so this is no longer necessary for determinism, and in some cases (such as SPDX), the ‘pretty’ output is in non-alphabetical order. The default remains True, however, to avoid breaking all downstream tests.
— def is_serializable(self, obj):
Returns True if the object is JSON-serializable.
@staticmethod
— def loads(data, **kwargs):
Works like json.loads, but:
@returns_placeholder
— def output(self, add_json_log=True, name=None, leak_to=None):
A placeholder which will expand to ‘/tmp/file’.
If leak_to is provided, it must be a Path object. This path will be used in place of a random temporary file, and the file will not be deleted at the end of the step.
Args:
name. If this is ‘on_failure’, only create this log when the step has a non-SUCCESS status.
— def read(self, name, path, add_json_log=True, output_name=None, **kwargs):
Returns a step that reads a JSON file.
DEPS: cipd, context, json, path, proto, step, swarming
An interface to call the led tool.
Interface to the led tool.
“led” stands for LUCI editor. It allows users to debug and modify LUCI jobs. It can be used to modify many aspects of a LUCI build, most commonly including the recipes used.
The main interface this module provides is a direct call to the led binary:
led_result = api.led(
    'get-builder', ['luci.chromium.try:chromium_presubmit'])
final_data = led_result.then('edit-recipe-bundle').result
See the led binary for full documentation of commands.
— def __call__(self, *cmd):
Runs led with the given arguments. Wraps the result in a LedResult.
@property
— def cipd_input(self):
The versioned CIPD package containing the recipes code being run.
If set, it will be an InputProperties.CIPDInput protobuf; otherwise None.
— def initialize(self):
— def inject_input_recipes(self, led_result):
Sets the version of recipes used by led to correspond to the version currently being used.
If neither the rbe_cas_input nor the cipd_input property is set, this is a no-op.
Args: led_result: the LedResult whose job.Definition will be passed into the edit command.
@property
— def launched_by_led(self):
Whether the current build is a led job.
@property
— def led_build(self):
Whether the current build is a led job as a real Buildbucket build.
@property
— def rbe_cas_input(self):
The location of the rbe-cas containing the recipes code being run.
If set, it will be a swarming.v1.CASReference protobuf; otherwise, None.
@property
— def run_id(self):
A unique string identifier for this led job, if it's a raw swarming task.
If the current build is not a led job as raw swarming task, value will be an empty string.
@property
— def shadowed_bucket(self):
The bucket of the original build/builder the led build replicates from.
If set, it will be an InputProperties.ShadowedBucket protobuf; otherwise None.
— def trigger_builder(self, project_name, bucket_name, builder_name, properties, real_build=False):
Trigger a builder using led.
This can be used by recipes instead of buildbucket or scheduler triggers in case the running build was triggered by led.
This is equivalent to: led get-builder project/bucket:builder | <inject_input_recipes> | led edit | led launch
Args:
Legacy Annotation module provides support for running a command emitting legacy @@@annotation@@@ in the new luciexe mode.
The output annotations are converted to a build proto, and all steps in the build will appear as child steps of the launched cmd/step in the current running build (using the Merge Step feature of the luciexe protocol). This is the replacement for the allow_subannotation feature of the legacy annotate mode.
— def __call__(self, name, cmd, timeout=None, step_test_data=None, cost=_ResourceCost(), legacy_global_namespace=False):
Runs cmd that is emitting legacy @@@annotation@@@.
Currently, it will run the command as sub_build if running in luciexe mode or simulation mode. Otherwise, it will fall back to launch a step with allow_subannotation set to true.
If legacy_global_namespace is True, this enables an even more-legacy global namespace merging mode. Do not enable this. See crbug.com/1310155.
API for interacting with the LUCI Analysis RPCs
This API is for calling LUCI Analysis RPCs for various aggregated info about test results. See go/luci-analysis for more info.
— def lookup_bug(self, bug_id, system=‘monorail’):
Looks up the rule associated with a given bug.
This is a wrapper of the luci.analysis.v1.Rules LookupBug API.
Args:
bug_id (str): Bug Id is the bug tracking system-specific identity of the bug. For monorail, the scheme is {project}/{numeric_id}; for buganizer the scheme is {numeric_id}.
system (str): System is the bug tracking system of the bug. This is either “monorail” or “buganizer”. Defaults to monorail.
Returns: list of rules (str), Format: projects/{project}/rules/{rule_id}
— def query_cluster_failures(self, cluster_name):
Queries examples of failures in the given cluster.
This is a wrapper of the luci.analysis.v1.Clusters QueryClusterFailures API.
Args: cluster_name (str): The resource name of the cluster to retrieve. Format: projects/{project}/clusters/{cluster_algorithm}/{cluster_id}
Returns: list of DistinctClusterFailure. For the value format, see the DistinctClusterFailure message (https://bit.ly/DistinctClusterFailure).
— def query_failure_rate(self, test_and_variant_list, project=‘chromium’):
Queries LUCI Analysis for failure rates
Args:
test_and_variant_list (list(Test)): List of dicts containing testId and variantHash.
project (str): Optional. The LUCI project to query the failures from.
Returns: List of TestVariantFailureRateAnalysis protos.
— def query_stability(self, test_variant_position_list, project=‘chromium’):
Queries LUCI Analysis for test stability.
Args:
test_variant_position_list (list(TestVariantPosition)): List of dicts containing testId, variant and source position.
project (str): Optional. The LUCI project to query the failures from.
Returns: Tuple of (List(TestVariantStabilityAnalysis), TestStabilityCriteria).
Raises: StepFailure if the query is invalid or the service returns unexpected responses.
— def query_test_history(self, test_id, project=‘chromium’, sub_realm=None, variant_predicate=None, partition_time_range=None, submitted_filter=None, page_size=1000, page_token=None):
A wrapper method to use the luci.analysis.v1.TestHistory Query API.
Args:
test_id (str): test ID to query.
project (str): Optional. The LUCI project to query the history from.
sub_realm (str): Optional. The realm without the “:” prefix. E.g. “try”. Default: all test verdicts will be returned.
variant_predicate (luci.analysis.v1.VariantPredicate): Optional. The subset of test variants to request history for. Default: all will be returned.
partition_time_range (luci.analysis.v1.common.TimeRange): Optional. A range of timestamps to query the test history from. Default: all will be returned (at most the most recent 90 days, per TTL).
submitted_filter (luci.analysis.v1.common.SubmittedFilter): Optional. Whether test verdicts generated by code with unsubmitted changes (e.g. Gerrit changes) should be included in the response. Default: all will be returned.
page_size (int): Optional. The number of results per page in the response. If the number of results satisfying the given configuration exceeds this number, only page_size results will be available in the response. Defaults to 1000.
page_token (str): Optional. For results spanning multiple pages, each response contains a page token for the next page, which can be passed into the next request. Defaults to None, which returns the first page.
Returns: (list of parsed luci.analysis.v1.TestVerdict objects, next page token)
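Consuming a paginated API like this typically means looping until the returned page token is empty. The fetch_page function below is a hypothetical stand-in for the real RPC wrapper, used only to show the loop shape:

```python
# Sketch of draining a paginated query: keep passing the returned page
# token back in until it comes back empty. fetch_page is hypothetical,
# standing in for e.g. api.luci_analysis.query_test_history.
def fetch_page(page_token=None):
    # Hypothetical two-page response keyed by token.
    pages = {
        None: (['verdict1', 'verdict2'], 'token-2'),
        'token-2': (['verdict3'], None),
    }
    return pages[page_token]

def fetch_all():
    verdicts, token = [], None
    while True:
        page, token = fetch_page(token)
        verdicts.extend(page)
        if not token:
            return verdicts

all_verdicts = fetch_all()
```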
— def query_variants(self, test_id, project=‘chromium’, sub_realm=None, variant_predicate=None, page_size=1000, page_token=None):
A wrapper method to use the luci.analysis.v1.TestHistory QueryVariants API.
Args:
test_id (str): test ID to query.
project (str): Optional. The LUCI project to query the variants from.
sub_realm (str): Optional. The realm without the “:” prefix. E.g. “try”. Default: all test verdicts will be returned.
variant_predicate (luci.analysis.v1.VariantPredicate): Optional. The subset of test variants to request history for. Default: all will be returned.
page_size (int): Optional. The number of results per page in the response. If the number of results satisfying the given configuration exceeds this number, only page_size results will be available in the response. Defaults to 1000.
page_token (str): Optional. For results spanning multiple pages, each response contains a page token for the next page, which can be passed into the next request. Defaults to None, which returns the first page.
Returns: (list of VariantInfo { variant_hash: str, variant: { def: dict } }, next page token)
— def rule_name_to_cluster_name(self, rule):
Convert the resource name for a rule to its corresponding cluster.
Args: rule (str): Format: projects/{project}/rules/{rule_id}
Returns: cluster (str): Format: projects/{project}/clusters/{cluster_algorithm}/{cluster_id}.
DEPS: json, path, platform, raw_io, runtime, step, uuid
API for specifying Milo behavior.
A module for interacting with Milo.
— def show_blamelist_for(self, gitiles_commits):
Specifies which commits and repos Milo should show a blamelist for.
If not set, Milo will only show a blamelist for the main repo in which this build was run.
Args: gitiles_commits: A list of buildbucket.common_pb2.GitilesCommit messages or dicts of the same structure. Each commit must have host, project and id. ID must match r'^[0-9a-f]{40}$' (git revision).
DEPS: cipd, context, path, platform
@contextlib.contextmanager
— def __call__(self, version, path=None, cache=None):
Installs a Node.js toolchain and activates it in the environment.
Installs it under the given path, defaulting to [CACHE]/nodejs. Various cache directories used by npm are placed under cache, defaulting to [CACHE]/npmcache.
version will be used to construct the CIPD package version for packages under https://chrome-infra-packages.appspot.com/p/infra/3pp/tools/nodejs/.
To reuse the Node.js toolchain deployment and npm caches across builds, declare the corresponding named caches in Buildbucket configs. E.g. when using defaults:
luci.builder(
    ...
    caches = [
        swarming.cache("nodejs"),
        swarming.cache("npmcache"),
    ],
)
Args:
version (str): the Node.js version to install (e.g. 17.1.0).
All functions related to manipulating paths in recipes.
Recipes handle paths a bit differently than python does. All path manipulation in recipes revolves around Path objects. These objects store a base path (always absolute), plus a list of components to join with it. New paths can be derived by calling the .join method with additional components.
In this way, all paths in Recipes are absolute, and are constructed from a small collection of anchor points. The built-in anchor points are:
api.path.start_dir - This is the directory that the recipe started in. It's similar to cwd, except that it's constant.
api.path.cache_dir - This directory is provided by whatever's running the recipe. Files and directories created under here /may/ be evicted in between runs of the recipe (i.e. to relieve disk pressure).
api.path.cleanup_dir - This directory is provided by whatever's running the recipe. Files and directories created under here /are guaranteed/ to be evicted in between runs of the recipe. Additionally, this directory is guaranteed to be empty when the recipe starts.
api.path.tmp_base_dir - This directory is the system-configured temp dir. This is a weaker form of ‘cleanup’, and its use should be avoided. This may be removed in the future (or converted to an alias of ‘cleanup’).
api.path.checkout_dir - This directory is set by various checkout modules in recipes. It was originally intended to make recipes easier to read and make code somewhat generic or homogeneous, but this was a mistake. New code should avoid ‘checkout’, and instead just explicitly pass paths around. This path may be removed in the future.
@recipe_api.ignore_warnings(‘recipe_engine/CHECKOUT_DIR_DEPRECATED’)
— def __contains__(self, pathname: NamedBasePathsType):
This method is DEPRECATED.
If pathname is “checkout”, returns True iff checkout_dir is set. If you want to check whether checkout_dir is set, use api.path.checkout_dir is not None or similar, instead.
Returns True for all other pathname values in NamedBasePaths. Returns False for all other values.
In the past, the base paths that this module knew about were extensible via a very complicated ‘config’ system. All of that has been removed, but this method remains for now.
— def __getitem__(self, name: NamedBasePathsType):
Gets the base path named name. See module docstring for more info.
DEPRECATED: Use the following @properties on this module instead:
@recipe_api.ignore_warnings(‘recipe_engine/CHECKOUT_DIR_DEPRECATED’)
— def __setitem__(self, pathname: CheckoutPathNameType, path: config_types.Path):
Sets the checkout path.
DEPRECATED: Set api.path.checkout_dir instead.
The only valid value of pathname is the literal string CheckoutPathName.
@recipe_api.ignore_warnings(‘recipe_engine/CHECKOUT_DIR_DEPRECATED’)
— def abs_to_path(self, abs_string_path: str):
Converts an absolute path string abs_string_path to a real Path object, using the most appropriate known base path.
This method will find the longest match in all the following:
Example:
# assume [START_DIR] == "/basis/dir/for/recipe"
api.path.abs_to_path("/basis/dir/for/recipe/some/other/dir")
  -> Path("[START_DIR]/some/other/dir")
Raises a ValueError if the preconditions are not met; otherwise returns the Path object.
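The longest-match behavior can be sketched as follows; KNOWN_BASES and the helper are illustrative assumptions, not the module's real internals:

```python
# Hedged sketch of abs_to_path's lookup: among the known base paths,
# pick the longest one that is a prefix of the input, then re-root the
# remainder on it; raise ValueError if nothing matches.
import posixpath

KNOWN_BASES = {
    '[START_DIR]': '/basis/dir/for/recipe',
    '[CACHE]': '/b/cache',
}

def abs_to_path(abs_string_path):
    best = None
    for name, base in KNOWN_BASES.items():
        if abs_string_path == base or abs_string_path.startswith(base + '/'):
            if best is None or len(base) > len(KNOWN_BASES[best]):
                best = name
    if best is None:
        raise ValueError('no known base path matches %r' % abs_string_path)
    rel = posixpath.relpath(abs_string_path, KNOWN_BASES[best])
    return best if rel == '.' else posixpath.join(best, rel)

result = abs_to_path('/basis/dir/for/recipe/some/other/dir')
```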
— def abspath(self, path: (config_types.Path | str)):
Equivalent to os.path.abspath.
— def assert_absolute(self, path: (config_types.Path | str)):
Raises AssertionError if the given path is not an absolute path.
Args:
— def basename(self, path: (config_types.Path | str)):
Equivalent to os.path.basename.
@property
— def cache_dir(self):
This directory is provided by whatever's running the recipe.
When the recipe executes via Buildbucket, directories under here map to ‘named caches’ which the Build has set. These caches are preserved locally on the machine executing this recipe, and are restored for subsequent recipe executions on the same machine which request the same named cache.
By default, Buildbucket installs a cache named ‘builder’ which is an immediate subdirectory of cache_dir, and will attempt to be persisted between executions of recipes on the same Buildbucket builder which use the same machine. So, if you are just looking for a place to put files which may be persisted between builds, use:
api.path.cache_dir / ‘builder’
as the base Path.
Note that directories created under here /may/ be evicted in between runs of the recipe (i.e. to relieve disk pressure).
— def cast_to_path(self, strpath: str):
This returns a Path for strpath which can be used anywhere a Path is required.
If strpath is not an absolute path (e.g. rooted with a valid Windows drive or a ‘/’ for non-Windows paths), this will raise ValueError.
This implicitly tries abs_to_path prior to returning a drive-rooted Path. This means that if strpath is a subdirectory of a known path (say, cache_dir), the returned Path will be based on that known path. This is important for test compatibility.
@checkout_dir.setter
— def checkout_dir(self, path: config_types.Path):
Sets the global variable api.path.checkout_dir to the given path.
@property
— def cleanup_dir(self):
This directory is guaranteed to be cleaned up (eventually) after the execution of this recipe.
This directory is guaranteed to be empty when the recipe starts.
— def dirname(self, path: (config_types.Path | str)):
For “foo/bar/baz”, return “foo/bar”.
This corresponds to os.path.dirname().
The type of the return value matches the type of the argument.
Args: path: path to take directory name of
Returns dirname of path
— def eq(self, path1: config_types.Path, path2: config_types.Path):
Check whether path1 points to the same path as path2.
Equivalent to ==.
— def exists(self, path):
Equivalent to os.path.exists.
The presence or absence of paths can be mocked during the execution of the recipe by using the mock_* methods.
— def expanduser(self, path):
Do not use this; use api.path.home_dir instead.
This ONLY handles path == “~”, and returns str(api.path.home_dir).
@recipe_api.ignore_warnings(‘recipe_engine/CHECKOUT_DIR_DEPRECATED’)
— def get(self, name: NamedBasePathsType, *, skip_deprecation=False):
Gets the base path named name. See module docstring for more info.
DEPRECATED: Use the following @properties on this module instead:
@property
— def home_dir(self):
This is the path to the current $HOME directory.
It is generally recommended to avoid using this, because it is an indicator that the recipe is non-hermetic.
— def initialize(self):
This is called by the recipe engine immediately after init(), but with self._paths_client initialized.
— def is_parent_of(self, parent: config_types.Path, child: config_types.Path):
Check whether child is contained within parent.
Equivalent to parent in child.parents.
— def isdir(self, path):
Equivalent to os.path.isdir.
The presence or absence of paths can be mocked during the execution of the recipe by using the mock_* methods.
— def isfile(self, path):
Equivalent to os.path.isfile.
The presence or absence of paths can be mocked during the execution of the recipe by using the mock_* methods.
— def join(self, path, *paths):
Equivalent to os.path.join.
Note that Path objects returned from this module (e.g. api.path.start_dir) have a built-in joinpath method (e.g. new_path = p.joinpath('some', 'name')). Many recipe modules expect Path objects rather than strings. Using this join method gives you raw path-joining functionality and returns a string.
If your path is rooted in one of the path module's root paths (i.e. those retrieved with api.path.something), then you can convert from a string path back to a Path with the abs_to_path
method.
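The string-vs-Path distinction above can be sketched with stdlib equivalents (pathlib stands in for the recipe engine's Path type here; this is an illustrative analogue, not recipe engine code):

```python
import os.path
from pathlib import PurePosixPath

# api.path.join behaves like os.path.join: raw string joining.
joined_str = os.path.join("start_dir", "some", "name")
assert isinstance(joined_str, str)

# Path.joinpath keeps a structured path object instead of a string.
joined_path = PurePosixPath("start_dir").joinpath("some", "name")
assert str(joined_path) == "start_dir/some/name"
```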
— def mkdtemp(self, prefix: str=tempfile.template):
Makes a new temporary directory, returns Path to it.
Args:
Returns a Path to the new directory.
— def mkstemp(self, prefix: str=tempfile.template):
Makes a new temporary file, returns Path to it.
Args:
Returns a Path to the new file.
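A stdlib sketch of the mkdtemp/mkstemp behavior described above (the recipe versions return recipe Path objects; this analogue uses the tempfile module directly):

```python
import os
import tempfile

# Both calls create the entry on disk and return its location.
tmp_dir = tempfile.mkdtemp(prefix="recipe_")
fd, tmp_file = tempfile.mkstemp(prefix="recipe_")
os.close(fd)  # tempfile.mkstemp also returns an open file descriptor

assert os.path.isdir(tmp_dir)
assert os.path.isfile(tmp_file)

# Clean up.
os.remove(tmp_file)
os.rmdir(tmp_dir)
```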
— def mock_add_directory(self, path: config_types.Path):
For testing purposes, mark that directory |path| exists.
— def mock_add_file(self, path: config_types.Path):
For testing purposes, mark that file |path| exists.
— def mock_add_paths(self, path: config_types.Path, kind: FileType=FileType.FILE):
For testing purposes, mark that |path| exists.
— def mock_copy_paths(self, source: config_types.Path, dest: config_types.Path):
For testing purposes, copy |source| to |dest|.
— def mock_remove_paths(self, path: config_types.Path, should_remove: Callable[([str], bool)]=(lambda p: True)):
For testing purposes, mark that |path| doesn't exist.
Args: path: The path to remove. should_remove: Called for every candidate path. Return True to remove this path.
— def normpath(self, path):
Equivalent to os.path.normpath.
@property
— def pardir(self):
Equivalent to os.pardir.
@property
— def pathsep(self):
Equivalent to os.pathsep.
— def realpath(self, path: (config_types.Path | str)):
Equivalent to os.path.realpath.
— def relpath(self, path, start):
Roughly equivalent to os.path.relpath.
Unlike os.path.relpath, start is required. If you want the 'current directory', use the recipe_engine/context module's cwd property.
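Since this is described as roughly equivalent to os.path.relpath (with start made mandatory), the core behavior can be shown with the stdlib directly:

```python
import os.path

# relpath computes the path of the first argument relative to `start`.
rel = os.path.relpath("/a/b/c", start="/a")
assert rel == os.path.join("b", "c")
```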
@property
— def sep(self):
Equivalent to os.sep.
— def split(self, path):
For “foo/bar/baz”, return (“foo/bar”, “baz”).
This corresponds to os.path.split().
The type of the first item in the return value matches the type of the argument.
Args: path (Path or str): path to split into directory name and basename
Returns (dirname(path), basename(path)).
— def splitext(self, path: (config_types.Path | str)):
For “foo/bar.baz”, return (“foo/bar”, “.baz”).
This corresponds to os.path.splitext().
The type of the first item in the return value matches the type of the argument.
Args: path: Path to split into name and extension
Returns: (name, extension_including_dot).
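The split/splitext behavior documented above matches the stdlib functions these methods wrap:

```python
import os.path

# split: (dirname, basename)
assert os.path.split("foo/bar/baz") == ("foo/bar", "baz")

# splitext: (name, extension_including_dot)
assert os.path.splitext("foo/bar.baz") == ("foo/bar", ".baz")
# A name with no extension yields an empty second element:
assert os.path.splitext("foo/bar") == ("foo/bar", "")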
@property
— def start_dir(self):
This is the directory that the recipe started in. It's similar to cwd, except that it's constant for the duration of the entire program.
If you want to modify the current working directory for a set of steps, see the 'recipe_engine/context' module, which allows modifying the cwd safely via a context manager.
@property
— def tmp_base_dir(self):
This directory is the system-configured temp dir.
This is a weaker form of ‘cleanup’, and its use should be avoided. This may be removed in the future (or converted to an alias of ‘cleanup’).
Mockable system platform identity functions.
Provides host-platform-detection properties.
Mocks:
@property
— def arch(self):
Returns the current CPU architecture.
Can return “arm” or “intel”.
@property
— def bits(self):
Returns the bitness of the userland for the current system (either 32 or 64 bit).
TODO: If anyone needs to query for the kernel bitness, another accessor should be added.
@property
— def cpu_count(self):
The number of logical CPU cores (i.e. including hyper-threaded cores), according to psutil.cpu_count(True).
— def initialize(self):
@property
— def is_linux(self):
Returns True iff the recipe is running on Linux.
@property
— def is_mac(self):
Returns True iff the recipe is running on OS X.
@property
— def is_win(self):
Returns True iff the recipe is running on Windows.
@property
— def name(self):
Returns the current platform name, which will be one of: 'win', 'mac', 'linux'.
@staticmethod
— def normalize_platform_name(plat):
One of python's sys.platform values -> ‘win’, ‘linux’ or ‘mac’.
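A hypothetical sketch of the sys.platform → name mapping this method performs (the recipe engine's exact mapping may differ; this shows the documented win/linux/mac collapse):

```python
import sys

def normalize_platform_name(plat: str) -> str:
    """Collapse a sys.platform value into 'win', 'mac' or 'linux'."""
    if plat.startswith("win"):    # e.g. 'win32'
        return "win"
    if plat == "darwin":          # macOS
        return "mac"
    if plat.startswith("linux"):  # e.g. 'linux'
        return "linux"
    raise ValueError(f"unknown platform {plat!r}")

assert normalize_platform_name("win32") == "win"
assert normalize_platform_name("darwin") == "mac"
assert normalize_platform_name("linux") == "linux"
```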
@property
— def total_memory(self):
The total physical memory in MiB.
Return type is int. This is equivalent to psutil.virtual_memory().total / (1024 ** 2).
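The bytes-to-MiB conversion above is simple integer arithmetic; shown here with a fixed byte count rather than psutil (which is not assumed to be installed):

```python
# psutil.virtual_memory().total is in bytes; dividing by 1024**2 yields MiB.
total_bytes = 16 * 1024 ** 3           # a hypothetical 16 GiB machine
total_mib = total_bytes // (1024 ** 2)
assert total_mib == 16384
```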
Provides access to the recipes input properties.
Every recipe is run with a JSON object called “properties”. These contain all inputs to the recipe. Some common examples would be properties like “revision”, which the build scheduler sets to tell a recipe to build/test a certain revision.
The properties that affect a particular recipe are defined by the recipe itself, and this module provides access to them.
Recipe properties are read-only; the values obtained via this API reflect the values provided to the recipe engine at the beginning of execution. There is intentionally no API to write property values (lest they become a kind of random-access global variable).
PropertiesApi implements all the standard Mapping functions, so you can use it like a read-only dict.
— def legacy(self):
This excludes any recipe module-specific properties (i.e. those beginning with $).
Instead of passing all of the properties as a blob, please consider passing specific arguments to scripts that need them. Doing this makes it much easier to debug and diagnose which scripts use which properties.
— def thaw(self):
Returns a read-write copy of all of the properties.
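The read-only-mapping-plus-thaw semantics can be illustrated with a stdlib analogue (this is not the recipe engine's implementation; MappingProxyType merely models the read-only view):

```python
from types import MappingProxyType

props = MappingProxyType({"revision": "deadbeef"})  # read-only view
try:
    props["revision"] = "other"   # writes are rejected
    raise AssertionError("should not get here")
except TypeError:
    pass

thawed = dict(props)              # like thaw(): a read-write copy
thawed["revision"] = "other"      # the copy is writable
assert props["revision"] == "deadbeef"  # original view is unchanged
```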
Methods for producing and consuming protobuf data to/from steps and the filesystem.
@staticmethod
— def decode(data, msg_class, codec, **decoding_kwargs):
Decodes a proto message from a string.
Args:
Returns the decoded proto object.
@staticmethod
— def encode(proto_msg, codec, **encoding_kwargs):
Encodes a proto message to a string.
Args:
Returns the encoded proto message.
@returns_placeholder
— def input(self, proto_msg, codec, **encoding_kwargs):
A placeholder which will expand to a file path containing the encoded proto_msg.
Example:
  proto_msg = MyMessage(field=10)
  api.step('step name', ['some_cmd', api.proto.input(proto_msg)])
Args:
Returns an InputPlaceholder.
@returns_placeholder
— def output(self, msg_class, codec, add_json_log=True, name=None, leak_to=None, **decoding_kwargs):
A placeholder which expands to a file path and then reads an encoded proto back from that location when the step finishes.
Args:
name. If this is 'on_failure', only create this log when the step has a non-SUCCESS status.
Allows randomness in recipes.
This module sets up an internal instance of random.Random. In tests, this is seeded with 1234, or a seed of your choosing (using the test_api's seed() method).
All members of random.Random are exposed via this API with getattr.
This is based on Python's random module, and so all caveats which apply there also apply to this (i.e. don't use it for anything resembling crypto).
Example:
def RunSteps(api):
  my_list = list(range(100))
  api.random.shuffle(my_list)  # my_list is now random!
— def __getattr__(self, name):
Access a member of random.Random.
Provides objects for reading and writing raw data to and from steps.
@returns_placeholder
@staticmethod
— def input(data, suffix='', name=None):
Returns a Placeholder for use as a step argument.
This placeholder can be used to pass data to steps. The recipe engine will dump the ‘data’ into a file, and pass the filename to the command line argument.
data MUST be either of type ‘bytes’ (recommended) or type ‘str’ in Python 3. Respectively, ‘str’ or ‘unicode’ in Python 2.
If the provided data is of type 'str', it is encoded to bytes assuming utf-8 encoding. Please switch to input_text(...) instead in this case.
If ‘suffix’ is not '', it will be used when the engine calls tempfile.mkstemp.
See examples/full.py for usage example.
@returns_placeholder
@staticmethod
— def input_text(data, suffix='', name=None):
Returns a Placeholder for use as a step argument.
Similar to input(), but ensures that ‘data’ is valid utf-8 text. Any non-utf-8 characters will be replaced with �.
data MUST be either of type ‘bytes’ or type ‘str’ (recommended) in Python 3. Respectively, ‘str’ or ‘unicode’ in Python 2.
If the provided data is of type ‘bytes’, it is expected to be valid utf-8 encoded data. Note that, the support of type ‘bytes’ is for backwards compatibility to Python 2, we may drop this support in the future after recipe becomes Python 3 only.
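The � substitution described for input_text/output_text corresponds to Python's errors='replace' decoding, where each invalid utf-8 byte becomes U+FFFD:

```python
raw = b"ok \xff\xfe bytes"          # two invalid utf-8 bytes
text = raw.decode("utf-8", errors="replace")
assert text == "ok \ufffd\ufffd bytes"
```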
@returns_placeholder
@staticmethod
— def output(suffix='', leak_to=None, name=None, add_output_log=False):
Returns a Placeholder for use as a step argument, or for std{out,err}.
If ‘leak_to’ is None, the placeholder is backed by a temporary file with a suffix ‘suffix’. The file is deleted when the step finishes.
If ‘leak_to’ is not None, then it should be a Path and placeholder redirects IO to a file at that path. Once step finishes, the file is NOT deleted (i.e. it's ‘leaking’). ‘suffix’ is ignored in that case.
Args:
name. If this is 'on_failure', only create this log when the step has a non-SUCCESS status.
@returns_placeholder
— def output_dir(self, leak_to=None, name=None):
Returns a directory Placeholder for use as a step argument.
If leak_to is None, the placeholder is backed by a temporary dir.
Otherwise leak_to must be a Path; if the path doesn't exist, it will be created.
The placeholder value attached to the step will be a dictionary-like mapping of relative paths to the contents of the file. The actual reading of the file data is done lazily (i.e. on first access).
Relative paths are stored with the native slash delimitation (i.e. forward slash on *nix, backslash on Windows).
Example:
result = api.step('name', [..., api.raw_io.output_dir()])
# Some time later; the read of 'some/file' happens now:
some_file = api.path.join('some', 'file')
assert result.raw_io.output_dir[some_file] == 'contents of some/file'
# Data for 'some/file' is cached now. To free it from memory (and make
# all further reads of 'some/file' an error):
del result.raw_io.output_dir[some_file]
result.raw_io.output_dir[some_file]  # -> raises KeyError
@returns_placeholder
@staticmethod
— def output_text(suffix='', leak_to=None, name=None, add_output_log=False):
Returns a Placeholder for use as a step argument, or for std{out,err}.
Similar to output(), but uses an OutputTextPlaceholder, which expects utf-8 encoded text. Similar to input(), but tries to decode the resulting data as utf-8 text, replacing any decoding errors with �.
Args:
name. If this is 'on_failure', only create this log when the step has a non-SUCCESS status.
DEPS: context, futures, json, raw_io, step, time, uuid
API for interacting with the ResultDB service.
Requires the rdb command in $PATH: https://godoc.org/go.chromium.org/luci/resultdb/cmd/rdb
A module for interacting with ResultDB.
— def assert_enabled(self):
— def config_test_presentation(self, column_keys=(), grouping_keys=('status',)):
Specifies how the test results should be rendered.
Args: column_keys: A list of keys that will be rendered as ‘columns’. status is always the first column and name is always the last column (you don't need to specify them). A key must be one of the following: 1. ‘v.{variant_key}’: variant.def[variant_key] of the test variant (e.g. v.gpu).
grouping_keys: A list of keys that will be used for grouping tests. A key must be one of the following: 1. ‘status’: status of the test variant. 2. ‘name’: name of the test variant. 3. ‘v.{variant_key}’: variant.def[variant_key] of the test variant (e.g. v.gpu). Caveat: test variants with only expected results are not affected by this setting and are always in their own group.
@property
— def current_invocation(self):
@property
— def enabled(self):
— def exclude_invocations(self, invocations, step_name=None):
Shortcut for resultdb.update_included_invocations().
— def exonerate(self, test_exonerations, step_name=None):
Exonerates test variants in the current invocation.
Args: test_exonerations (list): A list of test_result_pb2.TestExoneration. step_name (str): name of the step.
— def get_included_invocations(self, inv_name=None, step_name=None):
Returns names of included invocations of the input invocation.
Args: inv_name (str): the name of the input invocation. If input is None, will use current invocation. step_name (str): name of the step.
Returns: A list of invocation name strs.
— def include_invocations(self, invocations, step_name=None):
Shortcut for resultdb.update_included_invocations().
— def invocation_ids(self, inv_names):
Returns invocation IDs by parsing invocation names.
Args: inv_names (list of str): ResultDB invocation names.
Returns: A list of invocation_ids.
— def query(self, inv_ids, variants_with_unexpected_results=False, merge=False, limit=None, step_name=None, tr_fields=None, test_invocations=None, test_regex=None):
Returns test results in the invocations.
Most users will be interested only in results of test variants that had unexpected results. This can be achieved by passing variants_with_unexpected_results=True. This significantly reduces output size and latency.
Example:
  results = api.resultdb.query(
      [
          # Invocation ID for a Swarming task.
          'task-chromium-swarm.appspot.com-deadbeef',
          # Invocation ID for a Buildbucket build.
          'build-234298374982',
      ],
      variants_with_unexpected_results=True,
  )
Args: inv_ids (list of str): IDs of the invocations. variants_with_unexpected_results (bool): if True, return only test results from variants that have unexpected results. merge (bool): if True, return test results as if all invocations are one, otherwise, results will be ordered by invocation. limit (int): maximum number of test results to return. Unlimited if 0. Defaults to 1000. step_name (str): name of the step. tr_fields (list of str): test result fields in the response. Test result name will always be included regardless of this param value. test_invocations (dict {invocation_id: api.Invocation}): Default test data to be used to simulate the step in tests. The format is the same as what this method returns. test_regex (str): A regular expression of the relevant test variants to query for.
Returns: A dict {invocation_id: api.Invocation}.
— def query_new_test_variants(self, invocation: str, baseline: str, step_name: str=None, step_test_data: dict=None):
Query ResultDB for new tests.
Makes a QueryNewTestVariants rpc.
Args: invocation: Name of the invocation, e.g. “invocations/{id}”. baseline: The baseline to compare test variants against, to determine if they are new. e.g. “projects/{project}/baselines/{baseline_id}”.
Returns: A QueryNewTestVariantsResponse proto message with is_baseline_ready and new_test_variants.
— def query_test_result_statistics(self, invocations=None, step_name=None):
Retrieve stats of test results for the given invocations.
Makes a call to the QueryTestResultStatistics API. Returns stats for all given invocations, including those included indirectly.
Args: invocations (list): A list of the invocations to query statistics for. If None, the current invocation will be used. step_name (str): name of the step.
Returns: A QueryTestResultStatisticsResponse proto message with statistics for the queried invocations.
— def query_test_results(self, invocations, test_id_regexp=None, variant_predicate=None, field_mask_paths=None, page_size=100, page_token=None, step_name=None):
Retrieve test results from an invocation, recursively.
Makes a call to QueryTestResults rpc. Returns a list of test results for the invocations and matching the given filters.
Args: invocations (list of str): retrieve the test results included in these invocations. test_id_regexp (str): the subset of test IDs to request history for. Default to None. variant_predicate (resultdb.proto.v1.predicate.VariantPredicate): the subset of test variants to request history for. Defaults to None, but specifying will improve runtime. field_mask_paths (list of str): test result fields in the response. Test result name will always be included regardless of this param value. page_size (int): the maximum number of variants to return. The service may return fewer than this value. The maximum value is 1000; values above 1000 will be coerced to 1000. Defaults to 100. page_token (str): for instances in which the results span multiple pages, each response will contain a page token for the next page, which can be passed in to the next request. Defaults to None, which returns the first page. step_name (str): name of the step.
Returns: A QueryTestResultsResponse proto message with test_results and next_page_token.
For value format, see the QueryTestResultsResponse message: https://bit.ly/3dsChbo
— def query_test_variants(self, invocations, test_variant_status=None, field_mask_paths=None, page_size=100, page_token=None, step_name=None):
Retrieve test variants from an invocation, recursively.
Makes a call to QueryTestVariants rpc. Returns a list of test variants for the invocations and matching the given filters.
Args: invocations (list of str): retrieve the test results included in these invocations. test_variant_status (resultdb.proto.v1.test_variant.TestVariantStatus): Use the UNEXPECTED_MASK status to retrieve only variants with non-EXPECTED status. field_mask_paths (list of str): test variant fields in the response. Test id, variantHash and status will always be included. Example: use [“test_id”, “variant”, “status”, “sources_id”] to exclude results from the response. (Note that test_id and status are still specified for clarity.) page_size (int): the maximum number of variants to return. The service may return fewer than this value. The maximum value is 1000; values above 1000 will be coerced to 1000. Defaults to 100. page_token (str): for instances in which the results span multiple pages, each response will contain a page token for the next page, which can be passed in to the next request. Defaults to None, which returns the first page. step_name (str): name of the step.
Returns: A QueryTestVariantsResponse proto message with test_results and next_page_token.
For value format, see the QueryTestVariantsResponse message: http://shortn/_hv3edsXidO
— def update_included_invocations(self, add_invocations=None, remove_invocations=None, step_name=None):
Add and/or remove included invocations to/from the current invocation.
Args: add_invocations (list of str): invocation IDs to add to the current invocation. remove_invocations (list of str): invocation IDs to remove from the current invocation.
This updates the inclusions of the current invocation specified in the LUCI_CONTEXT.
— def update_invocation(self, parent_inv='', step_name=None, source_spec=None, baseline_id=None):
Makes a call to the UpdateInvocation API to update the invocation
Args: parent_inv (str): the name of the invocation to be updated. step_name (str): name of the step. source_spec (luci.resultdb.v1.SourceSpec): The source information to apply to the given invocation. baseline_id (str): Baseline identifier for this invocation, usually of the format {buildbucket bucket}:{buildbucket builder name}. For example, ‘try:linux-rel’. Baselines are used to detect new tests in invocations.
— def upload_invocation_artifacts(self, artifacts, parent_inv=None, step_name=None):
Create artifacts with the given content type and contents or gcs_uri.
Makes a call to the BatchCreateArtifacts API. Returns the created artifacts.
Args: artifacts (dict): a collection of artifacts to create. Each key is an artifact ID, with the corresponding value being a dict containing: ‘content_type’ (optional) one of ‘contents’ (binary string) or ‘gcs_uri’ (str) parent_inv (str): the name of the invocation to create the artifacts under. If None, the current invocation will be used. step_name (str): name of the step.
Returns: A BatchCreateArtifactsResponse proto message listing the artifacts that were created.
— def wrap(self, cmd, test_id_prefix='', base_variant=None, test_location_base='', base_tags=None, coerce_negative_duration=False, include=False, realm='', location_tags_file='', require_build_inv=True, exonerate_unexpected_pass=False, inv_properties='', inv_properties_file='', inherit_sources=False, sources='', sources_file='', baseline_id=''):
Wraps the command with ResultSink.
Returns a command that, when executed, runs cmd in a go/result-sink environment. For example:
api.step('test', api.resultdb.wrap(['./my_test']))
Args: cmd (list of strings): the command line to run. test_id_prefix (str): a prefix to prepend to test IDs of test results reported by cmd. base_variant (dict): variant key-value pairs to attach to all test results reported by cmd. If both base_variant and a reported variant have a value for the same key, the reported one wins. Example: base_variant={ ‘bucket’: api.buildbucket.build.builder.bucket, ‘builder’: api.buildbucket.builder_name, } test_location_base (str): the base path to prepend to the test location file name with a relative path. The value must start with “//”. base_tags (list of (string, string)): tags to attach to all test results reported by cmd. Each element is a tuple of (key, value), and a key may be repeated. coerce_negative_duration (bool): If true, negative duration values will be coerced to 0. If false, tests results with negative duration values will be rejected with an error. include (bool): If true, a new invocation will be created and included in the parent invocation. realm (str): realm used for the new invocation created if include=True
Default is the current realm used in buildbucket. location_tags_file (str): path to the file that contains test location tags in JSON format. require_build_inv (bool): flag to control if the build is required to have an invocation. exonerate_unexpected_pass (bool): flag to control whether to automatically exonerate unexpected passes. inv_properties (str): stringified JSON object that contains structured, domain-specific properties of the invocation. When not specified, invocation-level properties will not be updated. inv_properties_file (str): similar to inv_properties but takes a path to the file that contains the JSON object. Cannot be used when inv_properties is specified. inherit_sources (bool): flag to enable inheriting sources from the parent invocation. sources (str): JSON-serialized luci.resultdb.v1.Sources object that contains information about the code sources tested by the invocation. Cannot be used when inherit_sources or sources_file is specified. sources_file (str): similar to sources, but takes a path to the file that contains the JSON object. Cannot be used when inherit_sources or sources is specified. baseline_id (str): baseline identifier for this invocation, usually of the format {buildbucket bucket}:{buildbucket builder name}. For example, 'try:linux-rel'.
This module assists in experimenting with production recipes.
For example, when migrating builders from Buildbot to pure LUCI stack.
@property
— def in_global_shutdown(self):
True iff this recipe is currently in the 'grace_period' specified by LUCI_CONTEXT['deadline'].
This can occur when:
As of 2021Q2, while the recipe is in the grace_period, it can do anything except for starting new steps (but it can e.g. update presentation of open steps, or return RawResult from RunSteps). Attempting to start a step while in the grace_period will cause the step to skip execution. When a signal is received or the soft_deadline is hit, all currently running steps will be signaled in turn (according to the LUCI_CONTEXT['deadline'] protocol).
It is good practice to ensure that recipes exit cleanly when canceled or time out, and this could be used anywhere to skip ‘cleanup’ behavior in ‘finally’ clauses or context managers.
https://chromium.googlesource.com/infra/luci/luci-py/+/HEAD/client/LUCI_CONTEXT.md
@property
— def is_experimental(self):
True if this recipe is currently running in experimental mode.
Typical usage is to modify steps which produce external side-effects so that non-production runs of the recipe do not affect production data.
Examples:
DEPS: buildbucket, json, platform, raw_io, step, time
API for interacting with the LUCI Scheduler service.
Depends on ‘prpc’ binary available in $PATH: https://godoc.org/go.chromium.org/luci/grpc/cmd/prpc Documentation for scheduler API is in https://chromium.googlesource.com/infra/luci/luci-go/+/main/scheduler/api/scheduler/v1/scheduler.proto RPCExplorer available at https://luci-scheduler.appspot.com/rpcexplorer/services/scheduler.Scheduler
A module for interacting with LUCI Scheduler service.
— def emit_trigger(self, trigger, project, jobs, step_name=None):
Emits trigger to one or more jobs of a given project.
Args: trigger (Trigger): defines payload to trigger jobs with. project (str): name of the project in LUCI Config service, which is used by LUCI Scheduler instance. See https://luci-config.appspot.com/. jobs (iterable of str): job names per LUCI Scheduler config for the given project. These typically are the same as builder names.
— def emit_triggers(self, trigger_project_jobs, timestamp_usec=None, step_name=None):
Emits a batch of triggers spanning one or more projects.
Up to date documentation is at https://chromium.googlesource.com/infra/luci/luci-go/+/main/scheduler/api/scheduler/v1/scheduler.proto
Args: trigger_project_jobs (iterable of tuples(trigger, project, jobs)): each tuple corresponds to parameters of emit_trigger
API above. timestamp_usec (int): unix timestamp in microseconds. Useful for idempotency of calls if your recipe is doing its own retries. https://chromium.googlesource.com/infra/luci/luci-go/+/main/scheduler/api/scheduler/v1/triggers.proto
@property
— def host(self):
Returns the backend hostname used by this module.
@property
— def invocation_id(self):
Returns the invocation ID of the current build as an int64 integer.
Returns None if the current build was not triggered by the scheduler.
@property
— def job_id(self):
Returns the job ID of the current build as "{project}/{job}".
Returns None if the current build was not triggered by the scheduler.
— def set_host(self, host):
Changes the backend hostname used by this module.
Args: host (str): server host (e.g. ‘luci-scheduler.appspot.com’).
@property
— def triggers(self):
Returns a list of triggers that triggered the current build.
A trigger is an instance of triggers_pb2.Trigger.
DEPS: path, platform, raw_io, step
API for getting OAuth2 access tokens for LUCI tasks or private keys.
This is a thin wrapper over the luci-auth go executable ( https://godoc.org/go.chromium.org/luci/auth/client/cmd/luci-auth).
Depends on luci-auth to be in PATH.
— def default(self):
Returns an account associated with the task.
On LUCI, this is default account exposed through LUCI_CONTEXT[“local_auth”] protocol. When running locally this is an account the user logged in via “luci-auth login ...” command prior to running the recipe.
— def from_credentials_json(self, key_path):
Returns a service account based on a JSON credentials file.
This is the file generated by Cloud Console when creating a service account key. It contains the private key inside.
Args: key_path: (str|Path) object pointing to a service account JSON key.
DEPS: context, path, platform, proto, warning
Step is the primary API for running steps (external programs, etc.)
@property
— def InfraFailure(self):
InfraFailure is a subclass of StepFailure, and will translate to a purple build.
This exception is raised from steps which are marked as infra_steps when they fail.
@property
— def MAX_CPU(self):
Returns the maximum number of millicores this system has.
@property
— def MAX_MEMORY(self):
Returns the maximum amount of memory on the system in MB.
— def ResourceCost(self, cpu=500, memory=50, disk=0, net=0):
A structure defining the resources that a given step may need.
The four resources are: cpu, memory, disk, and net.
disk cost: at 0, the step will run regardless of other steps with disk cost.
net cost: at 0, the step will run regardless of other steps with net cost.
A step will run when ALL of the resources are simultaneously available. The Recipe Engine currently uses a greedy scheduling algorithm for picking the next step to run. If multiple steps are waiting for resources, this will pick the largest (cpu, memory, disk, net) step which fits the currently available resources and run that. The theory is that, assuming:
It's therefore optimal to run steps as quickly as possible, to avoid wasting the timeout attached to the build.
Note that bool(ResourceCost(...))
is defined to be True if the ResourceCost has at least one non-zero cost, and False otherwise.
Args:
cpu: millicores of CPU. You can use the MAX_CPU helper. A value higher than the maximum number of millicores on the system is equivalent to MAX_CPU.
memory: MB of memory. You can use MAX_MEMORY as a helper. A value higher than the maximum amount of memory on the system is equivalent to MAX_MEMORY.
Returns: a ResourceCost suitable for use with api.step(...)'s cost kwarg. Note that passing None to api.step for the cost kwarg is equivalent to ResourceCost(0, 0, 0, 0).
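The greedy scheduling rule described above can be sketched in a few lines. This is an illustrative toy model, not the engine's actual code; ResourceCost, fits, and pick_next are stand-ins defined here for the example:

```python
from typing import NamedTuple, Optional

class ResourceCost(NamedTuple):
    cpu: int = 500      # millicores
    memory: int = 50    # MB
    disk: int = 0       # abstract disk-bandwidth cost
    net: int = 0        # abstract network-bandwidth cost

def fits(cost: ResourceCost, free: ResourceCost) -> bool:
    """A step can run only when ALL its resources are available."""
    return all(c <= f for c, f in zip(cost, free))

def pick_next(waiting, free) -> Optional[ResourceCost]:
    """Greedy rule: the largest (cpu, memory, disk, net) step that fits."""
    runnable = [c for c in waiting if fits(c, free)]
    return max(runnable) if runnable else None

free = ResourceCost(cpu=1000, memory=512)
waiting = [
    ResourceCost(cpu=250),
    ResourceCost(cpu=750, memory=400),
    ResourceCost(cpu=2000, memory=4000),  # too big to fit right now
]
assert pick_next(waiting, free) == ResourceCost(cpu=750, memory=400)
```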
@property
— def StepFailure(self):
This is the base Exception class for all step failures.
It can be manually raised from recipe code to cause the build to turn red.
Usage:
  try:
    ...
    raise api.StepFailure("some reason")
  except api.StepFailure:
    ...
@property
— def StepWarning(self):
StepWarning is a subclass of StepFailure, and will translate to a yellow build.
— def __call__(self, name: str, cmd: (list[(((int | str) | Placeholder) | Path)] | None), ok_ret: ((Sequence[int] | Literal['any']) | Literal['all'])=(0,), infra_step: bool=False, raise_on_failure: bool=True, wrapper: Sequence[(((int | str) | Placeholder) | Path)]=(), timeout: ((int | timedelta) | None)=None, stdout: (Placeholder | None)=None, stderr: (Placeholder | None)=None, stdin: (Placeholder | None)=None, step_test_data: (Callable[([], StepTestData)] | None)=None, cost: _ResourceCost=_ResourceCost()):
Runs a step (subprocess).
Args:
name (string): The name of this step.
cmd (None|List[int|string|Placeholder|Path]): The program arguments to run.
If None or an empty list, then this step just shows up in the UI but doesn't run anything (and always has a retcode of 0). See the empty() method on this module for a more useful version of this mode.
Otherwise, the items are the program and its arguments. Placeholder arguments (e.g. api.json.input() or api.raw_io.output()) typically render into an absolute path to a file on disk, which the program is expected to read from/write to.
ok_ret (tuple or set of ints, 'any', 'all'): allowed return codes. Any unexpected return codes will cause an exception to be thrown. If you pass in the value 'any' or 'all', the engine will allow any return code to be returned. Defaults to {0}.
infra_step: Whether or not this is an infrastructure step. Failing infrastructure steps will place the step in an EXCEPTION state and if raise_on_failure is True an InfraFailure will be raised.
raise_on_failure: Whether or not the step will raise on failure. If True, a StepFailure will be raised if the step's status is FAILURE, an InfraFailure will be raised if the step's status is EXCEPTION, and a StepWarning will be raised if the step's status is WARNING. Regardless of the value of this argument, an InfraFailure will be raised if the step is canceled.
wrapper: If supplied, a command to prepend to the executed step as a command wrapper.
timeout: If supplied, the recipe engine will kill the step after the specified number of seconds. Also accepts a datetime.timedelta.
stdout: Placeholder to put step stdout into. If used, stdout won't appear in annotator's stdout.
stderr: Placeholder to put step stderr into. If used, stderr won't appear in annotator's stderr.
stdin: Placeholder to read step stdin from.
step_test_data (func -> recipe_test_api.StepTestData): A factory which returns a StepTestData object that will be used as the default test data for this step. The recipe author can override/augment this object in the GenTests function.
cost (None|ResourceCost): The estimated system resource cost of this step. See ResourceCost(). The recipe_engine will prevent more than the machine's maximum resources worth of steps from running at once (i.e. steps will wait until there's enough resource available before starting). Waiting subprocesses are unblocked in capacity-available order. This means it's possible for pending tasks with large requirements to 'starve' temporarily while other smaller cost tasks run in parallel. Equal-weight tasks will start in FIFO order. Steps with a cost of None will NEVER wait (which is the equivalent of ResourceCost()). Defaults to ResourceCost(cpu=500, memory=50).
Returns a step_data.StepData for the running step.
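The ok_ret contract above can be modeled as a small standalone check. This is an illustrative sketch (the function name and the ValueError are mine; the engine itself signals unexpected return codes through its own StepFailure machinery):

```python
def check_retcode(retcode, ok_ret=(0,)):
    """Validate `retcode` against `ok_ret` per the __call__ docs above.

    `ok_ret` may be a tuple/set of ints, or the string 'any'/'all',
    which allows every return code.
    """
    if ok_ret in ('any', 'all'):
        return retcode
    if retcode in ok_ret:
        return retcode
    raise ValueError(
        'unexpected return code %d (allowed: %r)' % (retcode, ok_ret))

check_retcode(0)                 # default (0,) accepts 0
check_retcode(5, ok_ret='any')   # 'any' accepts everything
check_retcode(1, ok_ret={0, 1})  # explicitly allowed
```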
@property
— def active_result(self):
The currently active (open) result from the last step that was run. This is a step_data.StepData object.
Allows you to do things like:
try:
  api.step('run test', [..., api.json.output()])
finally:
  result = api.step.active_result
  if result.json.output:
    new_step_text = result.json.output['step_text']
    api.step.active_result.presentation.step_text = new_step_text
This will update the step_text of the test, even if the test fails. Without this api, the above code would look like:
try:
  result = api.step('run test', [..., api.json.output()])
except api.StepFailure as f:
  result = f.result
  raise
finally:
  if result.json.output:
    new_step_text = result.json.output['step_text']
    api.step.active_result.presentation.step_text = new_step_text
— def close_non_nest_step(self):
Call this to explicitly terminate the currently open non-nest step.
After calling this, api.step.active_step will return the current nest step context (if any).
No-op if there's no currently active non-nest step.
— def empty(self, name, status='SUCCESS', step_text=None, log_text=None, log_name='stdout', raise_on_failure=True):
Runs an "empty" step (one without any command).
This can be useful to insert a status step/message in the UI, or summarize some computation which occurred inside the recipe logic.
Args:
  name (str) - The name of the step.
  status step.(INFRA_FAILURE|FAILURE|SUCCESS) - The initial status for this step.
  step_text (str) - Some text to set for the "step_text" on the presentation of this step.
  log_text (str|list(str)) - Some text to set for the log of this step. If this is a list(str), it will be treated as separate lines of the log. Otherwise newlines will be respected.
  log_name (str) - The name of the log to output log_text to.
  raise_on_failure (bool) - If set, and status is not SUCCESS, raise the appropriate exception.
Returns step_data.StepData.
— def funcall(self, name, func, *args, **kwargs):
Call a function and store the results and exception in a step.
Sample usage:
api.step.funcall(None, some_function, 4, json=True)
@contextlib.contextmanager
— def nest(self, name, status='worst'):
Nest allows you to nest steps hierarchically on the build UI.
This generates a dummy step with the provided name in the current namespace. All other steps run within this with statement will be nested inside of this dummy step. Nested steps can also nest within each other.
The presentation for the dummy step can be updated (e.g. to add step_text, step_links, etc.) or set the step's status. If you do not set the status, it will be calculated from the statuses of all the steps run within this one according to the status algorithm selected.
If an exception escapes the with statement, the status will be one of FAILURE, WARNING or EXCEPTION (depending on the type of exception).
Example:
with api.step.nest('run shards'):  # status='worst' is the default.
  with api.defer.context() as defer:
    for shard in shards:
      defer(run_shard, shard)

# status='last'
with api.step.nest('do upload', status='last'):
  for attempt in range(num_attempts):
    try:
      do_upload()  # first one fails, but second succeeds.
    except api.step.StepFailure:
      if attempt >= num_attempts - 1:
        raise

# manually adjust status
with api.step.nest('custom thing') as presentation:
  # stuff!
  presentation.status = 'FAILURE'  # or whatever
Args:
  status: The algorithm used to compute the value of presentation.status if the recipe doesn't set one explicitly.
Yields a StepPresentation for this dummy step, which you may update as you please.
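The 'worst' and 'last' selection described above can be sketched standalone. The severity ordering below is an assumption for illustration; the engine defines its own ordering internally:

```python
# Assumed severity ranking (SUCCESS mildest, EXCEPTION most severe).
_SEVERITY = {'SUCCESS': 0, 'WARNING': 1, 'FAILURE': 2, 'EXCEPTION': 3}

def aggregate_status(child_statuses, algorithm='worst'):
    """Compute a nest step's status from its children's statuses.

    'worst' picks the most severe child status; 'last' picks the
    status of the final child step. An empty nest is SUCCESS.
    """
    if not child_statuses:
        return 'SUCCESS'
    if algorithm == 'worst':
        return max(child_statuses, key=_SEVERITY.__getitem__)
    if algorithm == 'last':
        return child_statuses[-1]
    raise ValueError('unknown algorithm %r' % algorithm)
```

For example, a nest containing [SUCCESS, FAILURE, SUCCESS] is FAILURE under 'worst' but SUCCESS under 'last', which matches the retry-upload example above.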
— def raise_on_failure(self, result, status_override=None):
Raise an appropriate exception if a step is not successful.
Arguments:
Returns: If the step's status is SUCCESS, the step result will be returned.
Raises:
— def sub_build(self, name: str, cmd: (((int | str) | Placeholder) | Path), build: build_pb2.Build, raise_on_failure: bool=True, output_path: ((str | Path) | None)=None, legacy_global_namespace=False, timeout=None, step_test_data=None, cost=_ResourceCost()):
Launch a sub-build by invoking a LUCI executable. All steps in the sub-build will appear as child steps of this step (Merge Step).
See protocol: https://go.chromium.org/luci/luciexe
Example:
run_exe = api.cipd.ensure_tool(...)  # Install LUCI executable `run_exe`

# Basic Example: launch `run_exe` with empty initial build and
# default options.
ret = api.sub_build("launch sub build", [run_exe], build_pb2.Build())
sub_build = ret.step.sub_build  # access final build proto result

# Example: launch `run_exe` with input build to recipe and customized
# output path, cwd and cache directory.
with api.context(
    # Change the cwd of the launched LUCI executable
    cwd=api.path.start_dir / 'subdir',
    # Change the cache_dir of the launched LUCI executable. Defaults to
    # api.path.cache_dir if unchanged.
    luciexe=sections_pb2.LUCIExe(cache_dir=api.path.cache_dir / 'sub'),
):
  # Command executed:
  #   `/path/to/run_exe --output [CLEANUP]/build.json --foo bar baz`
  ret = api.sub_build("launch sub build",
                      [run_exe, '--foo', 'bar', 'baz'],
                      api.buildbucket.build,
                      output_path=api.path.cleanup_dir / 'build.json')
sub_build = ret.step.sub_build  # access final build proto result
Args:
  cmd: Same as the cmd parameter in the __call__ method, except that None is NOT allowed. cmd[0] MUST denote a LUCI executable. The --output flag and its value should NOT be provided in the list; provide it via the keyword arg output_path instead.
  output_path: The value for the --output flag. If provided, it should be a path to a non-existent file (its directory MUST exist). The extension of the path dictates the encoding format of the final build proto (see EXT_TO_CODEC). If not provided, the output will be a temp file with binary encoding.
  timeout: Same as the timeout parameter in the __call__ method.
  step_test_data: Same as the step_test_data parameter in the __call__ method.
  cost: Same as the cost parameter in the __call__ method.
Returns a step_data.StepData for the finished step. The final build proto object can be accessed via ret.step.sub_build. The build is guaranteed to be present (i.e. not None) with a terminal build status.
Raises StepFailure if the sub-build reports FAILURE status. Raises InfraFailure if the sub-build reports INFRA_FAILURE or CANCELED status.
DEPS: buildbucket, cas, cipd, context, json, path, properties, raw_io, step
API for interacting with swarming.
The tool's source lives at http://go.chromium.org/luci/client/cmd/swarming.
This module will deploy the client to [CACHE]/swarming_client/; users should add this path to the named cache for their builder.
@property
— def bot_id(self):
Swarming bot ID executing this task.
— def collect(self, name, tasks, output_dir=None, task_output_stdout='json', timeout=None, eager=False, verbose=False):
Waits on a set of Swarming tasks.
Args:
  name (str): The name of the step.
  tasks (Iterable(str|TaskRequestMetadata)): A list of task IDs or metadata objects corresponding to tasks to wait for.
  output_dir (Path|None): Where to download the tasks' isolated outputs. If set to None, they will not be downloaded; else, a given task's outputs will be downloaded to output_dir//.
  task_output_stdout (str|Path|Iterable(str|Path)): Where to output each task's text output. If given an iterable, will output it into multiple locations. Supported values are 'none', 'json', 'console' or a Path. At most one output Path is allowed. Accepts 'all' as a legacy alias for ['json', 'console'].
  timeout (str|None): The duration for which to wait on the tasks to finish. If set to None, there will be no timeout; else, timeout follows the format described by https://golang.org/pkg/time/#ParseDuration.
  eager (bool): Whether to return as soon as the first task finishes, instead of waiting for all tasks to finish.
  verbose (bool): Whether to use verbose logs.
Returns: A list of TaskResult objects.
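The timeout string follows Go's ParseDuration syntax. As a rough illustration, a simplified Python reader of the common forms might look like this (this is not the swarming client's parser, and it skips Go features such as negative durations):

```python
import re

# Seconds per unit, matching Go's duration units.
_UNITS = {'ns': 1e-9, 'us': 1e-6, 'ms': 1e-3, 's': 1.0, 'm': 60.0, 'h': 3600.0}

def parse_duration(text):
    """Convert a Go-style duration like '90s' or '1h30m' to float seconds."""
    if not re.fullmatch(r'(?:\d+(?:\.\d+)?(?:ns|us|ms|s|m|h))+', text):
        raise ValueError('bad duration: %r' % text)
    return sum(float(n) * _UNITS[u]
               for n, u in re.findall(r'(\d+(?:\.\d+)?)(ns|us|ms|s|m|h)', text))
```

So '90s' and '1.5m' both denote ninety seconds, and compound forms like '1h30m' are the sum of their parts.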
@property
— def current_server(self):
Swarming server executing this task.
— def ensure_client(self):
— def initialize(self):
— def list_bots(self, step_name, dimensions=None, fields=None):
List bots matching the given options.
Args:
  step_name (str): The name of the step.
  dimensions (None|Dict[str, str]): Select bots that match the given dimensions.
  fields (None|List[str]): Fields to include in the response. If not specified, all fields will be included.
Returns: A list of BotMetadata objects.
@contextlib.contextmanager
— def on_path(self):
This context manager ensures the go swarming client is available on $PATH.
Example:
with api.swarming.on_path(): # do your steps which require the swarming binary on path
— def show_request(self, name, task):
Retrieve the TaskRequest for a Swarming task.
Args:
  name (str): The name of the step.
  task (str|TaskRequestMetadata): Task ID or metadata object of the swarming task to be retrieved.
Returns: A TaskRequest object.
@property
— def task_id(self):
This task's Swarming ID.
— def task_request(self):
Creates a new TaskRequest object.
See documentation for TaskRequest/TaskSlice to see how to build this up into a full task.
Once your TaskRequest is complete, you can pass it to trigger in order to have it start running on the swarming server.
— def task_request_from_jsonish(self, json_d):
Creates a new TaskRequest object from a JSON-serializable dict.
The input argument should match the schema of the output of TaskRequest.to_jsonish().
— def trigger(self, step_name, requests, verbose=False):
Triggers a set of Swarming tasks.
Args:
  step_name (str): The name of the step.
  requests (seq[TaskRequest]): A sequence of task request objects representing the tasks we want to trigger.
  verbose (bool): Whether to use verbose logs.
Returns: A list of TaskRequestMetadata objects.
Allows mockable access to the current time.
— def exponential_retry(self, retries, delay, condition=None):
Adds exponential retry to a function.
Decorator which retries the function with exponential backoff.
Each time the decorated function throws an exception, we sleep for some amount of time. We increase the amount of time exponentially to prevent cascading failures from overwhelming systems. We also add a jitter to avoid the thundering herd problem.
Example usage:
def RunSteps(api):
  @api.time.exponential_retry(5, datetime.timedelta(seconds=1))
  def test_retries():
    api.step('running', None)
    raise Exception()

  test_retries()
  # Executes 6 steps with 'running' as a common prefix of their step names.
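Outside the recipe engine, the retry-with-exponential-backoff-and-jitter idea described above can be sketched standalone. The doubling factor, the 10% jitter range, and the injectable sleep below are illustrative choices, not necessarily the time module's actual parameters:

```python
import functools
import random
import time

def retry_with_backoff(retries, delay, condition=None, sleep=time.sleep):
    """Retry the wrapped function up to `retries` extra times.

    `delay` is the initial wait in seconds and doubles after every
    failure. `condition`, if given, decides whether an exception is
    retriable. `sleep` is injectable so tests don't actually wait.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            wait = delay
            for attempt in range(retries + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception as exc:
                    if attempt == retries or (condition and not condition(exc)):
                        raise
                    # A small random jitter avoids the thundering-herd
                    # problem mentioned above.
                    sleep(wait * (1 + random.random() * 0.1))
                    wait *= 2
        return wrapper
    return decorator
```

Passing a fake `sleep` (e.g. a list's `append`) makes the backoff schedule observable without real delays.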
When writing a recipe module whose method needs to be retried, you won't have access to the time module in the class body, but you can import a class-method decorator like:
from RECIPE_MODULES.recipe_engine.time.api import exponential_retry
This decorator can be used on class methods or on functions (for example, functions in a recipe file).
Example usage 1 (class method decorator):
from recipe_engine.recipe_api import RecipeApi
from RECIPE_MODULES.recipe_engine.time.api import exponential_retry

# NOTE: Don't forget to put "recipe_engine/time" in the module DEPS.
class MyRecipeModule(RecipeApi):
  @exponential_retry(5, datetime.timedelta(seconds=1))
  def my_retriable_function(self, ...):
    self.m.step('running', None)
Example usage 2 (function with api as first arg):
from RECIPE_MODULES.recipe_engine.time.api import exponential_retry

# NOTE: Don't forget to put "recipe_engine/time" in DEPS.
@exponential_retry(5, datetime.timedelta(seconds=1))
def helper_function(api):
  api.step('running', None)

def RunSteps(api):
  helper_function(api)
— def ms_since_epoch(self):
Returns current timestamp as an int number of milliseconds since epoch.
— def sleep(self, secs, with_step=None, step_result=None):
Suspend execution for |secs| (float) seconds, waiting for GLOBAL_SHUTDOWN. Does nothing in testing.
Args:
— def time(self):
Returns current timestamp as a float number of seconds since epoch.
— def timeout(self, seconds: float):
Provides a context that times out after the given number of seconds.
Usage:

  with api.time.timeout(45):
    # steps run here are subject to the 45-second deadline
Look at the “deadline” section of https://chromium.googlesource.com/infra/luci/luci-py/+/HEAD/client/LUCI_CONTEXT.md to see how this works.
— def utcnow(self):
Returns current UTC time as a datetime.datetime.
DEPS: cipd, context, file, json, path, properties, proto, step
API for Tricium analyzers to use.
This recipe module is intended to support different kinds of analyzer recipes, including:
TriciumApi provides basic support for Tricium.
— def __init__(self, **kwargs):
Sets up the API.
Initializes an empty list of comments for use with add_comment and write_comments.
— def add_comment(self, category, message, path, start_line=0, end_line=0, start_char=0, end_char=0, suggestions=()):
Adds one comment to accumulate.
For semantics of start_line, start_char, end_line, end_char, see Gerrit doc https://gerrit-review.googlesource.com/Documentation/rest-api-changes.html#comment-range
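The accumulate-then-emit flow can be modeled roughly as follows; the dict layout is illustrative only, not Tricium's wire format:

```python
# Module-level accumulator, mirroring the "empty list of comments"
# that __init__ sets up.
_comments = []

def add_comment(category, message, path,
                start_line=0, end_line=0, start_char=0, end_char=0,
                suggestions=()):
    """Accumulate one comment (range semantics follow the Gerrit doc)."""
    _comments.append({
        'category': category,
        'message': message,
        'path': path,
        'range': (start_line, start_char, end_line, end_char),
        'suggestions': list(suggestions),
    })

def write_comments():
    """Emit everything accumulated so far and clear the accumulator."""
    out, _comments[:] = list(_comments), []
    return out
```

Each add_comment call only records data; nothing is emitted until write_comments runs, which is why run_legacy (below) can merge its results into the same storage.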
— def run_legacy(self, analyzers, input_base, affected_files, commit_message, emit=True):
Runs legacy analyzers.
This function internally accumulates the comments from the analyzers it runs to the same global storage used by add_comment(). By default it emits comments from legacy analyzers to the tricium output property, along with any comments previously created by calling add_comment() directly, after running all the specified analyzers.
Args:
  emit: If False, the recipe must later call write_comments to emit the comments added by the legacy analyzers. This is useful for recipes that need to run a mixture of custom analyzers (using add_comment() to store comments) and legacy analyzers.
@staticmethod
— def validate_comment(comment):
Validates that a comment complies with Tricium/Gerrit requirements.
Raises ValueError on the first detected problem.
— def write_comments(self):
Emit the results accumulated by add_comment and run_legacy.
DEPS: context, json, path, raw_io, step
Methods for interacting with HTTP(s) URLs.
— def get_file(self, url, path, step_name=None, headers=None, transient_retry=True, strip_prefix=None):
GETs data at the given URL and writes it to a file.
Args:
Returns (UrlApi.Response): Response with "path" as its "output" value.
Raises:
— def get_json(self, url, step_name=None, headers=None, transient_retry=True, strip_prefix=None, log=False, default_test_data=None):
GETs data at the given URL and parses it as JSON.
Args:
Returns (UrlApi.Response): Response with the JSON as its "output" value.
Raises:
— def get_raw(self, url, step_name=None, headers=None, transient_retry=True, default_test_data=None):
GETs data at the given URL and returns the raw content.
Args:
Returns (UrlApi.Response): Response with the content as its output value.
Raises:
— def get_text(self, url, step_name=None, headers=None, transient_retry=True, default_test_data=None):
GETs data at the given URL and returns the content as text.
Args:
Returns (UrlApi.Response): Response with the content as its output value.
Raises:
— def join(self, *parts):
Constructs a URL path from composite parts.
Args:
— def validate_url(self, v):
Validates that "v" is a valid URL.
A valid URL has a scheme and netloc, and must begin with HTTP or HTTPS.
Args:
Returns (bool): True if the URL is considered secure, False if not.
Raises: ValueError: if "v" is not valid.
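The rule stated above (scheme plus netloc, http or https only, True when secure) can be sketched with the standard library. This is a reading of the docstring, not the module's actual implementation:

```python
from urllib.parse import urlparse

def validate_url(v):
    """Return True for https URLs, False for http; raise ValueError otherwise."""
    parsed = urlparse(v)
    if not parsed.netloc:
        raise ValueError('URL %r has no netloc' % v)
    if parsed.scheme == 'https':
        return True   # secure
    if parsed.scheme == 'http':
        return False  # valid but not secure
    raise ValueError('URL %r must begin with http:// or https://' % v)
```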
Allows test-repeatable access to a random UUID.
— def random(self):
Returns a random UUID string.
Thin API for parsing semver strings into comparable objects.
@staticmethod
— def parse(version):
Parse implements PEP 440 parsing for semvers.
If version is strictly parseable as PEP 440, this returns a Version object. Otherwise it does a 'loose' parse, just extracting numerals from version.
You can read more about how this works at: https://setuptools.readthedocs.io/en/latest/pkg_resources.html#parsing-utilities (for strict parsing) and https://github.com/di/packaging_legacy (for the fallback behavior).
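The 'loose' fallback (just extracting numerals) can be approximated as below; real callers should use parse(), since PEP 440 handles epochs, pre-releases, and local versions that this sketch ignores:

```python
import re

def loose_parse(version):
    """Extract the runs of digits from `version` as a comparable tuple."""
    return tuple(int(n) for n in re.findall(r'\d+', version))

# Tuples compare numerically component-by-component,
# so 1.10 sorts after 1.9 (unlike plain string comparison).
```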
Allows recipe modules to issue warnings in simulation test.
@recipe_api.escape_all_warnings
— def issue(self, name):
Issues an execution warning.
name MAY either be a fully qualified "repo_name/WARNING_NAME" or a short "WARNING_NAME". If it's a short name, then the "repo_name" will be determined from the location of the file issuing the warning (i.e. if the issue() comes from a file in repo_X, then "WARNING_NAME" will be transformed to "repo_X/WARNING_NAME").
It is recommended to use the short name if the warning is defined in the same repo as the issue() call.
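The short-name expansion described above amounts to the following; the function name is illustrative:

```python
def qualify_warning_name(name, issuing_repo):
    """Expand a short 'WARNING_NAME' to 'repo_name/WARNING_NAME'.

    Fully qualified names (containing '/') pass through unchanged;
    short names are attributed to the repo whose file issued them.
    """
    if '/' in name:
        return name
    return '%s/%s' % (issuing_repo, name)
```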
DEPS: archive, context, file, json, path, platform, raw_io, step
— def RunSteps(api):
DEPS: assertions, properties, step
— def RunSteps(api):
— def RunSteps(api):
DEPS: assertions, properties, step
— def RunSteps(api):
DEPS: assertions, properties, step
— def RunSteps(api):
— def RunSteps(api):
DEPS: assertions, properties, step
— def RunSteps(api):
— def RunSteps(api):
DEPS: buildbucket, json, platform, properties, raw_io, runtime, step
This file is a recipe demonstrating the buildbucket recipe module.
@recipe_api.ignore_warnings('recipe_engine/SET_BUILDBUCKET_HOST_DEPRECATED')
— def RunSteps(api):
DEPS: buildbucket, properties, swarming
Launches multiple builds at the same revision.
— def RunSteps(api, build_requests, collect_builds):
— def RunSteps(api):
— def RunSteps(api):
DEPS: assertions, buildbucket, properties, step
— def RunSteps(api):
— def RunSteps(api):
DEPS: assertions, buildbucket, properties, step
— def RunSteps(api):
— def RunSteps(api):
DEPS: buildbucket, properties, step
— def RunSteps(api):
— def RunSteps(api):
— def RunSteps(api):
DEPS: buildbucket, platform, properties, raw_io, step
This recipe tests the buildbucket.set_output_gitiles_commit function.
— def RunSteps(api):
DEPS: buildbucket, json, properties, runtime, step
— def RunSteps(api):
DEPS: buildbucket, properties, raw_io, runtime, step
— def RunSteps(api, props):
DEPS: cas, file, path, properties, runtime, step
— def RunSteps(api):
DEPS: cas_input, path, properties
— def RunSteps(api):
— def RunSteps(api):
— def make_runs(count=1):
Generates response Runs for a test.
DEPS: cipd, json, path, platform, properties, step
— def RunSteps(api, use_pkg, pkg_files, pkg_dirs, pkg_vars, ver_files, install_mode, refs, tags, metadata, max_threads):
— def RunSteps(api):
DEPS: context, path, raw_io, step, time
— def RunSteps(api):
— def RunSteps(api):
DEPS: context, path, raw_io, step
— def RunSteps(api):
— def RunSteps(api):
— def RunSteps(api):
DEPS: assertions, context, path, step
— def RunSteps(api):
DEPS: assertions, buildbucket, cq, properties, step
@recipe_api.ignore_warnings('recipe_engine/CQ_MODULE_DEPRECATED')
— def RunSteps(api):
DEPS: assertions, buildbucket, cq, json, properties, step
@recipe_api.ignore_warnings('recipe_engine/CQ_MODULE_DEPRECATED')
— def RunSteps(api):
@recipe_api.ignore_warnings('recipe_engine/CQ_MODULE_DEPRECATED')
— def RunSteps(api):
DEPS: buildbucket, cq, step
@recipe_api.ignore_warnings('recipe_engine/CQ_MODULE_DEPRECATED')
— def RunSteps(api):
DEPS: assertions, cq, properties, step
@recipe_api.ignore_warnings('recipe_engine/CQ_MODULE_DEPRECATED')
— def RunSteps(api):
DEPS: assertions, cq, properties
@recipe_api.ignore_warnings('recipe_engine/CQ_MODULE_DEPRECATED')
— def RunSteps(api):
DEPS: cq, properties, step
@recipe_api.ignore_warnings('recipe_engine/CQ_MODULE_DEPRECATED')
— def RunSteps(api):
DEPS: assertions, buildbucket, cq, properties
@recipe_api.ignore_warnings('recipe_engine/CQ_MODULE_DEPRECATED')
— def RunSteps(api):
DEPS: assertions, cq, step
@recipe_api.ignore_warnings('recipe_engine/CQ_MODULE_DEPRECATED')
— def RunSteps(api):
DEPS: buildbucket, cq, step
@recipe_api.ignore_warnings('recipe_engine/CQ_MODULE_DEPRECATED')
— def RunSteps(api):
DEPS: assertions, buildbucket, cv, properties, step
— def RunSteps(api):
DEPS: assertions, buildbucket, cv, json, properties, step
— def RunSteps(api):
— def RunSteps(api):
— def RunSteps(api):
— def RunSteps(api):
DEPS: buildbucket, cv, step
— def RunSteps(api):
DEPS: assertions, cv, properties, step
— def RunSteps(api):
DEPS: assertions, cv, properties
— def RunSteps(api):
DEPS: cv, properties, step
— def RunSteps(api):
DEPS: assertions, buildbucket, cv, properties
— def RunSteps(api):
DEPS: assertions, cv, step
— def RunSteps(api):
DEPS: buildbucket, cv, step
— def RunSteps(api):
DEPS: context, defer, properties, step
— def RunSteps(api, props):
DEPS: context, defer, properties, step
— def RunSteps(api, props):
DEPS: context, defer, properties, step
— def RunSteps(api, props):
DEPS: context, defer, properties, step
— def RunSteps(api, props):
DEPS: defer, properties, step
— def RunSteps(api: recipe_api.RecipeApi, props: properties_pb2.SuppressedInputProps):
Tests that daemons that hang on to STDOUT can't cause the engine to hang.
— def RunSteps(api):
A fast-running recipe which comprehensively covers all StepPresentation features available in the recipe engine.
— def RunSteps(api):
— def named_step(api, name):
Tests that recipes can modify configuration options in various ways.
— def BaseConfig(**_kwargs):
— def DumpRecipeEngineTestConfig(api, config):
— def RunSteps(api):
@config_ctx()
— def test1(c):
@config_ctx(includes=[‘test2a’])
— def test2(c):
@config_ctx()
— def test2a(c):
DEPS: file, futures, path, platform, step
Simple recipe which runs a bunch of subprocesses which react to early termination in different ways.
— def RunSteps(api, props):
Tests that tests with a single exception are handled correctly.
— def RunSteps(api):
— def my_function():
Tests that tests with multiple exceptions are handled correctly.
— def RunSteps(api):
— def my_function():
Tests that run_steps is handling recipe failures correctly.
— def RunSteps(api):
Engine shouldn't explode when step_test_data gets functools.partial.
This is a regression test for a bug caused by this revision: http://src.chromium.org/viewvc/chrome?revision=298072&view=revision
When this recipe is run (by run_test.py), the _print_step code is exercised.
— def RunSteps(api):
DEPS: json, properties, step
Tests that engine.py can handle unknown recipe results.
— def RunSteps(api, props):
DEPS: futures, properties, step
Simple recipe which sleeps in a subprocess forever to facilitate early termination tests.
— def RunSteps(api, props):
Tests that deleting the current working directory doesn't immediately fail.
— def RunSteps(api):
This test serves to demonstrate that the ModuleInjectionSite object on recipe modules (i.e. the .m) also contains a reference to the module which owns it.
This was implemented to aid in refactoring some recipes (crbug.com/782142).
— def RunSteps(api):
Tests that step_data can accept multiple specs at once.
— def RunSteps(api):
DEPS: assertions, json, step
Tests error checking around multiple placeholders in a single step.
— def RunSteps(api):
— def RunSteps(api):
Tests that placeholders can't wreck the world by exhausting the step stack.
— def RunSteps(api):
Tests that output properties can be a proto message.
— def RunSteps(api):
— def RunSteps(api, properties, env_props):
Tests that recipes have access to names, resources and their repo.
— def RunSteps(api):
Tests that step presentation properties can be ordered.
— def RunSteps(api):
DEPS: cipd, properties, step
— def RunSteps(api, from_recipe, attribute, module):
— def RunSteps(api):
DEPS: context, properties, step
Tests that step_data can accept multiple specs at once.
— def RunSteps(api, fakeit):
— def RunSteps(api):
DEPS: assertions, file, path
— def RunSteps(api):
— def RunSteps(api):
— def RunSteps(api):
— def RunSteps(api):
DEPS: assertions, file, path
— def RunSteps(api):
— def RunSteps(api):
— def RunSteps(api):
— def RunSteps(api):
— def RunSteps(api):
— def RunSteps(api):
— def RunSteps(api):
— def RunSteps(api):
— def RunSteps(api):
DEPS: futures, json, path, raw_io, step
— def RunSteps(api):
— def manage_helper(api, chn):
@contextmanager
— def run_helper(api):
Runs the background helper.
Yields control once helper is ready. Kills helper once leaving the context manager.
This is an example of what your recipe module code would look like. Note that we don't pass the channel to the ‘user’ code (i.e. RunSteps).
DEPS: context, futures, path, step
— def Level1(api, i):
— def Level2(api, i):
— def RunSteps(api):
— def RunSteps(api):
— def RunSteps(api):
— def RunSteps(api):
DEPS: futures, properties, step
This tests the engine's ability to handle many simultaneously-started steps.
Prior to this, logdog butler and the recipe engine would run out of file handles, because every spawn_immediate would immediately generate all log handles for the step, instead of waiting for the step's cost to be available.
— def RunSteps(api, props):
This tests metadata features of the Future object.
— def RunSteps(api):
— def RunSteps(api):
— def RunSteps(api):
— def worker(api, sem, i, N):
DEPS: generator_script, json, path, properties, step
— def RunSteps(api, script_name):
— def RunSteps(api):
DEPS: json, path, properties, raw_io, step
@recipe_api.ignore_warnings('recipe_engine/JSON_READ_DEPRECATED')
— def RunSteps(api):
— def RunSteps(api):
Test to assert that sort_keys=False preserves insertion order.
— def RunSteps(api):
DEPS: buildbucket, led, properties, proto, step
— def RunSteps(api, get_cmd, child_properties, sloppy_child_properties, do_bogus_edits):
DEPS: buildbucket, led, properties, proto, step
— def RunSteps(api, get_cmd):
— def RunSteps(api):
DEPS: led, properties, step
— def RunSteps(api):
DEPS: legacy_annotation, raw_io, step
— def RunSteps(api):
DEPS: assertions, json, luci_analysis, properties, raw_io
Tests for query_failure_rate.
— def RunSteps(api, input_list):
DEPS: assertions, json, luci_analysis, properties, raw_io
Tests for query_stability.
— def RunSteps(api, input_list):
DEPS: json, luci_analysis, raw_io, step
Tests for generate_analysis.
— def RunSteps(api):
DEPS: json, luci_analysis, raw_io, step
Tests for query_failure_rate.
— def RunSteps(api):
DEPS: assertions, json, luci_analysis, step
Tests for lookup_bug.
— def RunSteps(api):
DEPS: assertions, luci_analysis, step
Tests for query_cluster_failres.
— def RunSteps(api):
Tests for query_variants.
— def RunSteps(api):
— def RunSteps(api):
— def RunSteps(api):
DEPS: json, path, platform, properties, step
@recipe_api.ignore_warnings('recipe_engine/CHECKOUT_DIR_DEPRECATED', 'recipe_engine/PATH_GETITEM_DEPRECATED', 'recipe_engine/PATH_IS_PARENT_OF_DEPRECATED')
— def RunSteps(api):
— def RunSteps(api):
@recipe_api.ignore_warnings('recipe_engine/CHECKOUT_DIR_DEPRECATED', 'recipe_engine/PATH_GETITEM_DEPRECATED')
— def RunSteps(api):
@recipe_api.ignore_warnings('recipe_engine/CHECKOUT_DIR_DEPRECATED')
— def RunSteps(api):
@recipe_api.ignore_warnings('recipe_engine/CHECKOUT_DIR_DEPRECATED')
— def RunSteps(api):
Test to cover legacy aspects of PathTestApi.
@recipe_api.ignore_warnings('recipe_engine/CHECKOUT_DIR_DEPRECATED')
— def RunSteps(api):
DEPS: buildbucket, properties, step, swarming, time
— def RunSteps(api, properties):
— def RunSteps(api):
DEPS: json, properties, step
— def RunSteps(api, props, env_props):
DEPS: assertions, path, proto, step
— def RunSteps(api):
— def RunSteps(api):
— def RunSteps(api):
DEPS: path, platform, properties, raw_io, step
— def RunSteps(api):
DEPS: assertions, raw_io, step
— def RunSteps(api):
DEPS: context, json, properties, resultdb, step
— def RunSteps(api):
— def RunSteps(api):
— def RunSteps(api):
DEPS: buildbucket, resultdb, step
— def RunSteps(api):
— def RunSteps(api, invocation, baseline):
— def RunSteps(api):
— def RunSteps(api, invocation, test_id_regexp):
— def RunSteps(api, invocation, test_variant_status, field_mask_paths):
— def RunSteps(api):
— def RunSteps(api):
— def RunSteps(api):
— def RunSteps(api, invocation, gitiles_commit, gerrit_changes):
— def RunSteps(api):
— def RunSteps(api):
DEPS: buildbucket, json, runtime, scheduler, time
This file is a recipe demonstrating emitting triggers to LUCI Scheduler.
— def RunSteps(api):
This file is a recipe demonstrating reading/mocking scheduler host.
— def RunSteps(api):
This file is a recipe demonstrating reading triggers of the current build.
— def RunSteps(api):
DEPS: path, platform, properties, raw_io, service_account
— def RunSteps(api, key_path, scopes):
DEPS: context, json, path, properties, step
— def RunSteps(api, bad_return, access_invalid_data, access_deep_invalid_data, assign_extra_junk, timeout):
— def RunSteps(api):
— def RunSteps(api):
DEPS: context, path, properties, step
— def RunSteps(api):
@recipe_api.ignore_warnings('recipe_engine/STEP_NEST_PRESENTATION_DEPRECATED')
— def RunSteps(api):
— def RunSteps(api, infra_step, set_status_to_exception):
— def RunSteps(api):
— def RunSteps(api):
— def RunSteps(api):
DEPS: assertions, context, json, path, properties, step
— def RunSteps(api, props):
— def RunSteps(api, timeout):
DEPS: buildbucket, cipd, json, path, properties, step, swarming
— def RunSteps(api):
— def RunSteps(api):
DEPS: assertions, path, swarming
— def RunSteps(api):
— def RunSteps(api):
— def RunSteps(api):
DEPS: assertions, buildbucket, context, step, swarming
— def RunSteps(api):
— def RunSteps(api):
DEPS: assertions, path, swarming
— def RunSteps(api):
DEPS: assertions, properties, runtime, step, time
— def RunSteps(api):
@exponential_retry(5, datetime.timedelta(seconds=1))
— def helper_fn_that_needs_retries(api):
DEPS: assertions, properties, step, time
— def RunSteps(api, properties):
— def RunSteps(api, trigger_type_error):
An example of a recipe wrapping legacy analyzers.
— def RunSteps(api):
— def RunSteps(api, case):
DEPS: assertions, properties, proto, tricium
— def RunSteps(api, props):
DEPS: context, path, step, url
— def RunSteps(api):
— def RunSteps(api):
DEPS: properties, step, url
— def RunSteps(api):
— def RunSteps(api):
— def RunSteps(api):
This is a fake recipe to trick the simulation into believing that this module has tests. The actual test for this module is done via a unit test, because the issue method can only be used from recipe_modules, not recipes.
— def RunSteps(api):