Expect Tests

Expect Tests is a test framework which:

  • Is parallel by default
  • Collects coverage information by default
  • Allows easy test-case generation
  • Is compatible with unittest
  • Provides easy test globbing and debugging

You can run the test suite with nosetests expect_tests/test in the root directory.

Quick user manual

Writing tests

Tests are subclasses of unittest.TestCase only. expect_tests looks for tests in files named like *_test.py inside a package's test/ directory. The coverage information for a file is only collected from tests located in the corresponding test/ directory.
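
As a minimal sketch (the file, class, and method names here are only illustrative), a test file such as package1/test/foo_test.py could look like this:

import unittest

class TestFoo(unittest.TestCase):
  def test_feature(self):
    # An ordinary unittest assertion; expect_tests runs these as usual.
    self.assertEqual(2 + 2, 4)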

If a test returns a value, an expectation file for this test is created, and the contents of this file are compared against the return value. Any Python object that can be unambiguously serialized into JSON or into a string using Python's repr() function can be used as an expectation.
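
For instance (a sketch; the class and method names are hypothetical), a test can simply return the value it wants tracked:

import unittest

class TestApi(unittest.TestCase):
  def test_response_shape(self):
    # Under expect_tests, this returned dict is serialized (JSON or repr)
    # and compared against the checked-in expectation file; the 'train'
    # action (re)writes that file instead of comparing.
    return {'status': 'ok', 'items': [1, 2, 3]}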

The expectation files should be checked into your repository along with the code (otherwise you'll break tests on other developers' machines and on bots). Expectations can be used as diff-able change detectors, and can help you review changes in your code's behavior.


The simplest expect_tests invocation is:

expect_tests (list|test|train) <path>

where <path> can point either to a Python (sub)package's directory, or to a directory containing Python packages. In the latter case, all tests in all packages in the directory will be considered.

  • list: just output the full list of tests on stdout
  • test: run the tests
  • train: run the tests and update their expectations instead of checking against them.
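
A typical workflow (the path here is illustrative) is to train once to create or refresh the expectation files, then test on later changes:

$ expect_tests train root/package1  # write or update the expectation files
$ expect_tests test root/package1   # check results against the expectations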

Filtering tests

It is possible to run an action on a subset of tests instead of all of them. This is achieved by appending a filter glob after the path specification:

expect_tests (list|test|train) <path>:<filter glob>

The <filter glob> part applies to the full test names, as output by 'list'. It does not apply to the package path.

Example: Suppose you have the following structure:

root/package1/test/  # contains test TestFoo.test_feature
root/package1/subpackage/test/  # contains TestSubFoo.test_feature
root/package2/...  # with the same structure as package1

Then (supposing the current directory is the parent of root/):

$ expect_tests list root                          # all tests under root
$ expect_tests list root/package1                 # only the tests in package1 (and its subpackage)
$ expect_tests list 'root:package1*'              # same tests, but less efficient than root/package1
$ expect_tests list 'root/package1:*TestSubFoo*'  # only the TestSubFoo tests

Fine-tuning and advanced topics

Having trouble debugging a test? You can use the 'debug' action instead of 'test' to get a debugging prompt when entering tests. That way you can step through the code if necessary.
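
For example (the glob is illustrative):

$ expect_tests debug 'root/package1:*TestFoo*'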

You can make expect_tests ignore a subpackage by adding a .expect_tests.cfg file in the directory containing the package.

Some Python code, like the AppEngine SDK, requires some special setup to be able to work. In order to support that, you can create a file in the directory containing the top-level package containing tests. This file will be execfile'd just before any operation (list/test/train) in this directory.
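
The exact filename expect_tests looks for is not spelled out above. As a hedged sketch, such a setup file usually just makes the SDK importable before the tests load; the SDK path below is a hypothetical placeholder:

import os
import sys

# Hypothetical SDK location; adjust this to your actual checkout layout.
SDK_PATH = os.path.expanduser('~/google_appengine')

# Make the SDK importable from the tests run by expect_tests.
if SDK_PATH not in sys.path:
  sys.path.insert(0, SDK_PATH)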