
Expect Tests

Expect Tests is a test framework which:

  • Is parallel by default
  • Collects coverage information by default
  • Allows easy test-case generation
  • Is compatible with unittest
  • Provides easy test globbing and debugging

You can run the test suite with nosetests expect_tests/test in the root directory.

Quick user manual

Writing tests

Tests are subclasses of unittest.TestCase. expect_tests looks for tests in files named like *_test.py. Coverage information for a file foo.py is only collected from tests located in test/foo_test.py.

If a test returns a value, an expectation file for this test is created, and the contents of this file are compared against the return value. Any Python object that can be unambiguously serialized into JSON, or into a string using Python's repr() function, can be used as an expectation.

The expectation files should be checked into your repository along with the code (otherwise you'll break tests on other developers' machines and on bots). Expectations can be used as diff-able change detectors, and can help you review changes in your code's behavior.
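The pattern above can be sketched as follows. This is a hypothetical example (parse_pair and the file names are invented for illustration, not part of expect_tests); the key point is that the test method returns a JSON-serializable value instead of asserting:

```python
# Hypothetical contents of mypackage/test/parser_test.py, exercising a
# toy function that would live in mypackage/parser.py.
import unittest


def parse_pair(text):
    # Toy function under test (invented for this example).
    key, _, value = text.partition('=')
    return {'key': key, 'value': value}


class TestParsePair(unittest.TestCase):
    def test_simple_pair(self):
        # Returning a value (rather than asserting) lets expect_tests
        # record it as an expectation on 'train' and diff the test's
        # output against the stored expectation on 'test'.
        return parse_pair('color=red')
```

Running the 'train' action would record the returned dict; subsequent 'test' runs flag any change to it as a failure.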

Invocation

The simplest expect_tests invocation is:

expect_tests (list|test|train) <path>

where <path> can point either to a Python (sub)package's directory, or to a directory containing Python packages. In the latter case, all tests in all packages in the directory will be considered.

  • list: just output the full list of tests on stdout
  • test: run the tests
  • train: run the tests and update their expectations instead of checking against them.

Filtering tests

It is possible to run an action on a subset of tests instead of all of them. This is achieved by appending a filter after the path specification:

expect_tests (list|test|train) <path>:<filter glob>

The filter glob applies to the full test names, as output by 'list'. It does not apply to the package path.
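The filter behaves like a shell-style wildcard match against full test names. As an illustration only (this uses the stdlib fnmatch module, not expect_tests internals, and the test names are invented):

```python
# Illustration of shell-style glob filtering over full test names,
# as output by the 'list' action. Not expect_tests internals.
import fnmatch

TEST_NAMES = [
    'package1.test.foo_test.TestFoo.test_feature',
    'package1.subpackage.test.subfoo_test.TestSubFoo.test_feature',
]


def filter_tests(names, pattern):
    # Keep every full test name matching the glob pattern.
    return [n for n in names if fnmatch.fnmatch(n, pattern)]
```

So a filter like *TestSubFoo* selects only tests whose full dotted name contains TestSubFoo.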

Example: Suppose you have the following structure:

root/
root/package1
root/package1/__init__.py
root/package1/foo.py
root/package1/test/__init__.py
root/package1/test/foo_test.py  # contains test TestFoo.test_feature
root/package1/subpackage
root/package1/subpackage/__init__.py
root/package1/subpackage/subfoo.py
root/package1/subpackage/test/__init__.py
root/package1/subpackage/test/subfoo_test.py  # contains TestSubFoo.test_feature
root/package2/... # with same structure as package1

Then (supposing the current directory is the parent of root/)

$ expect_tests list root
package1.test.foo_test.TestFoo.test_feature
package1.subpackage.test.subfoo_test.TestSubFoo.test_feature
package2.test.foo_test.TestFoo.test_feature
package2.subpackage.test.subfoo_test.TestSubFoo.test_feature

$ expect_tests list root/package1
package1.test.foo_test.TestFoo.test_feature
package1.subpackage.test.subfoo_test.TestSubFoo.test_feature

$ expect_tests list 'root:package1*'  # less efficient than root/package1
package1.test.foo_test.TestFoo.test_feature
package1.subpackage.test.subfoo_test.TestSubFoo.test_feature

$ expect_tests list 'root/package1:*TestSubFoo*'
package1.subpackage.test.subfoo_test.TestSubFoo.test_feature

Fine-tuning and advanced topics

Having trouble debugging a test? You can use the 'debug' action instead of 'test' to get a debugging prompt when entering each test. That way you can step through the code if necessary.

You can make expect_tests ignore a subpackage by adding a .expect_tests.cfg file in the directory containing the package, with the following content:

[expect_tests]
skip=packagetoignore1
     packagetoignore2

Some Python code, like the App Engine SDK, requires special setup to work. To support that, you can create a .expect_tests_pretest.py file in the directory containing the top-level package containing tests. This code will be execfile'd just before any operation (list/test/train) in this directory.
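A minimal sketch of such a pretest file, assuming a vendored SDK checkout (the third_party/appengine_sdk path and helper name are assumptions for illustration):

```python
# Hypothetical .expect_tests_pretest.py: execfile'd by expect_tests just
# before list/test/train. The SDK directory layout here is an assumption.
import os
import sys


def add_sdk_to_path(root):
    # Prepend a vendored SDK checkout to sys.path so test modules can
    # import it; a no-op if the directory does not exist.
    sdk = os.path.join(root, 'third_party', 'appengine_sdk')
    if os.path.isdir(sdk) and sdk not in sys.path:
        sys.path.insert(0, sdk)
    return sdk


add_sdk_to_path(os.getcwd())
```

Because the file runs before test discovery, any sys.path or environment setup it performs is visible to every test module that is subsequently imported.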