
Expect Tests

Expect Tests is a test framework which:

  • Is parallel by default
  • Collects coverage information by default
  • Allows easy test-case generation
  • Is compatible with unittest
  • Provides easy test globbing and debugging

You can run the framework's own test suite with nosetests expect_tests/test from the root directory.

Quick user manual

Writing tests

Only subclasses of unittest.TestCase are recognized as tests. expect_tests looks for tests in files named like *_test.py. The coverage information for file foo.py is only collected from tests located in test/foo_test.py.

If a test returns a value, an expectation file for that test is created, and the contents of this file are compared against the return value. Any Python object that can be unambiguously serialized into JSON, or into a string using Python's repr() function, can be used as an expectation.

The expectation files should be checked into your repository along with the code (otherwise you'll break tests on other developers' machines and on bots). Expectations can be used as diff-able change detectors, and can help you review changes in your code's behavior.
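For illustration, a test using this return-value convention might look like the following sketch (the TestFoo/test_feature names are hypothetical, mirroring the example layout later in this document):

```python
import unittest

class TestFoo(unittest.TestCase):
  def test_feature(self):
    # Returning a JSON-serializable value (instead of only asserting) lets
    # expect_tests compare it against the stored expectation file, or
    # (re)write that file when run in 'train' mode.
    return {'status': 'ok', 'items': [1, 2, 3]}
```

Under plain unittest the return value is simply ignored (recent Python versions may warn about non-None returns), which is part of what keeps such tests unittest-compatible.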

Invocation

The simplest expect_tests invocation is:

expect_tests (list|test|train) <path>

where <path> can point either to a Python (sub)package's directory, or to a directory containing Python packages. In the latter case, all tests in all packages under the directory will be considered.

  • list: just output the full list of tests on stdout
  • test: run the tests
  • train: run the tests and update their expectations instead of checking against them.

Filtering tests

It is possible to run an action on a subset of tests instead of all of them. This is achieved by appending a filter after the path specification:

expect_tests (list|test|train) <path>:<filter glob>

The filter glob applies to the full test names, as output by ‘list’. It does not apply to the package path.

Example: Suppose you have the following structure:

root/
root/package1
root/package1/__init__.py
root/package1/foo.py
root/package1/test/__init__.py
root/package1/test/foo_test.py  # contains test TestFoo.test_feature
root/package1/subpackage
root/package1/subpackage/__init__.py
root/package1/subpackage/subfoo.py
root/package1/subpackage/test/__init__.py
root/package1/subpackage/test/subfoo_test.py  # contains TestSubFoo.test_feature
root/package2/... # with same structure as package1

Then (supposing the current directory is the parent of root/)

$ expect_tests list root
package1.test.foo_test.TestFoo.test_feature
package1.subpackage.test.subfoo_test.TestSubFoo.test_feature
package2.test.foo_test.TestFoo.test_feature
package2.subpackage.test.subfoo_test.TestSubFoo.test_feature

$ expect_tests list root/package1
package1.test.foo_test.TestFoo.test_feature
package1.subpackage.test.subfoo_test.TestSubFoo.test_feature

$ expect_tests list 'root:package1*'  # less efficient than root/package1
package1.test.foo_test.TestFoo.test_feature
package1.subpackage.test.subfoo_test.TestSubFoo.test_feature

$ expect_tests list 'root/package1:*TestSubFoo*'
package1.subpackage.test.subfoo_test.TestSubFoo.test_feature
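The filter semantics can be approximated with Python's fnmatch module. This is a sketch only, not the framework's actual matching code, and the test names are taken from the example above:

```python
import fnmatch

# Full dotted test names, as printed by 'list'.
TESTS = [
    'package1.test.foo_test.TestFoo.test_feature',
    'package1.subpackage.test.subfoo_test.TestSubFoo.test_feature',
]

def matching(names, pattern):
  # The glob is matched against the whole test name, not the package path.
  return [n for n in names if fnmatch.fnmatch(n, pattern)]

print(matching(TESTS, '*TestSubFoo*'))
```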

Fine-tuning and advanced topics

Having trouble debugging a test? You can use the ‘debug’ action instead of ‘test’ to get a debugging prompt when entering each test. That way you can step through the code if necessary.

You can make expect_tests ignore a subpackage by adding a .expect_tests.cfg file in the directory containing the package, with the following content:

[expect_tests]
skip=packagetoignore1
     packagetoignore2

Some Python code, like the AppEngine SDK, requires special setup to be able to work. To support that, you can create a .expect_tests_pretest.py file in the directory containing the top-level package that holds the tests. This code will be execfile'd just before any operation (list/test/train) in this directory.
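As an illustration, a .expect_tests_pretest.py doing SDK-style setup could look like the following sketch (the third_party directory name is purely hypothetical):

```python
import os
import sys

# Make a vendored dependency importable before any tests are collected.
# Since this file is execfile'd from the directory containing the top-level
# package, the current working directory is used as the anchor here.
sys.path.insert(0, os.path.join(os.getcwd(), 'third_party'))
```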