| =========================== |
| Testing Django applications |
| =========================== |
| |
| .. module:: django.test |
| :synopsis: Testing tools for Django applications. |
| |
| Automated testing is an extremely useful bug-killing tool for the modern |
| Web developer. You can use a collection of tests -- a **test suite** -- to |
| solve, or avoid, a number of problems: |
| |
| * When you're writing new code, you can use tests to validate your code |
| works as expected. |
| |
| * When you're refactoring or modifying old code, you can use tests to |
| ensure your changes haven't affected your application's behavior |
| unexpectedly. |
| |
| Testing a Web application is a complex task, because a Web application is made |
| of several layers of logic -- from HTTP-level request handling, to form |
| validation and processing, to template rendering. With Django's test-execution |
| framework and assorted utilities, you can simulate requests, insert test data, |
| inspect your application's output and generally verify your code is doing what |
| it should be doing. |
| |
| The best part is, it's really easy. |
| |
| This document is split into two primary sections. First, we explain how to |
| write tests with Django. Then, we explain how to run them. |
| |
| Writing tests |
| ============= |
| |
| There are two primary ways to write tests with Django, corresponding to the |
| two test frameworks that ship in the Python standard library. The two |
| frameworks are: |
| |
| * **Unit tests** -- tests that are expressed as methods on a Python class |
| that subclasses ``unittest.TestCase`` or Django's customized |
| :class:`TestCase`. For example:: |
| |
| import unittest |
| |
| class MyFuncTestCase(unittest.TestCase): |
| def testBasic(self): |
| a = ['larry', 'curly', 'moe'] |
| self.assertEqual(my_func(a, 0), 'larry') |
| self.assertEqual(my_func(a, 1), 'curly') |
| |
| * **Doctests** -- tests that are embedded in your functions' docstrings and |
| are written in a way that emulates a session of the Python interactive |
| interpreter. For example:: |
| |
| def my_func(a_list, idx): |
| """ |
| >>> a = ['larry', 'curly', 'moe'] |
| >>> my_func(a, 0) |
| 'larry' |
| >>> my_func(a, 1) |
| 'curly' |
| """ |
| return a_list[idx] |
| |
We'll discuss choosing the appropriate test framework later; note, however,
that most experienced developers prefer unit tests. You can also use any
*other* Python test framework, as we'll explain in a bit.
| |
| Writing unit tests |
| ------------------ |
| |
Django's unit tests use a Python standard library module: unittest_. This
module defines tests using a class-based approach.
| |
| .. admonition:: unittest2 |
| |
| .. versionchanged:: 1.3 |
| |
| Python 2.7 introduced some major changes to the unittest library, |
| adding some extremely useful features. To ensure that every Django |
| project can benefit from these new features, Django ships with a |
| copy of unittest2_, a copy of the Python 2.7 unittest library, |
| backported for Python 2.4 compatibility. |
| |
| To access this library, Django provides the |
| ``django.utils.unittest`` module alias. If you are using Python |
| 2.7, or you have installed unittest2 locally, Django will map the |
| alias to the installed version of the unittest library. Otherwise, |
    Django will use its own bundled version of unittest2.
| |
| To use this alias, simply use:: |
| |
| from django.utils import unittest |
| |
| wherever you would have historically used:: |
| |
| import unittest |
| |
| If you want to continue to use the base unittest library, you can -- |
| you just won't get any of the nice new unittest2 features. |
| |
| .. _unittest2: http://pypi.python.org/pypi/unittest2 |
| |
| For a given Django application, the test runner looks for unit tests in two |
| places: |
| |
| * The ``models.py`` file. The test runner looks for any subclass of |
| ``unittest.TestCase`` in this module. |
| |
| * A file called ``tests.py`` in the application directory -- i.e., the |
| directory that holds ``models.py``. Again, the test runner looks for any |
| subclass of ``unittest.TestCase`` in this module. |
| |
| Here is an example ``unittest.TestCase`` subclass:: |
| |
| from django.utils import unittest |
| from myapp.models import Animal |
| |
| class AnimalTestCase(unittest.TestCase): |
| def setUp(self): |
| self.lion = Animal.objects.create(name="lion", sound="roar") |
| self.cat = Animal.objects.create(name="cat", sound="meow") |
| |
| def testSpeaking(self): |
| self.assertEqual(self.lion.speak(), 'The lion says "roar"') |
| self.assertEqual(self.cat.speak(), 'The cat says "meow"') |
| |
| When you :ref:`run your tests <running-tests>`, the default behavior of the |
| test utility is to find all the test cases (that is, subclasses of |
| ``unittest.TestCase``) in ``models.py`` and ``tests.py``, automatically build a |
| test suite out of those test cases, and run that suite. |
| |
| There is a second way to define the test suite for a module: if you define a |
| function called ``suite()`` in either ``models.py`` or ``tests.py``, the |
| Django test runner will use that function to construct the test suite for that |
| module. This follows the `suggested organization`_ for unit tests. See the |
| Python documentation for more details on how to construct a complex test |
| suite. |
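
For example, a ``tests.py`` could construct its suite by hand (a minimal
sketch, reusing the ``AnimalTestCase`` pattern from the example above; the
test runner will use ``suite()`` instead of collecting test cases
automatically)::

    # tests.py
    from django.utils import unittest

    class AnimalTestCase(unittest.TestCase):
        def testSpeaking(self):
            pass  # assertions as in the example above

    def suite():
        # Build the suite explicitly; the test runner will use this function
        # instead of collecting TestCase subclasses automatically.
        suite = unittest.TestSuite()
        suite.addTest(AnimalTestCase('testSpeaking'))
        return suite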
| |
| For more details about ``unittest``, see the `standard library unittest |
| documentation`_. |
| |
| .. _unittest: http://docs.python.org/library/unittest.html |
| .. _standard library unittest documentation: unittest_ |
| .. _suggested organization: http://docs.python.org/library/unittest.html#organizing-tests |
| |
| Writing doctests |
| ---------------- |
| |
| Doctests use Python's standard doctest_ module, which searches your docstrings |
| for statements that resemble a session of the Python interactive interpreter. |
| A full explanation of how doctest works is out of the scope of this document; |
| read Python's official documentation for the details. |
| |
| .. admonition:: What's a **docstring**? |
| |
| A good explanation of docstrings (and some guidelines for using them |
| effectively) can be found in :pep:`257`: |
| |
| A docstring is a string literal that occurs as the first statement in |
| a module, function, class, or method definition. Such a docstring |
| becomes the ``__doc__`` special attribute of that object. |
| |
| For example, this function has a docstring that describes what it does:: |
| |
| def add_two(num): |
| "Return the result of adding two to the provided number." |
| return num + 2 |
| |
| Because tests often make great documentation, putting tests directly in |
| your docstrings is an effective way to document *and* test your code. |
| |
| As with unit tests, for a given Django application, the test runner looks for |
| doctests in two places: |
| |
| * The ``models.py`` file. You can define module-level doctests and/or a |
| doctest for individual models. It's common practice to put |
| application-level doctests in the module docstring and model-level |
| doctests in the model docstrings. |
| |
| * A file called ``tests.py`` in the application directory -- i.e., the |
| directory that holds ``models.py``. This file is a hook for any and all |
| doctests you want to write that aren't necessarily related to models. |
| |
| This example doctest is equivalent to the example given in the unittest section |
| above:: |
| |
| # models.py |
| |
| from django.db import models |
| |
| class Animal(models.Model): |
| """ |
| An animal that knows how to make noise |
| |
| # Create some animals |
| >>> lion = Animal.objects.create(name="lion", sound="roar") |
| >>> cat = Animal.objects.create(name="cat", sound="meow") |
| |
| # Make 'em speak |
| >>> lion.speak() |
| 'The lion says "roar"' |
| >>> cat.speak() |
| 'The cat says "meow"' |
| """ |
| name = models.CharField(max_length=20) |
| sound = models.CharField(max_length=20) |
| |
| def speak(self): |
| return 'The %s says "%s"' % (self.name, self.sound) |
| |
| When you :ref:`run your tests <running-tests>`, the test runner will find this |
| docstring, notice that portions of it look like an interactive Python session, |
| and execute those lines while checking that the results match. |
| |
| In the case of model tests, note that the test runner takes care of creating |
| its own test database. That is, any test that accesses a database -- by |
| creating and saving model instances, for example -- will not affect your |
| production database. However, the database is not refreshed between doctests, |
| so if your doctest requires a certain state you should consider flushing the |
| database or loading a fixture. (See the section on fixtures, below, for more |
| on this.) Note that to use this feature, the database user Django is connecting |
| as must have ``CREATE DATABASE`` rights. |
| |
| For more details about how doctest works, see the `standard library |
| documentation for doctest`_. |
| |
| .. _doctest: http://docs.python.org/library/doctest.html |
| .. _standard library documentation for doctest: doctest_ |
| |
| |
| Which should I use? |
| ------------------- |
| |
| Because Django supports both of the standard Python test frameworks, it's up to |
| you and your tastes to decide which one to use. You can even decide to use |
| *both*. |
| |
| For developers new to testing, however, this choice can seem confusing. Here, |
| then, are a few key differences to help you decide which approach is right for |
| you: |
| |
| * If you've been using Python for a while, ``doctest`` will probably feel |
| more "pythonic". It's designed to make writing tests as easy as possible, |
| so it requires no overhead of writing classes or methods. You simply put |
| tests in docstrings. This has the added advantage of serving as |
| documentation (and correct documentation, at that!). However, while |
| doctests are good for some simple example code, they are not very good if |
| you want to produce either high quality, comprehensive tests or high |
| quality documentation. Test failures are often difficult to debug |
| as it can be unclear exactly why the test failed. Thus, doctests should |
  generally be avoided; use them primarily for documentation examples.
| |
| * The ``unittest`` framework will probably feel very familiar to developers |
| coming from Java. ``unittest`` is inspired by Java's JUnit, so you'll |
| feel at home with this method if you've used JUnit or any test framework |
| inspired by JUnit. |
| |
| * If you need to write a bunch of tests that share similar code, then |
| you'll appreciate the ``unittest`` framework's organization around |
| classes and methods. This makes it easy to abstract common tasks into |
| common methods. The framework also supports explicit setup and/or cleanup |
| routines, which give you a high level of control over the environment |
| in which your test cases are run. |
| |
| * If you're writing tests for Django itself, you should use ``unittest``. |
| |
| .. _running-tests: |
| |
| Running tests |
| ============= |
| |
| Once you've written tests, run them using the :djadmin:`test` command of |
| your project's ``manage.py`` utility:: |
| |
| $ ./manage.py test |
| |
| By default, this will run every test in every application in |
| :setting:`INSTALLED_APPS`. If you only want to run tests for a particular |
| application, add the application name to the command line. For example, if your |
| :setting:`INSTALLED_APPS` contains ``'myproject.polls'`` and |
| ``'myproject.animals'``, you can run the ``myproject.animals`` unit tests alone |
| with this command:: |
| |
| $ ./manage.py test animals |
| |
| Note that we used ``animals``, not ``myproject.animals``. |
| |
| You can be even *more* specific by naming an individual test case. To |
| run a single test case in an application (for example, the |
| ``AnimalTestCase`` described in the "Writing unit tests" section), add |
| the name of the test case to the label on the command line:: |
| |
| $ ./manage.py test animals.AnimalTestCase |
| |
| And it gets even more granular than that! To run a *single* test |
| method inside a test case, add the name of the test method to the |
| label:: |
| |
| $ ./manage.py test animals.AnimalTestCase.testFluffyAnimals |
| |
| .. versionadded:: 1.2 |
| The ability to select individual doctests was added. |
| |
| You can use the same rules if you're using doctests. Django will use the |
| test label as a path to the test method or class that you want to run. |
| If your ``models.py`` or ``tests.py`` has a function with a doctest, or |
| class with a class-level doctest, you can invoke that test by appending the |
| name of the test method or class to the label:: |
| |
| $ ./manage.py test animals.classify |
| |
| If you want to run the doctest for a specific method in a class, add the |
| name of the method to the label:: |
| |
| $ ./manage.py test animals.Classifier.run |
| |
If you're using a ``__test__`` dictionary to specify doctests for a
module, Django will use the label as a key into the ``__test__``
dictionaries defined in ``models.py`` and ``tests.py``.
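
For example, a ``models.py`` could expose a named doctest through
``__test__`` like this (a minimal sketch; the ``classify`` key is purely
illustrative)::

    # models.py

    __test__ = {
        'classify': """
        >>> sorted(['lion', 'cat'])
        ['cat', 'lion']
        """,
    }

With this in place, ``./manage.py test animals.classify`` would run just
that doctest.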
| |
| .. versionadded:: 1.2 |
| You can now trigger a graceful exit from a test run by pressing ``Ctrl-C``. |
| |
| If you press ``Ctrl-C`` while the tests are running, the test runner will |
| wait for the currently running test to complete and then exit gracefully. |
| During a graceful exit the test runner will output details of any test |
| failures, report on how many tests were run and how many errors and failures |
| were encountered, and destroy any test databases as usual. Thus pressing |
| ``Ctrl-C`` can be very useful if you forget to pass the :djadminopt:`--failfast` |
| option, notice that some tests are unexpectedly failing, and want to get details |
| on the failures without waiting for the full test run to complete. |
| |
| If you do not want to wait for the currently running test to finish, you |
| can press ``Ctrl-C`` a second time and the test run will halt immediately, |
| but not gracefully. No details of the tests run before the interruption will |
| be reported, and any test databases created by the run will not be destroyed. |
| |
| .. admonition:: Test with warnings enabled |
| |
| It's a good idea to run your tests with Python warnings enabled: |
    ``python -Wall manage.py test``. The ``-Wall`` flag tells Python to
    display all warnings, including deprecation warnings. Django, like many
    other Python libraries,
| uses these warnings to flag when features are going away. It also might |
| flag areas in your code that aren't strictly wrong but could benefit |
| from a better implementation. |
| |
| Running tests outside the test runner |
| ------------------------------------- |
| |
| If you want to run tests outside of ``./manage.py test`` -- for example, |
| from a shell prompt -- you will need to set up the test |
| environment first. Django provides a convenience method to do this:: |
| |
| >>> from django.test.utils import setup_test_environment |
| >>> setup_test_environment() |
| |
This convenience method puts several Django features into modes that
allow for repeatable testing.
| |
The call to :meth:`~django.test.utils.setup_test_environment` is made
automatically as part of the setup of ``./manage.py test``. You only
need to manually invoke this method if you're not running your
tests via Django's test runner.
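
For example, once the environment has been set up you can load and run an
individual test case by hand (a minimal sketch; ``myapp.tests.AnimalTestCase``
is the example test case from earlier, and database-backed tests will also
need a test database to run against)::

    >>> from django.test.utils import setup_test_environment
    >>> setup_test_environment()
    >>> from django.utils import unittest
    >>> from myapp.tests import AnimalTestCase
    >>> suite = unittest.TestLoader().loadTestsFromTestCase(AnimalTestCase)
    >>> unittest.TextTestRunner().run(suite)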
| |
| The test database |
| ----------------- |
| |
| Tests that require a database (namely, model tests) will not use your "real" |
| (production) database. Separate, blank databases are created for the tests. |
| |
| Regardless of whether the tests pass or fail, the test databases are destroyed |
| when all the tests have been executed. |
| |
| By default the test databases get their names by prepending ``test_`` |
| to the value of the :setting:`NAME` settings for the databases |
| defined in :setting:`DATABASES`. When using the SQLite database engine |
| the tests will by default use an in-memory database (i.e., the |
| database will be created in memory, bypassing the filesystem |
| entirely!). If you want to use a different database name, specify |
| :setting:`TEST_NAME` in the dictionary for any given database in |
| :setting:`DATABASES`. |
| |
| Aside from using a separate database, the test runner will otherwise |
| use all of the same database settings you have in your settings file: |
| :setting:`ENGINE`, :setting:`USER`, :setting:`HOST`, etc. The test |
| database is created by the user specified by :setting:`USER`, so you'll need |
| to make sure that the given user account has sufficient privileges to |
| create a new database on the system. |
| |
| For fine-grained control over the character encoding of your test |
| database, use the :setting:`TEST_CHARSET` option. If you're using |
| MySQL, you can also use the :setting:`TEST_COLLATION` option to |
| control the particular collation used by the test database. See the |
| :doc:`settings documentation </ref/settings>` for details of these |
| advanced settings. |
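
For example, a configuration that sets all three of these options might look
like this (a minimal sketch; the names and values are purely illustrative)::

    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.mysql',
            'NAME': 'myproject',
            'USER': 'myprojectuser',
            # Used only when the test database is created:
            'TEST_NAME': 'myproject_test',
            'TEST_CHARSET': 'utf8',
            'TEST_COLLATION': 'utf8_general_ci',
        }
    }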
| |
| .. _topics-testing-masterslave: |
| |
| Testing master/slave configurations |
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
| |
| .. versionadded:: 1.2 |
| |
| If you're testing a multiple database configuration with master/slave |
| replication, this strategy of creating test databases poses a problem. |
| When the test databases are created, there won't be any replication, |
| and as a result, data created on the master won't be seen on the |
| slave. |
| |
| To compensate for this, Django allows you to define that a database is |
| a *test mirror*. Consider the following (simplified) example database |
| configuration:: |
| |
| DATABASES = { |
| 'default': { |
| 'ENGINE': 'django.db.backends.mysql', |
| 'NAME': 'myproject', |
| 'HOST': 'dbmaster', |
| # ... plus some other settings |
| }, |
| 'slave': { |
| 'ENGINE': 'django.db.backends.mysql', |
| 'NAME': 'myproject', |
| 'HOST': 'dbslave', |
| 'TEST_MIRROR': 'default' |
| # ... plus some other settings |
| } |
| } |
| |
| In this setup, we have two database servers: ``dbmaster``, described |
| by the database alias ``default``, and ``dbslave`` described by the |
| alias ``slave``. As you might expect, ``dbslave`` has been configured |
| by the database administrator as a read slave of ``dbmaster``, so in |
| normal activity, any write to ``default`` will appear on ``slave``. |
| |
| If Django created two independent test databases, this would break any |
| tests that expected replication to occur. However, the ``slave`` |
| database has been configured as a test mirror (using the |
| :setting:`TEST_MIRROR` setting), indicating that under testing, |
| ``slave`` should be treated as a mirror of ``default``. |
| |
| When the test environment is configured, a test version of ``slave`` |
| will *not* be created. Instead the connection to ``slave`` |
| will be redirected to point at ``default``. As a result, writes to |
| ``default`` will appear on ``slave`` -- but because they are actually |
| the same database, not because there is data replication between the |
| two databases. |
| |
| .. _topics-testing-creation-dependencies: |
| |
| Controlling creation order for test databases |
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
| |
| .. versionadded:: 1.3 |
| |
| By default, Django will always create the ``default`` database first. |
| However, no guarantees are made on the creation order of any other |
| databases in your test setup. |
| |
| If your database configuration requires a specific creation order, you |
| can specify the dependencies that exist using the |
| :setting:`TEST_DEPENDENCIES` setting. Consider the following |
| (simplified) example database configuration:: |
| |
| DATABASES = { |
| 'default': { |
| # ... db settings |
| 'TEST_DEPENDENCIES': ['diamonds'] |
| }, |
| 'diamonds': { |
| # ... db settings |
| }, |
| 'clubs': { |
| # ... db settings |
| 'TEST_DEPENDENCIES': ['diamonds'] |
| }, |
| 'spades': { |
| # ... db settings |
| 'TEST_DEPENDENCIES': ['diamonds','hearts'] |
| }, |
| 'hearts': { |
| # ... db settings |
| 'TEST_DEPENDENCIES': ['diamonds','clubs'] |
| } |
| } |
| |
| Under this configuration, the ``diamonds`` database will be created first, |
| as it is the only database alias without dependencies. The ``default`` and |
``clubs`` aliases will be created next (although the order of creation of this
| pair is not guaranteed); then ``hearts``; and finally ``spades``. |
| |
| If there are any circular dependencies in the |
| :setting:`TEST_DEPENDENCIES` definition, an ``ImproperlyConfigured`` |
| exception will be raised. |
| |
| Other test conditions |
| --------------------- |
| |
| Regardless of the value of the :setting:`DEBUG` setting in your configuration |
| file, all Django tests run with :setting:`DEBUG`\=False. This is to ensure that |
| the observed output of your code matches what will be seen in a production |
| setting. |
| |
| Understanding the test output |
| ----------------------------- |
| |
| When you run your tests, you'll see a number of messages as the test runner |
| prepares itself. You can control the level of detail of these messages with the |
| ``verbosity`` option on the command line:: |
| |
| Creating test database... |
| Creating table myapp_animal |
| Creating table myapp_mineral |
| Loading 'initial_data' fixtures... |
| No fixtures found. |
| |
| This tells you that the test runner is creating a test database, as described |
| in the previous section. |
| |
| Once the test database has been created, Django will run your tests. |
| If everything goes well, you'll see something like this:: |
| |
| ---------------------------------------------------------------------- |
| Ran 22 tests in 0.221s |
| |
| OK |
| |
| If there are test failures, however, you'll see full details about which tests |
| failed:: |
| |
| ====================================================================== |
| FAIL: Doctest: ellington.core.throttle.models |
| ---------------------------------------------------------------------- |
| Traceback (most recent call last): |
| File "/dev/django/test/doctest.py", line 2153, in runTest |
| raise self.failureException(self.format_failure(new.getvalue())) |
| AssertionError: Failed doctest test for myapp.models |
| File "/dev/myapp/models.py", line 0, in models |
| |
| ---------------------------------------------------------------------- |
| File "/dev/myapp/models.py", line 14, in myapp.models |
| Failed example: |
| throttle.check("actor A", "action one", limit=2, hours=1) |
| Expected: |
| True |
| Got: |
| False |
| |
| ---------------------------------------------------------------------- |
| Ran 2 tests in 0.048s |
| |
| FAILED (failures=1) |
| |
| A full explanation of this error output is beyond the scope of this document, |
| but it's pretty intuitive. You can consult the documentation of Python's |
| ``unittest`` library for details. |
| |
| Note that the return code for the test-runner script is 1 for any number of |
| failed and erroneous tests. If all the tests pass, the return code is 0. This |
| feature is useful if you're using the test-runner script in a shell script and |
| need to test for success or failure at that level. |
| |
| Testing tools |
| ============= |
| |
| Django provides a small set of tools that come in handy when writing tests. |
| |
| .. _test-client: |
| |
| The test client |
| --------------- |
| |
| .. module:: django.test.client |
| :synopsis: Django's test client. |
| |
| The test client is a Python class that acts as a dummy Web browser, allowing |
| you to test your views and interact with your Django-powered application |
| programmatically. |
| |
| Some of the things you can do with the test client are: |
| |
| * Simulate GET and POST requests on a URL and observe the response -- |
| everything from low-level HTTP (result headers and status codes) to |
| page content. |
| |
| * Test that the correct view is executed for a given URL. |
| |
| * Test that a given request is rendered by a given Django template, with |
| a template context that contains certain values. |
| |
| Note that the test client is not intended to be a replacement for Twill_, |
| Selenium_, or other "in-browser" frameworks. Django's test client has |
| a different focus. In short: |
| |
| * Use Django's test client to establish that the correct view is being |
| called and that the view is collecting the correct context data. |
| |
| * Use in-browser frameworks such as Twill and Selenium to test *rendered* |
| HTML and the *behavior* of Web pages, namely JavaScript functionality. |
| |
| A comprehensive test suite should use a combination of both test types. |
| |
| .. _Twill: http://twill.idyll.org/ |
| .. _Selenium: http://seleniumhq.org/ |
| |
| Overview and a quick example |
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
| |
| To use the test client, instantiate ``django.test.client.Client`` and retrieve |
| Web pages:: |
| |
| >>> from django.test.client import Client |
| >>> c = Client() |
| >>> response = c.post('/login/', {'username': 'john', 'password': 'smith'}) |
| >>> response.status_code |
| 200 |
| >>> response = c.get('/customer/details/') |
| >>> response.content |
| '<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 ...' |
| |
| As this example suggests, you can instantiate ``Client`` from within a session |
| of the Python interactive interpreter. |
| |
| Note a few important things about how the test client works: |
| |
| * The test client does *not* require the Web server to be running. In fact, |
| it will run just fine with no Web server running at all! That's because |
| it avoids the overhead of HTTP and deals directly with the Django |
| framework. This helps make the unit tests run quickly. |
| |
| * When retrieving pages, remember to specify the *path* of the URL, not the |
| whole domain. For example, this is correct:: |
| |
| >>> c.get('/login/') |
| |
| This is incorrect:: |
| |
| >>> c.get('http://www.example.com/login/') |
| |
| The test client is not capable of retrieving Web pages that are not |
| powered by your Django project. If you need to retrieve other Web pages, |
| use a Python standard library module such as urllib_ or urllib2_. |
| |
| * To resolve URLs, the test client uses whatever URLconf is pointed-to by |
| your :setting:`ROOT_URLCONF` setting. |
| |
| * Although the above example would work in the Python interactive |
| interpreter, some of the test client's functionality, notably the |
| template-related functionality, is only available *while tests are |
| running*. |
| |
| The reason for this is that Django's test runner performs a bit of black |
| magic in order to determine which template was loaded by a given view. |
| This black magic (essentially a patching of Django's template system in |
| memory) only happens during test running. |
| |
| * By default, the test client will disable any CSRF checks |
| performed by your site. |
| |
| .. versionadded:: 1.2.2 |
| |
| If, for some reason, you *want* the test client to perform CSRF |
| checks, you can create an instance of the test client that |
| enforces CSRF checks. To do this, pass in the |
| ``enforce_csrf_checks`` argument when you construct your |
| client:: |
| |
| >>> from django.test import Client |
| >>> csrf_client = Client(enforce_csrf_checks=True) |
| |
| |
| .. _urllib: http://docs.python.org/library/urllib.html |
| .. _urllib2: http://docs.python.org/library/urllib2.html |
| |
| Making requests |
| ~~~~~~~~~~~~~~~ |
| |
| Use the ``django.test.client.Client`` class to make requests. It requires no |
| arguments at time of construction: |
| |
| .. class:: Client() |
| |
| Once you have a ``Client`` instance, you can call any of the following |
| methods: |
| |
| .. method:: Client.get(path, data={}, follow=False, **extra) |
| |
| |
| Makes a GET request on the provided ``path`` and returns a ``Response`` |
| object, which is documented below. |
| |
| The key-value pairs in the ``data`` dictionary are used to create a GET |
| data payload. For example:: |
| |
| >>> c = Client() |
| >>> c.get('/customers/details/', {'name': 'fred', 'age': 7}) |
| |
| ...will result in the evaluation of a GET request equivalent to:: |
| |
| /customers/details/?name=fred&age=7 |
| |
| The ``extra`` keyword arguments parameter can be used to specify |
| headers to be sent in the request. For example:: |
| |
| >>> c = Client() |
| >>> c.get('/customers/details/', {'name': 'fred', 'age': 7}, |
| ... HTTP_X_REQUESTED_WITH='XMLHttpRequest') |
| |
| ...will send the HTTP header ``HTTP_X_REQUESTED_WITH`` to the |
| details view, which is a good way to test code paths that use the |
| :meth:`django.http.HttpRequest.is_ajax()` method. |
| |
| .. admonition:: CGI specification |
| |
| The headers sent via ``**extra`` should follow CGI_ specification. |
| For example, emulating a different "Host" header as sent in the |
| HTTP request from the browser to the server should be passed |
| as ``HTTP_HOST``. |
| |
| .. _CGI: http://www.w3.org/CGI/ |
| |
| If you already have the GET arguments in URL-encoded form, you can |
| use that encoding instead of using the data argument. For example, |
| the previous GET request could also be posed as:: |
| |
| >>> c = Client() |
| >>> c.get('/customers/details/?name=fred&age=7') |
| |
    If you provide a URL with both encoded GET data and a data argument,
| the data argument will take precedence. |
| |
| If you set ``follow`` to ``True`` the client will follow any redirects |
| and a ``redirect_chain`` attribute will be set in the response object |
| containing tuples of the intermediate urls and status codes. |
| |
    If you had a URL ``/redirect_me/`` that redirected to ``/next/``, which
| redirected to ``/final/``, this is what you'd see:: |
| |
| >>> response = c.get('/redirect_me/', follow=True) |
| >>> response.redirect_chain |
| [(u'http://testserver/next/', 302), (u'http://testserver/final/', 302)] |
| |
| .. method:: Client.post(path, data={}, content_type=MULTIPART_CONTENT, follow=False, **extra) |
| |
| Makes a POST request on the provided ``path`` and returns a |
| ``Response`` object, which is documented below. |
| |
| The key-value pairs in the ``data`` dictionary are used to submit POST |
| data. For example:: |
| |
| >>> c = Client() |
| >>> c.post('/login/', {'name': 'fred', 'passwd': 'secret'}) |
| |
| ...will result in the evaluation of a POST request to this URL:: |
| |
| /login/ |
| |
| ...with this POST data:: |
| |
| name=fred&passwd=secret |
| |
| If you provide ``content_type`` (e.g., ``text/xml`` for an XML |
| payload), the contents of ``data`` will be sent as-is in the POST |
| request, using ``content_type`` in the HTTP ``Content-Type`` header. |
| |
| If you don't provide a value for ``content_type``, the values in |
| ``data`` will be transmitted with a content type of |
| ``multipart/form-data``. In this case, the key-value pairs in ``data`` |
| will be encoded as a multipart message and used to create the POST data |
| payload. |
| |
| To submit multiple values for a given key -- for example, to specify |
| the selections for a ``<select multiple>`` -- provide the values as a |
| list or tuple for the required key. For example, this value of ``data`` |
| would submit three selected values for the field named ``choices``:: |
| |
| {'choices': ('a', 'b', 'd')} |
| |
| Submitting files is a special case. To POST a file, you need only |
| provide the file field name as a key, and a file handle to the file you |
| wish to upload as a value. For example:: |
| |
| >>> c = Client() |
| >>> f = open('wishlist.doc') |
| >>> c.post('/customers/wishes/', {'name': 'fred', 'attachment': f}) |
| >>> f.close() |
| |
| (The name ``attachment`` here is not relevant; use whatever name your |
| file-processing code expects.) |
| |
| Note that if you wish to use the same file handle for multiple |
| ``post()`` calls then you will need to manually reset the file |
| pointer between posts. The easiest way to do this is to |
| manually close the file after it has been provided to |
| ``post()``, as demonstrated above. |
| |
| You should also ensure that the file is opened in a way that |
| allows the data to be read. If your file contains binary data |
| such as an image, this means you will need to open the file in |
| ``rb`` (read binary) mode. |
| |
| The ``extra`` argument acts the same as for :meth:`Client.get`. |
| |
| If the URL you request with a POST contains encoded parameters, these |
| parameters will be made available in the request.GET data. For example, |
| if you were to make the request:: |
| |
| >>> c.post('/login/?visitor=true', {'name': 'fred', 'passwd': 'secret'}) |
| |
| ... the view handling this request could interrogate request.POST |
| to retrieve the username and password, and could interrogate request.GET |
| to determine if the user was a visitor. |
| |
| If you set ``follow`` to ``True`` the client will follow any redirects |
| and a ``redirect_chain`` attribute will be set in the response object |
| containing tuples of the intermediate urls and status codes. |
| |
| .. method:: Client.head(path, data={}, follow=False, **extra) |
| |
| Makes a HEAD request on the provided ``path`` and returns a ``Response`` |
| object. Useful for testing RESTful interfaces. Acts just like |
| :meth:`Client.get` except it does not return a message body. |
| |
| If you set ``follow`` to ``True`` the client will follow any redirects |
| and a ``redirect_chain`` attribute will be set in the response object |
| containing tuples of the intermediate urls and status codes. |
| |
| .. method:: Client.options(path, data={}, follow=False, **extra) |
| |
| Makes an OPTIONS request on the provided ``path`` and returns a |
| ``Response`` object. Useful for testing RESTful interfaces. |
| |
| If you set ``follow`` to ``True`` the client will follow any redirects |
| and a ``redirect_chain`` attribute will be set in the response object |
| containing tuples of the intermediate urls and status codes. |
| |
| The ``extra`` argument acts the same as for :meth:`Client.get`. |
| |
| .. method:: Client.put(path, data={}, content_type=MULTIPART_CONTENT, follow=False, **extra) |
| |
| Makes a PUT request on the provided ``path`` and returns a |
| ``Response`` object. Useful for testing RESTful interfaces. Acts just |
| like :meth:`Client.post` except with the PUT request method. |
| |
| If you set ``follow`` to ``True`` the client will follow any redirects |
| and a ``redirect_chain`` attribute will be set in the response object |
| containing tuples of the intermediate urls and status codes. |
| |
| .. method:: Client.delete(path, follow=False, **extra) |
| |
    Makes a DELETE request on the provided ``path`` and returns a
| ``Response`` object. Useful for testing RESTful interfaces. |
| |
| If you set ``follow`` to ``True`` the client will follow any redirects |
| and a ``redirect_chain`` attribute will be set in the response object |
| containing tuples of the intermediate urls and status codes. |
| |
| The ``extra`` argument acts the same as for :meth:`Client.get`. |
| |
| .. method:: Client.login(**credentials) |
| |
| If your site uses Django's :doc:`authentication system</topics/auth>` |
| and you deal with logging in users, you can use the test client's |
| ``login()`` method to simulate the effect of a user logging into the |
| site. |
| |
| After you call this method, the test client will have all the cookies |
| and session data required to pass any login-based tests that may form |
| part of a view. |
| |
| The format of the ``credentials`` argument depends on which |
| :ref:`authentication backend <authentication-backends>` you're using |
| (which is configured by your :setting:`AUTHENTICATION_BACKENDS` |
| setting). If you're using the standard authentication backend provided |
| by Django (``ModelBackend``), ``credentials`` should be the user's |
| username and password, provided as keyword arguments:: |
| |
| >>> c = Client() |
| >>> c.login(username='fred', password='secret') |
| |
| # Now you can access a view that's only available to logged-in users. |
| |
| If you're using a different authentication backend, this method may |
| require different credentials. It requires whichever credentials are |
| required by your backend's ``authenticate()`` method. |
| |
    ``login()`` returns ``True`` if the credentials were accepted and
| login was successful. |
| |
| Finally, you'll need to remember to create user accounts before you can |
| use this method. As we explained above, the test runner is executed |
| using a test database, which contains no users by default. As a result, |
| user accounts that are valid on your production site will not work |
| under test conditions. You'll need to create users as part of the test |
| suite -- either manually (using the Django model API) or with a test |
| fixture. Remember that if you want your test user to have a password, |
| you can't set the user's password by setting the password attribute |
| directly -- you must use the |
| :meth:`~django.contrib.auth.models.User.set_password()` function to |
| store a correctly hashed password. Alternatively, you can use the |
| :meth:`~django.contrib.auth.models.UserManager.create_user` helper |
| method to create a new user with a correctly hashed password. |
| |
| .. method:: Client.logout() |
| |
| If your site uses Django's :doc:`authentication system</topics/auth>`, |
| the ``logout()`` method can be used to simulate the effect of a user |
| logging out of your site. |
| |
| After you call this method, the test client will have all the cookies |
| and session data cleared to defaults. Subsequent requests will appear |
| to come from an AnonymousUser. |
| |
| Testing responses |
| ~~~~~~~~~~~~~~~~~ |
| |
| The ``get()`` and ``post()`` methods both return a ``Response`` object. This |
``Response`` object is *not* the same as the ``HttpResponse`` object returned
by Django views; the test response object has some additional data useful for
| test code to verify. |
| |
| Specifically, a ``Response`` object has the following attributes: |
| |
| .. class:: Response() |
| |
| .. attribute:: client |
| |
| The test client that was used to make the request that resulted in the |
| response. |
| |
| .. attribute:: content |
| |
| The body of the response, as a string. This is the final page content as |
| rendered by the view, or any error message. |
| |
| .. attribute:: context |
| |
| The template ``Context`` instance that was used to render the template that |
| produced the response content. |
| |
| If the rendered page used multiple templates, then ``context`` will be a |
| list of ``Context`` objects, in the order in which they were rendered. |
| |
| Regardless of the number of templates used during rendering, you can |
| retrieve context values using the ``[]`` operator. For example, the |
| context variable ``name`` could be retrieved using:: |
| |
| >>> response = client.get('/foo/') |
| >>> response.context['name'] |
| 'Arthur' |
| |
| .. attribute:: request |
| |
| The request data that stimulated the response. |
| |
| .. attribute:: status_code |
| |
| The HTTP status of the response, as an integer. See RFC2616_ for a full |
| list of HTTP status codes. |
| |
| .. versionadded:: 1.3 |
| |
| .. attribute:: templates |
| |
| A list of ``Template`` instances used to render the final content, in |
| the order they were rendered. For each template in the list, use |
| ``template.name`` to get the template's file name, if the template was |
| loaded from a file. (The name is a string such as |
| ``'admin/index.html'``.) |
| |
You can also use dictionary syntax on the response object to query the
value of any HTTP header. For example, you could determine the content
type of a response using ``response['Content-Type']``.
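
For example (the exact value depends on your view and settings)::

    >>> response = client.get('/customer/details/')
    >>> response['Content-Type']
    'text/html; charset=utf-8'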
| |
| .. _RFC2616: http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html |
| |
| Exceptions |
| ~~~~~~~~~~ |
| |
| If you point the test client at a view that raises an exception, that exception |
| will be visible in the test case. You can then use a standard ``try...except`` |
| block or ``unittest.TestCase.assertRaises()`` to test for exceptions. |
| |
| The only exceptions that are not visible to the test client are ``Http404``, |
| ``PermissionDenied`` and ``SystemExit``. Django catches these exceptions |
| internally and converts them into the appropriate HTTP response codes. In these |
| cases, you can check ``response.status_code`` in your test. |
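
For example, a test could assert that a particular view raises an exception
(a short sketch; the ``/buggy/`` URL and the exception type are hypothetical)::

    from django.test import TestCase

    class ExceptionTest(TestCase):
        def test_view_raises(self):
            # /buggy/ is assumed to map to a view that raises ValueError.
            self.assertRaises(ValueError, self.client.get, '/buggy/')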
| |
| Persistent state |
| ~~~~~~~~~~~~~~~~ |
| |
| The test client is stateful. If a response returns a cookie, then that cookie |
| will be stored in the test client and sent with all subsequent ``get()`` and |
| ``post()`` requests. |
| |
| Expiration policies for these cookies are not followed. If you want a cookie |
| to expire, either delete it manually or create a new ``Client`` instance (which |
| will effectively delete all cookies). |
| |
| A test client has two attributes that store persistent state information. You |
| can access these properties as part of a test condition. |
| |
| .. attribute:: Client.cookies |
| |
| A Python ``SimpleCookie`` object, containing the current values of all the |
| client cookies. See the `Cookie module documentation`_ for more. |
| |
| .. attribute:: Client.session |
| |
| A dictionary-like object containing session information. See the |
| :doc:`session documentation</topics/http/sessions>` for full details. |
| |
| To modify the session and then save it, it must be stored in a variable |
| first (because a new ``SessionStore`` is created every time this property |
| is accessed):: |
| |
| def test_something(self): |
| session = self.client.session |
| session['somekey'] = 'test' |
| session.save() |
| |
| .. _Cookie module documentation: http://docs.python.org/library/cookie.html |
| |
| Example |
| ~~~~~~~ |
| |
| The following is a simple unit test using the test client:: |
| |
| from django.utils import unittest |
| from django.test.client import Client |
| |
| class SimpleTest(unittest.TestCase): |
| def setUp(self): |
| # Every test needs a client. |
| self.client = Client() |
| |
| def test_details(self): |
| # Issue a GET request. |
| response = self.client.get('/customer/details/') |
| |
| # Check that the response is 200 OK. |
| self.assertEqual(response.status_code, 200) |
| |
| # Check that the rendered context contains 5 customers. |
| self.assertEqual(len(response.context['customers']), 5) |
| |
| The request factory |
| ------------------- |
| |
.. class:: RequestFactory
| |
| .. versionadded:: 1.3 |
| |
| The :class:`~django.test.client.RequestFactory` shares the same API as |
| the test client. However, instead of behaving like a browser, the |
| RequestFactory provides a way to generate a request instance that can |
| be used as the first argument to any view. This means you can test a |
| view function the same way as you would test any other function -- as |
| a black box, with exactly known inputs, testing for specific outputs. |
| |
| The API for the :class:`~django.test.client.RequestFactory` is a slightly |
| restricted subset of the test client API: |
| |
| * It only has access to the HTTP methods :meth:`~Client.get()`, |
| :meth:`~Client.post()`, :meth:`~Client.put()`, |
| :meth:`~Client.delete()`, :meth:`~Client.head()` and |
| :meth:`~Client.options()`. |
| |
| * These methods accept all the same arguments *except* for |
  ``follow``. Since this is just a factory for producing
| requests, it's up to you to handle the response. |
| |
| * It does not support middleware. Session and authentication |
| attributes must be supplied by the test itself if required |
| for the view to function properly. |
| |
| Example |
| ~~~~~~~ |
| |
| The following is a simple unit test using the request factory:: |
| |
| from django.utils import unittest |
    from django.test.client import RequestFactory

    # Import the view under test (assumed to live in myapp.views).
    from myapp.views import my_view
| |
| class SimpleTest(unittest.TestCase): |
| def setUp(self): |
| # Every test needs access to the request factory. |
| self.factory = RequestFactory() |
| |
| def test_details(self): |
| # Create an instance of a GET request. |
| request = self.factory.get('/customer/details') |
| |
| # Test my_view() as if it were deployed at /customer/details |
| response = my_view(request) |
| self.assertEqual(response.status_code, 200) |
| |
| TestCase |
| -------- |
| |
| .. currentmodule:: django.test |
| |
| Normal Python unit test classes extend a base class of ``unittest.TestCase``. |
| Django provides an extension of this base class: |
| |
| .. class:: TestCase() |
| |
| This class provides some additional capabilities that can be useful for testing |
| Web sites. |
| |
| Converting a normal ``unittest.TestCase`` to a Django ``TestCase`` is easy: |
| just change the base class of your test from ``unittest.TestCase`` to |
| ``django.test.TestCase``. All of the standard Python unit test functionality |
| will continue to be available, but it will be augmented with some useful |
| additions, including: |
| |
| * Automatic loading of fixtures. |
| |
| * Wraps each test in a transaction. |
| |
| * Creates a TestClient instance. |
| |
| * Django-specific assertions for testing for things |
| like redirection and form errors. |
| |
| .. class:: TransactionTestCase() |
| |
| Django ``TestCase`` classes make use of database transaction facilities, if |
| available, to speed up the process of resetting the database to a known state |
| at the beginning of each test. A consequence of this, however, is that the |
| effects of transaction commit and rollback cannot be tested by a Django |
| ``TestCase`` class. If your test requires testing of such transactional |
| behavior, you should use a Django ``TransactionTestCase``. |
| |
| ``TransactionTestCase`` and ``TestCase`` are identical except for the manner |
| in which the database is reset to a known state and the ability for test code |
| to test the effects of commit and rollback. A ``TransactionTestCase`` resets |
| the database before the test runs by truncating all tables and reloading |
| initial data. A ``TransactionTestCase`` may call commit and rollback and |
| observe the effects of these calls on the database. |
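
For example, on a database backend that supports transactions, a
``TransactionTestCase`` can roll back a manually managed transaction and then
inspect the database (a minimal sketch using the ``Animal`` model from the
earlier examples and Django's standard transaction functions)::

    from django.db import transaction
    from django.test import TransactionTestCase

    from myapp.models import Animal

    class RollbackTestCase(TransactionTestCase):
        def test_rollback_discards_changes(self):
            # Manage the transaction by hand so the create() below is not
            # committed automatically.
            transaction.enter_transaction_management()
            transaction.managed(True)
            try:
                Animal.objects.create(name="lion", sound="roar")
                transaction.rollback()
            finally:
                transaction.leave_transaction_management()
            self.assertEqual(Animal.objects.count(), 0)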
| |
| A ``TestCase``, on the other hand, does not truncate tables and reload initial |
| data at the beginning of a test. Instead, it encloses the test code in a |
| database transaction that is rolled back at the end of the test. It also |
| prevents the code under test from issuing any commit or rollback operations |
| on the database, to ensure that the rollback at the end of the test restores |
| the database to its initial state. In order to guarantee that all ``TestCase`` |
| code starts with a clean database, the Django test runner runs all ``TestCase`` |
| tests first, before any other tests (e.g. doctests) that may alter the |
| database without restoring it to its original state. |
| |
| When running on a database that does not support rollback (e.g. MySQL with the |
| MyISAM storage engine), ``TestCase`` falls back to initializing the database |
| by truncating tables and reloading initial data. |
| |
| |
| .. note:: |
    The ``TestCase`` use of rollback to undo the effects of the test code
    may reveal previously-undetected errors in test code. For example,
    test code that assumes primary key values will be assigned starting at
| one may find that assumption no longer holds true when rollbacks instead |
| of table truncation are being used to reset the database. Similarly, |
| the reordering of tests so that all ``TestCase`` classes run first may |
| reveal unexpected dependencies on test case ordering. In such cases a |
| quick fix is to switch the ``TestCase`` to a ``TransactionTestCase``. |
| A better long-term fix, that allows the test to take advantage of the |
| speed benefit of ``TestCase``, is to fix the underlying test problem. |
| |
| |
| Default test client |
| ~~~~~~~~~~~~~~~~~~~ |
| |
| .. attribute:: TestCase.client |
| |
| Every test case in a ``django.test.TestCase`` instance has access to an |
| instance of a Django test client. This client can be accessed as |
| ``self.client``. This client is recreated for each test, so you don't have to |
| worry about state (such as cookies) carrying over from one test to another. |
| |
| This means, instead of instantiating a ``Client`` in each test:: |
| |
| from django.utils import unittest |
| from django.test.client import Client |
| |
| class SimpleTest(unittest.TestCase): |
| def test_details(self): |
| client = Client() |
| response = client.get('/customer/details/') |
| self.assertEqual(response.status_code, 200) |
| |
| def test_index(self): |
| client = Client() |
| response = client.get('/customer/index/') |
| self.assertEqual(response.status_code, 200) |
| |
| ...you can just refer to ``self.client``, like so:: |
| |
| from django.test import TestCase |
| |
| class SimpleTest(TestCase): |
| def test_details(self): |
| response = self.client.get('/customer/details/') |
| self.assertEqual(response.status_code, 200) |
| |
| def test_index(self): |
| response = self.client.get('/customer/index/') |
| self.assertEqual(response.status_code, 200) |
| |
| Customizing the test client |
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
| |
| .. versionadded:: 1.3 |
| |
| .. attribute:: TestCase.client_class |
| |
| If you want to use a different ``Client`` class (for example, a subclass |
| with customized behavior), use the :attr:`~TestCase.client_class` class |
| attribute:: |
| |
| from django.test import TestCase |
| from django.test.client import Client |
| |
    class MyTestClient(Client):
        # Specialized methods for your environment go here.
        pass

    class MyTest(TestCase):
        client_class = MyTestClient

        def test_my_stuff(self):
            # Here self.client is an instance of MyTestClient...
            call_some_test_code()
| |
| .. _topics-testing-fixtures: |
| |
| Fixture loading |
| ~~~~~~~~~~~~~~~ |
| |
| .. attribute:: TestCase.fixtures |
| |
| A test case for a database-backed Web site isn't much use if there isn't any |
| data in the database. To make it easy to put test data into the database, |
| Django's custom ``TestCase`` class provides a way of loading **fixtures**. |
| |
| A fixture is a collection of data that Django knows how to import into a |
| database. For example, if your site has user accounts, you might set up a |
| fixture of fake user accounts in order to populate your database during tests. |
| |
| The most straightforward way of creating a fixture is to use the |
| :djadmin:`manage.py dumpdata <dumpdata>` command. This assumes you |
| already have some data in your database. See the :djadmin:`dumpdata |
| documentation<dumpdata>` for more details. |
| |
| .. note:: |
| If you've ever run :djadmin:`manage.py syncdb<syncdb>`, you've |
| already used a fixture without even knowing it! When you call |
| :djadmin:`syncdb` in the database for the first time, Django |
| installs a fixture called ``initial_data``. This gives you a way |
| of populating a new database with any initial data, such as a |
| default set of categories. |
| |
| Fixtures with other names can always be installed manually using |
| the :djadmin:`manage.py loaddata<loaddata>` command. |
| |
| .. admonition:: Initial SQL data and testing |
| |
| Django provides a second way to insert initial data into models -- |
| the :ref:`custom SQL hook <initial-sql>`. However, this technique |
| *cannot* be used to provide initial data for testing purposes. |
| Django's test framework flushes the contents of the test database |
| after each test; as a result, any data added using the custom SQL |
| hook will be lost. |
| |
| Once you've created a fixture and placed it in a ``fixtures`` directory in one |
| of your :setting:`INSTALLED_APPS`, you can use it in your unit tests by |
| specifying a ``fixtures`` class attribute on your :class:`django.test.TestCase` |
| subclass:: |
| |
| from django.test import TestCase |
| from myapp.models import Animal |
| |
| class AnimalTestCase(TestCase): |
| fixtures = ['mammals.json', 'birds'] |
| |
| def setUp(self): |
| # Test definitions as before. |
| call_setup_methods() |
| |
| def testFluffyAnimals(self): |
| # A test that uses the fixtures. |
| call_some_test_code() |
| |
| Here's specifically what will happen: |
| |
| * At the start of each test case, before ``setUp()`` is run, Django will |
| flush the database, returning the database to the state it was in |
| directly after :djadmin:`syncdb` was called. |
| |
| * Then, all the named fixtures are installed. In this example, Django will |
| install any JSON fixture named ``mammals``, followed by any fixture named |
| ``birds``. See the :djadmin:`loaddata` documentation for more |
| details on defining and installing fixtures. |
| |
| This flush/load procedure is repeated for each test in the test case, so you |
| can be certain that the outcome of a test will not be affected by another test, |
| or by the order of test execution. |
| |
| URLconf configuration |
| ~~~~~~~~~~~~~~~~~~~~~ |
| |
| .. attribute:: TestCase.urls |
| |
| If your application provides views, you may want to include tests that use the |
| test client to exercise those views. However, an end user is free to deploy the |
| views in your application at any URL of their choosing. This means that your |
| tests can't rely upon the fact that your views will be available at a |
| particular URL. |
| |
| In order to provide a reliable URL space for your test, |
| ``django.test.TestCase`` provides the ability to customize the URLconf |
| configuration for the duration of the execution of a test suite. If your |
| ``TestCase`` instance defines an ``urls`` attribute, the ``TestCase`` will use |
| the value of that attribute as the :setting:`ROOT_URLCONF` for the duration |
| of that test. |
| |
| For example:: |
| |
| from django.test import TestCase |
| |
| class TestMyViews(TestCase): |
| urls = 'myapp.test_urls' |
| |
| def testIndexPageView(self): |
| # Here you'd test your view using ``Client``. |
| call_some_test_code() |
| |
| This test case will use the contents of ``myapp.test_urls`` as the |
| URLconf for the duration of the test case. |
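
The ``myapp.test_urls`` module itself is just an ordinary URLconf (a minimal
sketch; the view name is illustrative)::

    # myapp/test_urls.py
    from django.conf.urls.defaults import patterns, url

    from myapp import views

    urlpatterns = patterns('',
        url(r'^$', views.index, name='index'),
    )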
| |
| .. _emptying-test-outbox: |
| |
| Multi-database support |
| ~~~~~~~~~~~~~~~~~~~~~~ |
| |
| .. attribute:: TestCase.multi_db |
| |
| .. versionadded:: 1.2 |
| |
| Django sets up a test database corresponding to every database that is |
| defined in the :setting:`DATABASES` definition in your settings |
| file. However, a big part of the time taken to run a Django TestCase |
| is consumed by the call to ``flush`` that ensures that you have a |
| clean database at the start of each test run. If you have multiple |
| databases, multiple flushes are required (one for each database), |
| which can be a time consuming activity -- especially if your tests |
| don't need to test multi-database activity. |
| |
| As an optimization, Django only flushes the ``default`` database at |
| the start of each test run. If your setup contains multiple databases, |
| and you have a test that requires every database to be clean, you can |
| use the ``multi_db`` attribute on the test suite to request a full |
| flush. |
| |
| For example:: |
| |
| class TestMyViews(TestCase): |
| multi_db = True |
| |
| def testIndexPageView(self): |
| call_some_test_code() |
| |
| This test case will flush *all* the test databases before running |
| ``testIndexPageView``. |
| |
| Emptying the test outbox |
| ~~~~~~~~~~~~~~~~~~~~~~~~ |
| |
| If you use Django's custom ``TestCase`` class, the test runner will clear the |
| contents of the test e-mail outbox at the start of each test case. |
| |
| For more detail on e-mail services during tests, see `E-mail services`_. |
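
For example, a test can send mail and then inspect ``django.core.mail.outbox``
(a short sketch; the addresses are placeholders)::

    from django.core import mail
    from django.test import TestCase

    class EmailTestCase(TestCase):
        def test_send_email(self):
            mail.send_mail('Subject', 'Message body.',
                           'from@example.com', ['to@example.com'])
            # Under test, messages are collected in mail.outbox instead of
            # being sent.
            self.assertEqual(len(mail.outbox), 1)
            self.assertEqual(mail.outbox[0].subject, 'Subject')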
| |
| Assertions |
| ~~~~~~~~~~ |
| |
| .. versionchanged:: 1.2 |
    Added ``msg_prefix`` argument.
| |
| As Python's normal ``unittest.TestCase`` class implements assertion methods |
| such as ``assertTrue`` and ``assertEqual``, Django's custom ``TestCase`` class |
| provides a number of custom assertion methods that are useful for testing Web |
| applications: |
| |
| The failure messages given by the assertion methods can be customized |
| with the ``msg_prefix`` argument. This string will be prefixed to any |
| failure message generated by the assertion. This allows you to provide |
| additional details that may help you to identify the location and |
cause of a failure in your test suite.
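
For example, several of these assertions can be combined in a single test
(a short sketch; the URL, text and template name are hypothetical)::

    from django.test import TestCase

    class CustomerPageTest(TestCase):
        def test_index(self):
            response = self.client.get('/customers/')
            self.assertContains(response, 'fred', count=1,
                                msg_prefix='customer index page')
            self.assertTemplateUsed(response, 'customers/index.html')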
| |
| .. method:: TestCase.assertContains(response, text, count=None, status_code=200, msg_prefix='') |
| |
| Asserts that a ``Response`` instance produced the given ``status_code`` and |
| that ``text`` appears in the content of the response. If ``count`` is |
| provided, ``text`` must occur exactly ``count`` times in the response. |
| |
| .. method:: TestCase.assertNotContains(response, text, status_code=200, msg_prefix='') |
| |
| Asserts that a ``Response`` instance produced the given ``status_code`` and |
| that ``text`` does *not* appear in the content of the response. |
| |
| .. method:: TestCase.assertFormError(response, form, field, errors, msg_prefix='') |
| |
| Asserts that a field on a form raises the provided list of errors when |
| rendered on the form. |
| |
| ``form`` is the name the ``Form`` instance was given in the template |
| context. |
| |
| ``field`` is the name of the field on the form to check. If ``field`` |
| has a value of ``None``, non-field errors (errors you can access via |
| ``form.non_field_errors()``) will be checked. |
| |
| ``errors`` is an error string, or a list of error strings, that are |
| expected as a result of form validation. |
| |
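| For example, a sketch assuming a view at ``/contact/`` that re-renders an |
| invalid form in the template context as ``form`` (the URL, field name and |
| error text are illustrative):: |
| |
|     def test_missing_subject(self): |
|         response = self.client.post('/contact/', {'subject': ''}) |
|         # The 'subject' field should be flagged as required. |
|         self.assertFormError(response, 'form', 'subject', |
|                              'This field is required.') |
| |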
| .. method:: TestCase.assertTemplateUsed(response, template_name, msg_prefix='') |
| |
| Asserts that the template with the given name was used in rendering the |
| response. |
| |
| The name is a string such as ``'admin/index.html'``. |
| |
| .. method:: TestCase.assertTemplateNotUsed(response, template_name, msg_prefix='') |
| |
| Asserts that the template with the given name was *not* used in rendering |
| the response. |
| |
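| For example, a short sketch (the URL and template names are illustrative):: |
| |
|     def test_index_template(self): |
|         response = self.client.get('/') |
|         self.assertTemplateUsed(response, 'myapp/index.html') |
|         self.assertTemplateNotUsed(response, 'myapp/detail.html') |
| |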
| .. method:: TestCase.assertRedirects(response, expected_url, status_code=302, target_status_code=200, msg_prefix='') |
| |
| Asserts that the response returned a ``status_code`` redirect status, |
| redirected to ``expected_url`` (including any GET data), and that the final |
| page was received with ``target_status_code``. |
| |
| If your request used the ``follow`` argument, the ``expected_url`` and |
| ``target_status_code`` will be the URL and status code for the final |
| point of the redirect chain. |
| |
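| For example, a sketch assuming a view protected by ``login_required`` and |
| the default ``LOGIN_URL`` (the URLs are illustrative):: |
| |
|     def test_profile_requires_login(self): |
|         response = self.client.get('/accounts/profile/') |
|         self.assertRedirects(response, |
|             '/accounts/login/?next=/accounts/profile/') |
| |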
| .. method:: TestCase.assertQuerysetEqual(qs, values, transform=repr) |
| |
| .. versionadded:: 1.3 |
| |
| Asserts that a queryset ``qs`` returns a particular list of values ``values``. |
| |
| The comparison of the contents of ``qs`` and ``values`` is performed using |
| the function ``transform``; by default, this means that the ``repr()`` of |
| each value is compared. Any other callable can be used if ``repr()`` doesn't |
| provide a unique or helpful comparison. |
| |
| The comparison is also ordering dependent. If ``qs`` doesn't provide an |
| implicit ordering, you will need to apply an ``order_by()`` clause to your |
| queryset to ensure that the test will pass reliably. |
| |
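| For example, a sketch assuming a ``Person`` model whose ``__unicode__`` |
| method returns its ``name`` (so that ``repr()`` yields ``<Person: ...>``):: |
| |
|     def test_people_by_name(self): |
|         Person.objects.create(name="Aaron") |
|         Person.objects.create(name="Daniel") |
|         self.assertQuerysetEqual( |
|             Person.objects.order_by('name'), |
|             ['<Person: Aaron>', '<Person: Daniel>'] |
|         ) |
| |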
| .. method:: TestCase.assertNumQueries(num, func, *args, **kwargs) |
| |
| .. versionadded:: 1.3 |
| |
| Asserts that when ``func`` is called with ``*args`` and ``**kwargs``, |
| ``num`` database queries are executed. |
| |
| If a ``"using"`` key is present in ``kwargs`` it is used as the database |
| alias for which to check the number of queries. If the function you are |
| testing takes a ``using`` parameter of its own, wrap the call in a |
| ``lambda`` so that its ``using`` argument is not interpreted as the |
| database alias:: |
| |
|     self.assertNumQueries(7, lambda: my_function(using='other')) |
| |
| If you're using Python 2.5 or greater you can also use this as a context |
| manager:: |
| |
| # This is necessary in Python 2.5 to enable the with statement, in 2.6 |
| # and up it is no longer necessary. |
| from __future__ import with_statement |
| |
| with self.assertNumQueries(2): |
| Person.objects.create(name="Aaron") |
| Person.objects.create(name="Daniel") |
| |
| |
| .. _topics-testing-email: |
| |
| E-mail services |
| --------------- |
| |
| If any of your Django views send e-mail using :doc:`Django's e-mail |
| functionality </topics/email>`, you probably don't want to send e-mail each time |
| you run a test using that view. For this reason, Django's test runner |
| automatically redirects all Django-sent e-mail to a dummy outbox. This lets you |
| test every aspect of sending e-mail -- from the number of messages sent to the |
| contents of each message -- without actually sending the messages. |
| |
| The test runner accomplishes this by transparently replacing the normal |
| e-mail backend with a testing backend. |
| (Don't worry -- this has no effect on any other e-mail senders outside of |
| Django, such as your machine's mail server, if you're running one.) |
| |
| .. currentmodule:: django.core.mail |
| |
| .. data:: django.core.mail.outbox |
| |
| During test running, each outgoing e-mail is saved in |
| ``django.core.mail.outbox``. This is a simple list of all |
| :class:`~django.core.mail.EmailMessage` instances that have been sent. |
| The ``outbox`` attribute is a special attribute that is created *only* when |
| the ``locmem`` e-mail backend is used. It doesn't normally exist as part of the |
| :mod:`django.core.mail` module and you can't import it directly. The code |
| below shows how to access this attribute correctly. |
| |
| Here's an example test that examines ``django.core.mail.outbox`` for length |
| and contents:: |
| |
| from django.core import mail |
| from django.test import TestCase |
| |
| class EmailTest(TestCase): |
| def test_send_email(self): |
| # Send message. |
| mail.send_mail('Subject here', 'Here is the message.', |
| 'from@example.com', ['to@example.com'], |
| fail_silently=False) |
| |
| # Test that one message has been sent. |
| self.assertEqual(len(mail.outbox), 1) |
| |
| # Verify that the subject of the first message is correct. |
| self.assertEqual(mail.outbox[0].subject, 'Subject here') |
| |
| As noted :ref:`previously <emptying-test-outbox>`, the test outbox is emptied |
| at the start of every test in a Django ``TestCase``. To empty the outbox |
| manually, assign the empty list to ``mail.outbox``:: |
| |
| from django.core import mail |
| |
| # Empty the test outbox |
| mail.outbox = [] |
| |
| Skipping tests |
| -------------- |
| |
| .. versionadded:: 1.3 |
| |
| The unittest library provides the ``@skipIf`` and ``@skipUnless`` |
| decorators to allow you to skip tests if you know ahead of time that |
| those tests are going to fail under certain conditions. |
| |
| For example, if your test requires a particular optional library in |
| order to succeed, you could decorate the test case with ``@skipIf``. |
| Then, the test runner will report that the test wasn't executed and |
| why, instead of failing the test or omitting the test altogether. |
| |
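| For example, a sketch that skips a test when a hypothetical optional |
| dependency is missing (``lxml`` is used purely for illustration):: |
| |
|     from django.utils import unittest |
| |
|     try: |
|         import lxml |
|     except ImportError: |
|         lxml = None |
| |
|     class ParserTests(unittest.TestCase): |
|         @unittest.skipIf(lxml is None, "lxml is not installed") |
|         def test_parse_document(self): |
|             pass  # test code that needs lxml goes here |
| |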
| To supplement these test skipping behaviors, Django provides two |
| additional skip decorators. Instead of testing a generic boolean, |
| these decorators check the capabilities of the database, and skip the |
| test if the database doesn't support a specific named feature. |
| |
| The decorators use a string identifier to describe database features. |
| This string corresponds to attributes of the database connection |
| features class. See the :class:`~django.db.backends.BaseDatabaseFeatures` |
| class for a full list of database features that can be used as a basis |
| for skipping tests. |
| |
| skipIfDBFeature |
| ~~~~~~~~~~~~~~~ |
| |
| Skip the decorated test if the named database feature is supported. |
| |
| For example, the following test will not be executed if the database |
| supports transactions (e.g., it would *not* run under PostgreSQL, but |
| it would under MySQL with MyISAM tables):: |
| |
| class MyTests(TestCase): |
| @skipIfDBFeature('supports_transactions') |
| def test_transaction_behavior(self): |
| # ... conditional test code |
| |
| skipUnlessDBFeature |
| ~~~~~~~~~~~~~~~~~~~ |
| |
| Skip the decorated test if the named database feature is *not* |
| supported. |
| |
| For example, the following test will not be executed if the database |
| supports transactions (e.g., it would run under PostgreSQL, but *not* |
| under MySQL with MyISAM tables):: |
| |
| class MyTests(TestCase): |
| @skipUnlessDBFeature('supports_transactions') |
| def test_transaction_behavior(self): |
| # ... conditional test code |
| |
| |
| Using different testing frameworks |
| ================================== |
| |
| Clearly, ``doctest`` and ``unittest`` are not the only Python testing |
| frameworks. While Django doesn't provide explicit support for alternative |
| frameworks, it does provide a way to invoke tests constructed for an |
| alternative framework as if they were normal Django tests. |
| |
| When you run ``./manage.py test``, Django looks at the :setting:`TEST_RUNNER` |
| setting to determine what to do. By default, :setting:`TEST_RUNNER` points to |
| ``'django.test.simple.DjangoTestSuiteRunner'``. This class defines the default Django |
| testing behavior. This behavior involves: |
| |
| #. Performing global pre-test setup. |
| |
| #. Looking for unit tests and doctests in the ``models.py`` and |
| ``tests.py`` files in each installed application. |
| |
| #. Creating the test databases. |
| |
| #. Running ``syncdb`` to install models and initial data into the test |
| databases. |
| |
| #. Running the unit tests and doctests that are found. |
| |
| #. Destroying the test databases. |
| |
| #. Performing global post-test teardown. |
| |
| If you define your own test runner class and point :setting:`TEST_RUNNER` at |
| that class, Django will execute your test runner whenever you run |
| ``./manage.py test``. In this way, it is possible to use any test framework |
| that can be executed from Python code, or to modify the Django test execution |
| process to satisfy whatever testing requirements you may have. |
| |
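| For instance, a minimal sketch of a custom runner (the module path and |
| class name are hypothetical) that simply defers to the default behavior:: |
| |
|     # myproject/test_runner.py |
|     from django.test.simple import DjangoTestSuiteRunner |
| |
|     class MyTestSuiteRunner(DjangoTestSuiteRunner): |
|         def run_tests(self, test_labels, extra_tests=None, **kwargs): |
|             # Hook in any custom setup, reporting or delegation to |
|             # another framework here before calling the default runner. |
|             return super(MyTestSuiteRunner, self).run_tests( |
|                 test_labels, extra_tests=extra_tests, **kwargs) |
| |
| It would then be activated by pointing the setting at it:: |
| |
|     TEST_RUNNER = 'myproject.test_runner.MyTestSuiteRunner' |
| |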
| .. _topics-testing-test_runner: |
| |
| Defining a test runner |
| ---------------------- |
| |
| .. versionchanged:: 1.2 |
| Prior to 1.2, test runners were a single function, not a class. |
| |
| .. currentmodule:: django.test.simple |
| |
| A test runner is a class defining a ``run_tests()`` method. Django ships |
| with a ``DjangoTestSuiteRunner`` class that defines the default Django |
| testing behavior. This class defines the ``run_tests()`` entry point, |
| plus a selection of other methods that are used by ``run_tests()`` to |
| set up, execute and tear down the test suite. |
| |
| .. class:: DjangoTestSuiteRunner(verbosity=1, interactive=True, failfast=True, **kwargs) |
| |
| ``verbosity`` determines the amount of notification and debug information |
| that will be printed to the console; ``0`` is no output, ``1`` is normal |
| output, and ``2`` is verbose output. |
| |
| If ``interactive`` is ``True``, the test suite has permission to ask the |
| user for instructions when the test suite is executed. An example of this |
| behavior would be asking for permission to delete an existing test |
| database. If ``interactive`` is ``False``, the test suite must be able to |
| run without any manual intervention. |
| |
| If ``failfast`` is ``True``, the test suite will stop running after the |
| first test failure is detected. |
| |
| Django will, from time to time, extend the capabilities of |
| the test runner by adding new arguments. The ``**kwargs`` declaration |
| allows for this expansion. If you subclass ``DjangoTestSuiteRunner`` or |
| write your own test runner, ensure that it accepts and handles the |
| ``**kwargs`` parameter. |
| |
| .. method:: DjangoTestSuiteRunner.run_tests(test_labels, extra_tests=None, **kwargs) |
| |
| Run the test suite. |
| |
| ``test_labels`` is a list of strings describing the tests to be run. A test |
| label can take one of three forms: |
| |
| * ``app.TestCase.test_method`` -- Run a single test method in a test |
| case. |
| * ``app.TestCase`` -- Run all the test methods in a test case. |
| * ``app`` -- Search for and run all tests in the named application. |
| |
| If ``test_labels`` has a value of ``None``, the test runner should |
| search for tests in all the applications in :setting:`INSTALLED_APPS`. |
| |
| ``extra_tests`` is a list of extra ``TestCase`` instances to add to the |
| suite that is executed by the test runner. These extra tests are run |
| in addition to those discovered in the modules listed in ``test_labels``. |
| |
| This method should return the number of tests that failed. |
| |
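| As a rough sketch of how this entry point is used (normally |
| ``./manage.py test`` takes care of this for you; ``'myapp'`` is an |
| illustrative application label):: |
| |
|     from django.test.simple import DjangoTestSuiteRunner |
| |
|     runner = DjangoTestSuiteRunner(verbosity=1, interactive=False) |
|     failures = runner.run_tests(['myapp']) |
|     print("%d test(s) failed" % failures) |
| |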
| .. method:: DjangoTestSuiteRunner.setup_test_environment(**kwargs) |
| |
| Sets up the test environment in preparation for testing. |
| |
| .. method:: DjangoTestSuiteRunner.build_suite(test_labels, extra_tests=None, **kwargs) |
| |
| Constructs a test suite that matches the test labels provided. |
| |
| ``test_labels`` is a list of strings describing the tests to be run. A test |
| label can take one of three forms: |
| |
| * ``app.TestCase.test_method`` -- Run a single test method in a test |
| case. |
| * ``app.TestCase`` -- Run all the test methods in a test case. |
| * ``app`` -- Search for and run all tests in the named application. |
| |
| If ``test_labels`` has a value of ``None``, the test runner should |
| search for tests in all the applications in :setting:`INSTALLED_APPS`. |
| |
| ``extra_tests`` is a list of extra ``TestCase`` instances to add to the |
| suite that is executed by the test runner. These extra tests are run |
| in addition to those discovered in the modules listed in ``test_labels``. |
| |
| Returns a ``TestSuite`` instance ready to be run. |
| |
| .. method:: DjangoTestSuiteRunner.setup_databases(**kwargs) |
| |
| Creates the test databases. |
| |
| Returns a data structure that provides enough detail to undo the changes |
| that have been made. This data will be provided to the ``teardown_databases()`` |
| function at the conclusion of testing. |
| |
| .. method:: DjangoTestSuiteRunner.run_suite(suite, **kwargs) |
| |
| Runs the test suite. |
| |
| Returns the result produced by running the test suite. |
| |
| .. method:: DjangoTestSuiteRunner.teardown_databases(old_config, **kwargs) |
| |
| Destroys the test databases, restoring pre-test conditions. |
| |
| ``old_config`` is a data structure defining the changes in the |
| database configuration that need to be reversed. It is the return |
| value of the ``setup_databases()`` method. |
| |
| .. method:: DjangoTestSuiteRunner.teardown_test_environment(**kwargs) |
| |
| Restores the pre-test environment. |
| |
| .. method:: DjangoTestSuiteRunner.suite_result(suite, result, **kwargs) |
| |
| Computes and returns a return code based on a test suite, and the result |
| from that test suite. |
| |
| |
| Testing utilities |
| ----------------- |
| |
| .. module:: django.test.utils |
| :synopsis: Helpers to write custom test runners. |
| |
| To assist in the creation of your own test runner, Django provides a number of |
| utility methods in the ``django.test.utils`` module. |
| |
| .. function:: setup_test_environment() |
| |
| Performs any global pre-test setup, such as installing instrumentation |
| for the template rendering system and setting up the dummy |
| ``SMTPConnection``. |
| |
| .. function:: teardown_test_environment() |
| |
| Performs any global post-test teardown, such as removing the black |
| magic hooks into the template system and restoring normal e-mail |
| services. |
| |
| The creation module of the database backend (``connection.creation``) |
| also provides some utilities that can be useful during testing. |
| |
| .. function:: create_test_db(verbosity=1, autoclobber=False) |
| |
| Creates a new test database and runs ``syncdb`` against it. |
| |
| ``verbosity`` has the same behavior as in ``run_tests()``. |
| |
| ``autoclobber`` describes the behavior that will occur if a |
| database with the same name as the test database is discovered: |
| |
| * If ``autoclobber`` is ``False``, the user will be asked to |
| approve destroying the existing database. ``sys.exit`` is |
| called if the user does not approve. |
| |
| * If ``autoclobber`` is ``True``, the database will be destroyed |
| without consulting the user. |
| |
| Returns the name of the test database that it created. |
| |
| ``create_test_db()`` has the side effect of modifying the value of |
| :setting:`NAME` in :setting:`DATABASES` to match the name of the test |
| database. |
| |
| .. function:: destroy_test_db(old_database_name, verbosity=1) |
| |
| Destroys the database whose name is stored in :setting:`NAME` in |
| :setting:`DATABASES`, and sets :setting:`NAME` to the provided name. |
| |
| ``verbosity`` has the same behavior as in ``run_tests()``. |
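| |
| Putting these utilities together, a rough sketch of a manual test-database |
| cycle might look like this (assuming a single ``default`` database; error |
| handling is omitted):: |
| |
|     from django.db import connection |
|     from django.test.utils import setup_test_environment, teardown_test_environment |
| |
|     setup_test_environment() |
|     # Remember the real database name so it can be restored afterwards. |
|     old_name = connection.settings_dict['NAME'] |
|     connection.creation.create_test_db(verbosity=1, autoclobber=True) |
|     try: |
|         pass  # run the tests here |
|     finally: |
|         connection.creation.destroy_test_db(old_name, verbosity=1) |
|         teardown_test_environment() |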