testtools API documentation

Generated reference documentation for all the public functionality of testtools.

Please send patches if you notice anything confusing or wrong, or that could be improved.

testtools

Extensions to the standard Python unittest library.

testtools.clone_test_with_new_id(test, new_id)

Copy a TestCase, and give the copied test a new id.

This is only expected to be used on tests that have been constructed but not executed.

class testtools.CopyStreamResult(targets)

Copies all events it receives to multiple results.

This provides an easy facility for combining multiple StreamResults.

For TestResult the equivalent class was MultiTestResult.

class testtools.ConcurrentTestSuite(suite, make_tests, wrap_result=None)

A TestSuite whose run() calls out to a concurrency strategy.

run(result)

Run the tests concurrently.

This calls out to the provided make_tests helper, and then serialises the results so that result only sees activity from one TestCase at a time.

ConcurrentTestSuite provides no special mechanism to stop the tests returned by make_tests; it is up to those tests to honour the shouldStop attribute on the result object they are run with, which will be set if an exception is raised in the thread in which ConcurrentTestSuite.run is called.
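
The make_tests helper receives the wrapped suite and returns the chunks to run concurrently. A minimal sketch using stdlib unittest types, with a hypothetical one-chunk-per-test split:

```python
import unittest

def make_tests(suite):
    # Hypothetical partitioning strategy: run every test in its own
    # single-test suite, i.e. maximum concurrency. Real helpers may
    # instead group tests to balance load across workers.
    return [unittest.TestSuite([test]) for test in suite]

class Demo(unittest.TestCase):
    def test_a(self):
        pass
    def test_b(self):
        pass

suite = unittest.defaultTestLoader.loadTestsFromTestCase(Demo)
chunks = make_tests(suite)
```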

class testtools.ConcurrentStreamTestSuite(make_tests)

A TestSuite whose run() parallelises.

run(result)

Run the tests concurrently.

This calls out to the provided make_tests helper to determine the concurrency to use and to assign routing codes to each worker.

ConcurrentStreamTestSuite provides no special mechanism to stop the tests returned by make_tests; it is up to those tests to honour the shouldStop attribute on the result object they are run with, which will be set if the test run is to be aborted.

The tests are run with an ExtendedToStreamDecorator wrapped around a StreamToQueue instance. ConcurrentStreamTestSuite dequeues events from the queue and forwards them to result. Tests can therefore be either original unittest tests (or compatible tests), or new tests that emit StreamResult events directly.

Parameters:result – A StreamResult instance. The caller is responsible for calling startTestRun on this instance prior to invoking suite.run, and stopTestRun subsequent to the run method returning.
class testtools.DecorateTestCaseResult(case, callout, before_run=None, after_run=None)

Decorate a TestCase and permit customisation of the result for runs.

testtools.ErrorHolder(test_id, error, short_description=None, details=None)

Construct an ErrorHolder.

Parameters:
  • test_id – The id of the test.
  • error – The exc info tuple that will be used as the test’s error. This is inserted into the details as ‘traceback’ - any existing key will be overridden.
  • short_description – An optional short description of the test.
  • details – Outcome details as accepted by addSuccess etc.
class testtools.ExpectedException(exc_type, value_re=None, msg=None)

A context manager to handle expected exceptions.

def test_foo(self):
    with ExpectedException(ValueError, 'fo.*'):
        raise ValueError('foo')

will pass. If the raised exception has a type other than the specified type, it will be re-raised. If it has a ‘str()’ that does not match the given regular expression, an AssertionError will be raised. If no exception is raised, an AssertionError will be raised.

class testtools.ExtendedToOriginalDecorator(decorated)

Permit new TestResult API code to degrade gracefully with old results.

This decorates an existing TestResult and converts missing outcomes such as addSkip to older outcomes such as addSuccess. It also supports the extended details protocol. In all cases the most recent protocol is attempted first, and fallbacks only occur when the decorated result does not support the newer style of calling.

class testtools.ExtendedToStreamDecorator(decorated)

Permit using old TestResult API code with new StreamResult objects.

This decorates a StreamResult and converts old (Python 2.6 / 2.7 / Extended) TestResult API calls into StreamResult calls.

It also supports regular StreamResult calls, making it safe to wrap around any StreamResult.

current_tags

The currently set tags.

tags(new_tags, gone_tags)

Add and remove tags from the test.

Parameters:
  • new_tags – A set of tags to be added to the stream.
  • gone_tags – A set of tags to be removed from the stream.
testtools.iterate_tests(test_suite_or_case)

Iterate through all of the test cases in ‘test_suite_or_case’.

exception testtools.MultipleExceptions

Represents many exceptions raised from some operation.

Variables:args – The sys.exc_info() tuples for each exception.
class testtools.MultiTestResult(*results)

A test result that dispatches to many test results.

wasSuccessful()

Was this result successful?

Only returns True if every constituent result was successful.
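
The dispatch-and-aggregate behaviour can be sketched with stdlib result objects (an illustrative stand-in, not the testtools implementation):

```python
import unittest

class MiniMultiResult:
    # Sketch: aggregate wasSuccessful() across the constituent results;
    # the real class also forwards every outcome call to each result.
    def __init__(self, *results):
        self._results = results

    def wasSuccessful(self):
        return all(r.wasSuccessful() for r in self._results)

ok = unittest.TestResult()
failed = unittest.TestResult()
failed.failures.append((None, 'traceback text'))  # simulate one failure

combined = MiniMultiResult(ok, failed)
```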

class testtools.PlaceHolder(test_id, short_description=None, details=None, outcome='addSuccess', error=None, tags=None, timestamps=(None, None))

A placeholder test.

PlaceHolder implements much of the same interface as TestCase and is particularly suitable for being added to TestResults.

testtools.run_test_with(test_runner, **kwargs)

Decorate a test as using a specific RunTest.

e.g.:

@run_test_with(CustomRunner, timeout=42)
def test_foo(self):
    self.assertTrue(True)

The returned decorator works by setting an attribute on the decorated function. TestCase.__init__ looks for this attribute when deciding on a RunTest factory. If you wish to use multiple decorators on a test method, you must either make this one the top-most decorator, or write your decorators so that they update the wrapping function with the attributes of the wrapped function. The latter is the recommended style anyway. functools.wraps and twisted.python.util.mergeFunctionMetadata can help you do this.

Parameters:
  • test_runner – A RunTest factory that takes a test case and an optional list of exception handlers. See RunTest.
  • kwargs – Keyword arguments to pass on as extra arguments to ‘test_runner’.
Returns:

A decorator to be used for marking a test as needing a special runner.

class testtools.ResourcedToStreamDecorator(decorated)

Report testresources-related activity to StreamResult objects.

Implements the resource lifecycle TestResult protocol extension supported by the testresources.TestResourceManager class. At each stage of a resource’s lifecycle, a stream event with relevant details will be emitted.

Each stream event will have its test_id field set to the resource manager’s identifier (see testresources.TestResourceManager.id()) plus the method being executed (either ‘make’ or ‘clean’).

The test_status will be either ‘inprogress’ or ‘success’.

The runnable flag will be set to False.

class testtools.Tagger(decorated, new_tags, gone_tags)

Tag each test individually.

class testtools.TestCase(*args, **kwargs)

Extensions to the basic TestCase.

Variables:
  • exception_handlers – Exceptions to catch from setUp, runTest and tearDown. This list is able to be modified at any time and consists of (exception_class, handler(case, result, exception_value)) pairs.
  • force_failure – Force testtools.RunTest to fail the test after the test has completed.
  • run_tests_with – A factory to make the RunTest to run tests with. Defaults to RunTest. The factory is expected to take a test case and an optional list of exception handlers.
addCleanup(function, *arguments, **keywordArguments)

Add a cleanup function to be called after tearDown.

Functions added with addCleanup will be called in reverse order of adding after tearDown, or after setUp if setUp raises an exception.

If a function added with addCleanup raises an exception, the error will be recorded as a test error, and the next cleanup will then be run.

Cleanup functions are always called before a test finishes running, even if setUp is aborted by an exception.

addDetail(name, content_object)

Add a detail to be reported with this test’s outcome.

For more details see pydoc testtools.TestResult.

Parameters:
  • name – The name to give this detail.
  • content_object – The content object for this detail. See testtools.content for more detail.
addDetailUniqueName(name, content_object)

Add a detail to the test, but ensure its name is unique.

This method checks whether name conflicts with a detail that has already been added to the test. If it does, it will modify name to avoid the conflict.

For more details see pydoc testtools.TestResult.

Parameters:
  • name – The name to give this detail.
  • content_object – The content object for this detail. See testtools.content for more detail.
addOnException(handler)

Add a handler to be called when an exception occurs in test code.

This handler cannot affect what result methods are called, and is called before any outcome is called on the result object. An example use for it is to add some diagnostic state to the test details dict which is expensive to calculate and not interesting for reporting in the success case.

Handlers are called before the outcome (such as addFailure) that the exception has caused.

Handlers are called in first-added, first-called order, and if they raise an exception, that exception will propagate out of the test running machinery, halting test processing. Handlers should therefore avoid calling code that is likely to fail.

assertEqual(expected, observed, message='')

Assert that ‘expected’ is equal to ‘observed’.

Parameters:
  • expected – The expected value.
  • observed – The observed value.
  • message – An optional message to include in the error.
assertEquals(expected, observed, message='')

Assert that ‘expected’ is equal to ‘observed’.

Parameters:
  • expected – The expected value.
  • observed – The observed value.
  • message – An optional message to include in the error.
assertIn(needle, haystack, message='')

Assert that needle is in haystack.

assertIs(expected, observed, message='')

Assert that ‘expected’ is ‘observed’.

Parameters:
  • expected – The expected value.
  • observed – The observed value.
  • message – An optional message describing the error.
assertIsNone(observed, message='')

Assert that ‘observed’ is equal to None.

Parameters:
  • observed – The observed value.
  • message – An optional message describing the error.
assertIsNot(expected, observed, message='')

Assert that ‘expected’ is not ‘observed’.

assertIsNotNone(observed, message='')

Assert that ‘observed’ is not equal to None.

Parameters:
  • observed – The observed value.
  • message – An optional message describing the error.
assertNotIn(needle, haystack, message='')

Assert that needle is not in haystack.

assertRaises(excClass, callableObj, *args, **kwargs)

Fail unless an exception of class excClass is thrown by callableObj when invoked with arguments args and keyword arguments kwargs. If a different type of exception is thrown, it will not be caught, and the test case will be deemed to have suffered an error, exactly as for an unexpected exception.

assertThat(matchee, matcher, message='', verbose=False)

Assert that matchee is matched by matcher.

Parameters:
  • matchee – An object to match with matcher.
  • matcher – An object meeting the testtools.Matcher protocol.
Raises:

MismatchError – When matcher does not match matchee.
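
A matcher is any object with a match(matchee) method that returns None on success, or a mismatch object describing the failure. A minimal sketch of that protocol (simplified; the real mismatch API also carries details):

```python
class Mismatch:
    # Simplified mismatch carrying only a human-readable description.
    def __init__(self, description):
        self._description = description

    def describe(self):
        return self._description

class IsEven:
    # A custom matcher: match() returns None on success, a Mismatch
    # describing the failure otherwise.
    def match(self, matchee):
        if matchee % 2 == 0:
            return None
        return Mismatch('%r is not an even number' % (matchee,))

mismatch = IsEven().match(3)
```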

expectFailure(reason, predicate, *args, **kwargs)

Check that a test fails in a particular way.

If the test fails in the expected way, a KnownFailure is raised. If it succeeds, an UnexpectedSuccess is raised.

The expected use of expectFailure is as a barrier at the point in a test where the test would fail. For example:

    def test_foo(self):
        self.expectFailure("1 should be 0", self.assertNotEqual, 1, 0)
        self.assertEqual(1, 0)

If in the future 1 were to equal 0, the expectFailure call can simply be removed. This separation preserves the original intent of the test while it is in the expectFailure mode.

expectThat(matchee, matcher, message='', verbose=False)

Check that matchee is matched by matcher, but delay the assertion failure.

This method behaves similarly to assertThat, except that a failed match does not exit the test immediately. The rest of the test code will continue to run, and the test will be marked as failing after the test has finished.

Parameters:
  • matchee – An object to match with matcher.
  • matcher – An object meeting the testtools.Matcher protocol.
  • message – If specified, show this message with any failed match.
failUnlessEqual(expected, observed, message='')

Assert that ‘expected’ is equal to ‘observed’.

Parameters:
  • expected – The expected value.
  • observed – The observed value.
  • message – An optional message to include in the error.
failUnlessRaises(excClass, callableObj, *args, **kwargs)

Fail unless an exception of class excClass is thrown by callableObj when invoked with arguments args and keyword arguments kwargs. If a different type of exception is thrown, it will not be caught, and the test case will be deemed to have suffered an error, exactly as for an unexpected exception.

getDetails()

Get the details dict that will be reported with this test’s outcome.

For more details see pydoc testtools.TestResult.

getUniqueInteger()

Get an integer unique to this test.

Returns an integer that is guaranteed to be unique to this instance. Use this when you need an arbitrary integer in your test, or as a helper for custom anonymous factory methods.

getUniqueString(prefix=None)

Get a string unique to this test.

Returns a string that is guaranteed to be unique to this instance. Use this when you need an arbitrary string in your test, or as a helper for custom anonymous factory methods.

Parameters:prefix – The prefix of the string. If not provided, defaults to the id of the test.
Returns:A bytestring of ‘<prefix>-<unique_int>’.
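
The counter behaviour implied by the return format can be sketched as follows (illustrative only; the real method shares one counter per test instance):

```python
import itertools

class UniqueStringFactory:
    # Sketch of the documented contract: '<prefix>-<unique_int>', with
    # the prefix defaulting to the test's id.
    def __init__(self, test_id):
        self._test_id = test_id
        self._counter = itertools.count(1)

    def getUniqueString(self, prefix=None):
        if prefix is None:
            prefix = self._test_id
        return '%s-%d' % (prefix, next(self._counter))

factory = UniqueStringFactory('tests.test_foo.TestFoo.test_bar')
```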
onException(exc_info, tb_label='traceback')

Called when an exception propagates from test code.

See also: addOnException.

patch(obj, attribute, value)

Monkey-patch ‘obj.attribute’ to ‘value’ while the test is running.

If ‘obj’ has no attribute, then the monkey-patch will still go ahead, and the attribute will be deleted instead of restored to its original value.

Parameters:
  • obj – The object to patch. Can be anything.
  • attribute – The attribute on ‘obj’ to patch.
  • value – The value to set ‘obj.attribute’ to.
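
The restore-or-delete semantics can be sketched outside the test framework (the real method registers the undo via addCleanup; names here are illustrative):

```python
_MISSING = object()

def patch_attribute(obj, attribute, value):
    # Sketch: set the new value and return a callable that undoes it,
    # deleting the attribute if it did not exist beforehand.
    old = getattr(obj, attribute, _MISSING)
    setattr(obj, attribute, value)

    def restore():
        if old is _MISSING:
            delattr(obj, attribute)
        else:
            setattr(obj, attribute, old)
    return restore

class Config:
    timeout = 30

undo = patch_attribute(Config, 'timeout', 5)
patched_value = Config.timeout  # patched while "the test" runs
undo()                          # original value restored
```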
run_tests_with

alias of RunTest

skip(reason)

DEPRECATED: Use skipTest instead.

skipException

alias of SkipTest

skipTest(reason)

Cause this test to be skipped.

This raises self.skipException(reason). skipException is raised to permit a skip to be triggered at any point (during setUp or the testMethod itself). The run() method catches skipException and translates it into a call to the result object’s addSkip method.

Parameters:reason – The reason why the test is being skipped. This must support being cast into a unicode string for reporting.
useFixture(fixture)

Use fixture in a test case.

The fixture’s setUp will be called, and self.addCleanup(fixture.cleanUp) scheduled.

Parameters:fixture – The fixture to use.
Returns:The fixture, after setting it up and scheduling a cleanup for it.
class testtools.TestCommand(dist)

Command to run unit tests with testtools

class testtools.TestByTestResult(on_test)

Call something every time a test completes.

class testtools.TestResult(failfast=False, tb_locals=False)

Subclass of unittest.TestResult extending the protocol for flexibility.

This test result supports an experimental protocol for providing additional data with test outcomes. All the outcome methods take an optional dict ‘details’. If it is supplied, other detail parameters like ‘err’ or ‘reason’ should not be provided. The details dict is a mapping from names to MIME content objects (see testtools.content). This permits attaching tracebacks, log files, or even large objects like databases that were part of the test fixture. Until this API is accepted into upstream Python it is considered experimental: it may be replaced at any point by a newer version more in line with upstream Python. Compatibility would be aimed for in this case, but may not be possible.

Variables:skip_reasons – A dict of skip-reasons -> list of tests. See addSkip.
addError(test, err=None, details=None)

Called when an error has occurred. ‘err’ is a tuple of values as returned by sys.exc_info().

Parameters:details – Alternative way to supply details about the outcome. see the class docstring for more information.
addExpectedFailure(test, err=None, details=None)

Called when a test has failed in an expected manner.

Like with addSuccess and addError, stopTest should still be called.

Parameters:
  • test – The test that failed in an expected manner.
  • err – The exc_info of the error that was raised.
Returns:

None

addFailure(test, err=None, details=None)

Called when a failure has occurred. ‘err’ is a tuple of values as returned by sys.exc_info().

Parameters:details – Alternative way to supply details about the outcome. see the class docstring for more information.
addSkip(test, reason=None, details=None)

Called when a test has been skipped rather than running.

Like with addSuccess and addError, stopTest should still be called.

This must be called by the TestCase. ‘addError’ and ‘addFailure’ will not call addSkip, since they have no assumptions about the kind of errors that a test can raise.

Parameters:
  • test – The test that has been skipped.
  • reason – The reason for the test being skipped. For instance, u”pyGL is not available”.
  • details – Alternative way to supply details about the outcome. see the class docstring for more information.
Returns:

None

addSuccess(test, details=None)

Called when a test succeeded.

addUnexpectedSuccess(test, details=None)

Called when a test was expected to fail, but succeeded.

current_tags

The currently set tags.

done()

Called when the test runner is done.

Deprecated in favour of stopTestRun.

startTestRun()

Called before a test run starts.

New in Python 2.7. The testtools version resets the result to a pristine condition ready for use in another test run. Note that this is different from Python 2.7’s startTestRun, which does nothing.

stopTestRun()

Called after a test run completes.

New in Python 2.7.

tags(new_tags, gone_tags)

Add and remove tags from the test.

Parameters:
  • new_tags – A set of tags to be added to the stream.
  • gone_tags – A set of tags to be removed from the stream.
time(a_datetime)

Provide a timestamp to represent the current time.

This is useful when test activity is time delayed, or happening concurrently and getting the system time between API calls will not accurately represent the duration of tests (or the whole run).

Calling time() sets the datetime used by the TestResult object. Time is permitted to go backwards when using this call.

Parameters:a_datetime – A datetime.datetime object with TZ information or None to reset the TestResult to gathering time from the system.
wasSuccessful()

Has this result been successful so far?

If there have been any errors, failures or unexpected successes, return False. Otherwise, return True.

Note: This differs from standard unittest in that we consider unexpected successes to be equivalent to failures, rather than successes.

class testtools.TestResultDecorator(decorated)

General pass-through decorator.

This provides a base that other TestResults can inherit from to gain basic forwarding functionality.

class testtools.TextTestResult(stream, failfast=False, tb_locals=False)

A TestResult which outputs activity to a text stream.

class testtools.RunTest(case, handlers=None, last_resort=None)

An object to run a test.

RunTest objects are used to implement the internal logic involved in running a test. TestCase.__init__ stores _RunTest as the class of RunTest to execute. Passing the runTest= parameter to TestCase.__init__ allows a different RunTest class to be used to execute the test.

Subclassing or replacing RunTest can be useful to add functionality to the way that tests are run in a given project.

Variables:
  • case – The test case that is to be run.
  • result – The result object a case is reporting to.
  • handlers – A list of (ExceptionClass, handler_function) pairs for exceptions that should be caught if raised from the user code. Exceptions that are caught are checked against this list in first-to-last order. There is a catch-all of ‘Exception’ at the end of the list, so to add a new exception to the list, insert it at the front (which ensures that it will be checked before any existing base classes in the list). If you add multiple exceptions, some of which are subclasses of each other, add the most specific exceptions last (so they come before their parent classes in the list).
  • exception_caught – An object returned when _run_user catches an exception.
  • _exceptions – A list of caught exceptions, used to do the single reporting of error/failure/skip etc.
run(result=None)

Run self.case reporting activity to result.

Parameters:result – Optional testtools.TestResult to report activity to.
Returns:The result object the test was run against.
testtools.skip(reason)

A decorator to skip unit tests.

This is just syntactic sugar so users don’t have to change any of their unit tests in order to migrate to Python 2.7, which provides the @unittest.skip decorator.

testtools.skipIf(condition, reason)

A decorator to skip a test if the condition is true.

testtools.skipUnless(condition, reason)

A decorator to skip a test unless the condition is true.
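
These decorators mirror the unittest equivalents, so the stdlib versions show the same usage pattern (conditions here are illustrative):

```python
import sys
import unittest

class SkipExamples(unittest.TestCase):
    @unittest.skipIf(sys.maxsize < 2 ** 31, 'needs a 64-bit platform')
    def test_big_numbers(self):
        self.assertTrue(True)

    @unittest.skipUnless(sys.platform.startswith('linux'), 'Linux only')
    def test_linux_feature(self):
        self.assertTrue(True)

# Skipped tests are reported as skips, not failures, so the run is
# successful regardless of which conditions hold.
result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(SkipExamples).run(result)
```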

class testtools.StreamFailFast(on_error)

Call the supplied callback if an error is seen in a stream.

An example callback:

def do_something():
    pass
class testtools.StreamResult

A test result for reporting the activity of a test run.

Typical use

>>> result = StreamResult()
>>> result.startTestRun()
>>> try:
...     case.run(result)
... finally:
...     result.stopTestRun()

The case object will be either a TestCase or a TestSuite, and will generally make a sequence of calls like:

>>> result.status(self.id(), 'inprogress')
>>> result.status(self.id(), 'success')

General concepts

StreamResult is built to process events that are emitted by tests during a test run or test enumeration. The test run may be running concurrently, and even be spread out across multiple machines.

All events are timestamped to prevent network buffering or scheduling latency causing false timing reports. Timestamps are datetime objects in the UTC timezone.

A route_code is a unicode string that identifies where a particular test ran. This is optional in the API but very useful when multiplexing multiple streams together, as it allows identification of interactions between tests that were run on the same hardware or in the same test process. Generally actual tests never need to bother with this - it is added and processed by StreamResults that do multiplexing / run analysis. route_codes are also used to route stdin back to pdb instances.

The StreamResult base class does no accounting or processing, rather it just provides an empty implementation of every method, suitable for use as a base class regardless of intent.

startTestRun()

Start a test run.

This will prepare the test result to process results (which might imply connecting to a database or remote machine).

status(test_id=None, test_status=None, test_tags=None, runnable=True, file_name=None, file_bytes=None, eof=False, mime_type=None, route_code=None, timestamp=None)

Inform the result about a test status.

Parameters:
  • test_id – The test whose status is being reported. None to report status about the test run as a whole.
  • test_status

    The status for the test. There are two sorts of status events - interim and final. As many interim events as desired may be generated, but only one final event. After a final status event, any further file or status events from the same test_id+route_code may be discarded or associated with a new test by the StreamResult (but no exception will be thrown).

    Interim states:
    • None - no particular status is being reported, or status being reported is not associated with a test (e.g. when reporting on stdout / stderr chatter).
    • inprogress - the test is currently running. Emitted by tests when they start running and at any intermediary point they might choose to indicate their continual operation.
    Final states:
    • exists - the test exists. This is used when a test is not being executed. Typically this is when querying what tests could be run in a test run (which is useful for selecting tests to run).
    • xfail - the test failed but that was expected. This is purely informative - the test is not considered to be a failure.
    • uxsuccess - the test passed but was expected to fail. The test will be considered a failure.
    • success - the test has finished without error.
    • fail - the test failed (or errored). The test will be considered a failure.
    • skip - the test was selected to run but chose to be skipped, e.g. a test dependency was missing. This is purely informative - the test is not considered to be a failure.
  • test_tags – Optional set of tags to apply to the test. Tags have no intrinsic meaning - that is up to the test author.
  • runnable – Allows status reports to mark that they are for tests which are not able to be explicitly run. For instance, subtests will report themselves as non-runnable.
  • file_name – The name for the file_bytes. Any unicode string may be used. While there is no semantic value attached to the name of any attachment, the names ‘stdout’ and ‘stderr’ and ‘traceback’ are recommended for use only for output sent to stdout, stderr and tracebacks of exceptions. When file_name is supplied, file_bytes must be a bytes instance.
  • file_bytes – A bytes object containing content for the named file. This can be just one chunk of the file - emit further file events later to supply the rest. Must be None unless a file_name is supplied.
  • eof – True if this chunk is the last chunk of the file, any additional chunks with the same name should be treated as an error and discarded. Ignored unless file_name has been supplied.
  • mime_type – An optional MIME type for the file. stdout and stderr will generally be “text/plain; charset=utf8”. If None, defaults to application/octet-stream. Ignored unless file_name has been supplied.
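
A sink only needs to implement the methods it cares about, since the base class makes everything a no-op. An illustrative recorder showing the interim-then-final sequence described above (not the testtools implementation):

```python
class RecordingResult:
    # Records (test_id, test_status) pairs; other keyword arguments are
    # accepted and ignored, matching the permissive status() signature.
    def __init__(self):
        self.events = []

    def status(self, test_id=None, test_status=None, **kwargs):
        self.events.append((test_id, test_status))

result = RecordingResult()
result.status(test_id='tests.test_foo', test_status='inprogress')  # interim
result.status(test_id='tests.test_foo', test_status='success')     # final
```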
stopTestRun()

Stop a test run.

This informs the result that no more test updates will be received. At this point any test ids that have started and not completed can be considered failed-or-hung.

class testtools.StreamResultRouter(fallback=None, do_start_stop_run=True)

A StreamResult that routes events.

StreamResultRouter forwards received events to another StreamResult object, selected by a dynamic forwarding policy. Events where no destination is found are forwarded to the fallback StreamResult, or an error is raised.

Typical use is to construct a router with a fallback and then either create up front mapping rules, or create them as-needed from the fallback handler:

>>> router = StreamResultRouter()
>>> sink = doubles.StreamResult()
>>> router.add_rule(sink, 'route_code_prefix', route_prefix='0',
...     consume_route=True)
>>> router.status(
...     test_id='foo', route_code='0/1', test_status='uxsuccess')

StreamResultRouter has no buffering.

When adding routes (and for the fallback), whether startTestRun and stopTestRun are called is controlled by the ‘do_start_stop_run’ argument. The default is to call them for the fallback only. If a route is added after startTestRun has been called, and do_start_stop_run is True, then startTestRun is called immediately on the new route sink.

There is no a-priori defined lookup order for routes: if they are ambiguous the behaviour is undefined. Only a single route is chosen for any event.

add_rule(sink, policy, do_start_stop_run=False, **policy_args)

Add a rule to route events to sink when they match a given policy.

Parameters:
  • sink – A StreamResult to receive events.
  • policy – A routing policy. Valid policies are ‘route_code_prefix’ and ‘test_id’.
  • do_start_stop_run – If True then startTestRun and stopTestRun events will be passed onto this sink.
Raises:
  • ValueError – If the policy is unknown.
  • TypeError – If the policy is given arguments it cannot handle.

route_code_prefix routes events based on a prefix of the route code in the event. It takes a route_prefix argument to match on (e.g. ‘0’) and a consume_route argument, which, if True, removes the prefix from the route_code when forwarding events.

test_id routes events based on the test id. It takes a single argument, test_id. Use None to select non-test events.

class testtools.StreamSummary

A specialised StreamResult that summarises a stream.

The summary uses the same representation as the original unittest.TestResult contract, allowing it to be consumed by any test runner.

wasSuccessful()

Return False if any failure has occurred.

Note that incomplete tests can only be detected when stopTestRun is called, so that should be called before checking wasSuccessful.

class testtools.StreamTagger(targets, add=None, discard=None)

Adds or discards tags from StreamResult events.

class testtools.StreamToDict(on_test)

A specialised StreamResult that emits a callback as tests complete.

Top level file attachments are simply discarded. Hung tests are detected by stopTestRun and notified there and then.

The callback is passed a dict with the following keys:

  • id: the test id.
  • tags: The tags for the test. A set of unicode strings.
  • details: A dict of file attachments - testtools.content.Content objects.
  • status: One of the StreamResult status codes (including inprogress) or ‘unknown’ (used if only file events for a test were received...)
  • timestamps: A pair of timestamps - the first one received with this test id, and the one in the event that triggered the notification. Hung tests have None for the second timestamp. Timestamps are not compared - their ordering is purely the order received in the stream.

Only the most recent tags observed in the stream are reported.

class testtools.StreamToExtendedDecorator(decorated)

Convert StreamResult API calls into ExtendedTestResult calls.

This will buffer all calls for all concurrently active tests, and then flush each test as they complete.

Incomplete tests will be flushed as errors when the test run stops.

Non-test file attachments are accumulated into a test called ‘testtools.extradata’ and flushed at the end of the run.

class testtools.StreamToQueue(queue, routing_code)

A StreamResult which enqueues events as a dict to a queue.Queue.

Events have their route code updated to include the route code StreamToQueue was constructed with before they are submitted. If the event route code is None, it is replaced with the StreamToQueue route code, otherwise it is prefixed with the supplied code + a hyphen.
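
That prefixing rule is simple enough to sketch directly (function name is illustrative):

```python
def prefixed_route_code(queue_code, event_code):
    # Sketch of the documented rule: a missing event route code is
    # replaced by the queue's own code; otherwise the queue's code is
    # prepended with a hyphen separator.
    if event_code is None:
        return queue_code
    return '%s-%s' % (queue_code, event_code)
```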

startTestRun and stopTestRun are forwarded to the queue. Implementors that dequeue events back into StreamResult calls should take care not to call startTestRun / stopTestRun on other StreamResult objects multiple times (e.g. by filtering startTestRun and stopTestRun).

StreamToQueue is typically used by ConcurrentStreamTestSuite, which creates one StreamToQueue per thread, forwards status events to the StreamResult that ConcurrentStreamTestSuite.run() was called with, and uses the stopTestRun event to trigger calling join() on each thread.

Unlike ThreadsafeForwardingResult, which this supersedes, no buffering takes place - any event supplied to a StreamToQueue will be inserted into the queue immediately.

Events are forwarded as a dict with a key event which is one of startTestRun, stopTestRun or status. When event is status the dict also has keys matching the keyword arguments of StreamResult.status, otherwise it has one other key result which is the result that invoked startTestRun.

route_code(route_code)

Adjust route_code on the way through.

class testtools.TestControl

Controls a running test run, allowing it to be interrupted.

Variables:shouldStop – If True, tests should not run and should instead return immediately. Similarly, a TestSuite should check this between tests and, if set, stop dispatching new tests and return.
stop()

Indicate that tests should stop running.

class testtools.ThreadsafeForwardingResult(target, semaphore)

A TestResult which ensures the target does not receive mixed up calls.

Multiple ThreadsafeForwardingResults can forward to the same target result, and that target result will only ever receive the complete set of events for one test at a time.

This is enforced using a semaphore, which further guarantees that tests will be sent atomically even if the ThreadsafeForwardingResults are in different threads.

ThreadsafeForwardingResult is typically used by ConcurrentTestSuite, which creates one ThreadsafeForwardingResult per thread, each of which wraps the TestResult that ConcurrentTestSuite.run() is called with.

target.startTestRun() and target.stopTestRun() are called once for each ThreadsafeForwardingResult that forwards to the same target. If the target takes special action on these events, it should take care to accommodate this.

time() and tags() calls are batched to be adjacent to the test result and in the case of tags() are coerced into test-local scope, avoiding the opportunity for bugs around global state in the target.
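The buffer-then-forward-atomically idea can be illustrated with a minimal stdlib sketch (hypothetical names; not the real class):

```python
import threading

class ForwardingSketch:
    # Toy illustration: buffer events for one test, then forward them
    # atomically under a shared semaphore so the target never sees
    # interleaved events from different threads.
    def __init__(self, target, semaphore):
        self.target = target        # a plain list standing in for a TestResult
        self.semaphore = semaphore
        self._buffer = []

    def record(self, event):
        self._buffer.append(event)

    def flush(self):
        with self.semaphore:        # only one forwarder sends at a time
            self.target.extend(self._buffer)
            self._buffer = []

events = []
forwarder = ForwardingSketch(events, threading.Semaphore(1))
forwarder.record('startTest: test_foo')
forwarder.record('stopTest: test_foo')
forwarder.flush()
print(events)
```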

tags(new_tags, gone_tags)

See TestResult.

class testtools.TimestampingStreamResult(target)

A StreamResult decorator that assigns a timestamp when none is present.

This is convenient for ensuring events are timestamped.

testtools.try_import(name, alternative=None, error_callback=None)

Attempt to import name. If it fails, return alternative.

When supporting multiple versions of Python or optional dependencies, it is useful to be able to try to import a module.

Parameters:
  • name – The name of the object to import, e.g. os.path or os.path.join.
  • alternative – The value to return if no module can be imported. Defaults to None.
  • error_callback – If non-None, a callable that is passed the ImportError when the module cannot be loaded.
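The behaviour described above can be approximated with importlib; this is an illustrative sketch, not the actual testtools implementation:

```python
import importlib

def try_import_sketch(name, alternative=None, error_callback=None):
    # Try ever-shorter module prefixes so dotted object names such as
    # 'os.path.join' resolve to an attribute of the imported module.
    parts = name.split('.')
    for i in range(len(parts), 0, -1):
        try:
            obj = importlib.import_module('.'.join(parts[:i]))
        except ImportError as error:
            last_error = error
            continue
        for attr in parts[i:]:
            obj = getattr(obj, attr, None)
            if obj is None:
                return alternative
        return obj
    if error_callback is not None:
        error_callback(last_error)
    return alternative

print(try_import_sketch('no_such_module_xyz', alternative='fallback'))
```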
testtools.try_imports(module_names, alternative=<object object>, error_callback=None)

Attempt to import modules.

Tries to import the first module in module_names. If it can be imported, we return it. If not, we go on to the second module and try that. The process continues until we run out of modules to try. If none of the modules can be imported, either raise an exception or return the provided alternative value.

Parameters:
  • module_names – A sequence of module names to try to import.
  • alternative – The value to return if no module can be imported. If unspecified, we raise an ImportError.
  • error_callback – If non-None, called with the ImportError for each module that fails to load.
Raises:

ImportError – If none of the modules can be imported and no alternative value was specified.

testtools.unique_text_generator(prefix)

Generates unique text values.

Generates text values that are unique. Use this when you need arbitrary text in your test, or as a helper for custom anonymous factory methods.

Parameters:prefix – The prefix for text.
Returns:text that looks like ‘<prefix>-<text_with_unicode>’.
Return type:six.text_type

testtools.assertions

Assertion helpers.

testtools.assertions.assert_that(matchee, matcher, message='', verbose=False)

Assert that matchee is matched by matcher.

This should only be used when you need a function-based matcher; testtools.TestCase.assertThat is preferred and has more features.

Parameters:
  • matchee – An object to match with matcher.
  • matcher – An object meeting the testtools.Matcher protocol.
Raises:

MismatchError – When matcher does not match matchee.

testtools.matchers

All the matchers.

Matchers, a way to express complex assertions outside the testcase.

Inspired by ‘hamcrest’.

Matcher provides the abstract API that all matchers need to implement.

Bundled matchers are listed in __all__: a list can be obtained by running $ python -c ‘import testtools.matchers; print(testtools.matchers.__all__)’
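As a sketch of the Matcher protocol: a matcher's match() returns None on success and otherwise a mismatch object with describe() and get_details() methods. The names below are hypothetical, for illustration only:

```python
class _NotPositiveMismatch:
    # Hypothetical mismatch object following the documented protocol.
    def __init__(self, value):
        self.value = value

    def describe(self):
        return '%r is not positive' % (self.value,)

    def get_details(self):
        return {}

class IsPositive:
    # Hypothetical matcher: match() returns None on a match, or a
    # mismatch object describing the failure.
    def __str__(self):
        return 'IsPositive()'

    def match(self, actual):
        if actual > 0:
            return None  # a match is signalled by returning None
        return _NotPositiveMismatch(actual)

print(IsPositive().match(5))             # None
print(IsPositive().match(-3).describe())
```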

class testtools.matchers.AfterPreprocessing(preprocessor, matcher, annotate=True)

Matches if the value matches after passing through a function.

This can be used to aid in creating trivial matchers as functions, for example:

def PathHasFileContent(content):
    def _read(path):
        with open(path) as f:
            return f.read()
    return AfterPreprocessing(_read, Equals(content))
class testtools.matchers.AllMatch(matcher)

Matches if all provided values match the given matcher.

testtools.matchers.Always()

Always match.

That is:

self.assertThat(x, Always())

Will always match and never fail, no matter what x is. Most useful when passed to other higher-order matchers (e.g. MatchesListwise).

class testtools.matchers.Annotate(annotation, matcher)

Annotates a matcher with a descriptive string.

Mismatches are then described as ‘<mismatch>: <annotation>’.

classmethod if_message(annotation, matcher)

Annotate matcher only if annotation is non-empty.

class testtools.matchers.AnyMatch(matcher)

Matches if any of the provided values match the given matcher.

class testtools.matchers.Contains(needle)

Checks whether something is contained in another thing.

testtools.matchers.ContainsAll(items)

Make a matcher that checks whether a list of things is contained in another thing.

The matcher effectively checks that the provided sequence is a subset of the matchee.

class testtools.matchers.ContainedByDict(expected)

Match a dictionary for which this is a super-dictionary.

Specify a dictionary mapping keys (often strings) to matchers. This is the ‘expected’ dict. Any dictionary that matches this must have only these keys, and the values must match the corresponding matchers in the expected dict. Dictionaries that have fewer keys can also match.

In other words, any matching dictionary must be contained by the dictionary given to the constructor.

Does not check for strict super-dictionary. That is, equal dictionaries match.

class testtools.matchers.ContainsDict(expected)

Match a dictionary that contains a specified sub-dictionary.

Specify a dictionary mapping keys (often strings) to matchers. This is the ‘expected’ dict. Any dictionary that matches this must have at least these keys, and the values must match the corresponding matchers in the expected dict. Dictionaries that have more keys will also match.

In other words, any matching dictionary must contain the dictionary given to the constructor.

Does not check for strict sub-dictionary. That is, equal dictionaries match.
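With values restricted to plain equality, the semantics of ContainsDict and ContainedByDict reduce to the following sketch (hypothetical helper names, not the real matchers):

```python
def contains_dict(expected, observed):
    # ContainsDict semantics with Equals values: every expected key must
    # be present with an equal value; extra observed keys are allowed.
    return all(key in observed and observed[key] == value
               for key, value in expected.items())

def contained_by_dict(expected, observed):
    # ContainedByDict is the mirror image: observed may only use expected
    # keys, and any present values must be equal.
    return all(key in expected and expected[key] == value
               for key, value in observed.items())

print(contains_dict({'a': 1}, {'a': 1, 'b': 2}))      # True
print(contained_by_dict({'a': 1, 'b': 2}, {'a': 1}))  # True
```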

class testtools.matchers.DirContains(filenames=None, matcher=None)

Matches if the given directory contains files with the given names.

That is, is the directory listing exactly equal to the given files?
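The "listing exactly equal" check amounts to something like this sketch (hypothetical helper; a scratch directory is created for illustration):

```python
import os
import tempfile

def dir_listing_equals(path, filenames):
    # Sketch of DirContains semantics: the directory listing must equal
    # the given names, ignoring order.
    return sorted(os.listdir(path)) == sorted(filenames)

scratch = tempfile.mkdtemp()
for name in ('a.txt', 'b.txt'):
    open(os.path.join(scratch, name), 'w').close()
print(dir_listing_equals(scratch, ['b.txt', 'a.txt']))  # True
```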

testtools.matchers.DirExists()

Matches if the path exists and is a directory.

class testtools.matchers.DocTestMatches(example, flags=0)

See if a string matches a doctest example.

class testtools.matchers.EndsWith(expected)

Checks whether one string ends with another.

class testtools.matchers.Equals(expected)

Matches if the items are equal.

comparator()

eq(a, b) – Same as a==b.

class testtools.matchers.FileContains(contents=None, matcher=None)

Matches if the given file has the specified contents.

testtools.matchers.FileExists()

Matches if the given path exists and is a file.

class testtools.matchers.GreaterThan(expected)

Matches if the item is greater than the matcher's reference object.

comparator()

gt(a, b) – Same as a>b.

class testtools.matchers.HasPermissions(octal_permissions)

Matches if a file has the given permissions.

Permissions are specified and matched as a four-digit octal string.
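A sketch of how such a four-digit octal string can be derived for comparison (hypothetical helper; assumes a POSIX filesystem):

```python
import os
import tempfile

def octal_permissions(path):
    # Format the permission bits as the four-digit octal string that
    # HasPermissions is documented to match against.
    return '%04o' % (os.stat(path).st_mode & 0o7777)

fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o644)
perms = octal_permissions(path)
print(perms)  # '0644'
os.remove(path)
```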

class testtools.matchers.Is(expected)

Matches if the items are identical.

comparator()

is_(a, b) – Same as a is b.

testtools.matchers.IsDeprecated(message)

Make a matcher that checks that a callable produces exactly one DeprecationWarning.

Parameters:message – Matcher for the warning message.
class testtools.matchers.IsInstance(*types)

Matcher that wraps isinstance.

class testtools.matchers.KeysEqual(*expected)

Checks whether a dict has particular keys.

class testtools.matchers.LessThan(expected)

Matches if the item is less than the matcher's reference object.

comparator()

lt(a, b) – Same as a<b.

class testtools.matchers.MatchesAll(*matchers, **options)

Matches if all of the matchers it is created with match.

class testtools.matchers.MatchesAny(*matchers)

Matches if any of the matchers it is created with match.

class testtools.matchers.MatchesDict(expected)

Match a dictionary exactly, by its keys.

Specify a dictionary mapping keys (often strings) to matchers. This is the ‘expected’ dict. Any dictionary that matches this must have exactly the same keys, and the values must match the corresponding matchers in the expected dict.

class testtools.matchers.MatchesException(exception, value_re=None)

Match an exc_info tuple against an exception instance or type.

class testtools.matchers.MatchesListwise(matchers, first_only=False)

Matches if each matcher matches the corresponding value.

More easily explained by example than in words:

>>> from testtools.matchers import Equals
>>> MatchesListwise([Equals(1)]).match([1])
>>> MatchesListwise([Equals(1), Equals(2)]).match([1, 2])
>>> print(MatchesListwise([Equals(1), Equals(2)]).match([2, 1]).describe())
Differences: [
2 != 1
1 != 2
]
>>> matcher = MatchesListwise([Equals(1), Equals(2)], first_only=True)
>>> print(matcher.match([3, 4]).describe())
3 != 1
class testtools.matchers.MatchesPredicate(predicate, message)

Match if a given function returns True.

It is reasonably common to want to make a very simple matcher based on a function that you already have that returns True or False given a single argument (i.e. a predicate function). This matcher makes it very easy to do so. e.g.:

IsEven = MatchesPredicate(lambda x: x % 2 == 0, '%s is not even')
self.assertThat(4, IsEven)
testtools.matchers.MatchesPredicateWithParams(predicate, message, name=None)

Match if a given parameterised function returns True.

It is reasonably common to want to make a very simple matcher based on a function that you already have that returns True or False given some arguments. This matcher makes it very easy to do so. e.g.:

HasLength = MatchesPredicateWithParams(
    lambda x, y: len(x) == y, 'len({0}) is not {1}')
# This assertion will fail, as 'len([1, 2]) == 3' is False.
self.assertThat([1, 2], HasLength(3))

Note that unlike MatchesPredicate, MatchesPredicateWithParams returns a factory: you construct an actual matcher from it by calling it with the additional parameters.

The predicate function should take the object to match as its first parameter. Any additional parameters supplied when constructing a matcher are supplied to the predicate as additional parameters when checking for a match.

Parameters:
  • predicate – The predicate function.
  • message – A format string for describing mismatches.
  • name – Optional replacement name for the matcher.
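The factory behaviour can be sketched as follows (hypothetical names, not the real implementation):

```python
def predicate_with_params_sketch(predicate, message):
    # Returns a factory; calling the factory with extra parameters
    # produces a matcher-like object whose match() applies
    # predicate(actual, *params) and formats message on mismatch.
    def factory(*params):
        class _Matcher:
            def match(self, actual):
                if predicate(actual, *params):
                    return None
                return message.format(actual, *params)
        return _Matcher()
    return factory

HasLength = predicate_with_params_sketch(
    lambda x, y: len(x) == y, 'len({0}) is not {1}')
print(HasLength(2).match([1, 2]))  # None
print(HasLength(3).match([1, 2]))  # len([1, 2]) is not 3
```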
class testtools.matchers.MatchesRegex(pattern, flags=0)

Matches if the matchee is matched by a regular expression.

class testtools.matchers.MatchesSetwise(*matchers)

Matches if all the matchers match elements of the value being matched.

That is, each element in the ‘observed’ set must match exactly one matcher from the set of matchers, with no matchers left over.

The difference compared to MatchesListwise is that the order of the matchings does not matter.
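The "exactly one matcher per element, order-free" rule amounts to finding a perfect matching; for small sets it can be sketched by brute force (illustrative only):

```python
from itertools import permutations

def matches_setwise_sketch(predicates, observed):
    # Each observed element must satisfy exactly one predicate, with no
    # predicates left over; order does not matter (brute-force sketch).
    if len(predicates) != len(observed):
        return False
    return any(all(pred(item) for pred, item in zip(order, observed))
               for order in permutations(predicates))

def is_one(x): return x == 1
def is_two(x): return x == 2

print(matches_setwise_sketch([is_one, is_two], [2, 1]))  # True
print(matches_setwise_sketch([is_one, is_two], [1, 1]))  # False
```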

class testtools.matchers.MatchesStructure(**kwargs)

Matcher that matches an object structurally.

‘Structurally’ here means that attributes of the object being matched are compared against given matchers.

fromExample allows the creation of a matcher from a prototype object; modified versions can then be created with update.

byEquality creates a matcher in much the same way as the constructor, except that the matcher for each of the attributes is assumed to be Equals.

byMatcher creates a similar matcher to byEquality, but you get to pick the matcher, rather than just using Equals.

classmethod byEquality(**kwargs)

Matches an object where the attributes equal the keyword values.

Similar to the constructor, except that the matcher is assumed to be Equals.

classmethod byMatcher(matcher, **kwargs)

Matches an object where the attributes match the keyword values.

Similar to the constructor, except that the provided matcher is used to match all of the values.
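byEquality's attribute-by-attribute comparison reduces to the following sketch (hypothetical helper and Point type, for illustration):

```python
from collections import namedtuple

def by_equality_sketch(obj, **expected):
    # MatchesStructure.byEquality semantics: each keyword names an
    # attribute of obj whose value must equal the supplied value.
    return all(getattr(obj, name) == value
               for name, value in expected.items())

Point = namedtuple('Point', ['x', 'y'])
print(by_equality_sketch(Point(1, 2), x=1, y=2))  # True
print(by_equality_sketch(Point(1, 2), x=3))       # False
```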

testtools.matchers.Never()

Never match.

That is:

self.assertThat(x, Never())

Will never match and always fail, no matter what x is. Included for completeness with Always(), but if you find a use for this, let us know!

class testtools.matchers.NotEquals(expected)

Matches if the items are not equal.

In most cases, this is equivalent to Not(Equals(foo)). The difference only matters when testing __ne__ implementations.

comparator()

ne(a, b) – Same as a!=b.

class testtools.matchers.Not(matcher)

Inverts a matcher.

testtools.matchers.PathExists()

Matches if the given path exists.

Use like this:

self.assertThat('/some/path', PathExists())
class testtools.matchers.Raises(exception_matcher=None)

Match if the matchee raises an exception when called.

Exceptions which are not subclasses of Exception propagate out of the Raises.match call unless they are explicitly matched.

testtools.matchers.raises(exception)

Make a matcher that checks that a callable raises an exception.

This is a convenience function, exactly equivalent to:

return Raises(MatchesException(exception))

See Raises and MatchesException for more information.

class testtools.matchers.SamePath(path)

Matches if two paths are the same.

That is, the paths are equal, or they point to the same file but in different ways. The paths do not have to exist.
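"Pointing to the same file in different ways" typically means comparing normalised, resolved absolute paths, roughly as in this sketch (hypothetical helper, not the real matcher):

```python
import os

def same_path_sketch(path1, path2):
    # Sketch of SamePath semantics: expand the user directory, resolve
    # symlinks, and normalise before comparing; the paths need not exist.
    def normalise(p):
        return os.path.abspath(os.path.realpath(os.path.expanduser(p)))
    return normalise(path1) == normalise(path2)

print(same_path_sketch('a/b/..', 'a'))  # True
print(same_path_sketch('a', 'b'))       # False
```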

class testtools.matchers.StartsWith(expected)

Checks whether one string starts with another.

class testtools.matchers.TarballContains(paths)

Matches if the given tarball contains the given paths.

Uses TarFile.getnames() to get the paths out of the tarball.

class testtools.matchers.Warnings(warnings_matcher=None)

Match if the matchee produces warnings.

testtools.matchers.WarningMessage(category_type, message=None, filename=None, lineno=None, line=None)

Create a matcher that will match warnings.WarningMessages.

For example, to match captured DeprecationWarnings with a message about some foo being replaced with bar:

WarningMessage(DeprecationWarning,
               message=MatchesAll(
                   Contains('foo is deprecated'),
                   Contains('use bar instead')))
Parameters:
  • category_type (type) – A warning type, for example DeprecationWarning.
  • message – A matcher that will be evaluated against the warning’s message.
  • filename – A matcher that will be evaluated against the warning’s filename.
  • lineno – A matcher that will be evaluated against the warning’s line number.
  • line – A matcher that will be evaluated against the warning’s line of source code.

testtools.twistedsupport

Support for testing code that uses Twisted.

testtools.twistedsupport.succeeded(matcher)

Match a Deferred that has fired successfully.

For example:

fires_with_the_answer = succeeded(Equals(42))
deferred = defer.succeed(42)
assert_that(deferred, fires_with_the_answer)

This assertion will pass. However, if deferred had fired with a different value, or had failed, or had not fired at all, then it would fail.

Use this instead of twisted.trial.unittest.SynchronousTestCase.successResultOf().

Parameters:matcher – A matcher to match against the result of a Deferred.
Returns:A matcher that can be applied to a synchronous Deferred.
testtools.twistedsupport.failed(matcher)

Match a Deferred that has failed.

For example:

error = RuntimeError('foo')
fails_at_runtime = failed(
    AfterPreprocessing(lambda f: f.value, Equals(error)))
deferred = defer.fail(error)
assert_that(deferred, fails_at_runtime)

This assertion will pass. However, if deferred had fired successfully, had failed with a different error, or had not fired at all, then it would fail.

Use this instead of twisted.trial.unittest.SynchronousTestCase.failureResultOf().

Parameters:matcher – A matcher to match against the result of a failing Deferred.
Returns:A matcher that can be applied to a synchronous Deferred.
testtools.twistedsupport.has_no_result()

Match a Deferred that has not yet fired.

For example, this will pass:

assert_that(defer.Deferred(), has_no_result())

But this will fail:

>>> assert_that(defer.succeed(None), has_no_result())
Traceback (most recent call last):
  ...
  File "testtools/assertions.py", line 22, in assert_that
    raise MismatchError(matchee, matcher, mismatch, verbose)
testtools.matchers._impl.MismatchError: No result expected on <Deferred at ... current result: None>, found None instead

As will this:

>>> assert_that(defer.fail(RuntimeError('foo')), has_no_result())
Traceback (most recent call last):
  ...
  File "testtools/assertions.py", line 22, in assert_that
    raise MismatchError(matchee, matcher, mismatch, verbose)
testtools.matchers._impl.MismatchError: No result expected on <Deferred at ... current result: <twisted.python.failure.Failure <type 'exceptions.RuntimeError'>>>, found <twisted.python.failure.Failure <type 'exceptions.RuntimeError'>> instead
class testtools.twistedsupport.AsynchronousDeferredRunTest(case, handlers=None, last_resort=None, reactor=None, timeout=0.005, debug=False, suppress_twisted_logging=True, store_twisted_logs=True)

Runner for tests that return Deferreds that fire asynchronously.

Use this runner when you have tests that return Deferreds that will only fire if the reactor is left to spin for a while.

__init__(case, handlers=None, last_resort=None, reactor=None, timeout=0.005, debug=False, suppress_twisted_logging=True, store_twisted_logs=True)

Construct an AsynchronousDeferredRunTest.

Always use keyword syntax rather than positional, as the base class may add arguments in future; for compatibility with it, we have to insert them before the local parameters.

Parameters:
  • case (TestCase) – The TestCase to run.
  • handlers – A list of exception handlers (ExceptionType, handler) where ‘handler’ is a callable that takes a TestCase, a testtools.TestResult and the exception raised.
  • last_resort – Handler to call before re-raising uncatchable exceptions (those for which there is no handler).
  • reactor – The Twisted reactor to use. If not given, we use the default reactor.
  • timeout (float) – The maximum time allowed for running a test. The default is 0.005s.
  • debug – Whether or not to enable Twisted’s debugging. Use this to get information about unhandled Deferreds and left-over DelayedCalls. Defaults to False.
  • suppress_twisted_logging (bool) – If True, then suppress Twisted’s default logging while the test is being run. Defaults to True.
  • store_twisted_logs (bool) – If True, then store the Twisted logs that took place during the run as the ‘twisted-log’ detail. Defaults to True.
classmethod make_factory(reactor=None, timeout=0.005, debug=False, suppress_twisted_logging=True, store_twisted_logs=True)

Make a factory that conforms to the RunTest factory interface.

Example:

class SomeTests(TestCase):
    # Timeout tests after two minutes.
    run_tests_with = AsynchronousDeferredRunTest.make_factory(
        timeout=120)
class testtools.twistedsupport.AsynchronousDeferredRunTestForBrokenTwisted(case, handlers=None, last_resort=None, reactor=None, timeout=0.005, debug=False, suppress_twisted_logging=True, store_twisted_logs=True)

Test runner that works around Twisted brokenness re reactor junk.

There are many APIs within Twisted itself where a Deferred fires but leaves cleanup work scheduled for the reactor to do. Arguably, many of these are bugs. This runner iterates the reactor event loop a number of times after every test, in order to shake out these buggy-but-commonplace events.

class testtools.twistedsupport.SynchronousDeferredRunTest(case, handlers=None, last_resort=None)

Runner for tests that return synchronous Deferreds.

This runner doesn’t touch the reactor at all. It assumes that tests return Deferreds that have already fired.

class testtools.twistedsupport.CaptureTwistedLogs

Capture all the Twisted logs and add them as a detail.

Much of the time, you won’t need to use this directly, as AsynchronousDeferredRunTest captures Twisted logs when store_twisted_logs is set to True (which it is by default).

However, if you want to do custom processing of Twisted’s logs, then this class can be useful.

For example:

class TwistedTests(TestCase):
    run_tests_with = partial(
        AsynchronousDeferredRunTest, store_twisted_logs=False)

    def setUp(self):
        super(TwistedTests, self).setUp()
        twisted_logs = self.useFixture(CaptureTwistedLogs())
        # ... do something with twisted_logs ...
testtools.twistedsupport.assert_fails_with(d, *exc_types, **kwargs)

Assert that d will fail with one of exc_types.

The normal way to use this is to return the result of assert_fails_with from your unit test.

Equivalent to Twisted’s assertFailure.

Parameters:
  • d (Deferred) – A Deferred that is expected to fail.
  • exc_types – The exception types that the Deferred is expected to fail with.
  • failureException (type) – An optional keyword argument. If provided, will raise that exception instead of testtools.TestCase.failureException.
Returns:

A Deferred that will fail with an AssertionError if d does not fail with one of the exception types.

testtools.twistedsupport.flush_logged_errors(*error_types)

Flush errors of the given types from the global Twisted log.

Any errors logged during a test will be bubbled up to the test result, marking the test as erroring. Use this function to declare that logged errors were expected behavior.

For example:

try:
    1/0
except ZeroDivisionError:
    log.err()
# Prevent logged ZeroDivisionError from failing the test.
flush_logged_errors(ZeroDivisionError)
Parameters:error_types – A variable argument list of exception types.