[Zope-dev] Re: New test runner work

Jim Fulton jim at zope.com
Tue Aug 23 10:22:01 EDT 2005

Stuart Bishop wrote:
> Jim Fulton wrote:
>>I'll note that I'm working on a newer test runner that I hope to use
>>in Zope 2.9 and 3.2.  The new test runner is a nearly complete rewrite to provide:
>>- A more flexible test runner that can be used for a variety of projects.
>>  The current test runner has been forked for ZODB, Zope 3, and Zope 2.
>>  That's why the Zope 3 version has features that are lacking in the Zope 2
>>  version.
>>- Support for "layers" of tests, so that it can handle unit tests and
>>  functional tests.
>>- A slightly better UI.
>>- Tests (of the test runner itself :)
> Hi Jim.
> I've been looking over this - fixing tests seems to take up a significant
> amount of our time, so I might have some interesting use cases.
> A large proportion of our tests use a relational database. Some of them want
> an empty database, some of them want just the schema created but no data,
> some of them want the schema created and the data. Some of them need the
> component architecture, and some of them don't. Some of them need one or
> more twisted servers running, some of them don't.
> Note that we mix and match. We have 4 different types of database fixture
> (none, empty, schema, populated), 2 different types of database connection
> mechanisms (psycopgda, psycopg), 2 types of CA fixture (none, loaded), and
> (currently) 4 states of external daemons needed. If we were to arrange this
> in layers, it would take 56 different layers, and this will double every
> time we add a new daemon, or add more database templates (e.g. fat for lots
> of sample data to go with the existing thin).
> As a way of supporting this better, instead of specifying a layer a test
> could specify the list of resources it needs:
> import testresources as r
> class FooTest(unittest.TestCase):
>     resources = [r.LaunchpadDb, r.Librarian, r.Component]
>     [...]
> class BarTest(unittest.TestCase):
>     resources = [r.EmptyDb]
> class BazTest(unittest.TestCase):
>     resources = [r.LaunchpadDb, r.Librarian]

This is pretty much how layers work.  Layers can be arranged in
a DAG (much like a traditional multiple-inheritance class graph).
So, you can model each resource as a layer and specific combinations
of resources as layers.  The test runner will attempt to run the layers
in an order that minimizes set-up and tear-down of layers.


> Some other nice things could be done with the resources:
> - If the setUp raises NotImplementedError (or whatever), tests using this
> resource are skipped (and reported as skipped). This nicely handles tests
> that should only be run in particular environments (Win32, Internet
> connection, python.net installed etc.)

That's a good idea.
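
A sketch of what those semantics could look like -- the layer and the toy
runner loop below are hypothetical, purely to illustrate the proposal:

```python
import sys

class Win32Layer:
    """Tests in this layer only make sense on Windows."""

    @classmethod
    def setUp(cls):
        if not sys.platform.startswith("win"):
            # Signal "skip everything in this layer", not failure.
            raise NotImplementedError("requires Win32")

def run_layer(layer, test_names):
    """Toy runner loop: NotImplementedError from a layer's setUp
    skips (and reports) every test in that layer."""
    try:
        layer.setUp()
    except NotImplementedError as reason:
        return [(name, "skipped: %s" % reason) for name in test_names]
    return [(name, "ran") for name in test_names]
```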

> - If the setUp raises another exception, all tests using this resource fail.
> The common case we see is 'database in use', where PostgreSQL does not let
> us destroy or use as a template a database that has open connections to it.
> Also useful for general sanity checking of the environment - no point
> running the tests if we know they are going to fail or have skewed results.


> - A resource should have a pretest and posttest hooks. pretest is used for
> lightweight resource specific initialization (e.g. setUp creates a fresh
> database from a dump and pretest initializes the connection pool). posttest
> can be used to ensure tests cleaned up properly or other housekeeping (e.g.
> issuing a rollback). This could also apply to layers in the current
> environment. This eliminates tedious boilerplate from testcases.

Ah, so the layer specifies additional per-test setUp and tearDown
that is used in addition to the test's own setUp and tearDown.  This
sounds reasonable.
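
If layers grew such hooks, usage might look like the following sketch.  The
hook names (testSetUp/testTearDown) and the driver loop are illustrative
assumptions, not an existing API:

```python
class DatabaseResetLayer:
    """setUp runs once per layer; testSetUp/testTearDown would run
    around every individual test (the pretest/posttest hooks)."""
    log = []

    @classmethod
    def setUp(cls):
        cls.log.append("create database")   # expensive, done once

    @classmethod
    def testSetUp(cls):
        cls.log.append("open connection")   # cheap, done per test

    @classmethod
    def testTearDown(cls):
        cls.log.append("rollback")          # enforced cleanup per test

def run_with_layer(layer, tests):
    """Toy driver showing when each hook would fire."""
    layer.setUp()
    for test in tests:
        layer.testSetUp()
        test()
        layer.testTearDown()
```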

> - A resource could provide useful data to the test runner. For example, if a
> resource says it doesn't use or lock any shared system resources, the test
> runner could decide to run tests in parallel.

> Although a less blue-sky use
> would be specifying a dependency on another resource.

This is handled by layers now. Layers have __bases__ -- layers are
built on other layers. That's why they are called layers. :)

> On another note, enforcing isolation of tests has been a continuous problem
> for us. For example, a developer registering a utility or otherwise mucking
> around with the global environment and forgetting to reset things in
> tearDown. This goes unnoticed for a while, and other tests get written that
> actually depend on this corruption. But at some point, the order the tests
> are run changes for some reason and suddenly test 500 starts failing. It
> turns out the global state has been screwed, and you have the fun task of
> tracking down which of the preceding 499 tests screwed it. I think this is
> a use case for some sort of global posttest hook.

How so?

> Perhaps this would be best
> done by allowing people to write wrappers around the one-true-testrunner?

Or we could simply provide such a hook, if it's needed.

I think this sort of thing is better handled with layers.
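
For example, an isolation check could live in a layer's per-test teardown
rather than a runner-global hook.  Everything below is hypothetical (the
registry dict stands in for the component architecture's global state, and
the testSetUp/testTearDown hook names are assumed):

```python
# Hypothetical global registry standing in for the component
# architecture's global state.
GLOBAL_UTILITIES = {}

class IsolationLayer:
    """Snapshot global state before each test and compare after it,
    so the test that leaks fails immediately -- instead of test 500
    failing and someone bisecting the preceding 499."""

    @classmethod
    def testSetUp(cls):
        cls._snapshot = dict(GLOBAL_UTILITIES)

    @classmethod
    def testTearDown(cls):
        leaked = set(GLOBAL_UTILITIES) - set(cls._snapshot)
        if leaked:
            raise AssertionError(
                "test leaked global registrations: %r" % sorted(leaked))
```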

> This seems to be the simplest way of allowing customization of the test runner:
> def pretest(...):
>    [...]
> def posttest(...):
>    [...]
> if __name__ == '__main__':
>     zope.test.testrunner.main(pretest=pretest,posttest=posttest)
> Other policy could also be configured - e.g. 'Run these tests or tests using
> this resource first. If any failures, don't bother running any more'. Or
> 'Stop running tests after 1 failure'.

Without a lot more thought and detail, it's not at all clear that the hooks
you've proposed would provide the simplest way to do this.

> These sorts of policies are important
> for us as we run our tests in an automated environment (we can't commit to
> our trunk. Instead, we send a request to a daemon which runs the test suite
> and commits on our behalf if the tests all pass).

Hm, seems rather restrictive...

I guess making the new test runner class-based would more easily allow this
sort of customization.

> Our full test suite currently takes 45 minutes to run and it is becoming an
> issue.

Hm, I can see why you'd like to parallelize things.  Of course, this only helps
you if you have enough hardware to benefit from the parallelization.

The test runner is already prepared to run layers in separate processes if
they can't be torn down.  I don't think it would take much to have an option
to run the layers in separate processes, or to arrange the layers into sets
run as separate processes.  Of course, because the new test runner makes it
easy to select test subsets in various ways, you could probably arrange the
parallelization with a controller script.
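
A controller script along these lines could fan the layers out to child
processes.  The command shown is a stand-in that merely echoes the layer
name; a real controller would invoke the test runner with whatever
layer-selection option it provides:

```python
import subprocess
import sys

def run_layers_in_parallel(layers):
    """Launch one child process per layer and collect exit codes.

    A real controller would replace the echo command with the test
    runner invocation, selecting one layer per child process.
    """
    procs = {}
    for layer in layers:
        cmd = [sys.executable, "-c", "print(%r)" % layer]
        procs[layer] = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    results = {}
    for layer, proc in procs.items():
        proc.communicate()                # wait for the child to finish
        results[layer] = proc.returncode  # 0 means that layer passed
    return results
```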

> We need to speed them up, determine slow tests in need of pruning or
> optimization, short circuit test runs and reduce test suite maintenance. So
> I should be able to get time to help (although I need to look closer at the
> SchoolTool and py.test runners to see if they are closer to what we need).



Jim Fulton           mailto:jim at zope.com       Python Powered!
CTO                  (540) 361-1714            http://www.python.org
Zope Corporation     http://www.zope.com       http://www.zope.org
