In DM-11514 I have got to a point where “everything” (`lsst_ci`, `lsst_distrib`, `lsst_obs`, `lsst_sims`) can build and have all tests pass if, for each test file, we replace the `python testfile.py` call with `pytest testfile.py`. This was the simplest possible change to the test execution environment, but it has the downside of a slightly increased overhead (possibly of order 0.5 seconds) when executing each file’s tests, because of the `pytest` infrastructure spinning up and the creation of the JUnit XML output files.
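Concretely, the per-file change amounts to something like the sketch below; the file name and report path are illustrative rather than the actual `sconsUtils` command line, and `--junitxml` is a standard `pytest` option:

```python
import subprocess

test_file = "tests/testExample.py"  # hypothetical test file name

# Before: one Python interpreter per test file (the current mode).
subprocess.run(["python", test_file])

# After: one pytest process per test file, also writing a JUnit XML report.
subprocess.run(["pytest", f"--junitxml={test_file}.xml", test_file])
```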
Here are some test execution statistics for `afw` (117 files; 1227 tests passed; 13 skipped; 2 CPUs):

- Running all tests with one `pytest` process: 1m 45s.
- Running all tests with one `pytest` process using `pytest-xdist` parallelization (auto mode: 4 “threads”): 1m 5s.
- Running all tests serially with one `pytest` process per test file: 10m 30s (!!).
- Running all tests serially with one `python` process per test file (the current mode): 9m 30s.
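For reference, the two single-process modes above correspond roughly to invocations like these (a sketch; the `-n` option comes from `pytest-xdist`):

```python
import subprocess

# One pytest process running every test file serially.
subprocess.run(["pytest", "tests/"])

# One pytest process with pytest-xdist workers ("auto" picks one per CPU).
subprocess.run(["pytest", "-n", "auto", "tests/"])
```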
These statistics are frightening; the import overhead is killing us, and `pytest` is adding about 10% overhead to those imports. Even in multi-threaded mode with `scons` managing the subprocesses, our builds are slower than they could be.
This run-time discrepancy suggests to me that I should not merge the current `sconsUtils` changes, but should make a more extensive change to trigger a single `pytest` process. This would result in the per-test-file `.failed` files disappearing (binary executables should be run by the `test_executables.py` test file and should not be run explicitly by `scons`) and being replaced by a JUnit XML file and a file containing the output from `pytest` itself (which in theory I could call `.failed` if we wanted to do that).
Jenkins will use the XML file to give a detailed test report.
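A minimal sketch of what that single-process run could look like (the file names and locations are assumptions, not a final design):

```python
import subprocess

# One pytest process for the whole package: a JUnit XML report for Jenkins
# plus a capture of pytest's own output (which could be named ".failed").
with open("tests/.tests/pytest.log", "w") as log:
    result = subprocess.run(
        ["pytest", "--junitxml=tests/.tests/pytest.xml", "tests/"],
        stdout=log,
        stderr=subprocess.STDOUT,
    )

# A non-zero return code means at least one test failed or errored.
print("pytest exit status:", result.returncode)
```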
Another advantage of using `pytest` is that we can automatically add `flake8` testing via a plugin if `pytest` is run in auto-discovery mode. I think the way forward would be:
- By default, collect all the Python test files and explicitly pass them to `pytest` (as sketched below).
- If a `SConscript` flag is set indicating that auto-discovery is fine, skip the collection and just run `pytest` without arguments. (If all the test files are renamed following RFC-229 before we enable this mode, then no flag is needed.)
- If a product is `flake8`-clean, add the `--flake8` option to a `setup.cfg` file to enable that (do not use a `SConscript` flag, so that `pytest` will work correctly from the command line).
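As a rough illustration of that logic (the `auto_discover` flag and the paths are hypothetical; the `--flake8` option comes from the `pytest-flake8` plugin and can be enabled per product via `addopts = --flake8` under `[tool:pytest]` in `setup.cfg`):

```python
import glob
import subprocess

# Hypothetical switch that a SConscript could set; after an RFC-229 rename
# of the test files it could be dropped entirely.
auto_discover = False

cmd = ["pytest", "--junitxml=tests/.tests/pytest.xml"]
if not auto_discover:
    # Explicitly pass every Python file found in tests/ (illustrative glob).
    cmd += sorted(glob.glob("tests/*.py"))

# With auto_discover=True, pytest finds the test files itself, and a
# flake8-clean product picks up --flake8 from its setup.cfg automatically.
subprocess.run(cmd)
```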
Does anyone see a problem with changing test execution in the more extreme manner described above? It should make builds faster.