An image processing test will often involve:
- processing an image with some piece of code, which might take 5–10 seconds;
- a bunch of different tests on the output of that processing.
How should running the processing and running the tests be structured?
As a concrete example, here is a snippet of new testing code for a branch in processFile.  I wrote this branch partly to trigger a discussion of this issue.  I asked @parejkoj to review it because I was relatively certain we disagreed.  We had this discussion somewhat in the abstract on HipChat; I'm interested in people's answers for a specific, but illustrative, case.
The processing run is done in setUpClass so that the results are available to each test, and then there are separate tests.  There are currently 3 tests, but more should be added to test the content of the results more intelligently (a sketch of one such content test appears at the end of this post).
The obvious alternative would be to run the processing and then a series of asserts in one test, as sketched after the snippet below.
I welcome thoughts, discussions, and votes as we (perhaps largely I) start to go through and add more comprehensive tests to processFile and assorted obs_* packages.
import os
import shutil
import subprocess
import tempfile
import unittest

# testDataDirectory and testDataPackage are assumed to be set at module scope;
# their definitions are elided from this snippet.


def runSampleProcessFile(imageFile, outputCalexp, outputCatalog, outputCalibCatalog):
    return subprocess.check_output(["processfile.py", imageFile,
                                    "--outputCalexp", outputCalexp,
                                    "--outputCatalog", outputCatalog,
                                    "--outputCalibCatalog", outputCalibCatalog,
                                    ])

# Skip the whole class (including setUpClass) when the test data is unavailable;
# skipIf must decorate the class, not setUpClass, for the skip to take effect.
@unittest.skipIf(testDataDirectory is None, "%s is not available" % testDataPackage)
class TestProcessFileRun(unittest.TestCase):
    """Test that processFile runs.

    Ideally processFile.py would just be a call to
    python/lsst/processFile/processFile.py parseAndRun,
    but it's not structured that way right now.
    So instead we call the executable and ensure that
    the output files are generated and non-zero in size.
    """

    @classmethod
    def setUpClass(cls):
        dataPath = os.path.join(testDataDirectory, "data")
        testImageFile = "871034p_1_MI.fits"
        testOutputCalexpFile = "871034p_1_MI.calexp.fits"
        testOutputCatalogFile = "871034p_1_MI.src.fits"
        testOutputCalibCatalogFile = "871034p_1_MI.calib.fits"
        cls.imageFile = os.path.join(dataPath, testImageFile)
        cls.tmpPath = tempfile.mkdtemp()
        cls.outputCalexp = os.path.join(cls.tmpPath, testOutputCalexpFile)
        cls.outputCatalog = os.path.join(cls.tmpPath, testOutputCatalogFile)
        cls.outputCalibCatalog = os.path.join(cls.tmpPath, testOutputCalibCatalogFile)
        # We run processFile.py here in setUpClass so that the (expensive)
        # results are available to each individual test.
        runSampleProcessFile(cls.imageFile, cls.outputCalexp,
                             cls.outputCatalog, cls.outputCalibCatalog)

    @classmethod
    def tearDownClass(cls):
        if os.path.exists(cls.tmpPath):
            shutil.rmtree(cls.tmpPath)

    def assertFileNotEmpty(self, pathname):
        """Assert that the file at pathname exists and has non-zero size."""
        self.assertGreater(os.stat(pathname).st_size, 0)

    def testCalexpNonEmpty(self):
        self.assertFileNotEmpty(self.outputCalexp)

    def testCatalogNonEmpty(self):
        self.assertFileNotEmpty(self.outputCatalog)

    def testCalibCatalogNonEmpty(self):
        self.assertFileNotEmpty(self.outputCalibCatalog)
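For comparison, here is a minimal sketch of the single-test alternative mentioned above (the class and test names are mine, not from the branch; it reuses the same runSampleProcessFile helper and test data paths as the snippet):

@unittest.skipIf(testDataDirectory is None, "%s is not available" % testDataPackage)
class TestProcessFileRunMonolithic(unittest.TestCase):
    """Sketch of the alternative: run the processing and make all the
    asserts in sequence within a single test method.
    """

    def testProcessFileOutputs(self):
        tmpPath = tempfile.mkdtemp()
        self.addCleanup(shutil.rmtree, tmpPath)
        imageFile = os.path.join(testDataDirectory, "data", "871034p_1_MI.fits")
        outputCalexp = os.path.join(tmpPath, "871034p_1_MI.calexp.fits")
        outputCatalog = os.path.join(tmpPath, "871034p_1_MI.src.fits")
        outputCalibCatalog = os.path.join(tmpPath, "871034p_1_MI.calib.fits")
        runSampleProcessFile(imageFile, outputCalexp,
                             outputCatalog, outputCalibCatalog)
        # All checks live in one test, so the first failing assert
        # masks any later ones.
        for pathname in (outputCalexp, outputCatalog, outputCalibCatalog):
            self.assertGreater(os.stat(pathname).st_size, 0)

The trade-off is reporting granularity: with setUpClass, one processing run is amortized over several independently reported checks, while here a single failure hides the status of everything after it.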
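And as an example of the more intelligent content tests mentioned above, here is a sketch of a method that could be added to TestProcessFileRun.  It assumes astropy is available and that the source catalog is a FITS binary table with one row per detected source; the exact schema is my assumption, not something the snippet above establishes.

    def testCatalogHasSources(self):
        # Sketch only: assumes the catalog is a FITS binary table in HDU 1
        # with one row per source.
        from astropy.io import fits
        with fits.open(self.outputCatalog) as hdus:
            self.assertGreater(len(hdus), 1)
            self.assertGreater(len(hdus[1].data), 0)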