I have a question concerning the footprint threshold used for source detection in the Pipelines. For context, a SourceDetectionTask constructs footprints from an exposure via the detectFootprints method. With all default settings, my understanding is that this method broadly works as follows: it smooths the background-subtracted exposure by convolving it with the PSF, estimates an average per-pixel noise level, multiplies that by 5 to establish a “5-sigma” threshold, and creates an initial set of footprints by selecting pixels in the smoothed image whose instrumental flux exceeds that threshold. These initial footprints are later expanded (and merged as necessary) to form the final footprints. DynamicDetectionTask uses a similar strategy, with an extra correction factor that incorporates estimated PSF characteristics. This corresponds to the general strategy outlined in doi:10.1093/pasj/psx080.
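For concreteness, here is a minimal sketch of that strategy as I understand it, written in plain NumPy/SciPy rather than against the Pipelines API; the function name, the Gaussian approximation to the PSF, and the parameter values are all placeholders of my own, not anything from the actual implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

def detect_footprints_sketch(image, psf_sigma=2.0, n_sigma=5.0):
    """Illustrative sketch only (not the Pipelines code): label above-threshold
    regions of a PSF-smoothed, background-subtracted image."""
    # 1. Smooth the image by convolving with (a Gaussian stand-in for) the PSF.
    smoothed = gaussian_filter(image, sigma=psf_sigma)

    # 2. Estimate a per-pixel noise level.  The point at issue is whether this
    #    should be measured on `image` or on `smoothed`; the Pipelines appear
    #    to measure it on the smoothed image (via a clipped standard deviation).
    noise = np.std(smoothed)

    # 3. Threshold at n_sigma times the noise estimate and group contiguous
    #    above-threshold pixels into initial footprints.
    mask = smoothed > n_sigma * noise
    footprints, n_footprints = label(mask)
    return footprints, n_footprints
```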
The Pipelines seem to diverge from that strategy, however, in how the average noise level is estimated. In both versions of detectFootprints, the smoothed image is passed to applyThreshold, which in turn passes it to makeThreshold, which ultimately calls lsst.afw.math.makeStatistics to compute a clipped standard deviation of the per-pixel values. Because the statistics are computed on the smoothed image, the resulting standard deviation is several times smaller than it would be for the original image. My understanding of section 4.7.1 of doi:10.1093/pasj/psx080 is that the proper value of sigma to use here is the per-pixel noise in the original, un-smoothed image.
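A quick numerical check illustrates the size of the effect (again plain NumPy/SciPy, not Pipelines code, and assuming uncorrelated Gaussian pixel noise and a Gaussian PSF of width 2 pixels, both of which are just illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(42)

# Pure white Gaussian noise standing in for a background-subtracted exposure.
sigma_pix = 10.0
image = rng.normal(0.0, sigma_pix, size=(1024, 1024))

# Smooth with a Gaussian "PSF" of width psf_sigma pixels.
psf_sigma = 2.0
smoothed = gaussian_filter(image, sigma=psf_sigma)

print(np.std(image))     # ~10.0: per-pixel noise of the original image
print(np.std(smoothed))  # ~1.4:  roughly sigma_pix / (2 * sqrt(pi) * psf_sigma)
```

For uncorrelated noise and a normalized Gaussian kernel of width σ_psf pixels, the noise in the smoothed image is roughly σ / (2√π σ_psf), so with σ_psf ≈ 2 the clipped standard deviation of the smoothed image comes out about a factor of 7 below that of the original, and a “5-sigma” threshold defined from it is correspondingly lower.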
I haven’t found any documentation suggesting that the noise level of the smoothed image is preferable to that of the original image. What is the rationale for this choice?