I am currently working on a use case for the Gen 3 butler and pipetask analysis of BOT data, and I could use some guidance on implementing my code.
I am planning to run an analysis on a SourceCatalog and encapsulate the results in a custom Python class with read/write functions that save it as a FITS file. I would like the butler to recognize this new class as a valid dataset type, to be able to create a collection of datasets corresponding to the new class, and to associate it with the original source catalog, raw exposure, post-ISR exposure, etc. Is this possible, and if so, what are the code requirements for the custom Python class to interface properly with the butler?
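To make my intent concrete, the interaction I am hoping for looks roughly like the sketch below. The dataset type name, dimensions, storage class name, data ID, and `run_my_analysis` step are all placeholders, and I assume some storage class / formatter configuration would also be needed for the butler to handle the new class, which is part of what I am asking about.

```python
from lsst.daf.butler import Butler, DatasetType

# Writeable butler pointing at the BOT repo (path and run name are placeholders).
butler = Butler("/path/to/bot/repo", run="u/me/my_analysis")

# Register the new dataset type; "AnalysisCatalog" is a hypothetical storage
# class that I would presumably have to define and configure somewhere.
datasetType = DatasetType(
    "analysisCatalog",
    dimensions={"instrument", "detector", "exposure"},
    storageClass="AnalysisCatalog",
    universe=butler.registry.dimensions,
)
butler.registry.registerDatasetType(datasetType)

# Run the analysis on an existing source catalog and persist the result under
# the same data ID, so it stays associated with the src/raw/post-ISR datasets
# for that detector and exposure.
dataId = {"instrument": "LSSTCam", "detector": 94, "exposure": 12345}  # placeholder
src = butler.get("src", dataId)
result = run_my_analysis(src)  # hypothetical analysis step returning my new class
butler.put(result, "analysisCatalog", dataId)
```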
Currently, my ideal case would be a class that inherits from SourceCatalog but has additional code to handle a new set of information (a new HDU in the FITS file); a rough sketch of what I mean follows.
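Here is a rough sketch of the kind of class I have in mind. I have used composition around `lsst.afw.table.SourceCatalog` and plain `astropy.io.fits` for the extra HDU only because I am not sure how cleanly the wrapped catalog class can be subclassed; the class name, the `ANALYSIS` HDU name, and the structure of the extra table are placeholders.

```python
import numpy as np
from astropy.io import fits
from lsst.afw.table import SourceCatalog


class AnalysisCatalog:
    """A SourceCatalog plus an extra table of analysis results (placeholder name)."""

    def __init__(self, catalog, analysis):
        self.catalog = catalog    # lsst.afw.table.SourceCatalog
        self.analysis = analysis  # numpy structured array with the new information

    def writeFits(self, filename):
        # Write the catalog as usual, then append the analysis as an extra HDU.
        self.catalog.writeFits(filename)
        with fits.open(filename, mode="append") as hdus:
            hdus.append(fits.BinTableHDU(data=self.analysis, name="ANALYSIS"))

    @classmethod
    def readFits(cls, filename):
        # Read the catalog part with the stock reader, then pull out the extra HDU.
        catalog = SourceCatalog.readFits(filename)
        with fits.open(filename) as hdus:
            analysis = np.asarray(hdus["ANALYSIS"].data)
        return cls(catalog, analysis)
```

If inheriting directly from SourceCatalog is feasible, so that existing code that expects a catalog still works on the new class, that would be my preference over this kind of wrapper.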