Contributing metrics

Hi all,

I know many people are working on metrics, and we appreciate this hard work! To contribute these metrics to MAF so they can be run as part of our standard processing, please read on.

Metrics need to be runnable within MAF, as part of rubin_sim. This means your metric needs to match our API (see the tutorial), and we need the actual code.

Once your metric is runnable within MAF, the way to actually get us the code is to create a PR (Pull Request) to add it to rubin_sim. This means: create a fork of rubin_sim, add your metric to your fork (in the maf/metrics or maf/mafContrib directories), and then issue a PR from your fork to merge it back into lsst/rubin_sim.

I would also like to request a few additional considerations:

  • Please do match our API. The tutorial can help explain what we’re looking for in a “Metric” code object, and this workbook notebook can help you build and test your own metric. There is also a minimal sketch after this list.
  • If your metric requires additional software packages as dependencies, please make a note of this in the PR.
    Because we don’t automatically install all of the dependencies on (e.g.) DataLab, missing dependencies can cause imports of rubin_sim to fail. Until we get these additional software dependencies installed (and installable automatically, in a future update of rubin_sim), the best workaround is to import these new dependencies inside the metric class itself, instead of at the top of your file. That way, the dependency will only be searched for when someone tries to use your metric, instead of whenever rubin_sim is imported. Here is an example: import george is hidden inside the metric class StaticProbesFoMSummaryMetric. (A short sketch of this pattern appears after this list.)
  • Please add some documentation to your metric; a docstring describing what it computes and how to interpret the results goes a long way.
  • If you go the extra step of adding a unit test for your metric, we will really appreciate it! The bonus is that if we need to reconfigure something in your metric, or something elsewhere changes (maybe your software dependency changes its API), we can much more easily diagnose and fix the problem without having to ask you to do it for us. Here is an example of a unit test for some metrics, and a small test is also sketched below. This works best if you explore a couple of cases where you might expect to get different answers.
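
To make the API concrete, here is a minimal sketch of what a metric can look like. It assumes the BaseMetric class described in the tutorial; the class name, the airmass column, and the exact argument spellings are illustrative and may differ slightly between rubin_sim versions:

```python
import numpy as np

from rubin_sim.maf.metrics import BaseMetric


class MeanAirmassMetric(BaseMetric):
    """Compute the mean airmass of the visits in each data slice."""

    def __init__(self, airmassCol='airmass', **kwargs):
        self.airmassCol = airmassCol
        # Tell MAF which opsim columns this metric needs.
        super().__init__(col=[airmassCol], units='airmass', **kwargs)

    def run(self, dataSlice, slicePoint=None):
        # dataSlice is a numpy structured array of the visits in this slice.
        if len(dataSlice) == 0:
            return self.badval
        return np.mean(dataSlice[self.airmassCol])
```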
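
If your metric needs an extra dependency, the deferred-import pattern mentioned above looks roughly like this (the metric name here is made up, and george stands in for whatever package you need):

```python
from rubin_sim.maf.metrics import BaseMetric


class GaussianProcessMetric(BaseMetric):  # hypothetical metric
    def __init__(self, col='observationStartMJD', **kwargs):
        # Import inside the class, not at the top of the module, so a
        # missing dependency only breaks this metric when it is used,
        # rather than breaking `import rubin_sim` for everyone.
        import george
        self.george = george
        super().__init__(col=col, **kwargs)

    def run(self, dataSlice, slicePoint=None):
        # ... use self.george to fit the visit times here ...
        return float(len(dataSlice))
```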
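
And a unit test can be as simple as building a tiny synthetic dataSlice by hand and checking a couple of cases with different expected answers. This sketch tests the illustrative MeanAirmassMetric above (the module name my_metrics is hypothetical):

```python
import unittest

import numpy as np


class TestMeanAirmassMetric(unittest.TestCase):
    def testMeanAirmass(self):
        from my_metrics import MeanAirmassMetric  # hypothetical module
        metric = MeanAirmassMetric()
        # A tiny synthetic dataSlice: a structured array with just the
        # column the metric needs.
        data = np.array([(1.0,), (1.5,), (2.0,)], dtype=[('airmass', float)])
        self.assertAlmostEqual(metric.run(data), 1.5)
        # A second case where we expect a different answer.
        data2 = np.array([(2.0,), (2.0,)], dtype=[('airmass', float)])
        self.assertAlmostEqual(metric.run(data2), 2.0)


if __name__ == '__main__':
    unittest.main()
```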

Thank you and we really appreciate your work on metrics!
We really do want to gather up as many of the community metrics as possible, particularly those that feature in cadence notes or white papers. When we generate new survey simulations, being able to immediately run these metrics and test the effects of survey strategy variations is immensely helpful.

If you have any questions or need help with this process, please do feel free to reach out - either here on community.lsst.org or on the #sims-maf Slack channel.

Lynne