I'm not sure I'm totally clear on what you're asking for.
It sounds like you have a series of (real) observations where you already know the (real) performance, and you have a metric that you think should approximate that performance, but you still need to figure out the fudge factors (or similar) that relate the metric output to the real performance.
So you have metadata from the real observations; why not just input those into MAF?
Your metric requires particular columns, and it sounds like you're maybe going to run this on one point in the sky, so you could just format your observation metadata into a numpy recarray with the required columns and then use the metric's run method directly. Kind of like this:
m = TestMetric()
# structured array: one row per observation, with the columns your metric needs
dataSlice = np.array([observation metadata], dtype=[(colname, np type), ...])
m.run(dataSlice)
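To make that concrete, here's a hypothetical example (the column names, dtypes, and values are made up; substitute whatever columns your metric actually requires):

import numpy as np

# Hypothetical case: a metric that only needs 5-sigma depth and filter per visit.
dataSlice = np.array([(24.5, 'r'), (24.1, 'r'), (23.9, 'i')],
                     dtype=[('fiveSigmaDepth', float), ('filter', 'U1')])
m = TestMetric()
value = m.run(dataSlice)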
Of course, maybe you're looking to run this over more than one point in the sky? If so, then you'll need to do a bit more.
In that case, I would get all of your observation metadata into a numpy recarray, set up a HealpixSlicer (as normal), set up your metric (as normal), then set up a MetricBundle and MetricBundleGroup (as normal, but pass opsdb=None).
Then, instead of calling metricbundlegroup.runAll(), call metricbundlegroup.runCurrent(None, simData=<your recarray>).
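If it helps, a rough sketch of that workflow might look like the following (import paths and keyword names are from memory of the sims_maf-style API and may differ slightly in your MAF version; the column names, nside, and values are placeholders, and TestMetric stands in for your own metric):

import numpy as np
import lsst.sims.maf.slicers as slicers
import lsst.sims.maf.metricBundles as metricBundles

# One row per real observation: the columns your metric needs, plus the pointing
# columns the HealpixSlicer uses (fieldRA / fieldDec by default; check whether
# your MAF version expects degrees or radians).
simData = np.array([(10.0, -30.0, 24.5), (10.1, -30.1, 24.1)],
                   dtype=[('fieldRA', float), ('fieldDec', float),
                          ('fiveSigmaDepth', float)])

metric = TestMetric()                     # your metric, as normal
slicer = slicers.HealpixSlicer(nside=64)  # as normal
bundle = metricBundles.MetricBundle(metric, slicer)
# second argument is the opsim database connection; None, since we supply the data
bgroup = metricBundles.MetricBundleGroup({'test': bundle}, None, outDir='temp')
bgroup.runCurrent(None, simData=simData)  # instead of bgroup.runAll()
# the per-healpixel results then live in bundle.metricValues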
Would that work?
(I'm just thinking that using your actual metadata might work even better than using simulated metadata which is close but not quite the same. Presumably you already know any translations needed, if you can choose a subset of opsim pointings that match.)
(Also note: numpy recarrays aren't terrible to set up by hand, but if you already have your data in a pandas dataframe because those are easy to read from CSV or other sources, you can turn that into a recarray just by using the to_records method on the dataframe; to_numpy would drop the column names, which MAF needs.)
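For example (the file name and its columns are just placeholders):

import pandas as pd

# hypothetical CSV of observation metadata, with whatever columns your metric needs
df = pd.read_csv('my_observations.csv')
# to_records(index=False) returns a numpy record array with named fields,
# which is what run() / runCurrent() expect
simData = df.to_records(index=False)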