Please note: this release brings some cool new features to sims_maf, aimed at analyzing v4 as well as v3 opsim runs!
The v4 opsim runs change the database schema in several ways (see the sketch after this list):
- the ‘Summary’ table becomes ‘SummaryAllProps’
- the names and contents of the tables recording slew details have changed (this won’t be relevant to most users, who typically care about total slew time rather than the details of the initial vs. final slew states)
- many column names have changed (much of this is capitalization: fieldID becomes fieldId, for example, but note that propID becomes proposalId)
- the quantities in the database are now in degrees rather than radians (this is an important change to note!)
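To make the degrees change concrete, here’s a minimal sketch of reading a v4 output table directly with plain sqlite3. The database filename is hypothetical, and the column names shown (observationStartMJD, fieldRA, fieldDec) are the typical v4 ones but could vary by run:

```python
import sqlite3
import numpy as np

# Hypothetical v4 output database name, for illustration only.
conn = sqlite3.connect('my_opsim_v4_run.db')
cursor = conn.cursor()

# v4: the table is 'SummaryAllProps' (not 'Summary'), the columns are
# camelCase, and all angles are stored in degrees.
cursor.execute('SELECT observationStartMJD, fieldRA, fieldDec '
               'FROM SummaryAllProps LIMIT 5')
for mjd, ra_deg, dec_deg in cursor.fetchall():
    # If downstream code still expects radians (as with v3 outputs),
    # convert explicitly.
    print(mjd, np.radians(ra_deg), np.radians(dec_deg))
conn.close()
```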
I’m sure many of you are now wondering when an opsim v4 database will be available. Soon, I promise.
But the consequence of the above is that MAF has had to adapt to processing both v3 and v4 runs. To this end, the traditional “standard” analysis scripts (schedulerValidation.py and sciencePerformance.py) now come in two flavors, one for v4 runs and one for v3 runs (e.g. schedulerValidation.py for v4 vs. schedulerValidation3.py for v3).
More interestingly (and the direction we plan to take going forward), we’ve introduced the concept of “batches”. Batches** are sets of metricBundles which we (or you) can grab and then run on an opsim simulated survey.
An example is the “metadataBasics” batch, which runs a series of measurements on a given column of the opsim output. For airmass, say, it measures the min, mean, median, max, rms, and 25th/75th percentiles (overall and per filter), generates a histogram of the airmass (overall and per filter), and generates skymaps of the min/mean/median/max airmass (overall and per filter). There are also batches to look at various aspects of the number of visits and of the coadded depth. More are available, and more are in development; we’d love your input on what you think would make a useful and coherent “batch”.
** batch is our name for these for now – better suggestions welcome.
The useful thing about batches is that they standardize the metricBundle output names across v3 and v4, so that we can compare v3 and v4 runs more easily. It’s also handy to be able to run a small set of metrics easily (and again, on either a v3 or v4 run); a sketch of doing so follows.
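As a rough illustration, here’s a minimal sketch of grabbing and running a batch through the usual MAF machinery (OpsimDatabase, MetricBundleGroup). Treat the batch-specific pieces as assumptions: the batches.metadataBasics function and the ColMapDict column-name helper reflect what’s in sims_maf development at the moment, but exact signatures and return values may differ, and the database filename is hypothetical.

```python
from lsst.sims.maf import batches, db
import lsst.sims.maf.metricBundles as metricBundles

# Hypothetical v4 run database; substitute your own file.
opsdb = db.OpsimDatabase('my_opsim_v4_run.db')

# A column-name map lets the same batch run on v3 or v4 schemas
# ('opsimv3' would select the v3 column names instead).
colmap = batches.ColMapDict('opsimv4')

# Grab the metadataBasics batch for the airmass column: a set of
# metricBundles (min/mean/median/max/rms/percentiles, histograms, skymaps).
bundleDict = batches.metadataBasics('airmass', colmap=colmap,
                                    runName='my_opsim_v4_run')

# Run the whole batch and write results and plots to disk.
resultsDb = db.ResultsDb(outDir='airmass_batch')
group = metricBundles.MetricBundleGroup(bundleDict, opsdb,
                                        outDir='airmass_batch',
                                        resultsDb=resultsDb)
group.runAll()
group.plotAll()
```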