Dear all,
I was wondering if I could ask a few questions about the LSST data reduction pipelines.
For reference, I have no affiliation with LSST but have recently been reading some of the technical papers out of interest — I am a physics undergraduate with some background in software instrumentation and image reconstruction for radio interferometers (e.g., arxiv.org/abs/1708.00720).
Here are my questions:
- Does the computational cluster for the real-time data reduction pipelines have a well-defined set of hardware yet? If so, does it include GPUs? What is the estimated data rate per node?
- How much automation will there be between LSST transient detection and, e.g., LCO follow-up?
- If there is some sense of a “priority metric,” how is the observing priority (for LCO) defined from the observations themselves? That is, will there be real-time data analysis to classify transients? If so, does that mean more computational power implies more accurate priority levels?
- Referencing Ivezić et al. (2016): “Cadence optimization: are two visits per night really needed? Would perhaps a substantial increase in the computing power solve the association problem with just a single detection per night?”
- To clarify, does “computing power” here refer to the image-processing power needed to link sources along predicted trajectories from limited data (trajectories that would presumably be easier to fit with more than one visit per night)?
- I have read some mentions of compressive sampling being used for the analysis of transient variability. Does this mean that the cadence optimization will involve some level of explicit randomness in the time intervals between visits, so as to satisfy the incoherence requirement?
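To make the incoherence question above concrete, here is a small toy sketch I put together (entirely my own illustration, not based on any LSST code; the frequencies, cadences, and seed are arbitrary). It compares the “dirty” spectrum of a sinusoidal variable sampled once per night on a uniform grid versus at the same number of randomly drawn times — with uniform sampling the true frequency is aliased and indistinguishable from a low-frequency alias, while random sampling breaks that degeneracy:

```python
import numpy as np

rng = np.random.default_rng(0)
f_true = 3.3  # hypothetical variability frequency, cycles/day

# Same number of visits over 30 nights: uniform (one per night) vs random times
t_uniform = np.arange(0.0, 30.0, 1.0)
t_random = np.sort(rng.uniform(0.0, 30.0, 30))

def dirty_spectrum(t, freqs):
    """|sum_k y(t_k) exp(-2*pi*i*f*t_k)| over a grid of trial frequencies."""
    y = np.sin(2 * np.pi * f_true * t)
    return np.abs(np.exp(-2j * np.pi * freqs[:, None] * t[None, :]) @ y)

freqs = np.linspace(0.1, 5.0, 500)
p_uni = dirty_spectrum(t_uniform, freqs)
p_rnd = dirty_spectrum(t_random, freqs)

# With 1/day sampling, f_true = 3.3 and its alias at 0.3 cycles/day are
# degenerate; with random sampling the true peak stands out above that alias.
i_true = np.argmin(np.abs(freqs - f_true))
i_alias = np.argmin(np.abs(freqs - 0.3))
```

My (possibly naive) reading is that this is the same incoherence argument as in compressed sensing: random sampling spreads the aliasing power into low-level sidelobes instead of concentrated spurious peaks, which is why I wondered whether the cadence would be deliberately randomized.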
Thank you in advance.
Best regards,
Miles Cranmer