Transient Brokers and characterization of variables aided by XSim

I would like to propose a discussion on the different simulators: how they work with each other, and with transient brokers, to cover the full pipeline. In particular, I am interested in seeing how various model light curves look at different LSST cadences, and whether a broker can start telling some of the simpler cases apart.
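As a flavor of the kind of exercise I have in mind, here is a minimal sketch (a toy stand-in, not a real CatSim model or an actual OpSim pointing history) of sampling the same model light curve at two hypothetical cadences:

```python
import numpy as np

def sample_light_curve(model, t_obs, mag_err=0.02, rng=None):
    """Sample a model light curve (a function of time, in days)
    at the epochs of a hypothetical cadence, adding Gaussian noise."""
    rng = rng or np.random.default_rng(42)
    mags = model(t_obs) + rng.normal(0.0, mag_err, size=len(t_obs))
    return np.column_stack([t_obs, mags])

# Toy periodic variable (stand-in for a real simulator model).
model = lambda t: 20.0 + 0.5 * np.sin(2 * np.pi * t / 3.7)

# Two invented cadences over one ~180-day season.
universal = np.arange(0, 180, 3.0)   # revisit every ~3 nights
rolling   = np.arange(0, 180, 1.0)   # faster revisit, e.g. a rolling patch

lc_universal = sample_light_curve(model, universal)
lc_rolling   = sample_light_curve(model, rolling)
print(len(lc_universal), len(lc_rolling))  # 60 vs 180 epochs
```

The real version would pull epochs and per-visit depths from an OpSim run and light-curve templates from CatSim, but even this toy form shows how cadence changes what a broker has to work with.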

(1) The big picture of brokering, and
(2) The nitty-gritty of methodology to bring about the big picture.

@fed, @drphilmarshall (please add others as you see fit or email them the URL - apparently, new users like me can mention only two others!)

I think a breakout on brokers is a good idea. NOAO is planning on sponsoring a focused workshop on transient brokers in 2017. Having a breakout in August would help to refine ideas for the workshop. Moreover, we need to broaden community involvement in broker development to make sure that we address as many interests as we can.

I think this is a good idea.

One question that was raised during the Tucson LSST/NOAO meeting this past month on Enabling Science in the Era of LSST is whether we should have a variable cadence. The main idea is that LSST could have an initial period with a faster-than-normal return cadence to more efficiently characterize fast-moving solar system transients. We could thus catalog these in the early period (~1 yr) of the survey and more quickly reduce the broker false-alarm rate for the life of the survey.

Another item that came up was how best to use DECam in conjunction with LSST for characterization of variables.

You can increase your solar system completeness by doing more intensive surveying in the first year, but you will not be able to reduce the discovery rate of new solar system objects beyond some level. So you can reduce your confusion with solar system objects, but you cannot remove it.
The thing is, small objects will come closer to Earth and thus be brighter and suddenly discoverable, plus some objects will drift into the LSST field of view due to their synodic period being longer than one year.
I realized that we have plots that show things like “completeness @ H=X” for various populations, but we don’t have a simple plot showing “how many new objects are we seeing as a function of time” (over all magnitudes). I’ll put that on the list to have a look at.
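In the meantime, the bookkeeping behind such a plot is simple; here is a sketch with an invented detection table (real inputs would come from an OpSim/MAF solar-system run): record each object's first-detection epoch, then count cumulatively.

```python
import numpy as np

# Toy detection records: (object_id, MJD of detection). A stand-in
# for a real solar-system detection table from a survey simulation.
rng = np.random.default_rng(0)
obj_ids = rng.integers(0, 500, size=5000)
mjds = rng.uniform(59853, 59853 + 365, size=5000)

# First-detection epoch per object.
first_mjd = {}
for oid, mjd in zip(obj_ids, mjds):
    if oid not in first_mjd or mjd < first_mjd[oid]:
        first_mjd[oid] = mjd

# Cumulative "new objects seen so far" as a function of time:
# plot cumulative_new against times to get the curve in question.
times = np.sort(np.array(list(first_mjd.values())))
cumulative_new = np.arange(1, len(times) + 1)
```

The shape of that curve (steep early, flattening later) is exactly what bears on how quickly the broker's solar-system confusion can be beaten down.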

@ljones, Thank you, that is a very good point. I would be very interested in seeing such a plot/analysis. Do you have a sense for what is more important regarding beating down the number of uncharacterized solar system transients: 1) ability to characterize orbits of identified objects and ability to cross-link with future epochs, or 2) “new” solar system objects?

What is the path to make this an official breakout session and get it on the calendar?
@bethwillman @ivezic

Hey @ashish - The SOC is meeting tomorrow to discuss this very question, so we should have an answer by Friday.

I see several topics under the “Broker” label. One is the nuts and bolts of classifying and filtering. Another is the relation - if any - between broker function and survey strategy. On the latter point, I think it is good to keep in mind the quantitative difference between early broker operation, when virtually none of the variables are known, and later operation (after ~1 year) when most variables are known. The usage of brokers will surely be very different in those eras. Another point concerns the diversity and multiplicity of brokers. Perhaps only one or a few will process the full alert stream. But there will probably be many that tap off prefiltered target lists for more flexible and customized processing. Are there enough folks thinking about that to populate a breakout session?

If we are able to solve the harder problem that we will face during the first year, the “later years” problem will have solved itself (though, as you say, tuning will have to be done, and proper, regular updating of priors will be important especially to flag rare objects).

Redundancy in the brokers will be crucial even if we have specialized brokers. We are starting to get people to think specifically about this, people have thought about it in the past (including yourself), and such a breakout can be a good opportunity to brainstorm about actual short-term action items. For the first year, when we may have to rely on external datasets, we can start thinking about mechanisms now (since the datasets actually available when LSST starts will be different). We can also set up data challenges with the existing datasets that come closest (though over far smaller areas) and thereby allow participating researchers to produce real science.

Variant thoughts?

Along the lines of this discussion, LSST’s SAC has been discussing event brokers as well, and has raised the question of whether there will be multiple brokers provided by the community and, if so, how many the Project can support.

It's great to hear from Tom that NOAO is going to have a workshop on brokers in 2017. I 100% agree with Tom's point that we need to broaden community involvement in broker development.

As @sridgway suggested, I also suggest that we try separating questions of (i) the impact of observing strategy on how well objects can be classified from (ii) what functionalities the community needs in a broker, how brokers can be stress-tested now, and how the community can be sure that brokers will exist and be maintained during operations.

I suggest that @ashish drafts a brief abstract for a proposed breakout (that he will lead :slight_smile: ) on “how various model light-curves look at different LSST cadence and if a(ny) broker could tell some of the simpler cases apart”, with an eye on a possible Data Challenge. I suggest that @tmatheson drafts a brief abstract for a proposed breakout that he will lead on broker functionalities, broker development, and encouraging more community involvement in broker development.

We are waiting to schedule the breakouts until after this week’s deadline for suggestions - It looks like we will have time for ~3 parallel breakouts x 4 breakout sessions.

Let me know what you think.

@bethwillman, as Steve R correctly asked, we may not have enough interested people to warrant two breakouts. Perhaps the two sets of points you list could be combined into one comprehensive/longer breakout. Since the NOAO folks have been doing the work already, I am happy for @tmatheson to lead. I can help.


I think we will have plenty of people interested in these topics. We wouldn’t want two breakouts that compete anyway, so the question really is: Can we fit all we want to talk about into a single session? If we have one session, it may be at a more introductory level. A second session would allow for a deeper exploration of some of the issues, but that may be more than we want to accomplish here.

@bethwillman, what do you think about one vs. two? One ~longish session?

I’m happy to organize things, but I will definitely want help from @ashish.

I need to speak with the rest of the SOC, but my instinct is that there will be both the physical space and the appetite for back-to-back breakouts on transient brokers (e.g. one longish session). It would be great if y’all could make a pitch for what that would look like.

Two back-to-back sessions without splitting people will be great. What duration are we talking about? The talks can be summaries, with attendees doing homework beforehand (aided by speakers sending slides in advance). That way we can have more time for discussion. Some topics (I have hopefully included everything stated in this discussion so far):
(1) Summary of simulators (catsim and opsim)
(2) Availability of priors and models (and how to get/generate more)
(3) Current status on our ability to separate two or more classes based on different inputs (NOAO and elsewhere)
(4) What will be needed during the first year (in terms of external data)
(5) How the fainter objects which have never been seen before will be treated during the first year

And a data challenge …
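To make item (3) and the data-challenge idea concrete, here is a purely illustrative sketch of separating two toy classes (the features, thresholds, and light-curve models are all invented; a real challenge would use actual survey-like data):

```python
import numpy as np

def features(t, m):
    """Two crude light-curve features: amplitude, and the fraction
    of successive fadings (magnitudes: larger = fainter)."""
    amp = m.max() - m.min()
    fading_frac = np.mean(np.diff(m) > 0)
    return amp, fading_frac

rng = np.random.default_rng(1)
t = np.arange(0, 60, 2.0)  # toy 60-day window, 2-day cadence

# Toy periodic variable vs. toy fading transient.
periodic = 20 + 0.3 * np.sin(2 * np.pi * t / 5) + rng.normal(0, 0.02, len(t))
transient = 19 + 0.05 * t + rng.normal(0, 0.02, len(t))  # fades steadily

amp_p, fade_p = features(t, periodic)
amp_t, fade_t = features(t, transient)

# A fading transient declines at nearly every epoch; a periodic
# variable declines only about half the time.
print(fade_p, fade_t)
```

Even features this crude separate the easy cases; the interesting part of a data challenge is where such separations break down as a function of cadence and depth.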

Whatever of this can be covered in other breakouts should be (so long as the two do not overlap).

This seems like the most appropriate breakout discussion for this comment.

Sorry for the lateness - I have been waiting for this news to be a bit more official, but I see this as an important opportunity and time has run out.

The NSF has (unofficially, as yet) informed me that they will fund LCOGT's MSIP proposal to provide time on our global telescope network to the U.S. astronomical community. The network is designed and operated to do follow-up of time-domain discoveries of objects brighter than about m=20-21, with photometry and low-resolution spectroscopy. Next year, we will be deploying a set of high-res spectrographs for bright stars. The NSF has asked that we devise a program for community use that focuses on getting the community ready for LSST and following up on current surveys. Thus, a general allocation of time is not what they want.

I am still trying to figure out how to structure this, but I want to use the LSST2016 meeting to announce the existence of this program and ask for discussion/recommendations about how it might operate to best address the goals.


Will this program be on line in time for the start of public release of transient alerts from ZTF?

2018 First ZTF data release; public transient alerts begin
2019 Public alerts of transients & transient candidate cutouts

Yes, we tentatively expect access to start April 1, 2017. The program was proposed for four years, but funding at a slightly lower level has motivated me to think that it will start April 2017 and end October 2020 (3.5 years). Each semester there will be about 1300 hours of 1m time and 220 hours of 2m time available.

So this maps very well to ZTF operations, to other existing transient-detecting facilities, and to the early LSST commissioning era, with the idea that in addition to doing science in its own right, it would inform subsequent decisions on followup resources for the LSST operations era…

Another item that came up was how best to use DECam in conjunction with LSST for characterization of variables.

This is a very interesting possibility. Was there someone with a knowledge of DECam plans?

There were both NOAO and CTIO representatives there. If my year+ memory serves me well, I don't think that there were any definitive DECam plans for the LSST era. Part of the reason for the exercise was to see which existing/new facilities were important to maintain/develop for the LSST era. I also recall that there was significant support for keeping DECam operational at CTIO during the LSST survey (this should be in the report, if I recall correctly).