
Effective time for the software coincidence #33

Closed
YoshikiOhtani opened this issue Mar 11, 2022 · 4 comments · Fixed by #52

Comments

@YoshikiOhtani
Collaborator

YoshikiOhtani commented Mar 11, 2022

While developing code for creating DL3 files, I ran into the question of how to compute the effective time for the software coincidence. The DAQ runs independently in each telescope system, and the dead time also differs between them. At the moment I use the elapsed time for computing the SED. Any ideas are very welcome.

@jsitarek
Collaborator

Hi @YoshikiOhtani, sorry for the late feedback on this, hopefully it is still useful.
The procedure in MAGIC assumes a fixed dead time per event of dt1 = 26 us (this comes from the DRS4 dead time and the number of slices that are used, but we also confirmed it with the cutoff of the time-difference distribution). A distribution of time differences (from fRawEvtHeader->GetTimeDiff(), because it is important to have the time difference to the last recorded event, and some events get lost during reconstruction) is used.
Then a fit is performed to this distribution, and the slope lambda (in Hz), i.e. the rate of stored events, is obtained.
The effective time is computed as elapsed_time / (lambda * dt1 + 1).
The elapsed time is computed by summing up the time differences (this time the ones between the events that survived the analysis chain!), as long as they do not produce a gap larger than 0.2 s.
I guess the same method would also work for LST1, but I'm not sure what is actually used there.
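
For illustration, a minimal Python sketch of the procedure just described (this is not the MARS implementation; the input array name, histogram binning, and fit range are assumptions, and `time_diffs` is assumed to be a numpy array of time differences in seconds):

```python
import numpy as np
from scipy.optimize import curve_fit

DEAD_TIME = 26e-6  # fixed dead time per event (26 us)
MAX_GAP = 0.2      # time differences above this are treated as gaps (s)

def magic_effective_time(time_diffs):
    """Effective time from an array of event time differences (seconds)."""
    # Histogram the time differences above the dead-time cutoff and fit
    # an exponential; its slope is the rate lambda of stored events (Hz).
    counts, edges = np.histogram(time_diffs, bins=100, range=(DEAD_TIME, 0.02))
    centers = 0.5 * (edges[:-1] + edges[1:])

    def model(t, norm, rate):
        return norm * np.exp(-rate * t)

    (norm, rate), _ = curve_fit(model, centers, counts, p0=(counts.max(), 3e3))

    # Elapsed time: sum of time differences, skipping gaps larger than 0.2 s.
    elapsed = time_diffs[time_diffs < MAX_GAP].sum()

    # Dead-time correction: effective time = elapsed / (lambda * dt1 + 1).
    return elapsed / (rate * DEAD_TIME + 1.0)
```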

If you want to do it precisely, the fraction of dead time can be computed independently for MAGIC and LST1: since the two readout systems are independent, and the rates are so different that showers will not introduce a meaningful correlation, I would say that something like this should work:

effective_MAGIC+LST1 = (effective_MAGIC / elapsed_MAGIC) * (effective_LST / elapsed_LST) * elapsed_MAGIC+LST1

However, since the fraction of the time that MAGIC loses due to dead time is very small, ~1%, I think you will end up with a number very similar to the LST dead-time correction alone.
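
Written out as code, the combination is just a product of live fractions (a sketch; all names are placeholders for per-telescope quantities):

```python
def coincidence_effective_time(eff_magic, elapsed_magic,
                               eff_lst, elapsed_lst, elapsed_joint):
    # Live fractions multiply because the two readout systems are independent.
    return (eff_magic / elapsed_magic) * (eff_lst / elapsed_lst) * elapsed_joint
```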

@YoshikiOhtani
Collaborator Author

Hi @jsitarek, thank you very much for your comment.

Regarding the LST total dead time, I found that the calculation is done with the following method:
https://github.com/cta-observatory/cta-lstchain/blob/5ea64b8a704a990ebc66f4cd8fe5a11b03610c9e/lstchain/reco/utils.py#L635

Here it uses delta_t, which is defined as the time difference between consecutive events, including shower, pedestal, and calibration laser events, and applies the formula <delta_t> = dead_time + 1/rate to get the rate (the lambda in your formula). I think it can be used only for LST, because the LST trigger rate (~5 kHz) is much higher than that of the interleaved events (= 200 Hz), so the exponential function derived from the Poisson process does not reach the cutoff from the interleaved events (which would appear at diff ~ 1/200 = 0.005 s). I actually calculated the ratio of the LST effective time to the elapsed time with this formula and got ~95%, so the total dead time seems to be 5 times larger than MAGIC's.
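
A sketch of that mean-delta_t method (not the actual lstchain code; `DEAD_TIME_LST` is an assumed value):

```python
import numpy as np

DEAD_TIME_LST = 7e-6  # assumed per-event dead time (s); not the lstchain constant

def lst_effective_time(delta_t, elapsed_time):
    """Effective time from the mean time difference between events."""
    # <delta_t> = dead_time + 1/rate  =>  rate = 1 / (<delta_t> - dead_time)
    rate = 1.0 / (np.mean(delta_t) - DEAD_TIME_LST)
    # Same dead-time correction as before: elapsed / (rate * dead_time + 1).
    return elapsed_time / (rate * DEAD_TIME_LST + 1.0)
```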

Then, regarding the MAGIC total dead time, I found the branch MRawEvtHeader.fTimeDiff in the calibrated-level data. It would be very useful for precisely calculating the dead time, but since the current MAGICEventSource does not read this information from the file, I think we need to modify the module. Anyway, since the LST dead time is larger and its calculation is relatively easy to implement, I will implement the LST one first.

@jsitarek
Collaborator

I think the method is basically the same as the MAGIC one; the difference is just that the rate is not obtained from a fit but from a plain average, which might be affected by the interleaved rate, although the effect of this is really minimal.
I made a small toy MC macro to check how it works, and it is fine. I guess it would also work fine for MAGIC: even though the relative fraction of interleaved events is larger for MAGIC, the fraction is still not very large, and the actual dead time is small, so even if there is some uncertainty on it, it will not produce any big effect.
In case you also want to play around, I'm attaching the ROOT macro that I wrote to test those dead times:
test_dt.cpp.txt
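
For readers without ROOT, here is a small Python toy in the same spirit (not a translation of the attached macro; all numbers are arbitrary choices). It simulates Poisson-distributed triggers, drops events arriving within the dead time of the last recorded event, and checks that the mean-delta_t correction recovers the expected live fraction:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
TRUE_RATE = 5e3    # true trigger rate (Hz)
DEAD_TIME = 26e-6  # fixed dead time per recorded event (s)
N_EVENTS = 500_000

# Event arrival times from exponential waiting times (Poisson process).
arrivals = np.cumsum(rng.exponential(1.0 / TRUE_RATE, N_EVENTS))

# Keep only events arriving more than DEAD_TIME after the last recorded one.
recorded = [arrivals[0]]
for t in arrivals[1:]:
    if t - recorded[-1] >= DEAD_TIME:
        recorded.append(t)
recorded = np.array(recorded)

delta_t = np.diff(recorded)
elapsed = recorded[-1] - recorded[0]

# Mean-delta_t estimate of the rate, then the dead-time correction.
rate = 1.0 / (delta_t.mean() - DEAD_TIME)
effective = elapsed / (rate * DEAD_TIME + 1.0)
print(f"live fraction: {effective / elapsed:.4f} "
      f"(expected {1 / (1 + TRUE_RATE * DEAD_TIME):.4f})")
```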

@YoshikiOhtani
Collaborator Author

Hi @jsitarek, OK, thank you for your checks, and it is good that it also works for MAGIC. Once the modification of ctapipe_io_magic is done, I will make a pull request implementing the dead-time calculations. I have already implemented the LST dead-time calculation in my local repository.
