Add Cat-B calibration onsite scripts #1147

Merged: 64 commits from ctaB_online_script into main on Nov 27, 2023

Conversation

FrancaCassol (Collaborator):

The scripts call the tool lstchain_create_cat_B_calibration_file and are the equivalent of the Cat-A onsite scripts. By default, they produce the Cat-B calibration files in the calibration data tree:

/fefs/aswg/data/real/monitoring/PixelCalibration/Cat-B/calibration/

Each calibration file contains several calibration entries, computed from the interleaved events and relative to the Cat-A calibration. The files can then be used in the dl1a_to_dl1b process to correct the calibration during the night.
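For illustration only, here is a minimal Python sketch of how a per-run output path under this tree could be assembled. The base directory is the one quoted above; the date/version sub-directories and the file-name pattern are assumptions made for this example, not the scripts' actual naming convention.

from pathlib import Path

# Base of the Cat-B calibration data tree (taken from the PR description).
CAT_B_TREE = Path("/fefs/aswg/data/real/monitoring/PixelCalibration/Cat-B/calibration")

def cat_b_calibration_file(date: str, run: int, version: str = "pro") -> Path:
    """Return an example Cat-B calibration file path for a given night and run.

    The sub-directory layout and the file name are illustrative assumptions.
    """
    return CAT_B_TREE / date / version / f"cat_B_calibration.Run{run:05d}.h5"

print(cat_b_calibration_file("20231127", 16000))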

codecov bot commented Jul 25, 2023:

Codecov Report

Attention: 218 lines in your changes are missing coverage. Please review.

Comparison is base (bb39662) 73.97% compared to head (045a5fd) 74.42%.
Report is 25 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1147      +/-   ##
==========================================
+ Coverage   73.97%   74.42%   +0.44%     
==========================================
  Files         124      129       +5     
  Lines       12647    13169     +522     
==========================================
+ Hits         9356     9801     +445     
- Misses       3291     3368      +77     
Files Coverage Δ
lstchain/calib/camera/calibration_calculator.py 94.82% <100.00%> (+3.39%) ⬆️
lstchain/calib/camera/pedestals.py 97.16% <ø> (+0.94%) ⬆️
lstchain/conftest.py 97.93% <100.00%> (+0.12%) ⬆️
lstchain/reco/r0_to_dl1.py 93.18% <100.00%> (+0.03%) ⬆️
lstchain/scripts/lstchain_data_r0_to_dl1.py 83.90% <100.00%> (+0.37%) ⬆️
lstchain/scripts/tests/test_lstchain_scripts.py 100.00% <100.00%> (ø)
lstchain/tests/test_lstchain.py 97.54% <100.00%> (+0.06%) ⬆️
lstchain/tests/test_onsite.py 100.00% <100.00%> (ø)
...chain/tools/tests/test_create_calibration_files.py 100.00% <100.00%> (ø)
...tools/tests/test_create_cat_B_calibration_files.py 100.00% <100.00%> (ø)
... and 15 more

... and 5 files with indirect coverage changes


@@ -77,7 +77,8 @@ class CalibrationCalculator(Component):

     hg_lg_ratio = traits.Float(
         1.,
-        help='HG/LG ratio applied if use_scaled_low_gain is True. In case of calibrated data the ratio should be 1.'
+        help='HG/LG ratio applied if use_scaled_low_gain is True. The ratio is ~1 for calibrated data, ~17.4 for uncalibrated data.'

Member:

Can we maybe change this into two options?

The hg_lg_ratio should be fixed, at least for LST, to 17.4.

It only needs to be applied for Cat-A.

So maybe it's simpler to leave it at 17.4 and have a switch, calibrated_input or similar, that is set to False by the Cat-A calibration script but to True by the Cat-B script?

Or even determine it automatically from the input, i.e. from whether an existing calibration file has been given?

FrancaCassol (Collaborator, Author):

We never used this option for Cat-A; it is instead useful for the Cat-B calibration, because pixels with low HV (due to high NSB) cannot be correctly calibrated with the LG channel. Given the current status of the code and data, still in the test phase, it seems to me that such a traitlet is enough to let experts handle all cases. I do not see the need for automation at this level (but a test could eventually be added at a higher level).
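For reference, here is a minimal, self-contained sketch of how such a configurable ratio can be declared as a ctapipe traitlet. GainRatioDemo is a toy component written for this example, not the actual CalibrationCalculator; only hg_lg_ratio and use_scaled_low_gain mirror names appearing in the diff above.

from ctapipe.core import Component
from ctapipe.core import traits

class GainRatioDemo(Component):
    """Toy component carrying the HG/LG ratio as a configurable traitlet."""

    use_scaled_low_gain = traits.Bool(
        False,
        help='If True, replace the low-gain charge with the scaled high-gain charge.'
    ).tag(config=True)

    hg_lg_ratio = traits.Float(
        1.,
        help='HG/LG ratio applied if use_scaled_low_gain is True. '
             '~1 for calibrated data, ~17.4 for uncalibrated LST data.'
    ).tag(config=True)

demo = GainRatioDemo()
print(demo.hg_lg_ratio)  # 1.0 unless overridden via configuration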

@morcuended (Member) left a comment:

Other than the creation of an empty DL2 file, the rest of the PR looks good to me overall. If someone else has an opinion, please speak up.

# if no events in dl2, e.g. for a bad time interval from the Cat-B calibration
if len(dl2) == 0:
    logger.warning("No events in dl2. Return.")
    return dl2
Member:

+1 to not write the empty DL2 file
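A self-contained toy sketch of the behaviour agreed on here: the selection returns the empty table early and the caller skips writing the DL2 file. The function name select_events and the intensity column are placeholders for this example, not the lstchain API.

import logging

import pandas as pd

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def select_events(dl2: pd.DataFrame) -> pd.DataFrame:
    """Toy event selection guarding against an empty input table."""
    # if no events in dl2, e.g. for a bad time interval from the Cat-B calibration
    if len(dl2) == 0:
        logger.warning("No events in dl2. Return.")
        return dl2
    return dl2[dl2["intensity"] > 100]

dl2 = pd.DataFrame({"intensity": []})
selected = select_events(dl2)
if len(selected) == 0:
    # skip the write instead of producing an empty DL2 file on disk
    logger.warning("Empty selection, skipping DL2 file creation.")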

# set all pixels unusable in periods with too many unusable pixels (car flashes, etc.)
catB_unusable_pixels[np.sum(catB_unusable_pixels, axis=2) > args.max_unusable_pixels] = True
# add good time interval (gti) column
catB_calib['gti'] = np.max(np.sum(catB_unusable_pixels, axis=2), axis=1) < args.max_unusable_pixels
Member:

A GTI is a good time interval. However, this is more a flag indicating that this particular monitoring point is valid or has a general problem.

You would build the DL3 GTI by looking at all possible sources of issues (including the calibration monitoring data), but I wouldn't call this a GTI here. Maybe just "is_valid"? We have xy_is_valid in many places in the ctapipe data structures.

FrancaCassol (Collaborator, Author):

This defines the periods (time intervals) in which the camera is OK from the calibration point of view (everything included). Honestly, I find GTI a clearer, more descriptive name for that, given also that it is in the calibration table and therefore clearly relates only to the calibration. "is_valid" seems less descriptive to me. Is there a document where this naming is defined?
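As a concrete illustration of the flagging logic quoted above, here is a self-contained numpy sketch with toy shapes (n_intervals, n_gains, n_pixels); max_unusable_pixels stands in for the script's command-line argument, and the simulated flash is invented for the example.

import numpy as np

rng = np.random.default_rng(0)
n_intervals, n_gains, n_pixels = 4, 2, 1855
max_unusable_pixels = 70

# toy unusable-pixel mask, one boolean per (interval, gain, pixel)
catB_unusable_pixels = rng.random((n_intervals, n_gains, n_pixels)) < 0.01
# simulate e.g. a car flash affecting many pixels in the first interval
catB_unusable_pixels[0, 0, :500] = True

# set all pixels unusable in intervals with too many unusable pixels
too_many = np.sum(catB_unusable_pixels, axis=2) > max_unusable_pixels
catB_unusable_pixels[too_many] = True

# one flag per interval: True when both gain channels stay below the threshold,
# i.e. a good time interval from the calibration point of view
gti = np.max(np.sum(catB_unusable_pixels, axis=2), axis=1) < max_unusable_pixels
print(gti)  # [False  True  True  True]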

FrancaCassol (Collaborator, Author):

Hi, I would like to finalize this PR. Are there other comments?


Returns
-------
sel_events: selected events
Member (on lstchain/reco/utils.py):

Suggested change:
-    sel_events: selected events
+    pandas DataFrame or astropy.table.QTable
+        Selected events
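For context, a toy function showing where the suggested numpy-style Returns section would sit in a docstring; the function itself is a placeholder, not the actual code in lstchain/reco/utils.py.

import pandas as pd

def filter_events(events, intensity_cut=100.0):
    """Apply a toy intensity cut to an event table.

    Returns
    -------
    pandas DataFrame or astropy.table.QTable
        Selected events
    """
    return events[events["intensity"] > intensity_cut]

print(filter_events(pd.DataFrame({"intensity": [50.0, 150.0]})))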

FrancaCassol (Collaborator, Author) commented Nov 27, 2023:

Hi @rlopezcoto, @maxnoe,
can we merge this PR?

@rlopezcoto merged commit e82bd68 into main on Nov 27, 2023 (7 of 9 checks passed).
@rlopezcoto deleted the ctaB_online_script branch on November 27, 2023 at 10:33.