
New calibration data tree and onsite scripts #724

Merged (93 commits, Oct 26, 2021)
Conversation

@FrancaCassol (Collaborator) commented Jul 23, 2021

This PR includes the following points:

  1. It introduces a pixel calibration data tree ("/fefs/data/rea/monitoring/PixelCalibration") which will contain all the present pixel calibration files (for both DRS4 and PMTs)
  2. It simplifies the onsite scripts used to fill this tree, so that the scripts can easily be called automatically online
  3. It adds two new scripts: one to reconstruct the filter scan data, the other to fit those data in order to obtain the systematic noise to be used in the calibration procedure
  4. It includes the systematic noise in the F-factor calibration formula

A schematic description of the data tree and of the scripts is given in this note:
CameraCalibrationNote.pdf
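For point 4, the F-factor method relates the charge variance to the number of photoelectrons; below is a minimal sketch of how a systematic noise term could enter the formula. This is illustrative only: the function name, the default excess-noise factor, and the quadratic form of the systematic term B·Q are assumptions, not lstchain's actual implementation (see the attached note for the real formula).

```python
def n_pe_ffactor(mean_q, var_q, var_ped, f2=1.222, b=0.0):
    """Estimate the number of photoelectrons with the F-factor method.

    Assumed variance model (hypothetical, for illustration):
        var_q = var_ped + f2 * gain * mean_q + (b * mean_q)**2
    The systematic term (b * mean_q)**2 is subtracted in quadrature
    before applying the usual relation
        n_pe = f2 * mean_q**2 / signal_variance.
    """
    signal_variance = var_q - var_ped - (b * mean_q) ** 2
    return f2 * mean_q ** 2 / signal_variance
```

With b = 0 this reduces to the standard F-factor formula; the fit of the filter-scan data (point 3) would provide the systematic noise parameter.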

@morcuended (Member) left a comment

Apart from the small comments, the one thing that should be changed, in my opinion, is how the lstchain version is obtained. In particular, when you are working with a fixed tag, the sub-version is not available and the script will fail.

Side note: I've not been able to produce ffactor_systematics files with lstchain/scripts/onsite/onsite_create_ffactor_systematics_file.py, but I guess it is a matter of setting up the configuration file properly.

lstchain/scripts/onsite/onsite_create_calibration_file.py (outdated, resolved)
@@ -21,8 +22,9 @@

required.add_argument('-r', '--run_number', help="Run number with drs4 pedestals",
type=int, required=True)
version, subversion = lstchain.__version__.rsplit('.post', 1)
Member:

This causes trouble if the lstchain version is just a simple tag such as 0.7.5, without a subversion:

>>> lstchain.__version__
'0.7.5'
>>> version,subversion=lstchain.__version__.rsplit('.post',1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: not enough values to unpack (expected 2, got 1)

Something like this could do the trick:

try:
    version, subversion = lstchain.__version__.rsplit('.post', 1)
except ValueError:
    version, subversion = lstchain.__version__, None
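As a quick check, the suggested fallback handles both a plain tag and a `.post` development version. This is a small self-contained sketch; the version strings are made-up examples, not taken from an actual install:

```python
def split_version(version_string):
    # Mirror the suggested try/except: a plain tag has no '.post' suffix,
    # so rsplit raises ValueError and we fall back to the full string.
    try:
        version, subversion = version_string.rsplit('.post', 1)
    except ValueError:
        version, subversion = version_string, None
    return version, subversion

print(split_version('0.7.5'))         # plain tag, no subversion
print(split_version('0.7.5.post42'))  # development version with subversion
```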

@FrancaCassol (Author):

Oops, I just modified it in a different way; let me know if it is fine.

lstchain/scripts/onsite/onsite_create_calibration_file.py Outdated Show resolved Hide resolved
Comment on lines 147 to 156
cmd = f"srun onsite_create_calibration_file -r {run} " \
      f"-p {ped_run} -v {prod_id} --sub_run {sub_run} " \
      f"-b {base_dir} -s {stat_events} --output_base_name {output_base_name} " \
      f"--filters {filters} --sys_date {sys_date} " \
      f"--config {config_file} --time_run {time_run}"

if no_sys_correction:
    cmd += " --no_sys_correction"

fh.write(cmd)
Member:

Maybe it'd be good to propagate the return code beyond the slurm job pilot, so that you actually know whether the job finished successfully or not. For example, if you do not use --no_sys_correction the first time, the job raises an IOError, but according to slurm the exit code is 0, as if it had finished with no problems. Maybe it has to do with the way the error is raised in Python?

@FrancaCassol (Author):

Yes, it would be nice; do you perhaps know how to do it?

Member:

The problem is that you run a batch command: the submission was successful, but the job itself failed.

To keep track of which jobs failed and why, you need a much more complex system, that e.g. stores submitted jobs in a database and periodically checks if the job succeeded or not.

@FrancaCassol (Author):

Yes, and with slurm it is not so easy, because the error files are always produced, and sometimes with errors that are not meaningful.

Member:

You should make sure that stdout and stderr go into the same file and that exit codes are correctly propagated in the scripts. For example, bash scripts should always have set -euo pipefail at the start.
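A lightweight follow-up, short of the full job database mentioned above, could be to query Slurm's accounting for each submitted job id and parse the result. A sketch, assuming the standard `sacct -j <jobid> --format=State,ExitCode --noheader` output: the helper below is hypothetical and only parses pre-captured text, and the field layout may need adapting to the local Slurm configuration.

```python
def job_failed(sacct_output):
    """Return True if any job step reports a non-COMPLETED state or a
    nonzero exit code, given 'State ExitCode' lines from sacct."""
    for line in sacct_output.strip().splitlines():
        state, exit_code = line.split()
        return_code = int(exit_code.split(':')[0])  # ExitCode is 'rc:signal'
        if state != 'COMPLETED' or return_code != 0:
            return True
    return False

print(job_failed("COMPLETED 0:0\nCOMPLETED 0:0"))  # all steps succeeded
print(job_failed("FAILED 1:0"))                    # nonzero exit code
```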

@FrancaCassol (Author):

Why in the same file?

@FrancaCassol (Author):

Dear @maxnoe and @morcuended,
what is missing to finalize this PR?
I need to make further developments for the automatic monitoring of the calibration data and I would like to do it after this PR is approved. Thanks!

morcuended
morcuended previously approved these changes Oct 7, 2021
jsitarek
jsitarek previously approved these changes Oct 12, 2021
@jsitarek (Collaborator) left a comment

Thanks for the implementation, @FrancaCassol. I do not see any major problem with it, but I left a few suggestions that you might want to have a look at.

lstchain/calib/camera/calibration_calculator.py (outdated, resolved)
lstchain/calib/camera/calibration_calculator.py (outdated, resolved)
lstchain/calib/camera/calibration_calculator.py (outdated, resolved)
lstchain/tools/lstchain_create_calibration_file.py (outdated, resolved)
lstchain/tools/lstchain_fit_intensity_scan.py (outdated, resolved)
lstchain/tools/lstchain_fit_intensity_scan.py (resolved)
lstchain/tools/lstchain_fit_intensity_scan.py (outdated, resolved)
@FrancaCassol FrancaCassol dismissed stale reviews from jsitarek and morcuended via a41ceb6 October 13, 2021 15:19
…h and are coherent with an F-factor systematics correction at code calibration level
@FrancaCassol (Author):

Dear @moralejo and @rlopezcoto,

It seems to me this code is mature enough to be merged. Before including it in a production release, we must set up the calibration tree in the /fefs/aswg/data/real/monitoring/ directory. I plan to do it as soon as possible with the help of @morcuended, so that he can then proceed with a small test production.

@moralejo (Collaborator) left a comment

I think this is already well reviewed; let's move on and merge it, try it at La Palma, and if needed we'll make further changes in another PR.

@moralejo moralejo merged commit 4431ce8 into master Oct 26, 2021
@moralejo moralejo deleted the calibration_new_tree branch October 26, 2021 15:06