Commit
Merge branch 'master' into fix-build_models_mva
HealthyPear committed Mar 30, 2021
2 parents 4a16c42 + 0dc958c commit 9198cec
Showing 2 changed files with 118 additions and 51 deletions.
35 changes: 27 additions & 8 deletions docs/contribute/beforepushing.rst
@@ -35,8 +35,27 @@ You will have to fix any warning that appears during documentation building,
because the documentation also runs on `readthedocs <https://readthedocs.org/>`__
with an option to treat warnings as errors.

Testing
-------

All test code is run by issuing the ``pytest`` command.

This command can be issued from anywhere within the cloned repository and it
will always run from the root directory of the project.

For debugging purposes you can add the ``-s`` option, which lets you see the
output of any ``print`` statement within the test module(s).
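
As a minimal sketch, the same run can also be launched programmatically
through ``pytest.main``, here pointed at the pipeline test module shown
further below:

.. code-block:: python

    # Equivalent to running "pytest -s protopipe/scripts/tests/test_pipeline.py"
    # from the command line; the -s flag disables output capturing so that
    # print() calls inside the tests are shown.
    import pytest

    exit_code = pytest.main(["-s", "protopipe/scripts/tests/test_pipeline.py"])
    assert exit_code == 0  # 0 means every collected test passed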

Testing is automatically triggered by the CI every time a new
pull request is pushed to the repository, and its successful
execution is one of the mandatory conditions for merging.

Unit tests
----------
^^^^^^^^^^

You can follow
`these guidelines <https://cta-observatory.github.io/ctapipe/development/code-guidelines.html#unit-tests>`__
to understand what a unit test is supposed to do.
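
As a minimal, self-contained sketch (the ``add_intensities`` helper below is
hypothetical and not part of protopipe), a unit test exercises a single
function in isolation and asserts on its output:

.. code-block:: python

    import numpy as np

    def add_intensities(a, b):
        """Hypothetical helper that sums two image-intensity arrays."""
        return np.asarray(a) + np.asarray(b)

    def test_add_intensities():
        # one function, one behaviour, one assertion
        result = add_intensities([1.0, 2.0], [3.0, 4.0])
        np.testing.assert_allclose(result, [4.0, 6.0])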

.. note::
    This is a maintenance activity which has been long overdue and we need
@@ -65,6 +84,12 @@ Same for *pyirf*.
Integration tests
^^^^^^^^^^^^^^^^^

These are neither unit tests nor benchmarks, but functions that test entire
functionalities rather than single API functions.

In the case of the pipeline, such functionalities are the scripts/tools
that make up its workflow.
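
The tests in ``protopipe/scripts/tests/test_pipeline.py`` follow the pattern
sketched below (the script and file names here are only placeholders): the
tool is launched as a subprocess with a full command line, the command is
printed so that the run can be reproduced by hand, and the exit status is
checked.

.. code-block:: python

    from os import system

    def test_script_runs_end_to_end(tmp_path):
        # placeholder paths; the real tests build the command from the
        # test configuration and the downloaded input files
        command = (
            "python path/to/script.py "
            "--config_file path/to/config.yaml "
            f"-o {tmp_path / 'output.h5'}"
        )
        print(f"\nYou can reproduce this test by running:\n{command}")
        exit_status = system(command)
        # check that the script ends without crashing
        assert exit_status == 0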

.. note::
For more information on how to contribute to this effort check
`this issue <https://github.com/cta-observatory/protopipe/issues/70>`__.
@@ -79,13 +104,7 @@ CTAN and CTAS produced with the following Corsika settings,
- protons, ``NSHOW=10 ESLOPE=-2.0 EMIN=100 EMAX=200 NSCAT=1 CSCAT=200 VIEWCONE=3``
- electrons, ``NSHOW=10 ESLOPE=-2.0 EMIN=10 EMAX=20 NSCAT=1 CSCAT=200 VIEWCONE=3``

in the same proportions as a standard full-scale analysis.

The pipeline integration testing can be executed directly from the main folder
of *protopipe* by launching ``pytest``.
It is also automatically triggered by the CI every time a new
pull-request is pushed to the repository, and its correct
execution is a mandatory condition for merging.
and it is analysed using the same workflow as in a standard full-scale analysis.

Benchmarks
----------
134 changes: 91 additions & 43 deletions protopipe/scripts/tests/test_pipeline.py
@@ -48,15 +48,23 @@ def test_GET_GAMMAS_FOR_ENERGY_MODEL_WITH_IMAGES(test_case, pipeline_testdir):

outpath = pipeline_testdir / f"test_training_withImages_{test_case}.h5"

exit_status = system(
f"python {data_training.__file__}\
--config_file {input_data[test_case]['config']}\
-o {outpath}\
--save_images\
-i {input_data[test_case]['gamma1'].parent}\
-f {input_data[test_case]['gamma1'].name}"
command = f"python {data_training.__file__}\
--config_file {input_data[test_case]['config']}\
-o {outpath}\
--save_images\
-i {input_data[test_case]['gamma1'].parent}\
-f {input_data[test_case]['gamma1'].name}"

print( # only with "pytest -s"
f'''
\n You can reproduce this test by running the following command:
{command}
'''
)

exit_status = system(command)

# check that the script ends without crashing
assert exit_status == 0

@@ -73,14 +81,22 @@ def test_GET_GAMMAS_FOR_ENERGY_MODEL(test_case, pipeline_testdir):

outpath = pipeline_testdir / f"test_gamma1_noImages_{test_case}.h5"

exit_status = system(
f"python {data_training.__file__}\
--config_file {input_data[test_case]['config']}\
-o {outpath}\
-i {input_data[test_case]['gamma1'].parent}\
-f {input_data[test_case]['gamma1'].name}"
command = f"python {data_training.__file__}\
--config_file {input_data[test_case]['config']}\
-o {outpath}\
-i {input_data[test_case]['gamma1'].parent}\
-f {input_data[test_case]['gamma1'].name}"

print( # only with "pytest -s"
f'''
\n You can reproduce this test by running the following command:
{command}
'''
)

exit_status = system(command)

# check that the script ends without crashing
assert exit_status == 0

@@ -103,13 +119,21 @@ def test_BUILD_ENERGY_MODEL_AdaBoost_DecisionTreeRegressor(test_case, pipeline_t

config = resource_filename("protopipe", "scripts/tests/test_regressor.yaml")

exit_status = system(
f"python {build_model.__file__}\
--config_file {config}\
--infile_signal {infile}\
--outdir {outdir}\
--cameras_from_file"
command = f"python {build_model.__file__}\
--config_file {config}\
--infile_signal {infile}\
--outdir {outdir}\
--cameras_from_file"

print( # only with "pytest -s"
f'''
\n You can reproduce this test by running the following command:
{command}
'''
)

exit_status = system(command)
assert exit_status == 0


@@ -124,16 +148,24 @@ def test_GET_GAMMAS_FOR_CLASSIFICATION_MODEL(test_case, pipeline_testdir):
modelpath = pipeline_testdir / f"energy_model_{test_case}"
outpath = pipeline_testdir / f"test_gamma2_noImages_{test_case}.h5"

exit_status = system(
f"python {data_training.__file__}\
--config_file {input_data[test_case]['config']}\
-o {outpath}\
-i {input_data[test_case]['gamma2'].parent}\
-f {input_data[test_case]['gamma2'].name}\
--estimate_energy True\
--regressor_dir {modelpath}"
command = f"python {data_training.__file__}\
--config_file {input_data[test_case]['config']}\
-o {outpath}\
-i {input_data[test_case]['gamma2'].parent}\
-f {input_data[test_case]['gamma2'].name}\
--estimate_energy True\
--regressor_dir {modelpath}"

print( # only with "pytest -s"
f'''
\n You can reproduce this test by running the following command:
{command}
'''
)

exit_status = system(command)

# check that the script ends without crashing
assert exit_status == 0

@@ -153,17 +185,25 @@ def test_GET_PROTONS_FOR_CLASSIFICATION_MODEL(test_case, pipeline_testdir):
modelpath = pipeline_testdir / f"energy_model_{test_case}"
outpath = pipeline_testdir / f"test_proton1_noImages_{test_case}.h5"

exit_status = system(
f"python {data_training.__file__}\
--config_file {input_data[test_case]['config']}\
-o {outpath}\
-m 10\
-i {input_data[test_case]['proton1'].parent}\
-f {input_data[test_case]['proton1'].name}\
--estimate_energy True\
--regressor_dir {modelpath}"
command = f"python {data_training.__file__}\
--config_file {input_data[test_case]['config']}\
-o {outpath}\
-m 10\
-i {input_data[test_case]['proton1'].parent}\
-f {input_data[test_case]['proton1'].name}\
--estimate_energy True\
--regressor_dir {modelpath}"

print( # only with "pytest -s"
f'''
\n You can reproduce this test by running the following command:
{command}
'''
)

exit_status = system(command)

# check that the script ends without crashing
assert exit_status == 0

@@ -187,12 +227,20 @@ def test_BUILD_CLASSIFICATION_MODEL_RandomForest(test_case, pipeline_testdir):

config = resource_filename("protopipe", "scripts/tests/test_regressor.yaml")

exit_status = system(
f"python {build_model.__file__}\
--config_file {config}\
--infile_signal {infile_signal}\
--infile_background {infile_background}\
--outdir {outdir}\
--cameras_from_file"
command = f"python {build_model.__file__}\
--config_file {config}\
--infile_signal {infile_signal}\
--infile_background {infile_background}\
--outdir {outdir}\
--cameras_from_file"

print( # only with "pytest -s"
f'''
\n You can reproduce this test by running the following command:
{command}
'''
)

exit_status = system(command)
assert exit_status == 0
