Feature 685 group pytests #1692

Merged (36 commits, Jul 14, 2022)

Commits
All 36 commits are by georgemccabe, Jul 7, 2022:

* 4e2c4bc  per #685, added custom pytest markers for many tests so we can run gr…
* 46104ba  per #685, change logic to check if test category is not equal to 'pyt…
* b509fee  per #685, run pytests with markers to subset tests into groups
* 57c8dde  fixed check if string starts with pytests
* 811b256  added missing pytest marker name
* c487568  added logic to support running all pytests that do not match a given …
* f5b31da  change pytest group to wrapper because the test expects another test …
* 95ef58e  fix 'not' logic by adding quotation marks around value
* 3bdab1e  another approach to fixing not functionality for tests
* b66448e  added util marker to more tests
* 6afeaec  fixed typo in not logic
* aed62bd  added util marker to more tests again
* c6607c3  fixed logic to split string
* 831037a  marked rest of util tests with util marker
* 71eb108  fixed another typo in string splitting logic
* 166a0a2  tested change that should properly split strings
* b01dec1  moved wrapper tests into wrapper directory
* 0df52e6  changed marker for plotting tests
* 552d2ec  added plotting marker
* 5e96988  improved logic for removing underscore after 'not' and around 'or' to…
* 4bf56ae  test running group of 3 markers
* d8169db  fixed path the broke when test file was moved into a lower directory
* a252858  Changed StatAnalysis tests to use plotting marker because some of the…
* d9c391a  changed some tests from marker 'wrapper' to 'wrapper_a' to split up s…
* 6e71a21  merged develop and resolved conflicts
* 1f1d0b0  test to see if running pytests in single job but split into groups by…
* dc82111  fixed typos in new logic
* 69b0cce  removed code that is no longer needed (added comment in issue #685 if…
* 050c1a2  per #685, divided pytests into smaller groups
* 038d049  added a test that will fail to test that the entire pytest job will f…
* 23fe0ee  add error message if any pytests failed to help reviewer search for f…
* e7860cd  removed failing test after confirming that entire pytest job properly…
* 3cf5b91  turn on single use case group to make sure logic to build matrix of t…
* 962cd22  turn off use case after confirming tests were created properly
* 0a4ed3f  added documentation to contributor's guide to describe changes to uni…
* d1a3b55  added note about adding new pytest markers
18 changes: 12 additions & 6 deletions .github/actions/run_tests/entrypoint.sh
@@ -8,6 +8,8 @@ WS_PATH=$RUNNER_WORKSPACE/$REPO_NAME
# set CI jobs directory variable to easily move it
CI_JOBS_DIR=.github/jobs

+PYTESTS_GROUPS_FILEPATH=.github/parm/pytest_groups.txt
+
source ${GITHUB_WORKSPACE}/${CI_JOBS_DIR}/bash_functions.sh

# get branch name for push or pull request events
@@ -30,10 +32,8 @@ if [ $? != 0 ]; then
${GITHUB_WORKSPACE}/${CI_JOBS_DIR}/docker_setup.sh
fi

-#
-# running unit tests (pytests)
-#
-if [ "$INPUT_CATEGORIES" == "pytests" ]; then
+if [[ "$INPUT_CATEGORIES" == pytests* ]]; then
export METPLUS_ENV_TAG="pytest"
export METPLUS_IMG_TAG=${branch_name}
echo METPLUS_ENV_TAG=${METPLUS_ENV_TAG}
@@ -56,14 +56,20 @@ if [ "$INPUT_CATEGORIES" == "pytests" ]; then
.

echo Running Pytests
command="export METPLUS_PYTEST_HOST=docker; cd internal_tests/pytests; /usr/local/envs/pytest/bin/pytest -vv --cov=../../metplus"
command="export METPLUS_PYTEST_HOST=docker; cd internal_tests/pytests;"
command+="status=0;"
for x in `cat $PYTESTS_GROUPS_FILEPATH`; do
marker="${x//_or_/ or }"
marker="${marker//not_/not }"
command+="/usr/local/envs/pytest/bin/pytest -vv --cov=../../metplus -m \"$marker\""
command+=";if [ \$? != 0 ]; then status=1; fi;"
done
command+="if [ \$status != 0 ]; then echo ERROR: Some pytests failed. Search for FAILED to review; false; fi"
time_command docker run -v $WS_PATH:$GITHUB_WORKSPACE --workdir $GITHUB_WORKSPACE $RUN_TAG bash -c "$command"
exit $?
fi

#
# running use case tests
#

# split apart use case category and subset list from input
CATEGORIES=`echo $INPUT_CATEGORIES | awk -F: '{print $1}'`
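
For reference, a sketch of the single command string the loop above assembles
from the six groups in pytest_groups.txt. Line breaks are added here for
readability (the actual string is one line), and the escaped \$ variables are
shown as they expand inside the container:

    export METPLUS_PYTEST_HOST=docker; cd internal_tests/pytests;status=0;
    /usr/local/envs/pytest/bin/pytest -vv --cov=../../metplus -m "util";if [ $? != 0 ]; then status=1; fi;
    ... (one pytest call per group) ...
    /usr/local/envs/pytest/bin/pytest -vv --cov=../../metplus -m "plotting or long";if [ $? != 0 ]; then status=1; fi;
    if [ $status != 0 ]; then echo ERROR: Some pytests failed. Search for FAILED to review; false; fi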
7 changes: 5 additions & 2 deletions .github/jobs/get_use_cases_to_run.sh
@@ -1,6 +1,7 @@
#! /bin/bash

+use_case_groups_filepath=.github/parm/use_case_groups.json

# set matrix to string of an empty array in case no use cases will be run
matrix="[]"

@@ -31,12 +32,14 @@ fi
if [ "$run_unit_tests" == "true" ]; then
echo Adding unit tests to list to run

+pytests="\"pytests\","
+
# if matrix is empty, set to an array that only includes pytests
if [ "$matrix" == "[]" ]; then
-matrix="[\"pytests\"]"
+matrix="[${pytests:0: -1}]"
# otherwise prepend item to list
else
-matrix="[\"pytests\", ${matrix:1}"
+matrix="[${pytests}${matrix:1}"
fi
fi

6 changes: 6 additions & 0 deletions .github/parm/pytest_groups.txt
@@ -0,0 +1,6 @@
util
wrapper
wrapper_a
wrapper_b
wrapper_c
plotting_or_long
8 changes: 4 additions & 4 deletions .github/workflows/testing.yml
@@ -139,24 +139,24 @@ jobs:
# copy logs with errors to error_logs directory to save as artifact
- name: Save error logs
id: save-errors
-if: ${{ always() && steps.run_tests.conclusion == 'failure' && matrix.categories != 'pytests' }}
+if: ${{ always() && steps.run_tests.conclusion == 'failure' && !startsWith(matrix.categories,'pytests') }}
run: .github/jobs/save_error_logs.sh

# run difference testing
- name: Run difference tests
id: run-diff
-if: ${{ needs.job_control.outputs.run_diff == 'true' && steps.run_tests.conclusion == 'success' && matrix.categories != 'pytests' }}
+if: ${{ needs.job_control.outputs.run_diff == 'true' && steps.run_tests.conclusion == 'success' && !startsWith(matrix.categories,'pytests') }}
run: .github/jobs/run_difference_tests.sh ${{ matrix.categories }} ${{ steps.get-artifact-name.outputs.artifact_name }}

# copy output data to save as artifact
- name: Save output data
id: save-output
-if: ${{ always() && steps.run_tests.conclusion != 'skipped' && matrix.categories != 'pytests' }}
+if: ${{ always() && steps.run_tests.conclusion != 'skipped' && !startsWith(matrix.categories,'pytests') }}
run: .github/jobs/copy_output_to_artifact.sh ${{ steps.get-artifact-name.outputs.artifact_name }}

- name: Upload output data artifact
uses: actions/upload-artifact@v2
-if: ${{ always() && steps.run_tests.conclusion != 'skipped' && matrix.categories != 'pytests' }}
+if: ${{ always() && steps.run_tests.conclusion != 'skipped' && !startsWith(matrix.categories,'pytests') }}
with:
name: ${{ steps.get-artifact-name.outputs.artifact_name }}
path: artifact/${{ steps.get-artifact-name.outputs.artifact_name }}
32 changes: 32 additions & 0 deletions docs/Contributors_Guide/continuous_integration.rst
@@ -557,6 +557,38 @@ process can be found in the :ref:`use_case_input_data` section of the
Add Use Cases chapter of the Contributor's Guide.


.. _cg-ci-unit-tests:

Unit Tests
----------

Unit tests are run via pytest.
Groups of pytests are run in the 'pytests' job.
The list of groups to run in the automated tests is found in
.github/parm/pytest_groups.txt.
See :ref:`cg-unit-tests` for more information on pytest groups.

Items in pytest_groups.txt can include (see the translation sketch below):

* a single group marker name, e.g. wrapper_a
* multiple group marker names separated by _or_, e.g. plotting_or_long
* a group marker name to exclude, prefixed with not_, e.g. not_wrapper
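
Each entry is converted into a marker expression for ``pytest -m`` by the
bash substitutions in .github/actions/run_tests/entrypoint.sh shown above,
which replace the underscores around 'or' and after 'not' with spaces::

    wrapper_a         ->  pytest -m "wrapper_a"
    plotting_or_long  ->  pytest -m "plotting or long"
    not_wrapper       ->  pytest -m "not wrapper"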

All pytest groups are currently run in a single GitHub Actions job.
This is done because the existing automation logic builds a Docker
environment to run the tests, and each testing environment takes a few
minutes to create (future improvements may speed up execution by running
the pytests directly in the GitHub Actions environment instead of in Docker).
Running the pytests serially in smaller groups takes substantially less time
than running all of the existing pytests in a single call to pytest,
so dividing tests into groups is recommended to improve performance.
Search the pytests job log for the string "deselected in" to see how long
each group took to run.
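
A quick way to pull those timing lines out of a downloaded job log
(the log file name here is illustrative)::

    grep "deselected in" pytests_job.log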

Future enhancements could save and parse this information for each run and
output a summary at the end of the log file, making it easier to see which
groups should be broken up further to improve performance.

.. _cg-ci-use-case-tests:

Use Case Tests
48 changes: 44 additions & 4 deletions docs/Contributors_Guide/testing.rst
@@ -4,19 +4,59 @@ Testing
Test scripts are found in the GitHub repository in the internal_tests
directory.

.. _cg-unit-tests:

Unit Tests
----------

Unit tests are run with pytest. They are found in the *pytests* directory.
Each tool has its own subdirectory containing its test files.

-**run_pytests.sh** is a bash script that can be run to execute all of the
-pytests. A report will be output showing which pytest categories failed.
-When running on a new computer, a
-**minimum_pytest.<HOST>.sh**
Unit tests can be run by calling the 'pytest' command from the
internal_tests/pytests directory of the repository.
The 'pytest' Python package must be available.
A report will be output showing which pytest categories failed.
When running on a new computer, a **minimum_pytest.<HOST>.sh**
file must be created to be able to run the tests. This file contains
information about the local environment so that the tests can run.
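
A minimal sketch of such a file (the variable names below are illustrative;
use an existing minimum_pytest.<HOST>.sh file as the template for the actual
set of variables)::

    export METPLUS_TEST_INPUT_BASE=/path/to/pytest/input
    export METPLUS_TEST_OUTPUT_BASE=/path/to/pytest/output
    export METPLUS_TEST_MET_INSTALL_DIR=/path/to/met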

All unit tests must include one of the custom markers listed in the
internal_tests/pytests/pytest.ini file. Some examples include:

* util
* wrapper_a
* wrapper_b
* wrapper_c
* wrapper
* long
* plotting

To apply a marker to a unit test function, add the following on the line before
the function definition::

@pytest.mark.<MARKER-NAME>

where <MARKER-NAME> is one of the custom marker strings listed in pytest.ini.
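For example, a test function for utility code would be marked as part of the
'util' group like this (the test function itself is a hypothetical example)::

    import pytest

    @pytest.mark.util
    def test_example_util():
        # hypothetical test body
        assert len('METplus') == 7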

New pytest markers should be added to the pytest.ini file with a brief
description. If they are not added to the markers list, then a warning will
be output when running the tests.
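A sketch of what the markers list in pytest.ini looks like (the descriptions
here are illustrative, not the actual text in the repository)::

    [pytest]
    markers =
        util: utility tests
        wrapper: wrapper tests
        plotting: plotting tests
        long: long-running tests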

There are many unit tests for METplus, and false failures can occur if all
of them are run at once.
To run only tests with a given marker, run::

pytest -m <MARKER-NAME>

To run all tests that do not have a given marker, run::

pytest -m "not <MARKER-NAME>"

Multiple marker groups can be run by using the 'or' keyword::

pytest -m "<MARKER-NAME1> or <MARKER-NAME2>"


Use Case Tests
--------------

@@ -1,47 +1,14 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3

-import os
-import datetime
-import sys
-import logging
+import pytest
+import datetime

-import produtil.setup
+import os

from metplus.wrappers.make_plots_wrapper import MakePlotsWrapper
from metplus.util import met_util as util

-#
-# These are tests (not necessarily unit tests) for the
-# wrapper to make plots, make_plots_wrapper.py
-# NOTE: This test requires pytest, which is NOT part of the standard Python
-# library.
-# These tests require one configuration file in addition to the three
-# required METplus configuration files: test_make_plots.conf. This contains
-# the information necessary for running all the tests. Each test can be
-# customized to replace various settings if needed.
-#
-
-#
-# -----------Mandatory-----------
-# configuration and fixture to support METplus configuration files beyond
-# the metplus_data, metplus_system, and metplus_runtime conf files.
-#
-
-METPLUS_BASE = os.getcwd().split('/internal_tests')[0]
-
-# Add a test configuration
-def pytest_addoption(parser):
-parser.addoption("-c", action="store", help=" -c <test config file>")
-
-# @pytest.fixture
-def cmdopt(request):
-return request.config.getoption("-c")
-
-#
-# ------------Pytest fixtures that can be used for all tests ---------------
-#
-#@pytest.fixture
def make_plots_wrapper(metplus_config):
"""! Returns a default MakePlotsWrapper with /path/to entries in the
metplus_system.conf and metplus_runtime.conf configuration
@@ -55,35 +22,8 @@ def make_plots_wrapper(metplus_config):
config = metplus_config(extra_configs)
return MakePlotsWrapper(config)

-# ------------------TESTS GO BELOW ---------------------------
-#
-
-#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
-# To test numerous files for filesize, use parametrization:
-# @pytest.mark.parametrize(
-# 'key, value', [
-# ('/usr/local/met-6.1/bin/point_stat', 382180),
-# ('/usr/local/met-6.1/bin/stat_analysis', 3438944),
-# ('/usr/local/met-6.1/bin/pb2nc', 3009056)
-#
-# ]
-# )
-# def test_file_sizes(key, value):
-# st = stat_analysis_wrapper()
-# # Retrieve the value of the class attribute that corresponds
-# # to the key in the parametrization
-# files_in_dir = []
-# for dirpath, dirnames, files in os.walk("/usr/local/met-6.1/bin"):
-# for name in files:
-# files_in_dir.append(os.path.join(dirpath, name))
-# if actual_key in files_in_dir:
-# # The actual_key is one of the files of interest we retrieved from
-# # the output directory. Verify that it's file size is what we
-# # expected.
-# assert actual_key == key
-#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
+METPLUS_BASE = os.getcwd().split('/internal_tests')[0]

+@pytest.mark.plotting
def test_get_command(metplus_config):
# Independently test that the make_plots python
# command is being put together correctly with
@@ -98,6 +38,8 @@ def test_get_command(metplus_config):
test_command = mp.get_command()
assert(expected_command == test_command)


+@pytest.mark.plotting
def test_create_c_dict(metplus_config):
# Independently test that c_dict is being created
# and that the wrapper and config reader