Switch to pytest-mpl for image testing #1891

Merged · 2 commits · Sep 29, 2021
5 changes: 3 additions & 2 deletions .appveyor.yml
@@ -15,7 +15,7 @@ install:
- conda config --add channels conda-forge
- conda config --add channels conda-forge/label/testing
- set ENV_NAME=test-environment
- set PACKAGES=%PACKAGES% flufl.lock owslib pep8 pillow pyshp pytest
- set PACKAGES=%PACKAGES% owslib pep8 pillow pyshp pytest pytest-mpl
- set PACKAGES=%PACKAGES% requests setuptools_scm setuptools_scm_git_archive
- set PACKAGES=%PACKAGES% shapely
- conda create -n %ENV_NAME% python=%PYTHON_VERSION% %PACKAGES%
@@ -38,7 +38,8 @@ build_script:
test_script:
- set MPLBACKEND=Agg
- set PYPROJ_GLOBAL_CONTEXT=ON
- pytest --pyargs cartopy
- pytest -ra --pyargs cartopy
--mpl --mpl-generate-summary=html --mpl-results-path=cartopy_test_output

artifacts:
- path: cartopy_test_output
20 changes: 8 additions & 12 deletions .github/workflows/ci-testing.yml
@@ -48,7 +48,7 @@ jobs:

- name: Install dependencies
run: |
PACKAGES="$PACKAGES flufl.lock owslib pep8 pillow pyshp pytest"
PACKAGES="$PACKAGES owslib pep8 pillow pyshp pytest pytest-mpl"
PACKAGES="$PACKAGES pytest-xdist requests setuptools_scm"
PACKAGES="$PACKAGES setuptools_scm_git_archive shapely"
conda install $PACKAGES
@@ -70,7 +70,10 @@ jobs:
# Check that the downloader tool at least knows where to get the data from (but don't actually download it)
python tools/cartopy_feature_download.py gshhs physical --dry-run
CARTOPY_GIT_DIR=$PWD
PYPROJ_GLOBAL_CONTEXT=ON pytest -ra -n 4 --doctest-modules --pyargs cartopy ${EXTRA_TEST_ARGS}
PYPROJ_GLOBAL_CONTEXT=ON pytest -ra -n 4 --doctest-modules \
--mpl --mpl-generate-summary=html \
--mpl-results-path="cartopy_test_output-${{ matrix.os }}-${{ matrix.python-version }}" \
--pyargs cartopy ${EXTRA_TEST_ARGS}

- name: Coveralls
if: steps.coverage.conclusion == 'success'
@@ -79,16 +82,9 @@
run:
coveralls --service=github

- name: Create image output
if: failure() && steps.install.conclusion == 'success'
id: image-output
run:
python -c "import cartopy.tests.mpl; print(cartopy.tests.mpl.failed_images_html())" >> image-failures-${{ matrix.os }}-${{ matrix.python-version }}.html

# Can't create image output and upload in the same step
- name: Upload image results
uses: actions/upload-artifact@v2
if: failure() && steps.image-output.conclusion == 'success'
if: failure()
with:
name: image-failures-${{ matrix.os }}-${{ matrix.python-version }}.html
path: image-failures-${{ matrix.os }}-${{ matrix.python-version }}.html
name: image-failures-${{ matrix.os }}-${{ matrix.python-version }}
path: cartopy_test_output-${{ matrix.os }}-${{ matrix.python-version }}
6 changes: 3 additions & 3 deletions INSTALL
@@ -146,11 +146,11 @@ Testing Dependencies

These packages are required for the full Cartopy test suite to run.

**flufl.lock** (https://flufllock.readthedocs.io/)
A platform independent file lock for Python.

**pytest** 5.1.2 or later (https://docs.pytest.org/en/latest/)
Python package for software testing.

**pytest-mpl** 0.11 or later (https://github.com/matplotlib/pytest-mpl)
Pytest plugin to facilitate image comparison for Matplotlib figures.

**pep8** 1.3.3 or later (https://pypi.python.org/pypi/pep8)
Python package for software testing.
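
For context, pytest-mpl drives image tests through the standard pytest.mark.mpl_image_compare marker rather than an in-house decorator: the test returns a Matplotlib figure, and running pytest with --mpl compares it against a stored baseline image. A minimal sketch of such a test (hypothetical test name, not part of this diff):

# Minimal pytest-mpl test sketch (hypothetical example): run with `pytest --mpl`,
# which compares the returned figure against a baseline image within the RMS tolerance.
import matplotlib.pyplot as plt
import pytest


@pytest.mark.mpl_image_compare(tolerance=0.5)
def test_simple_plot():
    fig, ax = plt.subplots()
    ax.plot(range(10))
    return fig
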
2 changes: 1 addition & 1 deletion environment.yml
@@ -26,9 +26,9 @@ dependencies:
- gdal>=2.3.2
- scipy>=1.3.1
# Testing
- flufl.lock
- packaging>=20
- pytest
- pytest-mpl
- pytest-xdist
# Documentation
- beautifulsoup4
261 changes: 0 additions & 261 deletions lib/cartopy/tests/mpl/__init__.py
@@ -4,276 +4,15 @@
# See COPYING and COPYING.LESSER in the root of the repository for full
# licensing details.

import base64
import os
import glob
import shutil
import warnings

import flufl.lock
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
from matplotlib.testing import setup as mpl_setup
import matplotlib.testing.compare as mcompare
import packaging.version


MPL_VERSION = packaging.version.parse(mpl.__version__)


class ImageTesting:
"""
Provides a convenient class for running visual Matplotlib tests.

In general, this class should be used as a decorator to a test function
which generates one (or more) figures.

::

@ImageTesting(['simple_test'])
def test_simple():

import matplotlib.pyplot as plt
plt.plot(range(10))


To find out where the result and expected images reside one can create
a empty ImageTesting class instance and get the paths from the
:meth:`expected_path` and :meth:`result_path` methods::

>>> import os
>>> import cartopy.tests.mpl
>>> img_testing = cartopy.tests.mpl.ImageTesting([])
>>> exp_fname = img_testing.expected_path('<TESTNAME>', '<IMGNAME>')
>>> result_fname = img_testing.result_path('<TESTNAME>', '<IMGNAME>')
>>> img_test_mod_dir = os.path.dirname(cartopy.__file__)

>>> print('Result:', os.path.relpath(result_fname, img_test_mod_dir))
... # doctest: +ELLIPSIS
Result: ...output/<TESTNAME>/result-<IMGNAME>.png

>>> print('Expected:', os.path.relpath(exp_fname, img_test_mod_dir))
Expected: tests/mpl/baseline_images/mpl/<TESTNAME>/<IMGNAME>.png

.. note::

Subclasses of the ImageTesting class may decide to change the
location of the expected and result images. However, the same
technique for finding the locations of the images should hold true.

"""

#: The path where the standard ``baseline_images`` exist.
root_image_results = os.path.dirname(__file__)

#: The path where the images generated by the tests should go.
image_output_directory = os.path.join(root_image_results, 'output')
if not os.access(image_output_directory, os.W_OK):
if not os.access(os.getcwd(), os.W_OK):
raise OSError('Write access to a local disk is required to run '
'image tests. Run the tests from a current working '
'directory you have write access to to avoid this '
'issue.')
else:
image_output_directory = os.path.join(os.getcwd(),
'cartopy_test_output')

def __init__(self, img_names, tolerance=0.5, style='classic'):
# With matplotlib v1.3 the tolerance keyword is an RMS of the pixel
# differences, as computed by matplotlib.testing.compare.calculate_rms
self.img_names = img_names
self.style = style
self.tolerance = tolerance

def expected_path(self, test_name, img_name, ext='.png'):
"""
Return the full path (minus extension) of where the expected image
should be found, given the name of the image being tested and the
name of the test being run.

"""
expected_fname = os.path.join(self.root_image_results,
'baseline_images', 'mpl', test_name,
img_name)
return expected_fname + ext

def result_path(self, test_name, img_name, ext='.png'):
"""
Return the full path (minus extension) of where the result image
should be given the name of the image being tested and the
name of the test being run.

"""
result_fname = os.path.join(self.image_output_directory,
test_name, 'result-' + img_name)
return result_fname + ext

def run_figure_comparisons(self, figures, test_name):
"""
Run the figure comparisons against the ``image_names``.

The number of figures passed must be equal to the number of
image names in ``self.image_names``.

.. note::

The figures are not closed by this method. If using the decorator
version of ImageTesting, they will be closed for you.

"""
n_figures_msg = (
f'Expected {len(self.img_names)} figures (based on the number of '
f'image result filenames), but there are {len(figures)} figures '
f'available. The most likely reason for this is that this test is '
f'producing too many figures, (alternatively if not using '
f'ImageCompare as a decorator, it is possible that a test run '
f'prior to this one has not closed its figures).')
assert len(figures) == len(self.img_names), n_figures_msg

for img_name, figure in zip(self.img_names, figures):
expected_path = self.expected_path(test_name, img_name, '.png')
result_path = self.result_path(test_name, img_name, '.png')

if not os.path.isdir(os.path.dirname(expected_path)):
os.makedirs(os.path.dirname(expected_path))

if not os.path.isdir(os.path.dirname(result_path)):
os.makedirs(os.path.dirname(result_path))

with flufl.lock.Lock(result_path + '.lock'):
self.save_figure(figure, result_path)
self.do_compare(result_path, expected_path, self.tolerance)

def save_figure(self, figure, result_fname):
"""
The actual call which saves the figure.

Returns nothing.

May be overridden to do figure based pre-processing (such
as removing text objects etc.)
"""
figure.savefig(result_fname)

def do_compare(self, result_fname, expected_fname, tol):
"""
Runs the comparison of the result file with the expected file.

If an RMS difference greater than ``tol`` is found an assertion
error is raised with an appropriate message with the paths to
the files concerned.

"""
if not os.path.exists(expected_fname):
warnings.warn('Created image in %s' % expected_fname)
shutil.copy2(result_fname, expected_fname)

err = mcompare.compare_images(expected_fname, result_fname,
tol=tol, in_decorator=True)

if err:
assert False, (
f"Images were different (RMS: {err['rms']}).\n"
f"{err['actual']} {err['expected']} {err['diff']}\n"
f"Consider running idiff to inspect these differences.")

def __call__(self, test_func):
"""Called when the decorator is applied to a function."""
test_name = test_func.__name__
mod_name = test_func.__module__
if mod_name == '__main__':
import sys
fname = sys.modules[mod_name].__file__
mod_name = os.path.basename(os.path.splitext(fname)[0])
mod_name = mod_name.rsplit('.', 1)[-1]

def wrapped(*args, **kwargs):
orig_backend = plt.get_backend()
plt.switch_backend('agg')
mpl_setup()

if plt.get_fignums():
warnings.warn('Figures existed before running the %s %s test.'
' All figures should be closed after they run. '
'They will be closed automatically now.' %
(mod_name, test_name))
plt.close('all')

with mpl.style.context(self.style):
if MPL_VERSION >= packaging.version.parse('3.2.0'):
mpl.rcParams['text.kerning_factor'] = 6

r = test_func(*args, **kwargs)

figures = [plt.figure(num) for num in plt.get_fignums()]

try:
self.run_figure_comparisons(figures, test_name=mod_name)
finally:
for figure in figures:
plt.close(figure)
plt.switch_backend(orig_backend)
return r

# nose needs the function's name to be in the form "test_*" to
# pick it up
wrapped.__name__ = test_name
return wrapped


def failed_images_iter():
"""
Return a generator of [expected, actual, diff] filenames for all failed
image tests since the test output directory was created.
"""
baseline_img_dir = os.path.join(ImageTesting.root_image_results,
'baseline_images', 'mpl')
diff_dir = os.path.join(ImageTesting.image_output_directory)

baselines = sorted(glob.glob(os.path.join(baseline_img_dir,
'*', '*.png')))
for expected_fname in baselines:
# Get the relative path of the expected image 2 folders up.
expected_rel = os.path.relpath(
expected_fname, os.path.dirname(os.path.dirname(expected_fname)))
result_fname = os.path.join(
diff_dir, os.path.dirname(expected_rel),
'result-' + os.path.basename(expected_rel))
diff_fname = result_fname[:-4] + '-failed-diff.png'
if os.path.exists(diff_fname):
yield expected_fname, result_fname, diff_fname


def failed_images_html():
"""
Generates HTML which shows the image failures side-by-side
when viewed in a web browser.
"""
data_uri_template = '<img alt="{alt}" src="data:image/png;base64,{img}">'

def image_as_base64(fname):
with open(fname, "rb") as fh:
return base64.b64encode(fh.read()).decode("ascii")

html = ['<!DOCTYPE html>', '<html>', '<body>']

for expected, actual, diff in failed_images_iter():
expected_html = data_uri_template.format(
alt='expected', img=image_as_base64(expected))
actual_html = data_uri_template.format(
alt='actual', img=image_as_base64(actual))
diff_html = data_uri_template.format(
alt='diff', img=image_as_base64(diff))

html.extend([expected, '<br>',
expected_html, actual_html, diff_html,
'<br><hr>'])

html.extend(['</body>', '</html>'])
return '\n'.join(html)


def show(projection, geometry):
orig_backend = mpl.get_backend()
plt.switch_backend('tkagg')
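
The helpers removed above are superseded by pytest-mpl's built-in machinery: the HTML failure report previously produced by failed_images_html() comes from --mpl-generate-summary=html (as wired into the CI configs above), and missing or outdated baselines are regenerated with --mpl-generate-path rather than do_compare() silently copying the result image into the baseline tree. A sketch of that regeneration step, assuming it is run from the repository root (pytest-mpl writes the generated images into the given directory, from where they can be moved into the matching baseline_images subdirectory; the output directory name is an assumption for illustration):

# Sketch: generate fresh reference images instead of comparing against baselines.
import pytest

pytest.main([
    '--pyargs', 'cartopy',
    '--mpl-generate-path=generated_baselines',
])
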
17 changes: 17 additions & 0 deletions lib/cartopy/tests/mpl/conftest.py
@@ -19,3 +19,20 @@ def mpl_test_cleanup(request):
finally:
# Closes all open figures and switches backend back to original
plt.switch_backend(orig_backend)


def pytest_itemcollected(item):
mpl_marker = item.get_closest_marker('mpl_image_compare')
if mpl_marker is None:
return

# Matches old ImageTesting class default tolerance.
mpl_marker.kwargs.setdefault('tolerance', 0.5)

for path in item.fspath.parts(reverse=True):
if path.basename == 'cartopy':
return
elif path.basename == 'tests':
subdir = item.fspath.relto(path)[:-len(item.fspath.ext)]
mpl_marker.kwargs['baseline_dir'] = f'baseline_images/{subdir}'
break
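
For illustration, the hook above makes a test collected from, say, lib/cartopy/tests/mpl/test_images.py (module name used here only as an example) behave as if its marker had been written out in full; since pytest-mpl resolves a relative baseline_dir against the test file's directory, this reproduces the old baseline layout under lib/cartopy/tests/mpl/baseline_images/mpl/:

# Roughly what the hook fills in automatically at collection time for a test
# living under lib/cartopy/tests/mpl/ (illustrative sketch only).
import matplotlib.pyplot as plt
import pytest


@pytest.mark.mpl_image_compare(tolerance=0.5,
                               baseline_dir='baseline_images/mpl/test_images')
def test_example():
    fig, ax = plt.subplots()
    ax.plot(range(10))
    return fig
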