
Add unit tests for samples module #69

Merged: 16 commits merged into main from add_test_samples on Sep 8, 2023

Conversation

@sstendahl (Collaborator) commented Aug 24, 2023

Adds unit tests for the samples module; still a work in progress.

TODO:

  • Write docstrings, clarify code with comments
  • Reduce image size for reference figures.
  • Add additional unit tests. Coverage in this module is currently 90%, but some cases are not yet tested. The tests for the pre-made sample models are also extremely similar, so I will try to parameterize them (see the sketch after this list).
  • Clean up code, some parts could be cleaner
  • Rethink and re-implement SLD_profile and reflectivity_profile tests.
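
The parameterization mentioned above could look roughly like the pytest sketch below. The factory names and import path are assumptions based on the files touched in this PR (hogben/models/samples.py), not the final API.

```python
import pytest

from hogben.models import samples

# Placeholder factory names for the pre-made sample models; substitute the
# real functions defined in hogben/models/samples.py.
SAMPLE_FACTORIES = [
    samples.simple_sample,        # hypothetical name
    samples.thin_layer_sample_1,  # hypothetical name
]


@pytest.mark.parametrize('factory', SAMPLE_FACTORIES)
def test_premade_sample_builds(factory):
    """Each pre-made sample should build and expose a structure."""
    sample = factory()
    assert sample.structure is not None
```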

@sstendahl changed the title from WIP: Add unit tests for samples module to Add unit tests for samples module on Aug 28, 2023
@sstendahl (Collaborator, Author) commented Aug 28, 2023

This all works, but I will have to rethink the test_sld_profile and test_reflectivity_profile functions at the very least. The compare_images check is simply not sensitive enough: I can change the thickness of one of the layers by an order of magnitude and it still passes. The problem is probably that it only checks the average difference between pixels; since the figure is mostly white, it takes quite a radical change for the difference to be detected. Reducing the tolerance does not really help (I went all the way down to tol=1e-50; the default is 0.001). Raising the dpi further would probably help somewhat, but at 600 dpi (the default) things get pretty slow, and I'm not sure I like having high-resolution reference images in the repo. I don't know whether comparisons against vector images would work, but I'm starting to think direct image comparison is not the way to go: it's expensive and a bit redundant.
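
For reference, the image-comparison approach described above is roughly the following; the reference directory, file name, and the sld_profile(save_path=...) call are assumptions about the current test setup, not confirmed details.

```python
import os

from matplotlib.testing.compare import compare_images

REFERENCE_DIR = 'hogben/tests/reference_figures'  # assumed location


def test_sld_profile_image(sample, tmp_path):
    # sld_profile() is assumed to write sld_profile.png into save_path
    sample.sld_profile(save_path=str(tmp_path))
    actual = os.path.join(str(tmp_path), 'sld_profile.png')
    expected = os.path.join(REFERENCE_DIR, 'sld_profile.png')
    # compare_images returns None when the RMS pixel difference is within
    # tol, which is why a mostly-white figure needs a drastic change to fail
    assert compare_images(expected, actual, tol=0.001) is None
```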

Checking whether the input to the matplotlib figures is as expected, instead of comparing the figures themselves, and then testing save_plot separately, makes sense in theory. But the calls that produce the input are really simple (just z, slds = self.structure.sld_profile()), so the test quickly becomes a case of repeating the method under test inside the test itself. That boils down to checking whether running the same code twice (once in the test and once in the tested unit) produces the same output, which is pretty redundant. (This is also a concern I have about my current angle_info test.)

Will go back to the drawing board for these tests tomorrow morning. I'll probably stop comparing the PNGs to reference figures as I do now: it's quite expensive, and I'm not convinced it's a very valuable test. My current thinking is to create a helper function that obtains the profile, call that helper when sld_profile or reflectivity_profile is called, and then just write a test for that helper function.

In that case sld_profile and reflectivity_profile reduce to a call to that helper function followed by some matplotlib calls, and the helper returns the x and y values that matplotlib plots. The matplotlib API is in principle very stable, so a test with image comparisons is probably redundant. Still, I'll probably write a separate test for those plotting calls with mocked x and y values, which simply checks that a valid figure is produced, just to catch API-breaking changes in a matplotlib update.
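
A minimal sketch of that split, with illustrative names (_get_sld_profile, the axis labels, and the standalone-function form are placeholders; the real code would live on the Sample class):

```python
import os

import matplotlib.pyplot as plt


def _get_sld_profile(structure):
    """Return the (z, slds) arrays that sld_profile() would plot."""
    z, slds = structure.sld_profile()
    return z, slds


def sld_profile(structure, save_path):
    """The plotting method reduces to: fetch data, plot, save."""
    z, slds = _get_sld_profile(structure)
    fig, ax = plt.subplots()
    ax.plot(z, slds)
    ax.set_xlabel('Distance (Å)')
    ax.set_ylabel('SLD (1e-6 Å$^{-2}$)')
    fig.savefig(os.path.join(save_path, 'sld_profile.png'))  # or hogben's save_plot


def test_get_sld_profile(sample_structure):
    """The numerical output is tested directly, without any image comparison."""
    z, slds = _get_sld_profile(sample_structure)
    assert len(z) == len(slds)
```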

Either way, I'll look into that tomorrow morning; the exact implementation details may differ. The plan is at least to declare this PR ready for review tomorrow (assuming all goes well).

@sstendahl (Collaborator, Author) commented:

This should now be ready and working, currently at 93% coverage. The only thing not being tested right now is nested sampling, but I may add that later in a separate PR.

@sstendahl marked this pull request as ready for review on August 30, 2023 09:17
Reset back to working dir after test
@sstendahl (Collaborator, Author) commented Aug 30, 2023

Found out why the tests were failing: while testing the main function, the module is run from a temporary directory that is deleted after the run. I did this because I didn't want the test to generate data in the actual results directory.

When test_simulate then tries to call os.getcwd(), it raises a FileNotFoundError because the current working directory was set to that temporary directory, which has since been deleted. Changing back to the old working directory after running that particular test solves this, but it's pretty ugly.

I have now reworked it so that the save_path can be set when running main; that way there's no need to keep messing around with os.chdir(), which is a bit hacky.
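
A minimal sketch of what that test can look like with pytest's tmp_path fixture; the main(save_path=...) signature is the new argument described above and is assumed here rather than quoted from the code.

```python
import os

from hogben.models.samples import main  # assumed entry point


def test_main_uses_save_path(tmp_path):
    """Running main with an explicit save_path leaves the working directory alone."""
    cwd_before = os.getcwd()
    main(save_path=str(tmp_path))
    # no os.chdir() happens, so later calls to os.getcwd() (e.g. in
    # test_simulate) can never hit a deleted directory
    assert os.getcwd() == cwd_before
    assert any(tmp_path.iterdir())  # results were written to the temp dir
```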

@jfkcooper (Owner) left a comment:

I see you have upped your mocking game, good work. A few comments here, but mostly very small ones.

Review threads (all resolved):
  • hogben/models/samples.py (outdated)
  • hogben/tests/test_samples.py
  • hogben/tests/test_samples.py (outdated, 4 threads)
@sstendahl (Collaborator, Author) commented Sep 5, 2023

You are correct about the blank lines; flake8 detects a lot of them locally (including the double blank lines):

[screenshot: local flake8 output listing the blank-line warnings]

I'll probably merge PR #72 first, and then merge the new main into this branch, to get this into the workflow.

Fix quotes
@sstendahl (Collaborator, Author) commented Sep 5, 2023

[screenshot: local test run timings]

This all works quite well. The tests take 2s 20ms on my machine, of which roughly 2s comes from the tests where figures are generated. I could consider removing those tests, or reducing the quality of those temporary figures even further, since they only check whether a valid file is generated. At a dpi of 6 the time goes down to 1s 46ms, and at a dpi of 5 or lower the figure tests start to fail.

Tomorrow I'll take a look at whether I can instead just check that save_plot is called successfully for those methods; that should definitely be possible with the mock function. The actual save_plot function can then be tested separately in the utils test. If save_plot is called successfully, and save_plot passes its own test, then we effectively test the same thing as now for less computation (even though 2s is not really dramatic).

@jfkcooper (Owner) commented:

For the times I really wouldn't worry about a second here or there. If we have system tests they will possibly take minutes, so save your effort and leave it I think

@sstendahl (Collaborator, Author) commented Sep 6, 2023

For the times I really wouldn't worry about a second here or there. If we have system tests they will possibly take minutes, so save your effort and leave it I think

Okay, I'll keep it as-is for now. A simple alternative would be to use assert_called on the mocked function, which just asserts that the mock has been called at all. Similarly, assert_called_with could be used to also check that it is called with the correct arguments.
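
As a sketch of that alternative (the patch target and the sld_profile call signature are assumptions about where samples.py imports save_plot from):

```python
from unittest import mock


def test_sld_profile_calls_save_plot(sample, tmp_path):
    with mock.patch('hogben.models.samples.save_plot') as mock_save:
        sample.sld_profile(str(tmp_path))
        # assert_called only checks that the hand-off to save_plot happened
        mock_save.assert_called()
        # stricter, assert_called_with-style check: the save path made it through
        assert str(tmp_path) in [str(arg) for arg in mock_save.call_args.args]
```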

@sstendahl (Collaborator, Author) commented:

There are actually no conflicts at all with the latest main branch, so this is ready for review.

@jfkcooper merged commit 8c94287 into main on Sep 8, 2023
3 checks passed
@jfkcooper deleted the add_test_samples branch on September 8, 2023 06:51