
TEST: Mock terminal output before testing changing default value #3193

Merged
3 commits merged into nipy:master from test/mock_terminal_output on Mar 29, 2020

Conversation

effigies
Member

Summary

Closes #3185.

Without pytest-xdist, this reliably fails:

pytest --pdb --doctest-modules nipype/interfaces/base/tests/test_core.py nipype/interfaces/freesurfer/tests/test_utils.py

I assume the intermittent CI failures occur when the two tests are run on the same worker.
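
For context, here is a minimal sketch (an illustration, not this PR's actual diff) of the kind of isolation the title refers to: saving and restoring the default terminal output around a test that deliberately changes it, so the change cannot leak into other tests collected on the same worker. It assumes nipype's CommandLine.set_default_terminal_output classmethod and its _terminal_output class attribute; fixture and test names are hypothetical.

    # Hedged sketch: restore the default terminal output after a test changes it.
    import pytest
    from nipype.interfaces.base import CommandLine


    @pytest.fixture
    def restore_default_terminal_output():
        default = CommandLine._terminal_output
        yield
        CommandLine.set_default_terminal_output(default)


    def test_changing_default_terminal_output(restore_default_terminal_output):
        CommandLine.set_default_terminal_output("allatonce")
        assert CommandLine._terminal_output == "allatonce"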

List of changes proposed in this PR (pull-request)

Acknowledgment

  • (Mandatory) I acknowledge that this contribution will be available under the Apache 2 license.

@mgxd
Member

mgxd commented Mar 23, 2020

Ah! I wonder if this is somewhat related to the failures we saw in #3046, given that that interface also parses stdout.

@codecov

codecov bot commented Mar 23, 2020

Codecov Report

Merging #3193 into master will increase coverage by 0.08%.
The diff coverage is n/a.

Impacted file tree graph

@@            Coverage Diff             @@
##           master    #3193      +/-   ##
==========================================
+ Coverage   65.07%   65.16%   +0.08%     
==========================================
  Files         301      301              
  Lines       39624    39652      +28     
  Branches     5247     5263      +16     
==========================================
+ Hits        25786    25839      +53     
+ Misses      12771    12739      -32     
- Partials     1067     1074       +7     
Flag         Coverage Δ
#unittests   65.16% <ø> (+0.08%) ⬆️

Impacted Files                                Coverage Δ
nipype/pipeline/plugins/legacymultiproc.py    65.34% <0.00%> (-3.47%) ⬇️
nipype/pipeline/plugins/base.py               62.46% <0.00%> (-1.65%) ⬇️
nipype/interfaces/dcm2nii.py                  64.78% <0.00%> (+15.02%) ⬆️
nipype/interfaces/niftyreg/base.py            69.44% <0.00%> (+18.19%) ⬆️

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update c37fcfb...5c69cd1. Read the comment docs.

@effigies
Member Author

Yes. That error can also be reliably reproduced in the same way, and is resolved by this commit. I will remove the xfail.

We should probably find a less brittle way to extract information from output streams, however...
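
As an example of what "less brittle" could look like, a hypothetical helper (not nipype's actual code) that pulls a version string out of captured stdout with a regular expression instead of slicing a fixed line index, which is what breaks as soon as a tool adds or reorders output lines:

    import re


    def parse_version(stdout):
        # Hypothetical: match e.g. "version 6.0.0" or "version: v1.0.20190902"
        # anywhere in the captured stream, rather than assuming a line position.
        match = re.search(r"version:?\s*v?(\d[\w.]*)", stdout, re.IGNORECASE)
        return match.group(1) if match else None


    # parse_version("some_tool version v1.0.20190902 (compiled ...)") -> "1.0.20190902"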

@effigies
Member Author

@mgxd can you look at the failure? I don't get this locally.

@mgxd
Member

mgxd commented Mar 24, 2020

Hmm, it looks like the dataset is no longer with us:
http://datasets-tests.datalad.org/dicoms/dcm2niix-tests/Siemens_Sag_DTI_20160825_145811/

@effigies
Member Author

Now I'm curious why it doesn't fail for me locally...

@effigies
Member Author

@mgxd If I run datalad install http://datasets-tests.datalad.org/dicoms/dcm2niix-tests, I still see the dataset correctly.

@yarikoptic Does this error mean anything to you?

fetch_data = <function fetch_data.<locals>._fetch_data at 0x7fc0c80abd08>
tmpdir = local('/tmp/pytest-of-neuro/pytest-1/test_dcm2niix_dti0')

    @pytest.mark.skipif(no_datalad, reason="Datalad required")
    @pytest.mark.skipif(no_dcm2niix, reason="Dcm2niix required")
    def test_dcm2niix_dti(fetch_data, tmpdir):
        tmpdir.chdir()
        datadir = tmpdir.mkdir("data").strpath
>       dicoms = fetch_data(datadir, "Siemens_Sag_DTI_20160825_145811")

/src/nipype/nipype/interfaces/tests/test_extra_dcm2nii.py:38: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/src/nipype/nipype/interfaces/tests/test_extra_dcm2nii.py:25: in _fetch_data
    api.get(path=data)
/opt/miniconda-latest/envs/neuro/lib/python3.6/site-packages/datalad/interface/utils.py:492: in eval_func
    return return_func(generator_func)(*args, **kwargs)
/opt/miniconda-latest/envs/neuro/lib/python3.6/site-packages/datalad/interface/utils.py:480: in return_func
    results = list(results)
/opt/miniconda-latest/envs/neuro/lib/python3.6/site-packages/datalad/interface/utils.py:411: in generator_func
    allkwargs):
/opt/miniconda-latest/envs/neuro/lib/python3.6/site-packages/datalad/interface/utils.py:527: in _process_results
    for res in results:
/opt/miniconda-latest/envs/neuro/lib/python3.6/site-packages/datalad/distribution/get.py:669: in __call__
    dataset, check_installed=True, purpose='get content')
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

dataset = None, check_installed = True, purpose = 'get content'

    def require_dataset(dataset, check_installed=True, purpose=None):
        """Helper function to resolve a dataset.
    
        This function tries to resolve a dataset given an input argument,
        or based on the process' working directory, if `None` is given.
    
        Parameters
        ----------
        dataset : None or path or Dataset
          Some value identifying a dataset or `None`. In the latter case
          a dataset will be searched based on the process working directory.
        check_installed : bool, optional
          If True, an optional check whether the resolved dataset is
          properly installed will be performed.
        purpose : str, optional
          This string will be inserted in error messages to make them more
          informative. The pattern is "... dataset for <STRING>".
    
        Returns
        -------
        Dataset
          Or raises an exception (InsufficientArgumentsError).
        """
        if dataset is not None and not isinstance(dataset, Dataset):
            dataset = Dataset(dataset)
    
        if dataset is None:  # possible scenario of cmdline calls
            dspath = get_dataset_root(getpwd())
            if not dspath:
                raise NoDatasetFound(
                    "No dataset found at '{}'.  Specify a dataset to work with "
                    "by providing its path via the `dataset` option, "
                    "or change the current working directory to be in a "
>                   "dataset.".format(getpwd()))
E               datalad.support.exceptions.NoDatasetFound: No dataset found at '/tmp/pytest-of-neuro/pytest-1/test_dcm2niix_dti0'.  Specify a dataset to work with by providing its path via the `dataset` option, or change the current working directory to be in a dataset.

/opt/miniconda-latest/envs/neuro/lib/python3.6/site-packages/datalad/distribution/dataset.py:570: NoDatasetFound

I've installed the Debian versions of datalad/git-annex to simulate the test conditions, but can't reproduce the issue.

@effigies
Member Author

Fiddled with it by SSHing in. It's the datalad change where you need to specify the dataset manually. 5c69cd1 should resolve it.

Still not sure why I couldn't reproduce locally.
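
For the record, a rough sketch of the shape of that fix (an assumption about what 5c69cd1 does, not the literal commit), based on the _fetch_data helper shown in the traceback above: pass the dataset explicitly so datalad no longer has to infer it from the working directory.

    from datalad import api


    def _fetch_data(datadir, dicoms):
        """Hypothetical reconstruction of the test helper."""
        exampledata = "%s/%s" % (datadir, dicoms)
        api.install(path=datadir,
                    source="http://datasets-tests.datalad.org/dicoms/dcm2niix-tests")
        # Passing dataset= explicitly avoids NoDatasetFound when the current
        # working directory is not inside the installed dataset.
        api.get(path=exampledata, dataset=datadir)
        return exampledata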

@effigies effigies merged commit f847fa7 into nipy:master Mar 29, 2020
@effigies effigies deleted the test/mock_terminal_output branch March 29, 2020 02:39
@effigies effigies added this to the 1.5.0 milestone Apr 19, 2020