
Qt model tester #63

Merged: 63 commits into master from modeltest on Jun 29, 2016

Conversation

@The-Compiler (Member) commented Jul 17, 2015

There we go, a very much work-in-progress version of the model tester.

The basic tests work for PyQt5, but probably not for anything else. Some issues to look at:

PyQt4/PySide compatibility ✅

  • How to do the sip.cast equivalent (so we can access private methods) in PySide?
  • Since the QVariant API changed, the tests using that need to be adjusted to work with PyQt5/PySide.
  • Does connecting signals like this work everywhere? (See the sketch below.)
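
For reference, a minimal sketch of what "connecting signals like this" means; the handler and model are illustrative, and the new-style connect syntax itself is what needs verifying across PyQt4, PyQt5, and PySide:

from PyQt5 import QtCore

def on_rows_inserted(parent, first, last):
    # Qt passes the parent index and the inclusive row range.
    print('rows inserted: {}-{}'.format(first, last))

model = QtCore.QStringListModel(['a', 'b'])
model.rowsInserted.connect(on_rows_inserted)  # new-style connect
model.insertRows(0, 1)  # triggers the handler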

pytest integration

  • What to do with the commented-out qDebug() code? Some of it can probably be removed thanks to pytest's assert introspection, but for some of it, it might make sense to use print so we get that output on a failure. Or maybe introduce a new output "category" for modeltester messages, like the one for Qt warning messages?
  • Could the tests (the methods in modeltester, not the tests testing modeltester) somehow be split into multiple pytest tests? Probably not, as all tests are re-run when the model changes.

Other

  • docs
  • pep8 cleanup
  • Expose options (e.g. for the commented-out assert self._model.data(index, QtCore.Qt.DisplayRole).isValid() test) as arguments and/or attributes.

Tests

  • unit tests with some valid and broken models
  • some kind of integration test based on the PyQt5_modeltest one

@The-Compiler (Member Author)

cc @bgr

@The-Compiler mentioned this pull request on Jul 17, 2015
@nicoddemus (Member)

Thanks @The-Compiler, I will take a look and probably start working on it tomorrow.

@nicoddemus (Member)

Hi @The-Compiler, I could only take a look at the code today, sorry about that.

I like the work so far, I think it would be a nice addition to pytest-qt. Thanks for the initiative and for getting this so far down the road already!

A few comments on the points you raised:

  • PyQt4/PySide compatibility

    I will see if I can help in this regard, although I have little experience with the new model/view framework.

    Easier than I expected, although it was some work switching between all the interpreters and Python versions. 😅

  • pytest integration

    I'm trying to figure out how to integrate this into pytest as nicely as possible. I agree that we should split the model test into several sub-tests.

    IMO, the ideal scenario would be that a user could just provide a new model implementation (possibly through a fixture, so it can reuse other fixtures if needed to instantiate it) and pytest would automatically "spawn" all the tests for the new model. I'll have to think a little about how to do that.

@nicoddemus force-pushed the modeltest branch 3 times, most recently from 9ff298d to a20165a, on July 22, 2015
@The-Compiler (Member Author)

Regarding pytest integration, there's another big issue in my view: the tests should be re-run automatically if the model changes, as there are also some tests checking whether that happens properly.

So I should be able to set a model (which will run the tests), and then, for example, insert some items into the model; the tests should re-run to verify the insertion worked correctly.
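
Roughly, the tester hooks the model's change signals so every mutation triggers a fresh pass over all checks; a minimal sketch of the idea (names are illustrative, not the actual modeltester code):

class ModelTester:

    def set_model(self, model):
        self._model = model
        # Re-run every check whenever the model reports a change.
        for signal in (model.rowsInserted, model.rowsRemoved,
                       model.dataChanged, model.layoutChanged):
            signal.connect(self._run_all_checks)
        self._run_all_checks()  # initial pass

    def _run_all_checks(self, *args):
        # The arguments vary per signal; the checks only need the model.
        assert self._model.rowCount() >= 0  # ...plus all the other checks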

@nicoddemus (Member)

So I should be able to set a model (which will run the tests), and then, for example, insert some items into the model; the tests should re-run to verify the insertion worked correctly.

Hmm, I see. But shouldn't that be part of the standard tests? I mean, we could add a new standard test which automatically changes the model (adding new items) and re-runs the tests. Of course it should be generic enough to work with any model.

Aside from the issue above, I have thought about how to integrate this better into pytest and came up with the following:

Create a base class which contains all the tests, each depending on an undeclared fixture, say qmodel:

class BaseModelTestCollection:

    def test_basic(self, qmodel): ...
    def test_column_count(self, qmodel): ...
    def test_has_index(self, qmodel): ...
    def test_index(self, qmodel): ...
    def test_data(self, qmodel): ...

Because the class name doesn't start with Test, it won't get collected by pytest.

To test a specific implementation, a user has to subclass it and implement the fixture in the class:

class TestMyModel(BaseModelTestCollection):

    @pytest.fixture
    def qmodel(self):
        return MyModelImplementation(...)

I think the advantages here are that one can easily add new test methods specific to their model, other fixtures can be used when implementing the qmodel fixture, and it is simple to understand.

What do you think?

@nicoddemus (Member)

Any more input here @The-Compiler? 😄

@The-Compiler (Member Author)

Whoops, sorry for the delay!

Hmm, I see. But shouldn't that be part of the standard tests? I mean, we could add a new standard test which automatically changes the model (adding new items) and re-runs the tests. Of course it should be generic enough to work with any model.

I'll have to think more about that - I think there could be many models which don't support inserting data from the outside. Models are basically just the data behind some kind of view.

Imagine you'd want to test a QFileSystemModel - you can't insert/remove items as part of the test, but you might want to set a different file filter with the tester "attached" and see if the model removes/adds items correctly.
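
A sketch of that scenario (the attach call is hypothetical; the QFileSystemModel calls are real PyQt5 API):

from PyQt5 import QtCore, QtWidgets

app = QtWidgets.QApplication([])
model = QtWidgets.QFileSystemModel()
model.setRootPath(QtCore.QDir.tempPath())

# tester.attach(model)  # hypothetical: start watching the model

# No insertRows() here: the model adds/removes rows on its own when the
# filter changes, and an attached tester should verify those transitions.
model.setNameFilters(['*.txt'])
model.setNameFilterDisables(False)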

As for the pytest integration - that sounds good, but probably won't work because of the "changing model" issue.

@nicoddemus (Member)

@The-Compiler, I did a little more work on it:

  • Created a _debug function that uses pytest's verbosity setting to print to stdout instead of the commented-out qDebug calls (sketched below);
  • Only left a single public function, check (renamed from setup_and_run).
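
Roughly, the idea looks like this (a sketch, not the exact code; config.getoption('verbose') is pytest's verbosity counter):

class ModelTester:

    def __init__(self, config):
        self._verbose = config.getoption('verbose') > 0

    def _debug(self, *args):
        # Replaces the commented-out qDebug() calls: use print (so pytest
        # captures the output) and only chatter when running with -v.
        if self._verbose:
            print('modeltest:', *args)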

After taking a closer look at the code, I decided against moving forward with my previous idea of having a base class with tests and using a fixture to provide a model, because 1) it would change the code quite a bit and make it harder to port changes or fixes from the original code, and 2) I just realized that run connects multiple model signals back to itself so it can check the model again, which breaks the usual pytest model.

So I'm planning to release something closer to what it was originally. We can try to come up with a solution for the "model modification" problem later, but I suspect for now the user can simply call qtmodeltester.check(model) again after changing the model in a test.
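
In a test, that would look something like this (MyModelImplementation is the placeholder from earlier; insertRows stands in for any model change):

def test_my_model(qtmodeltester):
    model = MyModelImplementation()
    qtmodeltester.check(model)

    # After changing the model, simply check it again:
    model.insertRows(0, 2)
    qtmodeltester.check(model)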

What do you think?

@The-Compiler (Member Author)

Nice, thank you!

I'd leave it as-is, i.e. with the automatic rechecking when the model changes. I don't know the exact reason why the original source did it that way, but I'm guessing there was some reason for it 😉

So what's left to do here? I think it definitely needs some more tests (I'll try to work on that this week, but no promises - I need to fix some qutebrowser bugs and release a new version).

Other than that there are some qDebug calls and FIXMEs left, but the rest looks good, I think!

Thanks for all your work on this!

@nicoddemus (Member)

So what's left to do here? I think it definitely needs some more tests

I was thinking of testing some (all?) Qt's builtin model classes... do you have something else in mind?

Other than that there's some qDebug and FIXME's left, but the rest looks good I think!

I will take care of the missing qDebug calls, but I'm not sure I'll be able to tackle the FIXMEs, as some of them don't have any comments (I have a mind to leave them as they are; it would make it easier to port any fixes that happen upstream, if they ever happen).

@The-Compiler (Member Author)

I was thinking of testing some (all?) Qt's builtin model classes... do you have something else in mind?

These will all pass, hopefully - I'm thinking of having some custom QAbstractItemModels (or QAbstractListModels, to make things easier) which test the bad cases as well.
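
For example, a deliberately broken list model (a sketch; whether the tester flags this exact bug is what such a test would pin down):

from PyQt5 import QtCore

class BrokenListModel(QtCore.QAbstractListModel):
    """Broken on purpose: index() skips the range check, so asking for
    row == rowCount() yields a "valid" index instead of QModelIndex()."""

    _items = ['a', 'b', 'c']

    def rowCount(self, parent=QtCore.QModelIndex()):
        return len(self._items)

    def data(self, index, role=QtCore.Qt.DisplayRole):
        if role == QtCore.Qt.DisplayRole and 0 <= index.row() < len(self._items):
            return self._items[index.row()]
        return None

    def index(self, row, column=0, parent=QtCore.QModelIndex()):
        return self.createIndex(row, column)  # bug: no hasIndex() guard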

I will take care of the missing qDebug calls, but I'm not sure I'll be able to tackle the FIXMEs, as some of them don't have any comments (I have a mind to leave them as they are; it would make it easier to port any fixes that happen upstream, if they ever happen).

Those were added by me as I was unsure what to do with the qDebug stuff - now that self._debug is a good solution for that, I think they can go, except for the "checked 10 deep" one.

While you're at it: Could you also add a link to the original implementation somewhere as a comment?

http://code.qt.io/cgit/qt/qtbase.git/tree/tests/auto/other/modeltest/modeltest.cpp

@coveralls commented Jun 29, 2016

Coverage remained the same at 100.0% when pulling 5e158da on modeltest into a29570d on master.

@coveralls commented Jun 29, 2016

Coverage remained the same at 100.0% when pulling 2537651 on modeltest into a29570d on master.

Model Tester
============

.. versionadded:: 1.6
A review comment from a Member on the diff hunk above:

2.0 now 😄

@nicoddemus (Member)

LGTM, do you feel this is ready for merging?

I will later (just woke up 😆) work on the lazy API and waitUntil, but this needs to be merged first because it will conflict a lot with the lazy API work.

@The-Compiler (Member Author)

Two things are still open in my view:

  • I wonder if we should only output the debug info with -v though - it doesn't seem to be too much, and it's definitely helpful if one of the related assertions fails.
  • Should we change the API to be a context manager?

@nicoddemus (Member)

Oh yeah, sorry:

I wonder if we should only output the debug info with -v though - it doesn't seem to be too much, and it's definitely helpful if one of the related assertions fails.

I agree, let's leave it as it is for now.

Should we change the API to be a context manager?

Agreed, it doesn't really seem that useful. And we can decide to add this later anyway.

I just cancelled a lot of the piled-up builds on AppVeyor, let's see how this goes. 😁

@The-Compiler (Member Author)

I agree, let's leave it as it is for now.

That seems to contradict itself - currently we only display the output with --verbose 😉

Agreed, it doesn't really seem that useful. And we can decide to add this later anyway.

Though if we want check to only check once (and not when the model is changed) like you proposed in your comment earlier, we'd need to do that now.

@coveralls commented Jun 29, 2016

Coverage remained the same at 100.0% when pulling 3404351 on modeltest into a29570d on master.

@nicoddemus (Member)

That seems to contradict itself - currently we only display the output with --verbose

Oh sorry, I read that wrong and thought we were already displaying the debug output regardless of verbose. Let's go with your gut feeling then, since you have been working on this lately.

@nicoddemus (Member)

Though if we want check to only check once (and not when the model is changed) like you proposed in your comment earlier, we'd need to do that now.

It's been a long time since I wrote that, and I didn't remember exactly what the proposal was in the first place.

My initial thought was that check(model) doesn't convey the idea that we are in fact checking now and listening for model changes to validate those as well... but start_checking(model) also doesn't give the idea that a check is being done right away.

I don't know, perhaps I'm overthinking this and just check() is fine. What do you think?

A commit was pushed with the message: "The output will only be shown on errors anyway, and it's very useful if there's a problem."
@The-Compiler (Member Author)

That seems to contradict itself - currently we only display the output with --verbose

Oh sorry, I read that wrong and thought we were already displaying the debug output regardless of verbose. Let's go with your gut feeling then, since you have been working on this lately.

I pushed a commit removing the verbose arg, so that's the case now.

I don't know, perhaps I'm overthinking this and just check() is fine. What do you think?

That's what I think currently, yeah. As long as things are documented that way, I think it's fine.

@coveralls commented Jun 29, 2016

Coverage remained the same at 100.0% when pulling 5c758c5 on modeltest into a29570d on master.

@nicoddemus (Member)

Restarted the AppVeyor build because it failed to download the PyQt5 installer from SourceForge... 😓

We should probably use the AppVeyor cache to avoid this.

@The-Compiler (Member Author)

Not sure if AppVeyor or SourceForge is worse 😆

@nicoddemus (Member)

Heh! 😆

@nicoddemus (Member)

After this passes do you think we can merge it?

@The-Compiler (Member Author)

The previous commit passed, and in 0f3ce42 I added [ci skip] since I just fixed some typos.

I still should investigate why test_file_system_model is hanging, but I'm not sure I'll get to that anytime soon, so I'd be okay with merging this, as it seems to work fine otherwise!

Commit 444c958: "setRootPath doesn't actually change the data represented by the model, i.e. the test will still end up iterating over the root filesystem. Due to symlinks (?) or other funny stuff this could take a long time or do other funny things, so let's get rid of it."
@The-Compiler (Member Author)

I took a quick look and noticed the filesystem model test actually runs over the root filesystem (not just tmpdir), for the reasons I explained in 444c958.

I ended up just removing it. After the checks pass, I think this can finally be merged!

@coveralls commented Jun 29, 2016

Coverage remained the same at 100.0% when pulling 444c958 on modeltest into a29570d on master.

@nicoddemus merged commit 97e6f0d into master on Jun 29, 2016
@nicoddemus deleted the modeltest branch on June 29, 2016 at 20:17
@nicoddemus (Member)

🎉 💥

@The-Compiler (Member Author)

Yay! ✨
