
SPIKE: Test framework for recommendations feature #154

Open
4 of 5 tasks
adonahue opened this issue Aug 9, 2019 · 5 comments

adonahue commented Aug 9, 2019

As a PM, I want to release a recommendations feature to test the concept of Riff feedback based on our analytics. Because this is a prominent user-facing feature that is meant to showcase Riff as an "intelligent" tool that guides people toward behaviors that will make them more successful, it is especially important that it not be (or look) broken to the user, which would undermine their confidence in the quality of Riff as a product.

Because we don't currently have a functioning test framework with which we can execute unit or integration tests, I would like to investigate the possibility of getting something in place very quickly, so that we have at least minimal test coverage for this feature before we deploy it for the NEXT course in September.

**Spike Acceptance Criteria**
A meeting with the team by 8/13 (or sooner) to report on the following:

  • This activity should be time-boxed to a day and a half.
  • Whether it's possible to get the existing Mattermost test framework up and running to execute new tests written for the recommendations feature, and an estimate of how long it will take to get a functioning framework.
  • If using the Mattermost framework isn't viable, whether there are other testing tools or frameworks we could use instead.
  • Whether there are any possibilities to test the overall function of the feature, for example: simulate the UX of being 4 weeks into the course, seeing the rec to connect with your capstone team, and verifying that the state of the recommendation changes from incomplete to complete once the recommended activity is done; or that by week 2, the rec to have your first Riff meeting no longer displays (see the sketch after this list). Note: it is not a requirement to view these state changes in the UI (although that is desirable), just to verify that the code executed as expected.
  • Tests should be executable on demand, so that when code (and tests) are updated the test suite can be re-run.
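
For concreteness, here is a rough sketch of what a test for the state-change scenario above might look like. This is purely illustrative and assumes a Jest-style setup like the Mattermost webapp's; `getRecommendations`, `completeRecommendation`, and the recommendation ids are hypothetical names, not an existing API.

```js
// Hypothetical sketch only: the module path, helper functions, ids, and
// 'state' values below are placeholders, not the real recommendations API.
import { getRecommendations, completeRecommendation } from '../recommendations';

describe('recommendation state changes', () => {
    it('marks the capstone-team rec complete once the activity is done', () => {
        const courseStart = new Date('2019-09-02');
        const fourWeeksIn = new Date('2019-09-30');

        // Four weeks in, the capstone-team rec should exist and be incomplete.
        let rec = getRecommendations(courseStart, fourWeeksIn)
            .find((r) => r.id === 'connect-with-capstone-team');
        expect(rec.state).toBe('incomplete');

        // Complete the recommended activity, then re-query and check the state.
        completeRecommendation(rec.id);
        rec = getRecommendations(courseStart, fourWeeksIn)
            .find((r) => r.id === 'connect-with-capstone-team');
        expect(rec.state).toBe('complete');
    });

    it('no longer shows the first-Riff-meeting rec by week 2', () => {
        const courseStart = new Date('2019-09-02');
        const weekTwo = new Date('2019-09-11');

        const ids = getRecommendations(courseStart, weekTwo).map((r) => r.id);
        expect(ids).not.toContain('have-first-riff-meeting');
    });
});
```
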
@juliariffgit juliariffgit added this to the Sprint 8 milestone Aug 12, 2019
@jaedoucette

Jordan and I did a brief spike on this and will attach a PR to this card.

We spent 30 minutes on the spike.

We disabled all existing tests and were able to add a new test module for Riff that builds under the existing target. We verified that we can create new unit test cases (we did not check integration testing) and that test cases can both pass and fail in expected ways.

New tests can easily be added as files under the riff_tests sub-directory.
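
For reference, a new file under `riff_tests` can be as small as the following Jest-style sketch (everything other than the directory name is illustrative):

```js
// e.g. riff_tests/sanity.test.js -- a minimal placeholder showing where Riff
// tests live and that pass/fail results are reported as expected.
describe('riff_tests sanity', () => {
    it('passes when the assertion holds', () => {
        expect(1 + 1).toBe(2);
    });

    // Remove .skip to see a deliberate failure reported by the runner.
    it.skip('fails when the assertion does not hold', () => {
        expect(true).toBe(false);
    });
});
```
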

Note: The existing test suite, although now disabled, is actually only failing on ~35 of 650 tests. Ideally, we would take a couple of days to address this, and then we'd have full test coverage of the code base rather than only partial coverage of the portion we create new tests for. Further, by writing separate test cases, we are generating additional technical debt that will grow the longer we go without fixing the original test suite. We will make a new card for fixing the 35 broken tests in the original suite and merging them with any Riff tests.

Overall, we are good to go on unit tests for the new feature set.

@jaedoucette

@adonahue

We have unit testing working now (pending merging of this PR). Unit testing allows us to easily test whether any single technical part of a feature is working correctly. For example, we can test that, if you give a certain date to the part of the code that generates recommendations, it produces the expected list.
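
As a purely hypothetical illustration of that kind of unit test (again assuming a Jest-style runner; the helper and the ids are placeholders):

```js
// Hypothetical names throughout; this only illustrates the date-in, list-out shape.
import { getRecommendations } from '../recommendations';

it('produces the expected recommendation list for week 1', () => {
    const courseStart = new Date('2019-09-02');
    const duringWeekOne = new Date('2019-09-04');

    const ids = getRecommendations(courseStart, duringWeekOne).map((r) => r.id);

    expect(ids).toEqual(['have-first-riff-meeting', 'complete-your-profile']);
});
```
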

What we cannot do right now is integration testing, where we mock up the entire UI programmatically and test that, if we simulate setting the clock to a certain date, the information displayed on a portion of the simulated screen matches the expected value.

We could set up mocks for integration testing (we have checked that the test suite supports this), but this would be a considerable undertaking. We estimate at least 2 more days, with a lot of potential for more.

We think unit testing is a solid standard. If we can build with functioning unit tests, that would actually be a big improvement in reliability over the existing code, and we don't think it is worth trying to get integration tests. Let us know if you are okay with this. We can take tomorrow to work on getting integration testing in place, but we are not optimistic.

@jaedoucette

Assigned code review to @mlippert

@adonahue

Thank you @jaedoucette and @jordanreedie - this is super helpful! I agree, unit tests seem reasonable and like a good step in the right direction.

@juliariffgit juliariffgit modified the milestones: Sprint 8, Sprint 9 Aug 26, 2019
@adonahue

No code to merge from this spike. Marking as accepted last sprint. @juliariffgit - please change that status to another version of accepted / closed if need be.
