Fix error_case test gen to check for 'score' property #2840
Conversation
error_case tests were being generated with code that always raised exceptions regardless of the user's code; this was disguised as 'passing' tests since they were expecting an exception anyway.
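As an illustration of the bug (a hypothetical, simplified sketch — not the actual generated file), an error-case test whose body raises unconditionally will "pass" even against a no-op solution, because the expected exception comes from the test itself rather than from the code under test:

```python
import unittest


class BowlingGame:
    """Hypothetical stand-in for a student's solution that never raises."""

    def roll(self, pins):
        pass

    def score(self):
        return 0


class ErrorCaseTest(unittest.TestCase):
    def test_error_case_passes_without_exercising_the_solution(self):
        game = BowlingGame()
        with self.assertRaises(Exception):
            # The bug: the generated body raised unconditionally, so the
            # expected exception appeared even though game.roll() and
            # game.score() were never called with invalid input.
            raise Exception("invalid roll")


suite = unittest.defaultTestLoader.loadTestsFromTestCase(ErrorCaseTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # prints True despite the no-op solution
```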
Hi & Welcome! 👋🏽 Thank you for contributing! This is an automated 🤖 comment.

✅️ Have You Checked...

🛠️ Maintainers
Please take note 📒 of the following sections/review items 👀 ✨

🌈 Acknowledgements and Reputation
💫 General Code Quality
- The branch was updated & rebased with any (recent) upstream changes.
- All prose was checked for spelling and grammar.
- Files are formatted via yapf (yapf config) & conform to our coding standards.
- Files pass flake8 with flake8 config & pylint with pylint config.
- Changed `example.py`/`exemplar.py` files still pass their associated test files.
- Changed test files still work with associated `example.py`/`exemplar.py` files.
- Check that tests fail properly, as well as succeed (e.g., make some tests fail on purpose to "test the tests" & failure messages).
- All files have proper EOL.
- If a Jinja2 template was modified/created, was the test file regenerated? Does the regenerated test file successfully test the exercise's `example.py` file?
- The branch passes all CI checks & `configlet-lint`.
🌿 Changes to Concept Exercises
- ❓ Are all required files still up-to-date & configured correctly for this change?
- ❓ Does `<exercise>/.meta/design.md` need to be updated with new implementation/design decisions?
- ❓ Do these changes require follow-on/supporting changes to related concept documents?
- Exercise `introduction.md`
  - Do all code examples compile, run, and return the shown output?
  - Are all the code examples formatted per the Python docs?
- Exercise `instructions.md`
- Exercise `hints.md`
- Check that exercise `design.md` was fulfilled or edited appropriately.
- Exercise `exemplar.py`
  - Only uses syntax previously introduced or explained.
  - Is correct and appropriate for the exercise and story.
- Exercise `<exercise_name>.py` (stub)
  - Includes appropriate docstrings and function names.
  - Includes `pass` for each function.
  - Includes an EOL at the end.
- Exercise `<exercise_name>_test.py`
  - Tests cover all (reasonable) inputs and scenarios.
  - At least one test for each task in the exercise.
  - If using subtests or fixtures, they're formatted correctly for the runner.
  - Class names are `<ExerciseName>Test`.
  - Test functions are `test_<test_name>`.
- Exercise `config.json` --> valid UUID4.
- Corresponding concept `introduction.md`
- Corresponding concept `about.md`
- Concept `config.json`
- All Markdown files: Prettier linting (for all markdown docs).
- All code files: PyLint linting (except for test files).
- All files with text: spell check & grammar review.

✨ Where applicable, check the following ✨ (as a reminder: Concept Exercise Anatomy)
🚀 Changes to Practice Exercises
- `.docs/instructions.md` (required)
  - Was this file updated and regenerated properly?
- `.docs/introduction.md` (optional)
- `.docs/introduction.append.md` (optional)
- `.docs/instructions.append.md` (optional)
  - Are any additional instructions needed/provided? (e.g. error handling or information on classes)
- `.docs/hints.md` (optional)
  - Was this file regenerated properly?
- `.meta/config.json` (required)
- `.meta/example.py` (required)
  - Does this pass all the current tests as written/generated?
- `.meta/design.md` (optional)
- `.meta/template.j2` (template for generating tests from canonical data)
  - Was a test file properly regenerated from this template?
- `.meta/tests.toml`
  - Are there additional test cases to include or exclude?
  - Are there any Python-specific test cases needed for this exercise?
- `<exercise-slug>_test.py`
  - Does this file need to be regenerated?
  - Does this file correctly test the `example.py` file?
  - Does this file correctly report test failures and messages?
- `<exercise-slug>.py` (required)
  - Does this stub have enough information to get the student started coding a valid solution?

Is the exercise in line with Practice Exercise Anatomy?
🐣 Brand-New Concept Exercises
- Exercise `introduction.md`
  - Do all code examples compile, run, and return the shown output?
  - Are all the code examples formatted per the Python docs?
- Exercise `instructions.md`
- Exercise `hints.md`
- Check that exercise `design.md` was fulfilled or edited appropriately.
- Exercise `exemplar.py`
  - Only uses syntax previously introduced or explained.
  - Is correct and appropriate for the exercise and story.
- Exercise `<exercise_name>.py` (stub)
  - Includes appropriate docstrings and function names.
  - Includes `pass` for each function.
  - Includes an EOL at the end.
- Exercise `<exercise_name>_test.py`
  - Tests cover all (reasonable) inputs and scenarios.
  - At least one test for each task in the exercise.
  - If using subtests or fixtures, they're formatted correctly for the runner.
  - Class names are `<ExerciseName>Test`.
  - Test functions are `test_<test_name>`.
- Exercise `config.json` --> valid UUID4.
- Corresponding concept `introduction.md`
- Corresponding concept `about.md`
- Concept `config.json`
- All Markdown files: Prettier linting (for all markdown docs).
- All code files: Flake8 & PyLint linting.
- All code examples: proper formatting and fencing. Verify they run in the REPL.
- All files with text: spell check & grammar review.

Is the exercise in line with Concept Exercise Anatomy?
Our 💖 for all your review efforts! 🌟 🦄
```jinja
with self.assertRaisesWithMessage(Exception):
{% if property == 'score' -%}
    game.score()
{% else -%}
```
Would it make sense for this to explicitly check the property value?
@IsaacG - The CI for this has not run. As previously asked, please run the CI and make sure it passes before reviewing; if you cannot run the CI, please do not review.
I do believe I cannot run the CI.
Please refrain from reviewing PRs that require a manually triggered CI run then. Thanks.
Regenerated test case file from new Jinja2 template.
@rneilsen - Thanks for submitting this. 😄 I'm bumping the "work" label to large because (in addition to the troubleshooting) you needed to suss out how to alter the Jinja2 template. Nice work, you! 🌟 @IsaacG - to answer your question, there are only two properties that create errors for this exercise:
Hi @rneilsen 👋🏽 Apologies for being late in realizing this. Since you've updated the test generation and tests for this exercise, you should add your name to the exercise contributors list here: https://github.com/exercism/python/blob/main/exercises/practice/bowling/.meta/config.json. I would do it for you, but you should also get credit for submitting the PR, so. 😄 Make sure the name you add is your GitHub username, since that's the name that is used to award reputation on exercism.org. Thanks again for submitting this PR, and for your attention to detail in the
This change means the generator now checks the 'property' field from the canonical-data.json file and correctly calls the `score` method instead of `roll` for error tests where that is specified.

Edit to add: issue link here