
Continuing examples #13

Open · TiemenSch opened this issue Jul 6, 2023 · 4 comments

Comments

@TiemenSch

Would you consider supporting "continuing" examples that feature text in between code fences?

For example:

foo = "bar"

We set foo to "bar" to indicate bar. We can then print "Quux!"

if foo == "bar":
    print("Quux!")
#> "Quux!"

Currently, I'm trying some sort of hybrid setup with pytest-markdown-docs that supports this, but it more or less has to run every example twice. Also, their ```python continuation syntax doesn't quite render right in MkDocs.
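For reference, the continuation style I mean looks roughly like this (from memory, so the exact syntax may differ):

````markdown
```python
foo = "bar"
```

Some text in between.

```python continuation
if foo == "bar":
    print("Quux!")
```
````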

@TiemenSch (Author)

This might make the formatters and linters quite sad, though.

@samuelcolvin (Member)

Happy to accept a change to fix this if it's not too complicated. How would you implement a fix?

@rmorshea commented Oct 8, 2024

The first thing you'd probably need is to add the ability for EvalExample to preserve execution state each time run() is called. Once that's done, a bunch of possibilities open up. The first one is executing all the examples in the same file together.
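To make the state-preservation idea concrete, here's a rough sketch of the behaviour being described, using plain exec with a shared namespace rather than the actual pytest_examples API:

```python
# Illustration only, not the pytest_examples API: "preserving execution state"
# means successive example snippets execute against one shared namespace, so
# names defined by an earlier snippet stay visible to later ones.
shared_namespace: dict = {}

snippets = [
    'foo = "bar"',
    'if foo == "bar":\n    print("Quux!")',
]

for source in snippets:
    exec(compile(source, "<example>", "exec"), shared_namespace)
```

With that in place, grouping by file looks something like this: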

```python
import itertools

import pytest
from pytest_examples import CodeExample
from pytest_examples import EvalExample
from pytest_examples import find_examples

GROUPED_EXAMPLES = []
for _, group in itertools.groupby(find_examples("path/to/examples"), lambda ex: ex.path):
    GROUPED_EXAMPLES.append(list(group))


@pytest.mark.parametrize("examples", GROUPED_EXAMPLES, ids=str)
def test_grouped_examples(examples: list[CodeExample], eval_example: EvalExample):
    for ex in examples:
        eval_example.run(ex)
```

You can also get clever with CodeExample.prefix_settings() and start grouping examples:

````markdown
```python { test_group="1" }
x = 1
```

Some text

```python { test_group="1" }
assert x == 1
```
````

Where you could then gather only the specific examples that need to be run together:

```python
import itertools

import pytest
from pytest_examples import CodeExample
from pytest_examples import EvalExample
from pytest_examples import find_examples

GROUPED_EXAMPLES = []
UNGROUPED_EXAMPLES = []
# Note: itertools.groupby only merges consecutive items, so examples sharing a
# test_group need to be adjacent in the order find_examples yields them.
for group_id, group in itertools.groupby(
    find_examples("path/to/examples"),
    lambda ex: ex.prefix_settings().get("test_group"),
):
    if group_id:
        GROUPED_EXAMPLES.append(list(group))
    else:
        UNGROUPED_EXAMPLES.extend(group)


@pytest.mark.parametrize("examples", GROUPED_EXAMPLES, ids=str)
def test_grouped_examples(examples: list[CodeExample], eval_example: EvalExample):
    for ex in examples:
        eval_example.run(ex)


@pytest.mark.parametrize("example", UNGROUPED_EXAMPLES, ids=str)
def test_ungrouped_examples(example: CodeExample, eval_example: EvalExample):
    eval_example.run(example)
```

@rmorshea

For reference, I'm already using this strategy to skip testing certain docstrings by marking them with { test="false" }.
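A rough sketch of how that filtering can be wired up (the path is illustrative; it just reuses prefix_settings() from the snippets above):

```python
import pytest
from pytest_examples import CodeExample
from pytest_examples import EvalExample
from pytest_examples import find_examples

# Keep only the examples whose fence prefix does not opt out with test="false".
EXAMPLES = [
    ex
    for ex in find_examples("path/to/examples")
    if ex.prefix_settings().get("test") != "false"
]


@pytest.mark.parametrize("example", EXAMPLES, ids=str)
def test_docstring_examples(example: CodeExample, eval_example: EvalExample):
    eval_example.run(example)
```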
