Improve pytest completions and goto def #3727

Closed
rchiodo opened this issue Dec 6, 2022 · 37 comments

Labels: enhancement (New feature or request), fixed in next version (main) (A fix has been implemented and will appear in an upcoming version)


rchiodo commented Dec 6, 2022

Mini-spec:

  • Go to def on a test parameter finds the fixture
  • Fixture types are known (and help with writing a test)
  • All known fixtures are listed in the completions for parameters to a function (if inside a test function)
  • Parameters of a test that aren't found in the list of fixtures should show an error/warning

Example - Hover over a parameter on a test function or a fixture shows the type of the parameter:

[gif: pytest1]

Example - Go to def works for parameters for tests and fixtures:

[gif: pytest2]

Example - completions are listed in the parameters for a test function or a fixture

[gif: pytest3]

Example - error for parameters that don't match an existing fixture:
[gif: pytest4]
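
To make the scenarios concrete, here is a minimal sketch (fixture name and return type are hypothetical) of the kind of code the features above would apply to:

# conftest.py
import pytest

@pytest.fixture
def my_fixture() -> list[str]:
    return ["a", "b"]

# test_example.py
def test_lengths(my_fixture):  # hover: list[str]; goto def: jumps to the fixture in conftest.py
    assert len(my_fixture) == 2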


rchiodo commented Dec 7, 2022

Pytest features should support goto definition (as described here: #2576).
Pytest features should have their types computable so they can be used for completions too.

@gramster had an idea to generate virtual type stubs in memory to 'trick' pyright into thinking the feature has a type

Another example:
microsoft/pyright#1702

@judej judej added the enhancement New feature or request label Dec 7, 2022

rchiodo commented Dec 7, 2022

Pytest has 'builtin' fixtures. Type completions on these (or shipped stubs) would be nice:

https://docs.pytest.org/en/7.2.x/builtin.html

cache -- .venv\lib\site-packages\_pytest\cacheprovider.py:510
Return a cache object that can persist state between testing sessions.

capsys -- .venv\lib\site-packages\_pytest\capture.py:905
Enable text capturing of writes to sys.stdout and sys.stderr.

capsysbinary -- .venv\lib\site-packages\_pytest\capture.py:933
Enable bytes capturing of writes to sys.stdout and sys.stderr.

capfd -- .venv\lib\site-packages\_pytest\capture.py:961
Enable text capturing of writes to file descriptors 1 and 2.

capfdbinary -- .venv\lib\site-packages\_pytest\capture.py:989
Enable bytes capturing of writes to file descriptors 1 and 2.

doctest_namespace [session scope] -- .venv\lib\site-packages\_pytest\doctest.py:738
Fixture that returns a dict that will be injected into the
namespace of doctests.

pytestconfig [session scope] -- .venv\lib\site-packages\_pytest\fixtures.py:1351
Session-scoped fixture that returns the session's pytest.Config
object.

record_property -- .venv\lib\site-packages\_pytest\junitxml.py:282
Add extra properties to the calling test.

record_xml_attribute -- .venv\lib\site-packages\_pytest\junitxml.py:305
Add extra xml attributes to the tag for the calling test.

record_testsuite_property [session scope] -- .venv\lib\site-packages\_pytest\junitxml.py:343
Record a new <property> tag as child of the root <testsuite>.

tmpdir_factory [session scope] -- .venv\lib\site-packages\_pytest\legacypath.py:302
Return a pytest.TempdirFactory instance for the test session.

tmpdir -- .venv\lib\site-packages\_pytest\legacypath.py:309
Return a temporary directory path object which is unique to each test
function invocation, created as a sub directory of the base temporary
directory.

caplog -- .venv\lib\site-packages\_pytest\logging.py:491
Access and control log capturing.

monkeypatch -- .venv\lib\site-packages\_pytest\monkeypatch.py:29
A convenient fixture for monkey-patching.

recwarn -- .venv\lib\site-packages\_pytest\recwarn.py:30
Return a WarningsRecorder instance that records all warnings emitted by test functions.

tmp_path_factory [session scope] -- .venv\lib\site-packages\_pytest\tmpdir.py:188
Return a pytest.TempPathFactory instance for the test session.

tmp_path -- .venv\lib\site-packages\_pytest\tmpdir.py:203
Return a temporary directory path object which is unique to each test
function invocation, created as a sub directory of the base temporary
directory.
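
As a minimal usage sketch, one of these builtin fixtures in action (no import or decorator needed; pytest injects it by parameter name):

def test_prints_hello(capsys):
    print("hello")
    captured = capsys.readouterr()  # capsys captures writes to sys.stdout/sys.stderr
    assert captured.out == "hello\n"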


rchiodo commented Dec 7, 2022

pytest.raises checks exceptions. The message regex might be computable?

https://docs.pytest.org/en/7.2.x/reference/reference.html#pytest.raises
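
For reference, a minimal sketch of the match regex from those docs:

import pytest

def test_zero_division():
    # 'match' is applied as a regex search against the string representation of the exception
    with pytest.raises(ZeroDivisionError, match="division by zero"):
        1 / 0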


rchiodo commented Dec 7, 2022

conftest.py controls settings.

  • Have completions for this file based on pytest allowed options
  • Have quick action to generate it?


rchiodo commented Dec 7, 2022

Fixtures are basically auto-matched parameter names based on the name of a fixture function. At test time the value passed in is the result (or yield) of the fixture function call.
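
A minimal sketch of the two forms, return value vs. yield value (the acquire() setup helper is hypothetical):

import pytest

@pytest.fixture
def numbers() -> list[int]:
    return [1, 2, 3]    # the test receives the return value

@pytest.fixture
def resource():
    handle = acquire()  # hypothetical setup
    yield handle        # the test receives the yielded value
    handle.close()      # teardown runs after the test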

I believe this is where Graham's idea of generating typestubs might work. Generate a typestub next to the test file with the types filled in based on the fixture return value.

More about fixtures:

  • Recursive (fixtures can use other fixtures; see the sketch after this list)
  • They have to be in the same file or a 'conftest.py' to be picked up elsewhere.
  • They can have a dependency graph. This means they must be used in a certain order in the test?
  • There are built-in fixtures outside the list I mentioned above, one of them being the request fixture, which has type FixtureRequest
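
A minimal sketch of a recursive fixture plus the builtin request fixture mentioned above:

import pytest

@pytest.fixture
def base() -> int:
    return 2

@pytest.fixture
def doubled(base) -> int:   # fixtures can request other fixtures by parameter name
    return base * 2

def test_request(request):  # builtin fixture, no decorator involved; its type is FixtureRequest
    assert request.node is not None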


rchiodo commented Dec 8, 2022

Markers allow for:

  • Labeling tests (not quite so useful for type checking)
  • Using fixtures via a marker instead of an argument (might want to autocomplete these)
  • Parametrizing a function. Parameters are passed as lists of tuples. Might typecheck them? Seems rather difficult. (See the sketch after this list.)
  • skip, skipif, xfail - not useful for us
  • Custom markers, which allow generating more values to be sent into the test
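
A minimal sketch of the usefixtures and parametrize markers (fixture name hypothetical):

import pytest

@pytest.mark.usefixtures("my_fixture")  # fixture applied without appearing as a parameter
@pytest.mark.parametrize("value, expected", [(1, "1"), (2, "2")])
def test_str(value, expected):
    assert str(value) == expected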


rchiodo commented Dec 15, 2022

  • Switch algorithm to look for test file names first
  • Check pytest config in Visual Studio


rchiodo commented Dec 15, 2022

TypeEval class changes

  • Performance tests in Power BI
  • Run pyright CLI against
    • numpy
    • pandas
    • scikit-learn


rchiodo commented Dec 15, 2022

Npm Run tests results (with Node 16.15.1)

  • TypeEvaluator as a class: 196s, 164s, 164s
  • TypeEvaluator as a set of functions: 188s, 162s, 163s


rchiodo commented Dec 15, 2022

Principles of Pyright and Pylance:

  1. Pyright and pylance should always evaluate types consistently. This is important for users who want the same results in their editor and in CI.
  2. Pyright is a standards-based type checker that does not have any specialized knowledge of third-party libraries and non-standard behaviors they introduce.

Alternative ideas without changing the TypeEvaluator:

Idea 1: Second language server (started from a different extension)

  • Second language server that provides:
    • Goto def for pytest fixtures
    • Hover for pytest fixtures (computes type on its own)
    • Completions for pytest functions (list fixtures)

Pros:

  • Wouldn't violate principles

Cons:

  • Would probably be a copy of pyright just to do all the parsing and types. Fork? That would be ugly to maintain.
    • Maybe we could add the extensibility just for this separate language server
  • Hover would have two types - Any from pylance, and fixture type from new server

Idea 2: Separate extension that injects middleware into the Python core extension?

  • Overrides necessary functionality to customize it for pytest?
  • Still needs to compute type somehow, but maybe could use pyright and ask it for the appropriate type. Runs pyright as a server and uses it to answer questions about different parts.

Pros:

  • Wouldn't violate principles.
  • Users would install it only for pytest

Cons:

  • Separate copy of all of the types
  • Maybe instead send messages directly to pylance to compute the types etc. Middleware would only need to do parsing

Idea 3: Outside command that adds types for features as a quick action

  • Completions auto add types for fixtures
  • Quick action to add types for fixtures
  • All other features aren't affected and can be custom implemented already.

Pros:

  • Types from pyright aren't changed
  • Principles not violated

Cons:

  • User has to click a button to generate the types? Is this a big deal compared to it just 'knowing' them?

Idea 4: Pyright somehow knows about pytest fixtures

  • Generate typestubs for test functions on the fly? Does this violate principle 1? No, it won't if the typestubs actually stick around; then Pyright would behave the same way.
  • Might require a new way to pick up typestubs (like in the user temp folder or something)

Pros:

  • Pyright and Pylance use the same types

Cons:

  • Potentially new location for stubs
  • Race condition between updating the stub and other features


rchiodo commented Dec 16, 2022

I think the pytest weirdness can be summed up as:

  • Parameter types come from function return values that match the parameter name.
  • These types are applied to only certain functions (having a decorator or matching a pattern)

If you were going to codify that in such a way that a static type checker could figure it out, how would you do it?

Maybe something like an 'implicit_parameter_types' decorator?

def implicit_parameter_types(
    function_has_name_pattern: str) -> Callable: ...

def implicit_parameter_types(
    function_has_decorator_type: Type) -> Callable: ...

Then the hard part would be how to apply this to pytest. For fixtures, we could probably just have some mapping between pytest.fixture and one or more of these, but for 'test_' functions, that doesn't work. Users don't do anything except name functions 'test_'.

Not sure how to apply some rule that a type checker can follow to the 'test_' functions. Is there any precedent for general pattern matching and implying something from it?

erictraut commented:

Pytest fixtures are typically not explicitly imported by the test modules that consume them. (This is a terrible design, IMO. I find pytest tests to be incredibly difficult to understand because of this. It's almost impossible to look at the code and determine what's going on in the test.) Because fixtures are not explicitly referenced in the modules in which they're used, the normal dependency-tracking mechanisms in pyright do not apply. What are your thoughts on how to handle this aspect of the problem? If someone modifies a fixture, how will that change be propagated?

Are you assuming that each project will have only one pytest configuration? What about mono-repos where there are multiple configurations? Have you thought about how to map specific pytest configurations and fixture sets to specific subtrees in the code base?


rchiodo commented Dec 16, 2022

Maybe I missed something, but I thought pytest fixtures are always either in the same file as the test or in a 'conftest.py'.

The 'conftest.py' file can be anywhere in the workspace AFAIK, so the plan is to just look up the directory tree to find all the conftest.py files. They're added to the 'tracked' file list in the server.

I need to reread the docs, but maybe there's other ways to define fixtures too.


rchiodo commented Dec 16, 2022

Oh yeah, there's this too: plugins.

erictraut commented:

Ah, so fixtures are discovered based on searching for a hard-coded file name (conftest.py) within the current directory structure? I was under the impression that fixture modules (whatever name they were given) were referenced by a pytest configuration file of some sort. My mental model was probably wrong here.


rchiodo commented Dec 16, 2022

> Ah, so fixtures are discovered based on searching for a hard-coded file name (conftest.py) within the current directory structure? I was under the impression that fixture modules (whatever name they were given) were referenced by a pytest configuration file of some sort. My mental model was probably wrong here.

AFAIK, yes. There might be a way with some sort of outside configuration that I don't know about because @bschnurr mentioned something special that Visual Studio does for pytest. Haven't looked into it yet.

erictraut commented:

You mentioned that there's a way to invoke a test with a fixture without including a decorator — that pattern matching is also supported. Where is that pattern defined? Is it hard-coded, or is it configurable in some manner?


rchiodo commented Dec 16, 2022

Sorry I meant this:

def test_foo(request):
    pass

The fixture has no decorator here because it's a builtin one. request is not referenced anywhere in the user's source other than as a parameter name.

Conftest.py fixtures are similar too:

# conftest.py
@pytest.fixture
def my_fixture():
    return "4"

# test.py
def test_stuff(my_fixture):
    pass

my_fixture isn't imported or anything. It's just implied.


rchiodo commented Dec 16, 2022

Maybe you meant the pattern I mentioned here?

def implicit_parameter_types(
    function_has_name_pattern: str) -> Callable: ...

Pytest thinks anything starting with the string test_ is a test function. It will then use the parameter names to match all known fixtures.

erictraut commented:

Is test_ hard-coded, or is it configurable?

Obviously, test_ has this special meaning only if you're using pytest. Do you have thoughts on how to distinguish between the two cases — one where it should be interpreted as a test with a pytest fixture and one where it doesn't?

Is it common to rely on test_ without a decorator, or is this the exception? I'm wondering if the solution can rely on the presence of a @pytest.fixture decorator? Could we tell users "you have to use the decorator if you want type support in pylance"?

I'm trying to get a sense for how much of pytest's internal behaviors need to be assumed here. The problem gets much easier (and the solution less fragile) if we don't need to encode these behaviors.


rchiodo commented Dec 16, 2022

> Is test_ hard-coded, or is it configurable?

test_ is hard-coded AFAICT. See here.

> Obviously, test_ has this special meaning only if you're using pytest. Do you have thoughts on how to distinguish between the two cases — one where it should be interpreted as a test with a pytest fixture and one where it doesn't?

AFAICT, a test with a fixture is any test function that takes a parameter other than self.

> Is it common to rely on test_ without a decorator, or is this the exception? I'm wondering if the solution can rely on the presence of a @pytest.fixture decorator? Could we tell users "you have to use the decorator if you want type support in pylance"?

Test functions don't have any decorators. Fixtures do. If you want to create a new fixture, you add a decorator. That's the hard part, I think, in trying to distinguish fixtures as parameters: there's nothing other than the pattern of test_ on the front and the name of the parameter.

I don't know what we'd tell users other than figure out the types yourself if we don't want pyright to compute them.

I did have an idea above (Idea 3: Outside command that adds types for features as a quick action) about having a separate action that does the computation for them and then adds the type annotations for them. That way pyright wouldn't have to know about the types, but that seems crummy to me.


rchiodo commented Dec 16, 2022

Looks like 'test_' isn't hard-coded either. You can have a pytest configuration file too:

https://docs.pytest.org/en/7.2.x/example/pythoncollection.html#changing-naming-conventions
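
For example, per that page, collection naming conventions can be changed in a config file:

# pytest.ini
[pytest]
python_files = check_*.py
python_classes = Check
python_functions = *_check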


erictraut commented Dec 16, 2022

I think we need a more complete functional spec. It looks like you're discovering requirements on the fly, which makes it difficult to understand the scope and bound of the problem that you're trying to solve. Which pytest functionality do you think is important to support (and which is not), and how does that translate into pyright/pylance features?


rchiodo commented Dec 16, 2022

I don't think we're attempting to fully define everything we want to do or all the scenarios it will cover. The goal is to just get an MVP that we can put in front of customers.

What do you think is missing?


rchiodo commented Dec 16, 2022

Perhaps I should add more examples showing what the behavior could be? Like maybe it's hard to imagine what we'd want to have happen for test functions and fixtures.

I'll add some gifs to the top outlining what it could look like.

erictraut commented:

I'm trying to get a sense for which pytest behaviors need to be understood and replicated in pyright or pylance logic to achieve the desired outcome. Without that understanding, it's difficult to evaluate our technical options.

I'm asking basic questions about how fixtures are applied and evaluated, which behaviors are hard-coded vs configurable, etc., and it sounds like you haven't explored those questions. In my experience, a functional spec is a good way to establish the contours of a problem and make sure that everyone has that shared context.

For example, looking at the pytest test cases in my team's code base, we often use @pytest.mark.usefixtures and/or @pytest.mark.parametrize decorators. These presumably override which fixtures are applied and would therefore need to be understood by any mechanism that supplies or infers parameter types for such functions.

My understanding of pytest is very incomplete. It sounds like yours is too. We need to obtain a much more complete understanding. Otherwise we risk pursuing a technical approach that falls apart when we discover some new requirement or behavior that needs to be supported.


rchiodo commented Dec 16, 2022

A word doc would probably be a better spot to put all of this down. There I could state a bunch of things that I didn't think were going to affect the MVP.

I didn't think pytest.mark.usefixtures would affect the features proposed here, but now that I think about it more, pytest.mark.parametrize might.

It'd probably be easier to comment on these things there instead of just having my random thoughts on what I thought the MVP should be. And it could certainly be more exhaustive than what I'm listing here.


jwilges commented Jan 6, 2023

Re:

> I believe this is where Graham's idea of generating typestubs might work. Generate a typestub next to the test file with the types filled in based on the fixture return value.

If I'm understanding this suggestion correctly, Graham's suggesting generating *.pyi files and leaning on Pylance picking up completions from the generated PEP 561 stub files? That sounds clever at first glance (simple/elegant), but for navigation I don't think anyone would want code navigation to land them in the type stub file. Just an off-the-hip thought while reading over your notes so far.

Re:

> A word doc would probably be a better spot to put all of this down.

If design thoughts steer out of this public thread and into a Word document somewhere, can we follow along somehow?


rchiodo commented Jan 6, 2023

> If I'm understanding this suggestion correctly, Graham's suggesting generating *.pyi files and leaning on Pylance picking up completions from the generated PEP 561 stub files? That sounds clever at first glance (simple/elegant), but for navigation I don't think anyone would want code navigation to land them in the type stub file. Just an off-the-hip thought while reading over your notes so far.

That was the idea, but the stubs would be in a 'virtual file system'. Goto def would likely not use those stubs and would just find the real .py files (as that's what goto def does today for functions that are in both a .pyi and a .py file).

> If design thoughts steer out of this public thread and into a Word document somewhere, can we follow along somehow?

The word doc (at the moment) is more about internal architecture required to implement the ideas outlined here.

The intent of this issue is to be a first stab at the multiple discussion items, so if the MVP changes, we'll update it here.


jwilges commented Jan 6, 2023

> That was the idea, but the stubs would be in a 'virtual file system'. Goto def would likely not use those stubs and would just find the real .py files

If that's all viable, it sounds like it could really simplify the MVP here. It may also provide unique avenues to separate the "pytest"-specific concerns from this project, if warranted.

Is this "generate stubs on the fly" approach used anywhere else in "Pylance/Pyright land" so to speak?

Where are the other pytest-specific configuration points in VS Code (e.g. pytest.command, python.testing.pytest*) living today? In Pylance or vscode-python by chance? VS Code at least has a few configuration points today that make pytest feel like a bit of a first class citizen in the ecosystem. That's what initially made me think this line of features might not be terribly out of place as an integration in Microsoft-official extension(s).

(Forgive my ignorance on the above two questions; although I've been enjoying using these projects for a while, I haven't tried to dig into their internals yet, but I figured you might know offhand.)

Pylance principle #2 in your pros-and-cons comment seems reasonable: if baking pytest-specific behaviors too closely into this project can be avoided, that certainly seems desirable.

The pytest runner supports automatically loading plugins if they are installed (via some scanning voodoo I haven't looked up in a while) or via enabling them explicitly with e.g. pytest_plugins. Perhaps it may be reasonable to have a pytest plugin generate fixture stubs in a compatible location/format where VS Code can then shrug off that concern?
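
For reference, a minimal sketch of the explicit form (plugin name hypothetical):

# conftest.py
pytest_plugins = ["my_fixture_plugin"]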


rchiodo commented Jan 6, 2023

Is this "generate stubs on the fly" approach used anywhere else in "Pylance/Pyright land" so to speak?

Not at the moment, no. I didn't end up using it for the prototype either as it was more involved than just injecting code into pyright.

You'd have to generate one for every test function, as the user types them.

So for example, you have this test func:

def test_foo(my_fixture):
    assert len(my_fixture) == 2

The stub for that might be something like:

def test_foo(my_fixture: list[str]): pass

> Where are the other pytest-specific configuration points in VS Code (e.g. pytest.command, python.testing.pytest*) living today? In Pylance or vscode-python by chance? VS Code at least has a few configuration points today that make pytest feel like a bit of a first class citizen in the ecosystem. That's what initially made me think this line of features might not be terribly out of place as an integration in Microsoft-official extension(s).

Some are in the vscode-python extension. Some are in 3rd party extensions.

> The pytest runner supports automatically loading plugins if they are installed (via some scanning voodoo I haven't looked up in a while) or via enabling them explicitly with e.g. pytest_plugins. Perhaps it may be reasonable to have a pytest plugin generate fixture stubs in a compatible location/format where VS Code can then shrug off that concern?

Interesting idea that we should keep in mind when designing this. If plugins are used a lot, how do we handle them? I don't know if a plugin would be able to tell us the types of fixtures or not. That's what we'd need in order to generate the type stubs. I don't think the plugin could generate the type stubs because it'd be conflicting with the stubs we're generating.


rchiodo commented Jan 6, 2023

Did some more thinking about the type stub generation. There's a bit of a problem with the idea.

Generated type stubs are like a cache of what type information pyright might generate. Pyright does not currently cache anything in between edits. It recomputes everything on every edit. This is because it's famously hard to update the 'right' parts of the cache. So instead it regenerates as much as necessary after every edit. JIT type evaluation is a core tenet of pyright and why it's so damn fast compared to what we had before.

Generating type stubs for test functions would essentially have to be done on every edit. (Same cache hit detection problem).

That might be way too slow. And it would have to be done before pyright was asked to compute any types (otherwise pyright might compute types at some point that those stubs would have affected). So it would have to block all other operations.

That makes it seem to me like this is not the way to do it.

Granted it might be worth a try just to see how slow it is in practice. All of the types for the predefined fixtures could remain cached (as we can probably assume the user isn't going to update pytest, or we can detect when they do). But types for all user fixtures would have to be transitively computed on every edit.


jwilges commented Jan 6, 2023

> The stub for that might be something like:

> def test_foo(my_fixture: list[str]): pass

Agreed, that looks right. I experimented a bit with it after I wrote my last post, and it didn't quite feel as awesome as I thought it would. It's not that we can't inspect, ascertain the types, and generate the stubs, but, like you mention, I think there are some devils in the details, including:

  1. if type stubs are generated for fixtures beside each test (or in one or more stubs directories), stubs contributed by a plugin as in the example I gave wouldn't be extensible by anything on the VS Code side, one stub will always have priority
  2. if the idea was to generate e.g. test_foo.pyi beside an actively edited test_foo.py, current hinting mechanics in VS Code don't appear to apply the stub's definitions over top of the active file's own definitions
  3. whichever way types are eventually hinted for fixtures, I think it would be helpful to ensure there's some "fixture" type applied to these, e.g. in your example my_fixture should probably have a way of being hinted as Fixture[list[str]] so to speak (a bit in the spirit of Callable)

For that second point, what I mean is if you are actively editing test_foo.py with code like in your example, the IntelliSense doesn't show the test_foo.pyi hints, it shows my_fixture: Any despite the stub's hint. However, if you were to put the fixture in its own module, then reference it from another file, yes, it would pick up the stub's hint. That's an unfortunate wrinkle though I'm sure it makes complete sense.

Edit:
And following on to your post from today:

> That makes it seem to me like this is not the way to do it.

Agreed, the more I looked at it, the less I liked it. And I wouldn't be inclined to sacrifice speed unless it was marginal and truly reduced complexity for maintenance of the feature set. I don't think you're at all off base on your reply above.

Also, thanks both to you and @brettcannon for engaging with the community, I think some combination of the continual skilled active development/maintenance coupled with public/open evolution of this editor and its supporting ecosystem (Pylance/etc.) is what makes it a cut above most.

Briefly circling back to the extension/injection approach vs. the stub ideas, I may revisit it and tinker a bit if I can get some time to think on it more. I still need to read up on your earlier suggestion about hacking together an implementation off Pyright. Having this group of features will be awesome in day-to-day work. I totally appreciate @erictraut's cautionary sentiments that it may be hard to scope and get it right, but I do think there's got to be one or more reasonable ways to get it off the ground. It saddens me that PyCharm leads the pack on this feature set at the moment.

erictraut commented:

Yeah, I don't see how type stubs help here — for the reasons you mention above (especially point 2).

To get intellisense for parameters in any other function, we require that developers provide type annotations for input parameters. Increasingly, Python developers are adopting this habit because they see the benefits not only for completion suggestions but also for static type checking. Pytest unit tests are simply functions, so I would think that we'd want to follow the same pattern here and require developers to add inlined type annotations for parameters.

The big difference is that we have some additional type information about the arguments that are likely to be passed to each parameter (by using type inference on the return type of the fixture or parameterization marker). That means we could offer a code action to insert the inferred type as a type annotation for the parameter, saving the developer the extra work.
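
A minimal before/after sketch of what such a code action might produce (fixture name and inferred type are hypothetical):

# before the code action
def test_foo(my_fixture):
    ...

# after, assuming the fixture's inferred return type is list[str]
def test_foo(my_fixture: list[str]):
    ...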

This approach would maintain consistency of experience (since test functions would be like every other function), and it wouldn't violate any of the other core principles that we established with pylance and pyright. In particular, we want parity between diagnostics produced in pylance and in pyright (assuming the two are configured the same). This is critical for developers who want to automate testing & type checking, for example through the use of CI scripts.


jwilges commented Jan 7, 2023

> Pytest unit tests are simply functions, so [...] we'd want to follow the same pattern here and require developers to add inlined type annotations for parameters.

The pytest runner is performing a form of dependency injection: at runtime it calls fixture functions whose names match the test function's argument names and passes the fixture return values into the test function. If the editor were to simply offer auto-completion in the form of injecting a once-off type annotation for these arguments, the inserted type annotations could easily become out of date when someone updates an associated fixture's return value type hints. That's why there was a discussion earlier about regenerating the type hints on save vs. statically injecting them into the test module source.
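
A minimal sketch of that staleness risk (fixture and types hypothetical):

import pytest

@pytest.fixture
def my_fixture() -> dict[str, int]:   # changed later; it used to return list[str]
    return {"a": 1}

def test_foo(my_fixture: list[str]):  # the once-off inserted annotation is now out of date
    ...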

> That means we could offer a code action to insert the inferred type as a type annotation for the parameter, saving the developer the extra work.

In my inline response above I'm assuming by "code action to insert" you mean as I've described above, which has the aforementioned pitfall(s). If you mean to insert the inferred type dynamically as part of IntelliSense despite the test function not specifying a type hint, disregard my above concern. Can you clarify your idea if I misinterpreted it?


gramster commented Jan 9, 2023

Linking some past related issues in case we make changes that could allow us to revisit them:

#356
#684
#1059
#1750
#1824

Discussion: #2576

We've also had this issue opened about 5 times, but it's different from fixture type inference and involves code flow, so it's not related to #3727. Looks like it was addressed ultimately anyway (so we could potentially go back to some of the older versions and change the resolution).

@judej judej removed the needs spec label Jan 17, 2023
@rchiodo rchiodo added the fixed in next version (main) A fix has been implemented and will appear in an upcoming version label Jan 31, 2023

bschnurr commented Feb 2, 2023

This issue has been fixed in prerelease version 2023.2.11, which we've just released. You can find the changelog here: CHANGELOG.md
