
TypeVar to represent a Callable's arguments #3028

Closed
JelleZijlstra opened this issue Mar 19, 2017 · 12 comments
@JelleZijlstra
Member
This is an idea for a new type system feature that can help more accurately type some decorators and wrapper functions, like @async() from https://github.com/quora/asynq.

I'd like to see something like this work:

from mypy_extensions import ArgumentsVar
from typing import Awaitable, Callable, TypeVar

_ArgsT = ArgumentsVar('_ArgsT')
_RetT = TypeVar('_RetT')

def make_awaitable(fn: Callable[_ArgsT, _RetT]) -> Callable[_ArgsT, Awaitable[_RetT]]:
    async def wrapper(*args: _ArgsT, **kwargs: _ArgsT) -> _RetT:
        return fn(*args, **kwargs)
    return wrapper

@make_awaitable
def f(x: int, y: str) -> int:
    return 3

reveal_type(f(1, 'x'))  # Awaitable[int]
f(1, 1)  # error, second argument must be str
f(1, z=3)  # error, z is not a valid kwarg

Having a function's args and kwargs annotated with an ArgumentsVar would mean that it takes the arguments that the ArgumentsVar resolves to. As an extension, we could also make something like def f(x: int, *args: _ArgsT, **kwargs: _ArgsT) -> _T: ... work to indicate that a function takes an argument called x plus the arguments specified in _ArgsT.

This could also improve some types in the standard library. For example, the annotations for functools.singledispatch currently don't check function arguments. With ArgumentsVar, it could be typed as follows:

_T = TypeVar('_T')
_ArgsT = ArgumentsVar('_ArgsT')

class _SingleDispatchCallable(Generic[_ArgsT, _T]):
    registry = ...  # type: Mapping[Any, Callable[_ArgsT, _T]]
    def dispatch(self, cls: Any) -> Callable[_ArgsT, _T]: ...
    @overload
    def register(self, cls: Any) -> Callable[[Callable[_ArgsT, _T]], Callable[_ArgsT, _T]]: ...
    @overload
    def register(self, cls: Any, func: Callable[_ArgsT, _T]) -> Callable[_ArgsT, _T]: ...
    def _clear_cache(self) -> None: ...
    def __call__(self, *args: _ArgsT, **kwargs: _ArgsT) -> _T: ...

def singledispatch(func: Callable[_ArgsT, _T]) -> _SingleDispatchCallable[_ArgsT, _T]: ...

This kind of thing has come up before, but I'm not sure a concrete solution has been proposed. I'll try to implement this to see how well it works.

@JelleZijlstra
Member Author

@gnprice helped me find a similar (but more general) past proposal from @sixolet: python/typing#239 (comment).

@sixolet
Collaborator

sixolet commented Mar 19, 2017 via email

@gvanrossum
Member

Also relevant: #3157 (comment) (esp. "Argspec type variables")

@srittau
Contributor

srittau commented Mar 1, 2018

One thing that came up in #3157 is that a hypothetical ArgumentsVar should not just replace the arguments in a callable, but you should be able to add to or consume arguments:

def my_decorator(f: Callable[[str, *_ArgsT], str]) -> Callable[[*_ArgsT, int], str]: ...

Maybe it would make sense to only allow ArgumentsVar inside Callable parameter lists and then drop the asterisk?

ezyang added a commit to pytorch/pytorch that referenced this issue Jun 8, 2020

Just because the annotations are inline doesn't mean the files type
check; most of the newly annotated files have type errors, and I
added exclusions for them in mypy.ini. The payoff of moving all of
these modules inline is that I can delete the relevant code generation
logic for the .pyi files (which was adding ignore annotations that
weren't actually relevant anymore). Because we aren't actually
typechecking these modules in most cases, it is inevitable that some
of these type annotations are wrong. I slavishly copied the old
annotations from the .pyi files unless there was an obvious correction
I could make. These annotations will probably need fixing up later.

Moving these annotations inline was really hairy because of interactions
with JIT, and also the fact that Python type erasure is a lie (inheriting
from Generic *does* change the behavior of your object). Here is
the list of things I had to fix and/or work around:

- The quantization translation passes previously barfed if the weight/bias arguments were inferred to be Optional. Previously, TorchScript type inference would have inferred that these arguments were non-Optional (because type inference happens after module construction), but accurate type annotations on these parameters override this inference process, causing the arguments to be optional. I fixed this by making the quantized operator signatures line up exactly with the non-quantized signatures, so we never change the types of the arguments. This change involved mostly making a bunch of quantized kernels take optional, and then error if they were passed nullopt. (You can have any color you like, as long as it's non-null.)
- I removed Generic support for Module and ModuleList. The intentions behind this were admirable, but making Module inherit from Generic ended up being more headache than it was worth. First, in Python 3.6 and earlier, Generic has a nontrivial metaclass, which means all subsequent metaclass shenanigans (e.g., ScriptModule) need to line up the right metaclass. Second, Generic defines `__new__` specially, which means that `inspect.signature` doesn't work (see https://bugs.python.org/issue40897), and I found a case of people using precisely this in the wild. Between these two problems, and also the general problem which is that the parametrization here is an incomplete fix (parametrization helps with output typing, but it doesn't solve problems with input typing (and with mypy as it stands this is unfixable, see python/mypy#3028) I decided to just eliminate Module generics entirely. We still apply the Callable trick so that subclasses of Module don't cause mypy to complain, but otherwise you are on your own for getting accurate type information out of Modules.
- The `Callable` trick on `forward` caused TorchScript to stop performing inference on the forward body, which is bad because in general we can only figure out the most accurate type by doing TorchScript inference. I added a special case to `infer_type` to ensure we always do inference for `Module.forward`, even if it is annotated (which it is), and another special case to make sure we ignore references to Callable (which we shouldn't process) recursively.
- When `__annotations__` is set on a class (as is the case when you add type annotations), JIT will incorrectly add further annotations to the parent class. This PR fixes #39463 by testing if `__annotations__` is defined on the specific class, excluding parent classes from the test.
- Added a missing fake source range to the invocation of `get_signature`
- In some cases, we cannot provide accurate typing for parameters on modules. This usually occurs when you have an `Optional[Tensor]` parameter, whose optional-ness is determined at `__init__` time. Without the annotation, TorchScript will infer the correct refined type depending on arguments to the constructor, but with the annotation, it will never do a refinement at `__init__` time, and you'll end up with the wrong type. I ended up just straight up deleting type annotations in all of these cases. A more robust fix might be to make some way to force TorchScript to do inference even if there is an explicit annotation, in case of refinement.

Signed-off-by: Edward Z. Yang <[email protected]>

Differential Revision: [D21497397](https://our.internmc.facebook.com/intern/diff/D21497397)

[ghstack-poisoned]
ezyang added a commit to pytorch/pytorch that referenced this issue Jun 8, 2020
ezyang added a commit to pytorch/pytorch that referenced this issue Jun 9, 2020
ezyang added a commit to pytorch/pytorch that referenced this issue Jun 9, 2020
ezyang added a commit to pytorch/pytorch that referenced this issue Jun 9, 2020
ezyang added a commit to pytorch/pytorch that referenced this issue Jun 9, 2020
@Ran4

Ran4 commented Feb 9, 2021

Is this still an open problem?

@JelleZijlstra
Member Author

PEP 612 (https://www.python.org/dev/peps/pep-0612/) ended up providing a similar feature. I think we can close this issue now.

@sebastian-philipp

For future reference: the related issue for PEP 612 is #8645.

@BojieSheng

Does anyone know how to annotate the function "example_a" below?

def example_a(cls):
    class A(cls):
        def __init__(self, x, y):
            self.x = x
            self.y = y
    return A

cls can be any class.

@Dreamsorcerer
Contributor

Dreamsorcerer commented Dec 9, 2023

This seems like a bit of an anti-pattern; I think the closest you'll get is:

T = TypeVar("T")
def example_a(cls: type[T]) -> type[T]: ...

Maybe with intersections (if/when supported), you might be able to do something like this in future:

class Sub(Protocol):
    x: Any
    y: Any

def example_a(cls: type[T]) -> Intersection[type[T], type[Sub]]: ...

But, I wouldn't count on it working even after intersections are introduced.
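Filling in the first suggestion above as a self-contained runtime sketch (the `Base` and `Wrapped` names are illustrative; statically the result is still just `type[T]`, so the added `x`/`y` attributes remain invisible to mypy):

```python
from typing import TypeVar

T = TypeVar("T")

def example_a(cls: type[T]) -> type[T]:
    # The runtime subclass adds x and y, but the declared return type
    # can only say "some subtype of cls", i.e. type[T].
    class A(cls):  # type: ignore[valid-type, misc]
        def __init__(self, x, y):
            self.x = x
            self.y = y
    return A

class Base:
    pass

Wrapped = example_a(Base)
obj = Wrapped(1, 2)
print(obj.x, obj.y)  # 1 2
```

Accessing `obj.x` type-checks only with a `cast` or ignore comment, which is part of why this pattern resists precise annotation.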

@BojieSheng


Dreamsorcerer, many thanks for your answer! However, the return value is a subclass of type[T]; do you think the return type should be changed?

T = TypeVar("T")
def example_a(cls: type[T]) -> ???:

I do not quite understand your second suggestion; can you please explain a little further?

@Dreamsorcerer
Contributor

Dreamsorcerer, many thanks for your answer! However, the return value is a subclass of type[T]; do you think the return type should be changed?

The subtype doesn't exist at static time, so there isn't a more precise type you can use. A subclass is still of type T, so it is a correct annotation, but you won't be able to use any of the subclass additions without a type error. As I said, this seems like an anti-pattern; without knowing the type, mypy can't tell you whether the subclass is even correct (for example, whether you are overriding attributes/methods in a way that is incompatible with the superclass).

I do not quite understand your second suggestion, can. you please explain a little bit further?

I'm just saying that it might be possible (some day) to define an intersection between the superclass and a Protocol (i.e. something that is defined as supporting the behaviour of both). But intersections are not supported yet, and I have no idea whether they will actually work in this case.

@BojieSheng

I really appreciate your support; it works!
