
Infer function parameter types using annotations #2216

Conversation

@ringohoffman commented Jun 19, 2023

Type of Changes

Type
βœ“ πŸ› Bug fix
βœ“ ✨ New feature

Description

Fixes pylint-dev/pylint#4813
Fixes pylint-dev/pylint#8781

Type annotations are not currently used for inference, which means that, for example, a function with a parameter annotated as logging.Logger would not receive relevant pylint warnings such as logging-fstring-interpolation.

To make use of these type annotations, we now modify infer_assign to yield additional Instances when the AssignName being processed has an Arguments parent that carries an annotation for that name.
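
For illustration, a minimal example of the kind of code this change targets (the function and its names are made up here, not taken from the linked issues):

```python
import logging

def greet(logger: logging.Logger, name: str) -> None:
    # With parameter annotations feeding inference, astroid can now tell
    # that `logger` is a logging.Logger instance, so pylint can emit
    # logging-fstring-interpolation for the call below. Previously the
    # parameter's type could not be inferred and the check was skipped.
    logger.info(f"Hello, {name}!")
```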

cc: @Pierre-Sassoulas

codecov bot commented Jun 19, 2023

Codecov Report

Merging #2216 (3ae1618) into main (eabc643) will increase coverage by 0.03%.
The diff coverage is 100.00%.

Additional details and impacted files


@@            Coverage Diff             @@
##             main    #2216      +/-   ##
==========================================
+ Coverage   92.71%   92.75%   +0.03%     
==========================================
  Files          94       94              
  Lines       10837    10847      +10     
==========================================
+ Hits        10048    10061      +13     
+ Misses        789      786       -3     
Flag      Coverage Δ
linux     92.48% <100.00%> (+<0.01%) ⬆️
pypy      88.00% <100.00%> (+0.02%) ⬆️
windows   92.34% <100.00%> (+0.03%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files         Coverage Δ
astroid/inference.py   94.67% <100.00%> (+0.09%) ⬆️

... and 1 file with indirect coverage changes

@Pierre-Sassoulas added the Enhancement ✨ (Improvement to a component) and Needs review 🔍 (Needs to be reviewed by one or multiple more persons) labels on Jun 19, 2023
@DanielNoord (Collaborator)

The issue with this approach is that we have no way of knowing that the annotations are correct. Personally, I wouldn't like pylint to assume all annotations are correct, and I think this should be handled with some sort of flag (--consider-annotations-are-correct).
Just turning this on by default doesn't seem like the best solution to me.

I'm wondering what other maintainers think of this.
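
(As a hypothetical illustration of the concern above, not code from this thread: an annotation can simply be wrong, and inference is what catches that.)

```python
def get_port(config: dict) -> int:
    # The annotation promises an int, but the str default means this can
    # actually return "8080". Inference catches the mismatch; blindly
    # trusting the annotation would hide it.
    return config.get("port", "8080")
```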

@ringohoffman (Author)

> I think this should be handled with some sort of flag (--consider-annotations-are-correct).

I support this. If we could assume annotated return types were correct, we could skip manual inference for them and save a lot of time.

> Personally I wouldn't like pylint to assume all annotations are correct

I understand that astroid was developed to infer types before type hints were introduced to Python. However, trusting type hints is what most type checkers do. Having to infer return types even for standard library functions seems like a major reason that pylint takes, for example, ~25 minutes to run on pytorch/torch.

@Pierre-Sassoulas (Member) commented Jun 23, 2023

We need to be careful here: using inference rather than trusting annotations is the differentiating factor that lets pylint find issues other tools don't in partially typed code bases.

On the other hand, it would be nice to be able to detect that we're in a fully typed code base where we should trust the typing. This is not going to be trivial unless we ask users to tell us via options. It does seem possible to take the typing of builtins into account without any drawbacks, without detecting anything, and without adding an option (because we know the typing in builtins is truthful and reliable).
