# ☂️ Remove dependencies of `Tok` from downstream tools #11401
Labels: `tracking` (a "meta" issue that tracks completion of a bigger task via a list of smaller scoped issues)

dhruvmanila added the `tracking` label on May 13, 2024.
This was referenced on May 13, 2024.

charliermarsh pushed a commit that referenced this issue on May 13, 2024.

dhruvmanila added a commit that referenced this issue on May 13, 2024.

This was referenced on May 14, 2024.

dhruvmanila added a commit that referenced this issue on May 14, 2024.

dhruvmanila added a commit that referenced this issue on May 14, 2024.

dhruvmanila added a commit that referenced this issue on May 14, 2024:
## Summary

This PR moves the following rules to use `TokenKind` instead of `Tok`:

* `PLE2510`, `PLE2512`, `PLE2513`, `PLE2514`, `PLE2515`
* `E701`, `E702`, `E703`
* `ISC001`, `ISC002`
* `COM812`, `COM818`, `COM819`
* `W391`

I've paused here because the next set of rules (`pyupgrade::rules::extraneous_parentheses`) indexes into the token slice, but we only have an iterator implementation. So, I want to isolate that change to make sure the logic is still the same when I move to using the iterator approach.

This is part of #11401.

## Test Plan

`cargo test`
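The motivation for migrating rules from `Tok` to `TokenKind` can be illustrated with a minimal sketch. The types below are hypothetical stand-ins, not Ruff's actual definitions: a data-carrying token enum owns payloads (so it can't be `Copy` and is expensive to clone), while a fieldless kind enum is a one-byte `Copy` value that token-based rules can match on cheaply.

```rust
// Hypothetical sketch: a data-carrying token enum vs. a fieldless kind enum.
// These are illustrative stand-ins, not Ruff's actual `Tok`/`TokenKind`.

/// Owns its payload: cloning a `Tok::Name` copies heap data, and the enum
/// cannot be `Copy`.
#[derive(Debug, Clone, PartialEq)]
enum Tok {
    Name(String),
    Int(i64),
    Comma,
    Newline,
}

/// Fieldless: one byte, `Copy`, cheap to compare in token-based rules.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum TokenKind {
    Name,
    Int,
    Comma,
    Newline,
}

impl Tok {
    /// Rules that only care about the token *kind* can drop the payload.
    fn kind(&self) -> TokenKind {
        match self {
            Tok::Name(_) => TokenKind::Name,
            Tok::Int(_) => TokenKind::Int,
            Tok::Comma => TokenKind::Comma,
            Tok::Newline => TokenKind::Newline,
        }
    }
}

fn main() {
    let tokens = vec![Tok::Name("x".to_string()), Tok::Comma, Tok::Newline];
    // A trailing-comma style rule only needs the kinds, not the payloads.
    let kinds: Vec<TokenKind> = tokens.iter().map(Tok::kind).collect();
    assert_eq!(kinds, [TokenKind::Name, TokenKind::Comma, TokenKind::Newline]);
    assert_eq!(std::mem::size_of::<TokenKind>(), 1);
}
```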
dhruvmanila added a commit that referenced this issue on May 14, 2024:
## Summary

This PR follows up from #11420 to move `UP034` to use `TokenKind` instead of `Tok`. The main reason to have a separate PR is so that the reviewing is easy.

This required a lot more updates because the rule used an index (`i`) to keep track of the current position in the token vector. Now, as it's just an iterator, we just use `next` to move the iterator forward and extract the relevant information.

This is part of #11401.

## Test Plan

`cargo test`
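The index-to-iterator rewrite described above can be sketched as follows. This is a hedged illustration with simplified types, not the actual `UP034` implementation: instead of tracking `i` manually, a `Peekable` iterator advances with `next` and looks one token ahead with `peek`, and the key property to preserve is that both versions agree.

```rust
// Hedged sketch of rewriting index-based token scanning as an iterator.
// `TokenKind` and the detection logic are simplified illustrations, not
// the real extraneous-parentheses rule.

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum TokenKind {
    Lpar,
    Rpar,
    Name,
}

/// Index-based version: manual bookkeeping with `i`.
fn has_empty_parens_indexed(tokens: &[TokenKind]) -> bool {
    let mut i = 0;
    while i + 1 < tokens.len() {
        if tokens[i] == TokenKind::Lpar && tokens[i + 1] == TokenKind::Rpar {
            return true;
        }
        i += 1;
    }
    false
}

/// Iterator-based version: `next` advances, `peek` looks one token ahead.
fn has_empty_parens_iter(tokens: &[TokenKind]) -> bool {
    let mut iter = tokens.iter().copied().peekable();
    while let Some(tok) = iter.next() {
        if tok == TokenKind::Lpar && iter.peek() == Some(&TokenKind::Rpar) {
            return true;
        }
    }
    false
}

fn main() {
    let tokens = [TokenKind::Name, TokenKind::Lpar, TokenKind::Rpar];
    // Both versions must agree for the migration to be behavior-preserving.
    assert!(has_empty_parens_indexed(&tokens));
    assert!(has_empty_parens_iter(&tokens));
    let no_match = [TokenKind::Lpar, TokenKind::Name, TokenKind::Rpar];
    assert!(!has_empty_parens_indexed(&no_match));
    assert!(!has_empty_parens_iter(&no_match));
}
```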
dhruvmanila added a commit that referenced this issue on May 28, 2024:
## Summary

Part of #11401.

This PR refactors most usages of `lex_starts_at` to use the `Tokens` struct available on the `Program`. This PR also introduces the following two APIs:

1. `count` (on `StringLiteralValue`) to return the number of string literal parts in the string expression
2. `after` (on `Tokens`) to return the token slice after the given `TextSize` offset

## Test Plan

I don't really have a way to test this currently and so I'll have to wait until all changes are made so that the code compiles.
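An `after(offset)` helper of the kind described here can be sketched like this. It is a hedged illustration: `Token`, the `u32` offsets, and the free function stand in for Ruff's actual `Tokens`/`TextSize` API. Since tokens are ordered by start offset, `partition_point` binary-searches for the cut in O(log n).

```rust
// Hypothetical sketch of an `after`-style API: return the sub-slice of
// tokens starting at or after a byte offset. `Token` is a stand-in type.

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct Token {
    start: u32,
    end: u32,
}

/// Tokens are sorted by `start`, so `partition_point` finds the index of
/// the first token that begins at or after `offset`.
fn after(tokens: &[Token], offset: u32) -> &[Token] {
    let idx = tokens.partition_point(|tok| tok.start < offset);
    &tokens[idx..]
}

fn main() {
    let tokens = [
        Token { start: 0, end: 3 },
        Token { start: 4, end: 5 },
        Token { start: 6, end: 10 },
    ];
    assert_eq!(after(&tokens, 4), &tokens[1..]);
    assert_eq!(after(&tokens, 5), &tokens[2..]);
    let empty: &[Token] = &[];
    assert_eq!(after(&tokens, 11), empty);
}
```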
dhruvmanila added a commit that referenced this issue on May 30, 2024.
dhruvmanila added a commit that referenced this issue on May 31, 2024.
Closed by #11628.

This was referenced on May 31, 2024.
dhruvmanila added a commit that referenced this issue on May 31, 2024.
dhruvmanila added a commit that referenced this issue on May 31, 2024.
dhruvmanila added a commit that referenced this issue on Jun 3, 2024.
dhruvmanila added a commit that referenced this issue on Jun 3, 2024.
This issue is to keep track of all the tasks required to remove the `Tok` dependency from the downstream tools (linter and formatter).

Notes:

## Linter

* `PLE1300`, `PLE1307` (Avoid lexer usage in `PLE1300` and `PLE1307` #11406)
* `UP031`
* `W605` (Move `W605` to the AST checker #11402)
* `doc_lines_from_tokens` (Use `TokenKind` in `doc_lines_from_tokens` #11418)
* `TokenKind` (via Add `Tokens` newtype wrapper, `TokenKind` iterator #11361):
  * `COM812`, `COM818`, `COM819` (Move most of token-based rules to use `TokenKind` #11420)
  * `E301`..`E306` (blank line rules) (Use `TokenKind` in blank lines checker #11419)
  * `E701`..`E703` (Move most of token-based rules to use `TokenKind` #11420)
  * `ISC001`, `ISC002` (Move most of token-based rules to use `TokenKind` #11420)
  * `PLE2510`, `PLE2512`..`PLE2515` (Move most of token-based rules to use `TokenKind` #11420)
  * `UP034` (Move `UP034` to use `TokenKind` instead of `Tok` #11424)
  * `W391` (Move most of token-based rules to use `TokenKind` #11420)
* `remove_import_members` uses the `lex` method (Replace most usages of `lex_starts_at` with `Tokens` #11511)
* `extract_noqa_line_for` uses the `kind` information from the string token
* `locate_cmp_ops` uses the `lex` method (Replace `lex` usages #11562)
* `tokens_and_ranges` extracts the comment ranges from the token stream
* `unsorted_dunder_all` and `unsorted_dunder_slots` extract the value from the string token (Replace most usages of `lex_starts_at` with `Tokens` #11511)
* `UP032` uses the `lex_starts_at` method (Replace most usages of `lex_starts_at` with `Tokens` #11511)
* `I001` (`trailing_comma`) uses the `lex_starts_at` method (Replace most usages of `lex_starts_at` with `Tokens` #11511)
* `Stylist` extracts the indentation and quote information from the token (Update `Stylist`, `Indexer` to use tokens from parsed output #11592)
* `Indexer` extracts trivia and string ranges, line continuation start value (Update `Stylist`, `Indexer` to use tokens from parsed output #11592)

## Formatter

* `lex_starts_at` in `write_suppressed_statements_starting_with_trailing_comment` (Replace `lex_starts_at` with `Tokens` in the formatter #11515)

Internal document: https://www.notion.so/astral-sh/Downstream-work-items-551b86e104a34054b7192675550a6c25?pvs=4
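As a rough illustration of the `Stylist` item in the list above (recovering style information from tokens instead of re-lexing), the sketch below infers a dominant quote style from token kinds. All names here are hypothetical stand-ins, not Ruff's actual `Stylist` implementation, and the tie-breaking default is an assumption.

```rust
// Hedged sketch: inferring the dominant quote style from a token stream,
// the kind of information a stylist derives. Illustrative types only.

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Quote {
    Single,
    Double,
}

#[derive(Debug, Clone, Copy)]
enum TokenKind {
    String { quote: Quote },
    Name,
    Newline,
}

/// Count string tokens per quote style; prefer double quotes on a tie
/// (an assumption made for this sketch, not a documented Ruff behavior).
fn dominant_quote(tokens: &[TokenKind]) -> Quote {
    let (mut single, mut double) = (0usize, 0usize);
    for tok in tokens {
        if let TokenKind::String { quote } = tok {
            match quote {
                Quote::Single => single += 1,
                Quote::Double => double += 1,
            }
        }
    }
    if single > double {
        Quote::Single
    } else {
        Quote::Double
    }
}

fn main() {
    let tokens = [
        TokenKind::String { quote: Quote::Single },
        TokenKind::Name,
        TokenKind::String { quote: Quote::Single },
        TokenKind::String { quote: Quote::Double },
        TokenKind::Newline,
    ];
    assert_eq!(dominant_quote(&tokens), Quote::Single);
    assert_eq!(dominant_quote(&[]), Quote::Double);
}
```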