Replace most usages of `lex_starts_at` with `Tokens` #11511

Conversation
/// If the given offset lies within a token, the returned slice will start from the token after
/// that. In other words, the returned slice will not include the token containing the offset.
pub fn after(&self, offset: TextSize) -> &[Token] {
Do we depend on this behavior? I think I would rather have the implementation panic
if the offset isn't at a token boundary.
I think I'll have a clearer picture once the code compiles. I plan on being able to do that today, as that will allow me to do a lot of testing. And I plan on revisiting the APIs once everything is changed.
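For illustration, here is a minimal standalone sketch of the boundary-asserting variant suggested above; the `Token` and `Tokens` types are simplified stand-ins rather than ruff's actual definitions:

```rust
// Standalone sketch: `after` that asserts the offset lies on a token boundary.
#[derive(Debug, Clone, Copy)]
struct Token {
    start: u32, // byte offset where the token begins
    end: u32,   // byte offset where the token ends
}

struct Tokens(Vec<Token>);

impl Tokens {
    /// Returns the tokens that start at or after `offset`, asserting (in debug
    /// builds) that `offset` does not fall strictly inside a token.
    fn after(&self, offset: u32) -> &[Token] {
        // Index of the first token whose start is at or past `offset`.
        let index = self.0.partition_point(|token| token.start < offset);
        // The reviewer's preference: reject offsets that land inside a token.
        if let Some(prev) = index.checked_sub(1).and_then(|i| self.0.get(i)) {
            debug_assert!(
                prev.end <= offset,
                "offset {offset} is inside a token, not on a token boundary"
            );
        }
        &self.0[index..]
    }
}

fn main() {
    let tokens = Tokens(vec![
        Token { start: 0, end: 3 },
        Token { start: 4, end: 5 },
        Token { start: 6, end: 10 },
    ]);
    // Offset 4 is a token boundary, so this returns the last two tokens.
    assert_eq!(tokens.after(4).len(), 2);
}
```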
@@ -417,6 +417,15 @@ impl Tokens {
        };
        &self[start..=start + end]
    }
I'm not sure in which PR you added `tokens_in_range`, and I also don't know if it is a good idea.

We could avoid the `end` binary search by either:
- doing a linear search from the start. That would be based on the assumption that the end is close to the start (linear search has better cache locality)
- returning a custom iterator that lazily checks `end` in the `next` call. I don't know if that's feasible or if any logic depends on the fact that `tokens_in_range` returns a slice.
> I also don't know if it is a good idea.

Why though?

A lot of usages of the lexer are to get the tokens within a specified range, usually the range of a node. This does mean that the token tree would be more useful than this.

> We could avoid the `end` binary search by either:
> - doing a linear search from the start. That would be based on the assumption that the end is close to the start (linear search has better cache locality)
> - returning a custom iterator that lazily checks `end` in the `next` call. I don't know if that's feasible or if any logic depends on the fact that `tokens_in_range` returns a slice.

I can explore these ideas once the code compiles.
> Why though?

Sorry, I phrased this poorly. The `tokens_in_range` is a good idea. I'm not sure if my proposal of not using a binary search is a good idea.
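For what it's worth, the lazily bounded iterator idea from this exchange could look roughly like the following standalone sketch (simplified stand-in types, not the actual `Tokens` implementation):

```rust
// Standalone sketch: only the range start is binary-searched; the end bound
// is checked lazily as the iterator advances.
#[derive(Debug, Clone, Copy)]
struct Token {
    start: u32, // only the start offset matters for this sketch
}

struct Tokens(Vec<Token>);

impl Tokens {
    /// Yields tokens beginning at `range_start` and stops lazily once a token
    /// starts at or past `range_end`, avoiding a second binary search.
    fn in_range_iter(&self, range_start: u32, range_end: u32) -> impl Iterator<Item = &Token> {
        let first = self.0.partition_point(|token| token.start < range_start);
        self.0[first..]
            .iter()
            .take_while(move |token| token.start < range_end)
    }
}

fn main() {
    let tokens = Tokens(vec![
        Token { start: 0 },
        Token { start: 4 },
        Token { start: 6 },
    ]);
    // Only the tokens that begin before offset 6 are yielded.
    let in_range: Vec<_> = tokens.in_range_iter(0, 6).collect();
    assert_eq!(in_range.len(), 2);
}
```

Whether this beats two binary searches would depend on how far the callers' end offsets typically are from the start, as noted above.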
## Summary

This PR updates the entire parser stack in multiple ways:

### Make the lexer lazy

* #11244
* #11473

Previously, Ruff's lexer would act as an iterator. The parser would collect all the tokens in a vector first and then process the tokens to create the syntax tree.

The first task in this project is to update the entire parsing flow to make the lexer lazy. This includes the `Lexer`, `TokenSource`, and `Parser`. For context, the `TokenSource` is a wrapper around the `Lexer` to filter out the trivia tokens[^1]. Now, the parser will ask the token source to get the next token, and only then will the lexer continue and emit the token. This means that the lexer needs to be aware of the "current" token. When `next_token` is called, the current token will be updated with the newly lexed token.

The main motivation to make the lexer lazy is to allow re-lexing a token in a different context. This is going to be really useful to make the parser error resilient. For example, currently the emitted tokens remain the same even if the parser can recover from an unclosed parenthesis. This is important because the lexer emits a `NonLogicalNewline` in a parenthesized context and a normal `Newline` in a non-parenthesized context. These different kinds of newlines are also used to emit the indentation tokens, which are important for the parser as they're used to determine the start and end of a block.

Additionally, this allows us to implement the following functionalities:

1. Checkpoint-rewind infrastructure: The idea here is to create a checkpoint and continue lexing. At a later point, this checkpoint can be used to rewind the lexer back to the provided checkpoint.
2. Remove the `SoftKeywordTransformer` and instead use lookahead or speculative parsing to determine whether a soft keyword is a keyword or an identifier.
3. Remove the `Tok` enum. The `Tok` enum represents the tokens emitted by the lexer but it contains owned data, which makes it expensive to clone. The new `TokenKind` enum just represents the type of token, which is very cheap. This brings up a question as to how the parser will get the owned value which was stored on `Tok`. This will be solved by introducing a new `TokenValue` enum which only contains the subset of token kinds that have an owned value. This is stored on the lexer and is requested by the parser when it wants to process the data. For example: https://github.com/astral-sh/ruff/blob/8196720f809380d8f1fc7651679ff3fc2cb58cd7/crates/ruff_python_parser/src/parser/expression.rs#L1260-L1262

[^1]: Trivia tokens are `NonLogicalNewline` and `Comment`
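To make the shape of this more concrete, here is a small standalone sketch of a lexer that tracks a "current" token and supports checkpoint/rewind. All names, types, and the pre-baked token stream are illustrative; this is not ruff's actual lexer:

```rust
// Standalone sketch of the "current token" plus checkpoint/rewind mechanics.
#[derive(Debug, Clone, Copy, PartialEq)]
enum TokenKind {
    Match,
    Name,
    Colon,
    Newline,
    EndOfFile,
}

/// A snapshot of the lexer state that the parser can later rewind to,
/// e.g. after speculative parsing of a soft keyword fails.
#[derive(Clone, Copy)]
struct LexerCheckpoint {
    position: usize,
    current: TokenKind,
}

struct Lexer {
    tokens: Vec<TokenKind>, // stands in for lazily lexing the source text
    position: usize,
    current: TokenKind,
}

impl Lexer {
    fn new(tokens: Vec<TokenKind>) -> Self {
        let mut lexer = Self { tokens, position: 0, current: TokenKind::EndOfFile };
        lexer.next_token(); // prime the current token
        lexer
    }

    /// The token the lexer most recently emitted; the parser inspects this
    /// instead of pulling tokens from an iterator.
    fn current(&self) -> TokenKind {
        self.current
    }

    /// Advances to the next token and makes it current.
    fn next_token(&mut self) -> TokenKind {
        self.current = self.tokens.get(self.position).copied().unwrap_or(TokenKind::EndOfFile);
        self.position += 1;
        self.current
    }

    /// Records enough state to restore the lexer later.
    fn checkpoint(&self) -> LexerCheckpoint {
        LexerCheckpoint { position: self.position, current: self.current }
    }

    /// Rewinds the lexer to a previously taken checkpoint.
    fn rewind(&mut self, checkpoint: LexerCheckpoint) {
        self.position = checkpoint.position;
        self.current = checkpoint.current;
    }
}

fn main() {
    let mut lexer = Lexer::new(vec![
        TokenKind::Match,
        TokenKind::Name,
        TokenKind::Colon,
        TokenKind::Newline,
    ]);
    assert_eq!(lexer.current(), TokenKind::Match);

    // Speculate: look ahead, then rewind as if nothing happened.
    let checkpoint = lexer.checkpoint();
    lexer.next_token();
    assert_eq!(lexer.current(), TokenKind::Name);
    lexer.rewind(checkpoint);
    assert_eq!(lexer.current(), TokenKind::Match);
}
```

The real lexer produces tokens lazily from the source text; the vector above only stands in for that so the checkpoint/rewind mechanics stay visible.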
### Remove `SoftKeywordTransformer`

* #11441
* #11459
* #11442
* #11443
* #11474

For context, https://github.com/RustPython/RustPython/pull/4519/files#diff-5de40045e78e794aa5ab0b8aacf531aa477daf826d31ca129467703855408220 added support for soft keywords in the parser which uses infinite lookahead to classify a soft keyword as a keyword or an identifier. This is a brilliant idea as it basically wraps the existing `Lexer` and works on top of it, which means that the logic for lexing and re-lexing a soft keyword remains separate. The change here is to remove `SoftKeywordTransformer` and let the parser determine this based on context, lookahead and speculative parsing.

* **Context:** The transformer needs to know whether the lexer is at a statement position or a simple statement position. This is because a `match` token starts a compound statement while a `type` token starts a simple statement. **The parser already knows this.**
* **Lookahead:** Now that the parser knows the context, it can perform a lookahead of up to two tokens to classify the soft keyword. The logic for this is mentioned in the PRs implementing it for the `type` and `match` soft keywords.
* **Speculative parsing:** This is where the checkpoint-rewind infrastructure helps. For the `match` soft keyword, there are certain cases which we can't classify based on lookahead. The idea here is to create a checkpoint and keep parsing. Based on whether the parsing was successful and what tokens are ahead, we can classify the remaining cases. Refer to #11443 for more details.

If the soft keyword is being parsed in an identifier context, it'll be converted to an identifier and the emitted token will be updated as well. Refer to https://github.com/astral-sh/ruff/blob/8196720f809380d8f1fc7651679ff3fc2cb58cd7/crates/ruff_python_parser/src/parser/expression.rs#L487-L491.

The `case` soft keyword doesn't require any special handling because it'll be a keyword only in the context of a match statement.
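As a rough illustration of the lookahead pattern (not ruff's exact classification rules), a `type` token at statement start could be classified by peeking at the next two tokens:

```rust
// Standalone sketch: classify the `type` soft keyword with a two-token lookahead.
#[derive(Debug, Clone, Copy, PartialEq)]
enum TokenKind {
    Type,  // the soft keyword `type`
    Name,
    Equal, // `=`
    Lsqb,  // `[`
}

/// Decides whether a `type` token at statement start begins a type alias
/// statement (keyword) or is just an identifier, by peeking two tokens ahead.
fn type_is_keyword(current: TokenKind, peek1: TokenKind, peek2: TokenKind) -> bool {
    current == TokenKind::Type
        && peek1 == TokenKind::Name
        // `type X = ...` and `type X[T] = ...` both start a type alias statement.
        && matches!(peek2, TokenKind::Equal | TokenKind::Lsqb)
}

fn main() {
    // `type X = int` -> keyword
    assert!(type_is_keyword(TokenKind::Type, TokenKind::Name, TokenKind::Equal));
    // `type X[T] = list[T]` -> keyword
    assert!(type_is_keyword(TokenKind::Type, TokenKind::Name, TokenKind::Lsqb));
    // `type = 1` (plain assignment to a variable named `type`) -> identifier
    assert!(!type_is_keyword(TokenKind::Type, TokenKind::Equal, TokenKind::Name));
}
```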
### Update the parser API

* #11494
* #11505

Now that the lexer is in sync with the parser, and the parser helps to determine whether a soft keyword is a keyword or an identifier, the lexer cannot be used on its own. The reason is that it's not sensitive to the context (which is correct). This means that the parser API needs to be updated to not allow any access to the lexer.

Previously, there were multiple ways to parse the source code:

1. Passing the source code itself
2. Or, passing the tokens

Now that the lexer and parser are working together, the API corresponding to (2) cannot exist. The final API is mentioned in this PR description: #11494.

### Refactor the downstream tools (linter and formatter)

* #11511
* #11515
* #11529
* #11562
* #11592

The final set of changes involves updating all references to the lexer and the `Tok` enum. This was done in two parts:

1. Update all the references in a way that doesn't require any changes from this PR, i.e., it can be done independently
   * #11402
   * #11406
   * #11418
   * #11419
   * #11420
   * #11424
2. Update all the remaining references to use the changes made in this PR

For (2), there were various strategies used:

1. Introduce a new `Tokens` struct which wraps the token vector and adds methods to query a certain subset of tokens. These include:
   1. `up_to_first_unknown` which replaces the `tokenize` function
   2. `in_range` and `after` which replace the `lex_starts_at` function, where the former returns the tokens within the given range while the latter returns all the tokens after the given offset
2. Introduce a new `TokenFlags` which is a set of flags to query certain information from a token. Currently, this information is only limited to any string type token but can be expanded to include other information in the future as needed. #11578
3. Move the `CommentRanges` to the parsed output because this information is common to both the linter and the formatter. This removes the need for the `tokens_and_ranges` function.

## Test Plan

- [x] Update and verify the test snapshots
- [x] Make sure the entire test suite is passing
- [x] Make sure there are no changes in the ecosystem checks
- [x] Run the fuzzer on the parser
- [x] Run this change on dozens of open-source projects

### Running this change on dozens of open-source projects

Refer to the PR description to get the list of open source projects used for testing. The following tests were done between `main` and this branch:

1. Compare the output of `--select=E999` (syntax errors)
2. Compare the output of default rule selection
3. Compare the output of `--select=ALL`

**Conclusion: all outputs were the same.**

## What's next?

The next step is to introduce the re-lexing logic and update the parser to feed the recovery information to the lexer so that it can emit the correct token. This moves us one step closer to having error resilience in the parser and makes it possible for Ruff to lint even if the source code contains syntax errors.
## Summary

Part of #11401

This PR refactors most usages of `lex_starts_at` to use the `Tokens` struct available on the `Program`.

This PR also introduces the following two APIs:

1. `count` (on `StringLiteralValue`) to return the number of string literal parts in the string expression
2. `after` (on `Tokens`) to return the token slice after the given `TextSize` offset

## Test Plan

I don't really have a way to test this currently and so I'll have to wait until all changes are made so that the code compiles.