Is there a way to allow tokenization even when an error has occurred? #2019
Unanswered
nair1abhishek asked this question in Q&A
-
Hey @nair1abhishek,
What do you mean by this? I.e. just adding it the erroneous text to the previous token? You should be able to do that yourself with the lexer result. The lexer result contains all errors that appeared during lexing. Chevrotain doesn't have any builtin methods to incorporate lexer errors into tokens (which is why they're lexer errors in the first place!). |
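One common workaround (a sketch of a general technique, not something suggested in the thread) is to give the lexer a lowest-priority catch-all token, so that otherwise-unmatched characters still become tokens instead of being dropped. The tiny query grammar below is hypothetical; only the catch-all pattern matters. Note the trade-off: since every character now matches something, the lexer itself no longer reports these characters in `errors`, so you raise the diagnostics from the catch-all tokens yourself.

```ts
import { createToken, Lexer } from "chevrotain";

// A tiny illustrative grammar (hypothetical token set, not from the thread).
const Field = createToken({ name: "Field", pattern: /[a-zA-Z]+:/ });
const Word = createToken({ name: "Word", pattern: /[a-zA-Z0-9]+/ });
const WhiteSpace = createToken({
  name: "WhiteSpace",
  pattern: /\s+/,
  group: Lexer.SKIPPED,
});

// Catch-all token: listed last so it is only tried when nothing else matches.
// Any otherwise-unmatched character becomes an ErrorChar token instead of
// disappearing from the token stream.
const ErrorChar = createToken({ name: "ErrorChar", pattern: /./ });

const queryLexer = new Lexer([WhiteSpace, Field, Word, ErrorChar]);

const result = queryLexer.tokenize("repo:chevrotain lang:ts $");
// "$" is now present as an ErrorChar token; surface it as a diagnostic:
const diagnostics = result.tokens
  .filter((t) => t.tokenType === ErrorChar)
  .map((t) => `Unexpected character "${t.image}" at offset ${t.startOffset}`);
```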
-
Hey @nair1abhishek,
What do you mean by this? I.e. just adding the erroneous text to the previous token? You should be able to do that yourself with the lexer result. The lexer result contains all errors that appeared during lexing. Chevrotain doesn't have any built-in methods to incorporate lexer errors into tokens (which is why they're lexer errors in the first place!).
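For completeness, here is a minimal sketch of that do-it-yourself route: read the spans from `lexResult.errors` and stitch them back into the token stream as synthetic tokens. `tokenize()` returning `{ tokens, groups, errors }` and the `createTokenInstance` helper are Chevrotain's actual API; the `Unmatched` token type and the `tokenizeWithRecovery` wrapper are assumptions for illustration.

```ts
import { createToken, createTokenInstance, IToken, Lexer } from "chevrotain";

// Hypothetical token type for unmatched input (an assumption for this
// sketch, not something Chevrotain defines). Lexer.NA means the lexer
// itself never matches it; instances are only created manually below.
const Unmatched = createToken({ name: "Unmatched", pattern: Lexer.NA });

// Turn each lexing error back into a synthetic token so the full input is
// represented in the token stream, while lexResult.errors still carries
// the diagnostics for error reporting.
function tokenizeWithRecovery(lexer: Lexer, text: string) {
  const lexResult = lexer.tokenize(text);
  const synthetic: IToken[] = lexResult.errors.map((err) =>
    createTokenInstance(
      Unmatched,
      text.slice(err.offset, err.offset + err.length),
      err.offset,
      err.offset + err.length - 1,
      err.line ?? NaN,
      err.line ?? NaN, // simplification: assumes the span stays on one line
      err.column ?? NaN,
      (err.column ?? NaN) + err.length - 1
    )
  );
  // Restore source order; downstream code now sees every input character.
  const tokens = [...lexResult.tokens, ...synthetic].sort(
    (a, b) => a.startOffset - b.startOffset
  );
  return { ...lexResult, tokens };
}
```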