Limit completions to top 40 #1218
Conversation
We are overwhelming the LSP client by sending hundreds of completions after the first character. Instead, let's send 20 at a time and refresh for more when the user types another word.
Could we do some sorting to send the most used ones?
We don't have any usage statistics, so I don't see how.
It would be really frustrating if autocompletion does not display what I want just because there are 20 similarly named functions... (say I want …)
If it's not in the top 20, are you going to scroll down the list repeatedly (unlikely) or keep typing characters until the one you want gets to the top (almost certainly)? I would ask why 20 though - where does VS Code start to get noticeably slow? 20 feels relatively low to me (sometimes I do …)
Note that as soon as you type an additional character the completions are refreshed. I picked 20 because it seems unreasonable to scroll more than 20 lines, but given that completions can also serve as hints I think it's reasonable to increase this to 50 or even 100. EDIT: Actually, the VSCode popup only ever shows 12 lines, so I think 20 is reasonable.
Should we have a test for this?
Or wrapping around to go directly to the bottom of the list. I often do such a wrapping move. (I don't know if this is applicable to VSCode or other editors, though.)
Just type another character and you will get better suggestions. I've increased the limit to 40; maybe you can contribute a change to make it user customisable? A freezing UI is a much worse experience than a limited number of suggestions, so I think the current tradeoff is justified.
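Making the limit user-customisable could amount to threading an options value through to the truncation step; a hypothetical Haskell sketch follows (the record, field, and function names are illustrative, not the actual ghcide/HLS configuration API):

```haskell
-- Hypothetical options record; the real configuration plumbing in ghcide/HLS differs.
newtype CompletionOptions = CompletionOptions
  { optMaxCompletions :: Int  -- user-configurable cap on returned completions
  }

-- Default matching the limit chosen in this PR.
defaultCompletionOptions :: CompletionOptions
defaultCompletionOptions = CompletionOptions { optMaxCompletions = 40 }

-- Apply the configured cap instead of a hard-coded constant.
capCompletions :: CompletionOptions -> [a] -> [a]
capCompletions opts = take (optMaxCompletions opts)
```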
LGTM, thank you for the adjustment!
Do we have documentation of all our configs, what they do, what they mean, etc.? If yes, you probably need to note it there. If not, should we add a ticket to create such a document?
I don't think that such a document exists, no. /cc @jneira
It's documented in the README. Not sure how complete and correct the documentation is, though.
@pepeiborra - it would be good to document this new flag in the README.
Force-pushed from 48e6357 to 79bfb44
It looks like the eval plugin test is really unstable... Maybe we should replace the eval parser with the actual GHC parser.
I wonder if using the streaming results support would help with the freezing problem?
What's the streaming results support?
Returning partial results. I became aware of it from this lsp-mode issue. However, looking at it a bit more, I take it back. I think the intention is more to allow servers to return results faster, rather than to allow sending smaller batches to avoid overloading the client.
It's not very widely implemented either, not even by VSCode.
My experience with Emacs is that they are only refreshed once the list of matching completions becomes empty. The limiting also interacts poorly with some matching mechanisms; e.g. to match …
Your experience with previous versions of HLS/ghcide? Then that's consistent with the expected behaviour, since we were telling the LSP client that the list of completions provided was "complete" and didn't need refreshing. Re your example with …. Finally, since the …
We are overwhelming the LSP client by sending hundreds of completions after the first character. In VSCode this is noticeable - occasionally the UI freezes for a second or so while displaying the completions popup.
Instead, let's send 20 at a time and refresh for more when the user types another word - VSCode no longer freezes after this change.
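The approach amounts to truncating the candidate list and setting the LSP `isIncomplete` flag so the client re-queries as the user keeps typing. A minimal, self-contained Haskell sketch of that idea (the record and constant names below are illustrative stand-ins, not the actual lsp-types/ghcide definitions):

```haskell
-- Simplified stand-ins for the LSP completion types; the real lsp-types
-- records differ, so treat these as illustrative only.
data CompletionItem = CompletionItem { ciLabel :: String }
  deriving Show

data CompletionList = CompletionList
  { clIsIncomplete :: Bool           -- True asks the client to re-request as the user types
  , clItems        :: [CompletionItem]
  } deriving Show

-- Cap discussed in this PR (hypothetical constant name).
maxCompletions :: Int
maxCompletions = 40

-- Keep only the first 'maxCompletions' candidates and mark the list as
-- incomplete whenever anything was dropped, so the client refreshes the
-- request on further keystrokes instead of filtering a stale, truncated list.
limitCompletions :: [CompletionItem] -> CompletionList
limitCompletions candidates = CompletionList
  { clIsIncomplete = not (null (drop maxCompletions candidates))
  , clItems        = take maxCompletions candidates
  }

main :: IO ()
main = print (limitCompletions [CompletionItem ("item" ++ show n) | n <- [1 .. 100 :: Int]])
```

Marking the list incomplete only when something was actually dropped avoids needless re-requests once the candidate set already fits under the cap.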