Implement reference streaming from language server to UI #20010
I understand that the actual data provider can have a streaming nature, but I don't understand why that means the frontend/UX should have a streaming nature as well. Is it that computing the whole result set is very slow and that therefore incremental updates are desired? Or is the whole set so large that it won't fit into memory?
Correct. For large repositories the language server may take multiple seconds to find all references. To get the full benefit of these changes, we will also be proposing changes to LSP to support streaming, but for now, I would like to keep that out of scope for this particular discussion.
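To make the proposal concrete, here is a minimal sketch of what a streaming reference provider could look like. This is a hypothetical shape, not the actual VS Code or LSP API: partial batches are pushed through a callback, and the promise resolves with the complete set once the provider is done.

```typescript
interface Location {
  uri: string;
  line: number;
}

// Hypothetical interface (an assumption for illustration, not the real API):
// partial results stream through onPartialResult; the promise resolves with
// the full set when the provider finishes.
interface StreamingReferenceProvider {
  provideReferences(
    uri: string,
    onPartialResult: (batch: Location[]) => void
  ): Promise<Location[]>;
}

// Toy provider: fast "local" results are emitted first; slower "global"
// results (e.g. cross-repository references) arrive in a later batch.
const provider: StreamingReferenceProvider = {
  async provideReferences(uri, onPartialResult) {
    const local: Location[] = [{ uri, line: 3 }, { uri, line: 17 }];
    onPartialResult(local); // the UI can render these immediately
    const global: Location[] = [{ uri: "other-repo/a.ts", line: 8 }];
    onPartialResult(global);
    return [...local, ...global];
  },
};

// Usage: collect batches as they arrive.
const batches: Location[][] = [];
provider.provideReferences("main.ts", (b) => batches.push(b)).then((all) => {
  // batches holds 2 partial updates; all holds the 3 combined results
});
```

The key design point is that the consumer gets the same final array it would have gotten from a non-streaming call, so streaming-unaware callers can simply await the promise.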
Thanks for clarifying. To be honest, it will be a long way with many open questions before this can be implemented. The big question is what the UI should do. Remember that there can always be more than one XYZ provider, which means results get merged. It's also unclear to me how sorting should happen. Today we have one sort order over all items, and we also pick the nearest item considering the location at which reference search was invoked. I don't think sorting and picking the nearest should change while results come in. In the beginning we had Cmd+P show results from file and symbol search, and I remember that we would simply wait for all results to come in, because all other attempts yielded a nervous/jumpy UI. I would tackle this UX-first, also considering other providers like CompletionItemProvider, before talking about API.
I agree that there are some UX issues to think through and solve, but I hope that doesn't block the discussion of the mechanics of streaming. I think the UX issues are solvable.
Can you explain why this might cause a problem with my proposal?
I don't think this needs to change with streaming. Any time the data changes, sort the intermediate result and display it.
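The "re-sort on every update" approach described above can be sketched as follows. This is a simplified model, assuming a single global sort order by file and then by line (the real implementation would also account for picking the nearest item to the invocation location).

```typescript
interface Ref {
  uri: string;
  line: number;
}

// Sketch: accumulate all results in one list; whenever a new batch arrives,
// merge it in and re-sort the whole intermediate result before rendering.
class ReferenceModel {
  private items: Ref[] = [];

  add(batch: Ref[]): Ref[] {
    this.items.push(...batch);
    // One global sort order over all items: by file, then by line.
    this.items.sort((a, b) =>
      a.uri === b.uri ? a.line - b.line : a.uri.localeCompare(b.uri)
    );
    return this.items;
  }
}

const model = new ReferenceModel();
model.add([{ uri: "b.ts", line: 5 }]);
const view = model.add([{ uri: "a.ts", line: 2 }, { uri: "b.ts", line: 1 }]);
// view is now sorted: a.ts:2, b.ts:1, b.ts:5
```

This keeps the final sort order identical to the non-streaming case; the trade-off, as discussed below, is that items can move while results are still arriving.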
The baseline is the user is looking at nothing until all results are done. If we stream results in, they might get lucky and find the one they want before all results are done (especially if the language server happens to send "local" results first). It seems like the user is strictly better off with streaming. The challenge is that the intermediate UX won't be that useful if items are being added so quickly that the user can't click on the one they want (e.g. they see a reference in another file they want, but references to the local file are streaming in and bumping that result down). In the worst case they just wait for the search to complete anyway. If the jitter ends up being a problem, I think there are UX techniques we could use to mitigate it (e.g. grouping results and reserving space). We do something like this on sourcegraph.com
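The grouping mitigation mentioned above can be illustrated with a small sketch (hypothetical code, not the sourcegraph.com implementation): results are bucketed by file, and each file group keeps the position where it first appeared, so a late batch can only grow an existing group or append a new one at the end. Items the user is already aiming at are not bumped around.

```typescript
interface Ref {
  uri: string;
  line: number;
}

// Sketch of the grouping mitigation: file groups are frozen in first-seen
// order, so streaming updates never reorder groups the user is looking at.
class GroupedResults {
  private order: string[] = []; // file groups in first-seen order
  private groups = new Map<string, Ref[]>();

  add(batch: Ref[]): void {
    for (const ref of batch) {
      let group = this.groups.get(ref.uri);
      if (!group) {
        group = [];
        this.groups.set(ref.uri, group);
        this.order.push(ref.uri); // a new group always goes to the end
      }
      group.push(ref); // existing groups only grow in place
    }
  }

  flatten(): Ref[] {
    return this.order.flatMap((uri) => this.groups.get(uri)!);
  }
}

const r = new GroupedResults();
r.add([{ uri: "local.ts", line: 1 }]);
r.add([{ uri: "remote.ts", line: 9 }, { uri: "local.ts", line: 4 }]);
// Group order stays [local.ts, remote.ts] even though remote.ts results
// arrived in a later batch.
```

The cost of this stability is that the final order may differ from a single global sort, which is exactly the tension raised earlier in the thread.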
For a little more context, Sourcegraph's custom language servers support global reference results from GitHub. These results take a bit longer, but we don't want to necessarily block local results from showing up. This week I am working to get the results streamed all the way to the UI in VS Code, so I should have more context on the UX issues soon. In the meantime, please let me know if you have any feedback on this API design.
That exactly is the problem, because there is no user interaction to synchronise with. Think about this: two references are already shown, you are moving your mouse towards one to select it, and just before you do that the list updates, resorts, and you 'lose' your item. Now you have to find it again, and the whole process repeats until all results are there. The fact that there are multiple providers makes that worse, because a single provider might be able to provide reasonable clues to keep its results stable, but it doesn't know how its items stack in comparison to those of other providers. Wrt the API design the
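One possible answer to the multi-provider stability concern is to give each provider a fixed slot in the merged list, so one provider's updates can never reorder another provider's block. This is a sketch of one option under discussion, not an agreed design; the slot names and ordering-by-registration rule are assumptions.

```typescript
interface Ref {
  uri: string;
  line: number;
}

// Sketch: each provider owns a fixed slot; an update only replaces that
// slot's contents, and rendering concatenates slots in registration order.
class MultiProviderMerge {
  private slots: Ref[][];

  constructor(providerCount: number) {
    this.slots = Array.from({ length: providerCount }, () => []);
  }

  update(providerIndex: number, results: Ref[]): void {
    this.slots[providerIndex] = results; // only this provider's block moves
  }

  merged(): Ref[] {
    return this.slots.flat();
  }
}

const merge = new MultiProviderMerge(2);
merge.update(1, [{ uri: "global.ts", line: 7 }]); // slow provider arrives first
merge.update(0, [{ uri: "local.ts", line: 2 }]);
// merged() lists local.ts:2 first, because provider 0 always renders first.
```

This trades a single global sort order for per-provider stability, which is one of the UX trade-offs the thread is weighing.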
Yes, we are both describing the same scenario. I have two thoughts on this.
Anyway, I will try to get it working locally to see what the actual user experience is.
edit: I agree and updated the original proposal to reflect this feedback. Thanks!
I updated the original proposal to
@felixfbecker fyi
This issue is being closed to keep the number of issues in our inbox at a manageable level. We are closing issues that have been on the backlog for a long time but have not gained traction, based on the number of votes the issue has received and the number of duplicate issues filed. If you disagree and feel that this issue is crucial, we are happy to listen and to reconsider. If you wonder what we are up to, please see our roadmap and issue reporting guidelines. Thanks for your understanding and happy coding!
edited 2017-02-16
This is a proposal to allow language servers to stream the response to "Find all references" requests. This is beneficial because it may take a while for the language server to find all results for large repos and the user can benefit from seeing results as they are found.
Implementing this will require changes at multiple layers.