
Implement reference streaming from language server to UI #20010

Closed · nicksnyder opened this issue Feb 6, 2017 · 10 comments
Labels: api · *out-of-scope (Posted issue is not in scope of VS Code) · under-discussion (Issue is under discussion for relevance, priority, approach)

Comments

@nicksnyder (Contributor) commented Feb 6, 2017

edited 2017-02-16

This is a proposal to allow language servers to stream responses to "Find all references" requests. This is beneficial because finding all results can take a while in large repos, and the user benefits from seeing results as they are found.

Implementing this will require changes at multiple layers.

@jrieken (Member) commented Feb 6, 2017

I understand that the actual data provider can have a streaming nature, but I don't understand why that means the frontend/UX should have a streaming nature as well. Is it that computing the whole result set is very slow, and incremental updates are therefore desired? Or is the whole set so large that it won't fit into memory?

@jrieken jrieken added api info-needed Issue requires more information from poster labels Feb 6, 2017
@nicksnyder (Contributor, author)

@jrieken

Is it that computing the whole result set is very slow and that therefore incremental updates are desired?

Correct. For large repositories the language server may take multiple seconds to find all references.

To get the full benefit of these changes, we will also be proposing changes to LSP to support streaming, but for now, I would like to keep that out of scope for this particular discussion.

@jrieken (Member) commented Feb 6, 2017

Thanks for clarifying. To be honest, it will be a long way, with many open questions, before this can be implemented. The big question is what the UI should do. Remember that there can always be more than one XYZ provider, which means results get merged. It's also unclear to me how sorting should happen: today we have one sort order over all items, and we also pick the nearest item considering the location at which the reference search was invoked. I don't think sorting and picking the nearest item should change while results come in.

In the beginning, we had Cmd+P show results from file and symbol search, and I remember that we ended up simply waiting for all results to come in because all other attempts yielded a nervous/jumpy UI.

I would tackle this UX-first, also considering other providers like CompletionItemProvider, before talking about API.

@jrieken jrieken removed the info-needed Issue requires more information from poster label Feb 6, 2017
@nicksnyder (Contributor, author)

I agree that there are some UX issues to think through and solve, but I hope that doesn't block the discussion of the mechanics of streaming. I think the UX issues are solvable.

Remember that there can always be more than one XYZ provider which means results get merged.

Can you explain why this might cause a problem with my proposal?

Also it's unclear to me how sorting should happen? Today we have one sort order over all items and we also pick the nearest item considering the location at which reference search was invoked.

I don't think this needs to change with streaming. Any time the data changes, sort the intermediate result and display it.
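The "sort the intermediate result on every change" idea can be sketched as a tiny model of the result list. The `Reference` type, the sort key, and the `StreamingResults` class are all illustrative assumptions here, not VS Code's actual types:

```typescript
// Hypothetical sketch: re-sorting intermediate results as chunks stream in.
interface Reference {
  uri: string;
  line: number;
}

// Sort by file path first, then by line — a stand-in for the editor's
// "one sort order over all items".
function compareRefs(a: Reference, b: Reference): number {
  return a.uri.localeCompare(b.uri) || a.line - b.line;
}

class StreamingResults {
  private results: Reference[] = [];

  // Called whenever a provider reports another chunk of references.
  add(chunk: Reference[]): Reference[] {
    this.results.push(...chunk);
    this.results.sort(compareRefs);
    return this.results; // the UI re-renders this intermediate, sorted view
  }
}

const view = new StreamingResults();
view.add([{ uri: "b.ts", line: 10 }]);
const snapshot = view.add([{ uri: "a.ts", line: 3 }]);
// snapshot is now sorted: a.ts:3 before b.ts:10
```

The jitter jrieken describes below falls out of exactly this: a late-arriving `a.ts` chunk shifts every `b.ts` row down.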

I don't think sorting and picking the nearest should change while results come in.

The baseline is that the user is looking at nothing until all results are done. If we stream results in, they might find the one they want before the search completes (especially if the language server happens to send "local" results first). It seems like the user is strictly better off with streaming.

The challenge is that the intermediate UX won't be very useful if items are being added so quickly that the user can't click on the one they want (e.g. they see a reference in another file they want, but references in the local file are streaming in and bumping that result down). In the worst case, they just wait for the search to complete anyway.

If the jitter ends up being a problem, I think there are UX techniques we could use to mitigate it (e.g. grouping results and reserving space). We do something like this on sourcegraph.com:

[Screenshot: grouped reference results on sourcegraph.com]
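The grouping mitigation mentioned above could look something like the following sketch: results are bucketed by file, so a new result only shifts items within its own group rather than reordering the whole flat list. The types and the `GroupedResults` class are illustrative, not part of any actual implementation:

```typescript
// Hypothetical sketch of grouping-by-file to reduce streaming jitter.
interface Ref { uri: string; line: number }

class GroupedResults {
  // Map preserves group insertion order: once a file's group exists,
  // later results for other files cannot push it around.
  private groups = new Map<string, Ref[]>();

  add(ref: Ref): void {
    const group = this.groups.get(ref.uri);
    if (group) {
      group.push(ref);
      group.sort((a, b) => a.line - b.line); // re-sort only inside the group
    } else {
      this.groups.set(ref.uri, [ref]);
    }
  }

  // Flatten groups into display rows: a file header, then its references.
  render(): string[] {
    const lines: string[] = [];
    for (const [uri, refs] of this.groups) {
      lines.push(uri);
      for (const r of refs) lines.push(`  line ${r.line}`);
    }
    return lines;
  }
}
```

With this scheme, a late result for `a.ts` lands inside the `a.ts` group, and the `b.ts` group the user is mousing toward stays where it is.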

For a little more context, Sourcegraph's custom language servers support global reference results from GitHub. These results take a bit longer, but we don't want to necessarily block local results from showing up.

This week I am working to get the results streamed all the way to the UI in VS Code so I should have more context on the UX issues soon. In the meantime, please let me know if you have any feedback on this API design.

@jrieken (Member) commented Feb 6, 2017

I don't think this needs to change with streaming. Any time the data changes, sort the intermediate result and display it.

That is exactly the problem, because there is no user interaction to synchronise with. Think about this: two references are already shown, you are moving your mouse towards one to select it, and just before you do, the list updates, resorts, and you 'lose' your item. Now you have to find it again, and the whole process repeats until all results are there. The fact that there are multiple providers makes this worse, because a single provider might be able to provide reasonable clues to keep its results stable, but it doesn't know how its items stack up against those of other providers.

Wrt the API design, the Thenable isn't what I prefer. Thenable is the common denominator between various promise implementations. I know that some implementations, like WinJS.Promise, have a progress callback, but in the end it's non-standard and we don't want to push that. Instead, something like what we explored for #18066 should be used. There is some proposed API that defines Progress<T>.
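A `Progress<T>`-based provider could be sketched roughly as follows. These interfaces are illustrative assumptions loosely modeled on the `Progress<T>` idea mentioned above, not VS Code's actual proposed API:

```typescript
// Sketch: a provider that reports partial results through a Progress sink
// instead of a non-standard progress callback on a promise.
interface Progress<T> {
  report(value: T): void;
}

interface Location { uri: string; line: number }

// The returned promise resolves when the stream is complete; partial
// results arrive via progress.report along the way.
async function provideReferences(
  progress: Progress<Location[]>
): Promise<void> {
  // Report fast, local results first...
  progress.report([{ uri: "local.ts", line: 1 }]);
  // ...then slower, e.g. repository-wide, results as they are found.
  await new Promise((resolve) => setTimeout(resolve, 10));
  progress.report([{ uri: "remote.ts", line: 42 }]);
}

// A consumer collects the chunks as they arrive.
async function collect(): Promise<Location[]> {
  const all: Location[] = [];
  await provideReferences({ report: (chunk) => all.push(...chunk) });
  return all;
}
```

The key property is that the UI can render each reported chunk immediately, while still knowing from the promise when the stream has ended.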

@nicksnyder (Contributor, author) commented Feb 6, 2017

Two references are already shown, you are moving your mouse towards one to select it and just before you do that the list updates, resorts and you 'lose' your item.

Yes, we are both describing the same scenario. I have two thoughts on this.

  1. The user only "loses" their item if a new result is inserted above their target item.
    a. Grouping may be able to mitigate this.
    b. Smart language servers may be able to mitigate this.
    c. As more results stream in, the jitter slows down, so the user may not need to wait for all results before clicking the one they want.
  2. Even if the user has to wait for all the results to stop streaming, they are no worse off than they were before.

Anyway, I will try to get it working locally to see what the actual user experience is.

I know that there are some implementations, like WinJS.Promise, that have a progress callback but in the end it's non standard and we don't want to push that.

edit: I agree and updated the original proposal to reflect this feedback. Thanks!

@nicksnyder (Contributor, author)

I updated the original proposal to

  1. Remove references to ProgressThenable, and
  2. Add details about how to handle progress callbacks.

@nicksnyder nicksnyder changed the title API to stream references Implement reference streaming from language server to UI Feb 17, 2017
@nicksnyder (Contributor, author)

@jrieken @dbaeumer

I have updated the issue description with links to all of the necessary pull requests for this feature. Please take a look at these when you have the chance.

@nicksnyder (Contributor, author)

@felixfbecker fyi

@vscodebot (bot) commented May 31, 2018

This issue is being closed to keep the number of issues in our inbox at a manageable level: we are closing issues that have been on the backlog for a long time but have not gained traction. We look at the number of votes an issue has received and the number of duplicate issues filed. If you disagree and feel that this issue is crucial, we are happy to listen and to reconsider.

If you wonder what we are up to, please see our roadmap and issue reporting guidelines.

Thanks for your understanding and happy coding!
