Return responses in corresponding request order

According to these three discussions on GitHub:

microsoft/language-server-protocol#12
microsoft/language-server-protocol#306
microsoft/language-server-protocol#584

While the server implementation is free to process incoming requests
concurrently in whatever order it wants, it is generally preferred for
servers to send responses back to the client in the same order the
requests were sent, for the sake of convenience on the client side.

To achieve this, we switch the message stream from `buffer_unordered(4)`
to `buffered(4)`. As before, at most 4 futures race to completion
concurrently, but each response must now wait for the futures started
before it to complete, preserving the correct message transmission
order.
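
As a rough illustration (not part of this commit), the following minimal
sketch, assuming the `futures` and `tokio` crates plus some made-up sleep
durations, shows the difference between the two combinators: both drive up
to 4 futures concurrently, but only `buffered` yields results in the order
they were queued.

```rust
use std::time::Duration;

use futures::stream::{self, StreamExt};

#[tokio::main]
async fn main() {
    // Three dummy "handlers" that finish after different (made-up) delays.
    let delays = vec![300u64, 100, 200];
    let make_stream = || {
        stream::iter(delays.clone()).map(|ms| async move {
            tokio::time::sleep(Duration::from_millis(ms)).await;
            ms
        })
    };

    // `buffer_unordered(4)`: results come out in completion order
    // (typically [100, 200, 300] here, since shorter sleeps finish first).
    let unordered: Vec<u64> = make_stream().buffer_unordered(4).collect().await;
    println!("buffer_unordered: {unordered:?}");

    // `buffered(4)`: results come out in submission order, even though the
    // underlying futures still run concurrently.
    let ordered: Vec<u64> = make_stream().buffered(4).collect().await;
    assert_eq!(ordered, vec![300, 100, 200]);
    println!("buffered:         {ordered:?}");
}
```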

However, this introduces the potential for deadlocks when an RPC method
handler needs to send a request back to the client and await the
response as part of computing its final result. Imagine
a scenario where a flood of client-to-server requests comes in all at
once over `stdin`, and each of the method handlers happens to issue one
or more server-to-client requests as part of its work. The handlers
for these incoming requests exhaust all 4 available execution slots in
the buffer, issue server-to-client requests, and await the responses. At
this point, the `Buffered<St>` is blocked and the `mpsc::channel(1)`
buffer has filled up as well, temporarily blocking `framed_stdin` from
processing new messages as a form of backpressure. Unfortunately, this
means that the pending RPC handlers will likely never receive the
responses they require from the client, and the 4 pending futures will
block forever. While this is generally less likely to occur with
`buffer_unordered(4)` (the lack of an order requirement means that
quickly-resolving futures can exit the buffer quickly, compensating for
any blocked futures and making room for more incoming requests to be
processed), it's still possible to stall the pipeline given enough
volume from the client.
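
For a hedged, simplified picture of the backpressure involved (again, not
the actual transport code), the sketch below uses `futures`' bounded
channel directly; note that `mpsc::channel(n)` admits `n` buffered messages
plus one reserved slot per sender, so `channel(1)` with a single sender
fills up after two messages.

```rust
use futures::channel::mpsc;

fn main() {
    // With a buffer of 1 and a single sender, two messages fit (one shared
    // slot plus the sender's reserved slot) before the channel reports full.
    let (mut sender, mut receiver) = mpsc::channel::<u32>(1);

    assert!(sender.try_send(1).is_ok());
    assert!(sender.try_send(2).is_ok());
    // The next send is refused: in the server, this is the point where the
    // stdin loop stops reading new messages, including the client responses
    // that the stalled handlers are waiting for.
    assert!(sender.try_send(3).is_err());

    // Only once the consumer drains a message can the producer proceed again.
    assert_eq!(receiver.try_next().unwrap(), Some(1));
    assert!(sender.try_send(3).is_ok());
}
```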

To fix this issue while keeping the request/response ordering
requirement, we can either increase the buffer size of `mpsc::channel()`
or switch to an `mpsc::unbounded()` to allow the `framed_stdin` stream
to keep moving forward even when all 4 slots in `Buffered<St>` are full.
Given that having no backpressure mechanism at all is probably
inadvisable, we opt for increasing the `mpsc::channel()` buffer size
from 1 to 16. Now, even if all 4 available buffer slots are full,
`framed_stdin` can still read at least 16 more JSON-RPC messages ahead
and process any incoming responses from the client. When typing very
quickly in my editor against a simple language server, this seems like a
reasonable value to me.
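
To make the trade-off concrete, here is a small sketch, again assuming only
the `futures` crate, contrasting the two options: `mpsc::unbounded()` never
pushes back on the producer, while `mpsc::channel(16)` allows roughly 16
messages of headroom before it signals fullness.

```rust
use futures::channel::mpsc;

fn main() {
    // Option 1: unbounded channel. Sends always succeed while the receiver
    // is alive, so a fast client could grow the queue without limit -- no
    // backpressure at all.
    let (unbounded_tx, _unbounded_rx) = mpsc::unbounded::<u32>();
    for i in 0..10_000u32 {
        assert!(unbounded_tx.unbounded_send(i).is_ok());
    }

    // Option 2: bounded channel with a larger buffer. The producer may run
    // ahead by roughly 16 messages, then must wait for the consumer to drain.
    let (mut bounded_tx, _bounded_rx) = mpsc::channel::<u32>(16);
    let mut accepted: u32 = 0;
    while bounded_tx.try_send(accepted).is_ok() {
        accepted += 1;
    }
    println!("bounded channel accepted {accepted} messages before pushing back");
}
```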
ebkalderon committed Aug 14, 2020
1 parent 5265224 commit 030d3c0
Showing 1 changed file with 2 additions and 2 deletions.

src/transport.rs:

```diff
@@ -91,11 +91,11 @@ where
         T::Error: Into<Box<dyn Error + Send + Sync>>,
         T::Future: Send,
     {
-        let (mut sender, receiver) = mpsc::channel(1);
+        let (mut sender, receiver) = mpsc::channel(16);

         let framed_stdin = FramedRead::new(self.stdin, LanguageServerCodec::default());
         let framed_stdout = FramedWrite::new(self.stdout, LanguageServerCodec::default());
-        let responses = receiver.buffer_unordered(4).filter_map(future::ready);
+        let responses = receiver.buffered(4).filter_map(future::ready);
         let interleave = self.interleave.fuse();

         let mut messages = framed_stdin
```
