
Create tests for QUICStreams #10

Closed · 5 tasks done
tegefaulkes opened this issue Apr 21, 2023 · 18 comments

Labels: development (Standard development), r&d:polykey:core activity 4 (End to End Networking behind Consumer NAT Devices)

tegefaulkes commented Apr 21, 2023

Specification

We're missing tests for streams; these need to be created. We should use fast-check to generate random data for streams. There are a few things we need to test.

  1. Streams work: they can be created, data can be sent, and they can be closed.
  2. Multiple streams work without issue.
  3. Streams are cleaned up when the connection cleans up.
  4. Streams propagate errors where required, e.g. general communication failure, stream was destroyed, connection was destroyed, etc.

Additional context

Tasks

  1. Test for stream creation.
  2. Test for stream destruction.
  3. Test for sending data through streams.
  4. Test that errors propagate through streams.
  5. Test that streams clean up during connection clean-up.
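The randomized-data tests above could be sketched like this. This is a minimal sketch: in the real tests fast-check's arbitraries would supply the data, and the property shown here (bytes survive arbitrary re-chunking) is my assumption about what the stream round-trip tests should assert, not the final test suite.

```typescript
// mulberry32: tiny seeded PRNG so failing cases are reproducible
// (fast-check would normally handle generation and shrinking).
function mulberry32(seed: number): () => number {
  return () => {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Split `data` into random-sized chunks, as a QUIC stream might deliver it
function rechunk(data: Uint8Array, rng: () => number): Uint8Array[] {
  const chunks: Uint8Array[] = [];
  let off = 0;
  while (off < data.length) {
    const len = 1 + Math.floor(rng() * (data.length - off));
    chunks.push(data.subarray(off, off + len));
    off += len;
  }
  return chunks;
}

function concat(chunks: Uint8Array[]): Uint8Array {
  const out = new Uint8Array(chunks.reduce((n, c) => n + c.length, 0));
  let off = 0;
  for (const c of chunks) { out.set(c, off); off += c.length; }
  return out;
}

// Property: however the data is chunked in transit, the receiver
// reassembles exactly the bytes that were written.
function roundTripPreservesBytes(seed: number): boolean {
  const rng = mulberry32(seed);
  const data = Uint8Array.from({ length: 256 }, () => Math.floor(rng() * 256));
  const received = concat(rechunk(data, rng));
  return received.length === data.length &&
    received.every((b, i) => b === data[i]);
}
```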
@tegefaulkes (Contributor, Author)

When it comes to streams, connections and servers, we have two ways of ending them.

  1. Wait until it has finished, draining while rejecting any new usage.
  2. Force it to finish, with an error if required.

On the stream level this means we either wait for them to end and destroy themselves, or force them to end with errors sent through the readable and writable streams.

On the connection level, if we're draining then we wait for the streams to end on their own while rejecting any new streams. Otherwise we force all the existing streams to end, then the connection ends itself.

On the server level it's pretty similar. If we're draining we need to reject new connections and wait for existing connections to end. Otherwise we force all existing connections to destroy.

All 3 levels of this have a similar enough usage that we need a standard way of handling it. How would we standardise this?

Case 2 is pretty easy: if we're forcing things to end then we just need to propagate destruction down the layers.

Case 1 takes a little more work. We need to support a draining state where we're waiting for all existing usage to end while rejecting any new usage. I'm using the word 'usage' generically here; usage can be existing or new connections, or the same for streams. To support this we need 3 things.

  1. A way to enter a draining state where we reject any new connections/streams while waiting for existing connections/streams to end and destroy themselves.
  2. When a stream or connection has ended they need to destroy themselves automatically.
  3. We need a way to await the destruction of the connection/stream. This can be a promise or the destroy event.

Another question is: do we keep these things separate, or do we encapsulate all this behaviour within the destroy() methods with a force flag to switch between the two cases?
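The force-flag option could look something like this sketch. The names (`StreamManager`, `MockStream`) are hypothetical stand-ins, not the real QUICConnection/QUICStream API; it only illustrates the draining state, automatic self-destruction, and awaitable destruction described above.

```typescript
// Sketch: encapsulating both shutdown cases behind destroy({ force }).
class MockStream {
  public readonly closed: Promise<void>;
  protected resolveClosed!: () => void;
  constructor(protected onClose: () => void) {
    this.closed = new Promise<void>((res) => { this.resolveClosed = res; });
  }
  // ending a stream destroys it automatically and resolves `closed`
  public end(_error?: Error): void {
    this.onClose();
    this.resolveClosed();
  }
}

class StreamManager {
  protected streams = new Set<MockStream>();
  protected draining = false;

  public newStream(): MockStream {
    // draining state: reject any new usage
    if (this.draining) throw new Error('draining: new streams rejected');
    const stream: MockStream = new MockStream(() => this.streams.delete(stream));
    this.streams.add(stream);
    return stream;
  }

  public async destroy({ force = false }: { force?: boolean } = {}): Promise<void> {
    this.draining = true;
    if (force) {
      // case 2: propagate destruction down the layer
      for (const stream of [...this.streams]) stream.end(new Error('forced'));
    }
    // case 1: await destruction of all existing streams
    await Promise.all([...this.streams].map((s) => s.closed));
  }
}
```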

@tegefaulkes (Contributor, Author)

It's possible for a stream to be closed or rejected. This would be used when rejecting new streams during a draining state. Currently I don't think this is supported within QUICStream.

To end a stream we can use connection.stream_shutdown (https://docs.rs/quiche/latest/quiche/struct.Connection.html#method.stream_shutdown) to signal the end of the readable and writable side of the stream. These will be seen as a STOP_SENDING frame for the forward stream and a STREAM_RESET frame for the reverse stream.

The STREAM_RESET will result in an error when processing the stream's recv. The STOP_SENDING will result in an error on the stream's send. Usually we won't see that the writable stream has ended until we try writing to it. But if we shutdown both directions at the same time, we'll see the result of this in the next recv step. So we can check that the writable has closed at the same time using connection.streamWritable(this.streamId, 0). If the writable has closed then this will throw the StreamStopped error.

In any case, when calling QUICConnection.streamNew() we need to know that the stream was rejected immediately. So ultimately stream creation needs to fail if the stream was rejected. We already send a single stream message when creating the stream, we can also await a response and any error of stream creation that comes along with it.
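The "await a response" idea could be sketched as a pending-creation handle. The names here are hypothetical, and the design assumes the peer either acks the first byte or resets the stream; it is not the real streamNew() implementation.

```typescript
// Sketch: stream creation resolves only once the peer's response to the
// first byte is processed, and rejects if the stream was reset/stopped.
class PendingStreamCreation {
  public state: 'pending' | 'ready' | 'rejected' = 'pending';
  public readonly ready: Promise<void>;
  protected resolveReady!: () => void;
  protected rejectReady!: (e: Error) => void;

  constructor() {
    this.ready = new Promise<void>((res, rej) => {
      this.resolveReady = res;
      this.rejectReady = rej;
    });
    // avoid unhandled-rejection noise if the caller never awaits
    this.ready.catch(() => {});
  }

  // called when the peer's response to the creation byte is processed
  public onAck(): void {
    if (this.state !== 'pending') return;
    this.state = 'ready';
    this.resolveReady();
  }

  // called when the peer rejects the stream (e.g. StreamStopped on send)
  public onReject(code: number): void {
    if (this.state !== 'pending') return;
    this.state = 'rejected';
    this.rejectReady(new Error(`stream rejected with code ${code}`));
  }
}
```

streamNew() would create one of these, send the first byte, and await `ready` before returning the stream to the caller.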

@tegefaulkes (Contributor, Author)

I can see that we try to send a 0-length buffer when creating a new stream. So far as I can tell this message is not actually sent. In fact, as far as I can tell, no 0-length stream messages are sent unless it's a finish frame for the stream.

I'll dig into this some more. But so far, to actually open a stream I have to send a message.

tegefaulkes added a commit that referenced this issue Apr 26, 2023
* Related #10

[ci skip]
@CMCDragonkai (Member)

> I can see that we try to send a 0-length buffer when creating a new stream. So far as I can tell this message is not actually sent. In fact, as far as I can tell, no 0-length stream messages are sent unless it's a finish frame for the stream.
>
> I'll dig into this some more. But so far, to actually open a stream I have to send a message.

I thought this was being done. Do note that something has to actually "flush" the stream data, which corresponds to sending a UDP message on the UDP socket.

You might want to look into the looping problem first to get an understanding of how the UDP socket is actually sending data while looping over any data that's still in the quiche buffer.

@tegefaulkes (Contributor, Author)

The quiche send should be called after stream sends and recvs. I should check if that's being done.

@tegefaulkes (Contributor, Author)

To have the stream shutdowns be more reactive on the receiving side, we need to check if the sending or receiving stream has closed. This matters for the sending stream since it can close at any time, but we currently only discover this when writing to it. Ideally the QUICStream can destroy and clean itself up without the application interacting with the stream.

tegefaulkes added a commit that referenced this issue Apr 27, 2023
@tegefaulkes (Contributor, Author)

I think I understand the stream state machines better now. One of the problems I was running into was that the QUICStream was cleaning up before the underlying stream had fully finished. I have now determined this is because, if we force close the stream with streamShutdown, the stream's sending and receiving sides don't close immediately. They wait for the shutdown to be essentially acknowledged.

So what was happening: internally in QUICStream we were forcing the streams to close, immediately closing the webstreams, which is fine. But we were also destroying and cleaning up the QUICStream before the underlying stream had ended. As a result, we were re-creating the QUICStream before the underlying stream fully finished.

The main change is: we can't finish destroying the QUICStream until we have received a STOP_SENDING and a STREAM_RESET frame on the writer and reader respectively.

Here are some quick notes about what needs to be done in each case of a stream ending. Note that this is from the perspective of a unidirectional stream; a bidirectional stream has two of these in opposite directions.

  1. Normal ending: the sender sends a fin packet when the webstream writable has ended. The receiver reads the fin packet and ends the webstream reader.
  2. Sender-initiated end: the sender calls streamShutdown(writer), causing a RESET_STREAM frame to be sent. The receiver tries to read and receives a StreamReset(u64) exception, and should error the webstream readable immediately. The sender side isn't considered complete until a STOP_SENDING frame has been received.
  3. Receiver-initiated end: the receiver calls streamShutdown(reader), causing a STOP_SENDING frame to be sent. The sender tries to send and receives a StreamStopped(u64) exception; the webstream writable should be errored. The receiver should immediately end the webstream readable, but the stream hasn't fully closed until the RESET_STREAM frame has been received.
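The sender side of those cases can be read as a tiny state machine. This is a sketch: the state and event names are mine, not the codebase's, and treating the FIN as needing an ack before the send side is fully closed is my assumption.

```typescript
// Sketch: sender-side close states for one unidirectional stream.
type SenderState = 'open' | 'finSent' | 'resetSent' | 'closed';
type SenderEvent =
  | 'writableEnded'        // case 1: webstream writable finished, FIN goes out
  | 'finAcked'             // case 1: peer acknowledged the FIN
  | 'shutdownWrite'        // case 2: forced end, RESET_STREAM goes out
  | 'stopSendingReceived'; // case 2: peer's STOP_SENDING arrives

function senderNext(state: SenderState, event: SenderEvent): SenderState {
  switch (state) {
    case 'open':
      if (event === 'writableEnded') return 'finSent';
      if (event === 'shutdownWrite') return 'resetSent';
      return state;
    case 'finSent':
      return event === 'finAcked' ? 'closed' : state;
    case 'resetSent':
      // not complete until the peer's STOP_SENDING has been received
      return event === 'stopSendingReceived' ? 'closed' : state;
    case 'closed':
      return state;
  }
}
```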

@tegefaulkes (Contributor, Author)

This is mostly solved now. But annoyingly, the stream doesn't show up in the readable or writable iterator once it has finished. So while the QUICStream is waiting for the streams to finish, it ends up waiting forever since its read() or write() is never called again. So the destroy never resolves, even though the underlying stream enters the finished state almost immediately.

So unless I want to poll all active streams just to know if they're finished, I should have the stream check for a finished state every time we process a recv on the connection.
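That check could look like this sketch. The names are hypothetical: `isFinished` stands in for a wrapper around quiche's stream_finished check, and `destroy` for resolving the QUICStream's destroy promise.

```typescript
// Sketch: after each connection.recv(), poll only the streams that are
// waiting to finish, since finished streams never reappear in the
// readable()/writable() iterators.
interface FinishingStream {
  isFinished(): boolean; // would wrap quiche's stream_finished()
  destroy(): void;       // would resolve the QUICStream's destroy promise
}

class ConnectionStreamTracker {
  protected finishing = new Set<FinishingStream>();

  // register a stream that has begun shutting down
  public watch(stream: FinishingStream): void {
    this.finishing.add(stream);
  }

  // call once per processed packet, right after connection.recv()
  public checkFinishedStreams(): number {
    let cleaned = 0;
    for (const stream of [...this.finishing]) {
      if (stream.isFinished()) {
        stream.destroy();
        this.finishing.delete(stream);
        cleaned++;
      }
    }
    return cleaned;
  }
}
```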

tegefaulkes added a commit that referenced this issue Apr 28, 2023
Just need to check for stream state change after connection receives a packet. Ideally we only check finishing streams.

* Related #10

[ci skip]
tegefaulkes added a commit that referenced this issue May 1, 2023
* Related #10

[ci skip]
@tegefaulkes (Contributor, Author)

I've dealt with most of the problems now. Just need to finish up the tests.

One thing I still need to look into: the time between destroying a connection and it cleaning up is taking longer than expected.

tegefaulkes added a commit that referenced this issue May 1, 2023
* Related #10

[ci skip]
@tegefaulkes (Contributor, Author)

There's a problem where the readableStream is polling streamRecv() before any stream messages have been sent, so technically the stream doesn't exist, and I'm getting an InvalidStreamState(0) error.

On top of this, since 0-length messages are coalesced and ignored, if we want the stream to exist on both ends when the client creates it, we need to send at least 1 byte. So a stream will need to ignore the 1st sent byte.

I also want the ability to reject a new stream; in that case we want stream creation to fail. To facilitate this, I think stream creation should send and recv 1 byte before completing. If a stream is rejected, or any errors happen, stream creation can throw.
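The receiving half of that 1-byte handshake could be sketched as a stripper that drops the first byte regardless of how the data is chunked. This is an assumption about the framing, not the final protocol, and the names are hypothetical.

```typescript
// Sketch: the sender prepends a single byte so that remote stream state is
// created; the receiver drops the first byte it ever sees on the stream.
const OPEN_BYTE = 0x00;

function frameOpen(payload: Uint8Array): Uint8Array {
  const out = new Uint8Array(payload.length + 1);
  out[0] = OPEN_BYTE;
  out.set(payload, 1);
  return out;
}

class PreambleStripper {
  protected stripped = false;

  // feed each received chunk through; returns the application bytes
  public next(chunk: Uint8Array): Uint8Array {
    // 0-length chunks carry no preamble (they are coalesced/ignored anyway)
    if (this.stripped || chunk.length === 0) return chunk;
    this.stripped = true;
    return chunk.subarray(1);
  }
}
```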

@CMCDragonkai (Member) commented May 1, 2023 via email

@tegefaulkes (Contributor, Author)

I can do that. In that case it means streams are not created until the application sends data through them. So when we create a new stream, it will exist in a nascent state until data is sent through it. This is important since we can't do any stream operations until the underlying stream state has been created.

So far as I can tell, stream state isn't created until data on that stream is sent or received. Until then, most interaction with the underlying stream will result in an InvalidState(0) error.

Ah, so local state is created if we 'send' a 0-length buffer for that stream, but no data is actually sent for it. So state is created locally but no state is created remotely in this case.

@CMCDragonkai (Member)

So we should send a 0-length buffer then?

@tegefaulkes (Contributor, Author)

Yes, just so we can create local state for the stream.

tegefaulkes added a commit that referenced this issue May 2, 2023
* Related #10

[ci skip]
tegefaulkes added a commit that referenced this issue May 2, 2023
tegefaulkes added a commit that referenced this issue May 2, 2023
* Related #10

[ci skip]
tegefaulkes added a commit that referenced this issue May 3, 2023
* Related #10

[ci skip]
@tegefaulkes (Contributor, Author)

All the tests are complete now and seem to be working. Some notes about the streams:

  1. Local stream state is created when the QUICStream is created. No state is created remotely until data is sent on that stream.
  2. If the remote side would reject a stream by closing it immediately, then the local side will not know this until the first message is sent.
  3. The remote side will not know a stream exists until it receives the first message for it.
  4. It's possible for a stream to be ended before it is created on the remote side. This happens when you create a QUICStream and destroy or end it immediately. In this case, the remote state is created when it receives the end frames; the remote QUICStream is created and is ended immediately.

I need to do some clean up on the streams code and then I can resolve this issue.

tegefaulkes added a commit that referenced this issue May 3, 2023
* Related #10

[ci skip]
@CMCDragonkai (Member)

How is 2. possible?

> If the remote side would reject a stream by closing it immediately then the local side will not know this until the first message is sent.

How would the remote side know to close a stream if the first message hasn't been sent?

tegefaulkes added a commit that referenced this issue May 3, 2023
* Related #10

[ci skip]
@CMCDragonkai (Member)

This ability to have bidirectional streams is going to be crucial for any interactive RPC comms where back and forth needs to occur. But if we can get away with nesting RPC calls, that would be good.

@tegefaulkes (Contributor, Author)

> How is 2. possible?
>
> > If the remote side would reject a stream by closing it immediately then the local side will not know this until the first message is sent.
>
> How would the remote side know to close a stream if the first message hasn't been sent?

It can't; that's why rejection would happen on the first message as opposed to at QUICStream creation.

CMCDragonkai pushed a commit that referenced this issue May 17, 2023
Should handle more edge cases now; a lot of odd interactions and edge cases were revealed by the tests.

* Fixes #10

[ci skip]
@CMCDragonkai CMCDragonkai added the r&d:polykey:core activity 4 End to End Networking behind Consumer NAT Devices label Jul 9, 2023