Implement #214 JUICE_CONCURRENCY_MODE_USER #226
base: master
Conversation
Nice job, thank you for the PR! Sorry for the late review, it got buried in my todo list.
> I just made DNS resolution's defined behavior blocking in `JUICE_CONCURRENCY_MODE_USER`. There were some things that confused me about asynchronous DNS handling, namely I think `conn_impl` can be `NULL` until DNS resolution is finished? I also just like the fact that it means there are no threads required when using this concurrency mode. I think it's reasonable to make DNS resolution the user's problem, but feel free to rewrite this part if you want non-blocking DNS resolution.
This is fine by me. To answer your question, `conn_impl` is `NULL` before the call to `agent_gather_candidates()`, where it is initialized; however, it is initialized before the DNS resolution.
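For reference, a minimal sketch of the lifecycle being described; the mapping from the public calls to the internal `agent_gather_candidates()` is an assumption based on this exchange:

```c
juice_config_t config = {0};
config.concurrency_mode = JUICE_CONCURRENCY_MODE_USER; // this PR's new mode
juice_agent_t *agent = juice_create(&config); // internal conn_impl is still NULL here
juice_gather_candidates(agent); // conn_impl is initialized inside agent_gather_candidates(),
                                // before the blocking DNS resolution runs
```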
> Generally the smallest way to use `juice_user_poll` is a little verbose. It is basically like `recv(3)`, even to the point where `juice_user_poll` returns for all packets, including those only used for ICE. This may have some practical use, but it's more of a side effect of the implementation. It's also a bit disjoint since everything should still be handled in `on_recv`. This is at least the minimum:
>
> ```c
> char buffer[1500];
> int ret;
> do {
> 	ret = juice_user_poll(agent, buffer, sizeof(buffer));
> } while (ret > 0);
> if (ret < 0) {
> 	// Handle error
> }
> ```
>
> The benefit of pushing the loop out of `juice_user_poll` here, rather than calling `on_recv` in the function over and over, is to facilitate zero-copy, which I give an example of in `user.c`.
I'm not a fan of this design. It's quite awkward, makes the usage convoluted, and impacts performance for no clear benefit. Why not simply loop to receive all available packets in an internal buffer and update once if necessary on each poll?
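A rough sketch of the shape this suggests, not the PR's code; `recv_datagram()`, `agent_input()`, and the `conn_user_impl_t` type are hypothetical stand-ins for the real internals, and application data would still reach the user through `on_recv`:

```c
int juice_user_poll(juice_agent_t *agent) {
	conn_user_impl_t *conn_impl = agent->conn_impl; // hypothetical type name
	char buffer[1500]; // internal buffer, so the caller no longer provides one
	int len;
	while ((len = recv_datagram(agent, buffer, sizeof(buffer))) > 0) {
		// ICE traffic is consumed internally; application data is still
		// delivered through the on_recv callback as in the other modes
		agent_input(agent, buffer, (size_t)len);
	}
	if (len < 0)
		return -1;
	// Update timers and state at most once per poll, not once per packet
	return agent_conn_update(agent, &conn_impl->next_timestamp);
}
```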
> Each agent has its own lock like we discussed in Add single-threaded mode #214, so agent use across threads should be thread-safe for everything except `juice_destroy`, if I did everything correctly.
I've seen no obvious issue, it looks good to me.
```c
@@ -41,4 +41,6 @@ int conn_send(juice_agent_t *agent, const addr_record_t *dst, const char *data,
                  int ds);
int conn_get_addrs(juice_agent_t *agent, addr_record_t *records, size_t size);

#define CONN_MODE_IS_CONCURRENT(mode) mode != JUICE_CONCURRENCY_MODE_USER
```
Suggested change:

```diff
-#define CONN_MODE_IS_CONCURRENT(mode) mode != JUICE_CONCURRENCY_MODE_USER
+#define CONN_MODE_IS_CONCURRENT(mode) (mode != JUICE_CONCURRENCY_MODE_USER)
```
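The parentheses matter once the macro is used inside a larger expression; for instance (illustrative, not from the PR):

```c
// Without parentheses, negation binds to `mode` rather than to the comparison:
// !CONN_MODE_IS_CONCURRENT(mode) expands to (!mode) != JUICE_CONCURRENCY_MODE_USER
// With parentheses it expands to the intended !(mode != JUICE_CONCURRENCY_MODE_USER)
if (!CONN_MODE_IS_CONCURRENT(mode)) {
	// single-threaded (user-driven) path
}
```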
```c
if (milliseconds >= 1000)
	sleep(milliseconds / 1000);
usleep((milliseconds % 1000) * 1000);
```
Suggested change:

```diff
-if (milliseconds >= 1000)
-	sleep(milliseconds / 1000);
-usleep((milliseconds % 1000) * 1000);
+usleep(milliseconds * 1000);
```
```c
juice_state_t state = juice_get_state(agent);
if (state == JUICE_STATE_CONNECTED) {
	// Send three messages
	while (agent_data[i].sent < 3) {
		char message[50];
		snprintf(message, sizeof(message), "Message %d from Agent %d", agent_data[i].sent + 1, agent_data[i].id);
		juice_send(agent, message, strlen(message));
		agent_data[i].sent++;
	}
}
```
You should send messages in the state callback instead, so you wouldn't have to keep track of a sent count.
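A sketch of what that could look like, assuming the standard `juice_cb_state_changed_t` callback signature and reusing the test's per-agent data via `user_ptr` (the `agent_data_t` type is inferred from context):

```c
static void on_state_changed(juice_agent_t *agent, juice_state_t state, void *user_ptr) {
	agent_data_t *data = (agent_data_t *)user_ptr;
	if (state == JUICE_STATE_CONNECTED) {
		// The callback fires once on the transition to connected,
		// so no sent counter is needed to avoid re-sending
		for (int n = 1; n <= 3; ++n) {
			char message[50];
			snprintf(message, sizeof(message), "Message %d from Agent %d", n, data->id);
			juice_send(agent, message, strlen(message));
		}
	}
}
```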
```c
char buffer[BUFFER_SIZE];
if (size > BUFFER_SIZE - 1)
	size = BUFFER_SIZE - 1;
memcpy(buffer, data, size);
```
I understand that you keep a history of packets as an example of how to achieve zero-copy, but it seems to me that copying here defeats the purpose. I think in practice keeping a packet history is not that useful, as there is typically a depacketization step that involves parsing and copying the data one way or another.
```c
if (ret > 0 // We just received a datagram
    || conn_impl->next_timestamp <= current_timestamp()
    || agent->state != JUICE_STATE_COMPLETED) {
	if (agent_conn_update(agent, &conn_impl->next_timestamp) != 0) {
```
Moving the loop out of `juice_user_poll()` is actually problematic performance-wise: you have to run the `agent_conn_update()` process after each received packet, which comes at a performance cost when receiving a lot of application traffic. You should instead receive all available datagrams, then update the agent once.
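A sketch of that restructuring inside `juice_user_poll()`, where `buffer` and `size` are its parameters; `recv_pending()` and `handle_datagram()` are hypothetical stand-ins for the socket read and per-packet dispatch, while the other names follow the diff above:

```c
int received = 0;
int len;
while ((len = recv_pending(agent, buffer, size)) > 0) {
	handle_datagram(agent, buffer, (size_t)len); // feed each packet to the agent
	received++;
}
// One update after draining the socket, instead of one per packet
if (received > 0 || conn_impl->next_timestamp <= current_timestamp()) {
	if (agent_conn_update(agent, &conn_impl->next_timestamp) != 0)
		return -1;
}
```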
The last condition in this expression seems like it shouldn't be needed, but I needed it to make the connection establish in a reasonable amount of time. I'm thinking there is something going wrong with how the timer is set, but it's a bit beyond my understanding to fix this.
I think the issue is that an agent calling `conn_*_interrupt()` expects you to schedule a call to `agent_conn_update()`, but it's not done by `conn_user_interrupt()`.
```c
}

int conn_user_interrupt(juice_agent_t *agent) {
	// juice_user_poll does not block when polling, so there's nothing to interrupt
```
You still need to schedule a call to `agent_conn_update()` on the next poll; it could be achieved by resetting `next_timestamp` to 0. This is why you currently need the `state != JUICE_STATE_COMPLETED` check in `juice_user_poll()`.
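A minimal sketch of that fix; the `conn_user_impl_t` type name is an assumption, and a real implementation would need the agent's lock:

```c
int conn_user_interrupt(juice_agent_t *agent) {
	conn_user_impl_t *conn_impl = agent->conn_impl;
	// Resetting the deadline forces agent_conn_update() on the next poll,
	// making the extra state check in juice_user_poll() unnecessary
	conn_impl->next_timestamp = 0;
	return 0;
}
```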
Thanks for helping me figure this out @paullouisageneau. I think I've addressed most of the things discussed in the original issue, and I added a test. There might be some things you want to change, though, so this PR is open to maintainer edits. Some guesswork and copying-and-pasting was done, so there might be some dead code or things that don't make sense. And I think I have my tabs and spaces straight... although my sanity has been permanently lost.
Some Things You Might Want to Review
- I just made DNS resolution's defined behavior blocking in `JUICE_CONCURRENCY_MODE_USER`. There were some things that confused me about asynchronous DNS handling, namely I think `conn_impl` can be `NULL` until DNS resolution is finished? I also just like the fact that it means there are no threads required when using this concurrency mode. I think it's reasonable to make DNS resolution the user's problem, but feel free to rewrite this part if you want non-blocking DNS resolution.
- Generally the smallest way to use `juice_user_poll` is a little verbose. It is basically like `recv(3)`, even to the point where `juice_user_poll` returns for all packets, including those only used for ICE. This may have some practical use, but it's more of a side effect of the implementation. It's also a bit disjoint since everything should still be handled in `on_recv`. This is at least the minimum (see the loop quoted in the review above; a fuller end-to-end sketch follows after this list). The benefit of pushing the loop out of `juice_user_poll` here, rather than calling `on_recv` in the function over and over, is to facilitate zero-copy, which I give an example of in `user.c`.
- Each agent has its own lock like we discussed in Add single-threaded mode #214, so agent use across threads should be thread-safe for everything except `juice_destroy`, if I did everything correctly.
- The last condition in this expression seems like it shouldn't be needed, but I needed it to make the connection establish in a reasonable amount of time. I'm thinking there is something going wrong with how the timer is set, but it's a bit beyond my understanding to fix this.
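Putting the pieces from this thread together, a hedged end-to-end sketch of driving an agent in this mode from the application's own loop; the `concurrency_mode` field and callback names follow the existing modes, and the details are illustrative rather than the final API:

```c
juice_config_t config = {0};
config.concurrency_mode = JUICE_CONCURRENCY_MODE_USER; // this PR's new mode
config.cb_state_changed = on_state_changed;
config.cb_recv = on_recv;
juice_agent_t *agent = juice_create(&config);

// ... exchange descriptions and candidates with the remote peer ...
juice_gather_candidates(agent); // DNS resolution blocks here in this mode

char buffer[1500];
for (;;) {
	int ret;
	do {
		ret = juice_user_poll(agent, buffer, sizeof(buffer)); // never blocks
	} while (ret > 0);
	if (ret < 0)
		break; // handle error
	// ... do other application work between polls ...
}
juice_destroy(agent);
```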