added support for blocking channel->wait #414

Open
wants to merge 3 commits into master

Conversation

@arep commented Mar 12, 2021

Using a blocking call can significantly reduce queue-processing latency.
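
For context, a minimal sketch of the difference using php-amqplib (which this driver builds on) directly; the queue name, callback body, and `$blocking` flag are illustrative, not the library's actual consumer code:

```php
<?php

use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $connection->channel();

$channel->basic_consume('jobs', '', false, false, false, false, function (AMQPMessage $message) {
    // ... process the job ...
    $message->ack();
});

$blocking = true; // the behaviour this PR makes possible

while ($channel->is_consuming()) {
    if ($blocking) {
        // Parks in a socket read until a frame arrives; the callback
        // fires the moment a message is delivered.
        $channel->wait();
    } else {
        // Returns immediately when nothing is pending, so the loop
        // needs a sleep to avoid pinning a CPU core, and that sleep
        // is added latency for every message that arrives during it.
        $channel->wait(null, true);
        usleep(10000); // up to 10ms extra latency per message
    }
}
```

The non-blocking variant trades CPU for latency via the sleep interval; the blocking variant dispatches each message as soon as it hits the socket.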

@vyuldashev (Owner)

@arep During development of the Consumer I tried setting non-blocking to false, and not all of Laravel's features worked.

@arep (Author) commented Mar 13, 2021

That might be, but it would be nice to have the option to use it anyway. For me at least it is important that messages are processed as quickly as possible, so I have to use a blocking call.
Maybe mention in the docs that a blocking call might not work with all features.
Do you remember which features did not work?

@Xfaider48

+1. It's a very important option if you want high MPS (messages per second).

@M-Porter self-assigned this Jan 26, 2023
@M-Porter (Collaborator)

Hi @arep, is this still something you would like added to the library?

@khepin (Collaborator) commented Jan 26, 2023

Might be better in the connection config.
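
For illustration only, a connection-level option could look something like this in config/queue.php; the shape loosely follows the package's existing config, but the `blocking_consumer` key is hypothetical, not an option the library currently reads:

```php
// config/queue.php (hypothetical 'blocking_consumer' key)
'rabbitmq' => [
    'driver' => 'rabbitmq',
    'hosts' => [
        [
            'host' => env('RABBITMQ_HOST', '127.0.0.1'),
            'port' => env('RABBITMQ_PORT', 5672),
            'user' => env('RABBITMQ_USER', 'guest'),
            'password' => env('RABBITMQ_PASSWORD', 'guest'),
            'vhost' => env('RABBITMQ_VHOST', '/'),
        ],
    ],
    'options' => [
        'queue' => [
            // Hypothetical flag: let the consumer block in
            // channel->wait() instead of polling with a sleep.
            'blocking_consumer' => true,
        ],
    ],
],
```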

@arep (Author) commented Jan 26, 2023

> Hi @arep, is this still something you would like added to the library?

Yes. We are using this feature in production and it has worked great.

@MorrisonHotel (Contributor)

Well, I tried to push the same solution (#296) back in late 2019 :)

@khepin (Collaborator) commented Apr 10, 2023

@arep do you have some links / docs explaining the perf value of doing this?
Can you detail in which cases this brings such value?
Are you using prefetch-count? Without knowing much more about your case, it seems like losing a loop or two to non-blocking calls, but then getting 300 messages to work on, would not end up being much of an issue.
I'm all for improving performance; before jumping in, I'd like a clear case for when this option makes things better.
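
For reference, prefetch-count is the second argument to php-amqplib's per-channel QoS call; the 300 here just echoes the figure above:

```php
// Let the broker push up to 300 unacknowledged messages to this consumer.
$channel->basic_qos(0, 300, false); // (prefetch_size, prefetch_count, global)
```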

@arep (Author) commented Apr 15, 2023

> @arep do you have some links / docs explaining the perf value of doing this? Can you detail in which cases this brings such value? Are you using prefetch-count? Without knowing much more about your case, it seems like losing a loop or two to non-blocking calls, but then getting 300 messages to work on, would not end up being much of an issue. I'm all for improving performance; before jumping in, I'd like a clear case for when this option makes things better.

We are using RabbitMQ to do RPC calls from the webserver to a service that runs database queries. The reason is that some queries can take a long time, 5-60 seconds, while most complete within milliseconds. On one of our setups there are around 150 queries per second.
The query service instances wait for messages on the queue with a blocking call so they process them as fast as possible. If we did not use a blocking call there would be a wait loop, and any query that completes in sub-second time would be slowed down significantly; a non-blocking loop has to sleep for some amount of time, or the server's CPU would sit at 100% all the time.
Only one query at a time can be processed by a query service instance, so prefetch cannot be higher than 1, since we can't know in advance whether a query will be slow or fast.

Hope that clarifies the use case.
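
For concreteness, a rough php-amqplib sketch of that worker pattern; the queue name, reply-handling details, and `runQuery()` helper are placeholders, not our actual code:

```php
<?php

use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $connection->channel();

// prefetch_count = 1: the broker holds back further messages until the
// current one is acked, because we can't predict which queries are slow.
$channel->basic_qos(0, 1, false);

$channel->basic_consume('rpc_queries', '', false, false, false, false, function (AMQPMessage $msg) {
    $result = runQuery($msg->getBody()); // placeholder for the real query runner

    // RPC reply: publish the result to the queue named in reply_to,
    // tagged with the caller's correlation_id (assumes the caller set both).
    $msg->getChannel()->basic_publish(
        new AMQPMessage($result, ['correlation_id' => $msg->get('correlation_id')]),
        '',
        $msg->get('reply_to')
    );
    $msg->ack();
});

while ($channel->is_consuming()) {
    $channel->wait(); // blocking: no poll-loop sleep between messages
}
```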

@khepin (Collaborator) commented Apr 17, 2023

Understood, so it's a queue with low per-worker traffic but where the latency of processing each job matters.

I'll look into this a bit more.
