WebSocket should be pickable #216
It’s not clear what you’re trying to do exactly, but either way I’ve no interest in supporting pickle. Language-specific serialisation formats aren’t a good idea for anything much.
A common pattern, when accepting a WS client, is to maintain a list of all "WS clients" on the server side. Think of a "chat service" over WebSocket; the server side could be:

```python
await ws.accept()
clients.append(ws)
while True:
    txt = await ws.receive_text()   # receive something
    for client in clients:          # and broadcast it to everybody
        await client.send_text(txt)
```

It works well with one worker (process). But if you spawn more than one worker with gunicorn, this clients list is not shared with the other processes, so each process has its own list of clients. If WebSocket were picklable, it would be possible for each process to pickle/unpickle this list before use (and make it shareable between processes). (AFAIK Redis/memcache use pickle to save the state of objects, no?)
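For reference, a minimal self-contained sketch of that single-worker pattern, assuming Starlette (whose WebSocket API matches the calls above); the route path and the module-level `clients` list are illustrative:

```python
from starlette.applications import Starlette
from starlette.routing import WebSocketRoute
from starlette.websockets import WebSocket, WebSocketDisconnect

clients = []  # every connected WebSocket in this (single) process

async def chat(ws: WebSocket):
    await ws.accept()
    clients.append(ws)
    try:
        while True:
            txt = await ws.receive_text()   # receive something
            for client in clients:          # and broadcast it to everybody
                await client.send_text(txt)
    except WebSocketDisconnect:
        clients.remove(ws)

app = Starlette(routes=[WebSocketRoute("/ws", chat)])
```

This only works because all connections live in one process; a second gunicorn worker gets its own empty `clients` list.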
For multi-worker we’ll want to use “broadcast” channels, e.g. Redis PUB/SUB or Postgres LISTEN/NOTIFY. Each worker will track its own connections, and listen for broadcast messages to send out. Have a look at Django Channels to see how this sort of setup will work. That way it’s not just multi-worker, but multi-host.
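A rough sketch of that layout with Redis pub/sub, assuming Starlette plus the asyncio client in redis-py (>= 4.2); the channel name, route, and task wiring here are illustrative, not part of any existing API:

```python
import asyncio
import redis.asyncio as redis
from starlette.applications import Starlette
from starlette.routing import WebSocketRoute
from starlette.websockets import WebSocket, WebSocketDisconnect

CHANNEL = "chat"            # illustrative broadcast channel
r = redis.Redis()           # one client per worker process
local_clients = set()       # connections handled by THIS worker only

async def relay():
    # Each worker runs one relay task: it listens on the broadcast channel
    # and fans incoming messages out to its own local connections.
    pubsub = r.pubsub()
    await pubsub.subscribe(CHANNEL)
    async for message in pubsub.listen():
        if message["type"] == "message":
            text = message["data"].decode()
            for ws in list(local_clients):
                await ws.send_text(text)

async def chat(ws: WebSocket):
    await ws.accept()
    local_clients.add(ws)
    try:
        while True:
            text = await ws.receive_text()
            await r.publish(CHANNEL, text)   # reaches every worker, every host
    except WebSocketDisconnect:
        local_clients.discard(ws)

async def start_relay():
    asyncio.create_task(relay())

app = Starlette(
    routes=[WebSocketRoute("/ws", chat)],
    on_startup=[start_relay],
)
```

Because workers only exchange plain text over Redis, no WebSocket object ever needs to be pickled, and the same setup extends across hosts.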
Looks great! Thanks for this advice! (But it could be overkill for simple needs, no?)
For single-host deployments we could provide a shared-memory broadcast backend.
You still need to deal with restarting processes, and you also want to be able to expand out if needed, so it’s still the right approach to take.
Do you know of a simple example of a "shared-memory broadcast backend"? (A URL?)
I’d suggest starting with redis pub/sub |
That’ll likely be the easiest thing to integrate. |
Thanks a lot, Tom!
Thanks Tom, I ended up with:

```python
async def loopPubSub():
    # Poll the Redis pub/sub connection and forward any message to the WebSocket.
    while ws.client_state == WebSocketState.CONNECTED:
        message = events.get_message()
        if message and message["type"] == "message":
            await ws.send_text(message["data"].decode())
        await asyncio.sleep(0.001)

async def loopWS():
    # Forward everything received on the WebSocket to the Redis channel.
    while ws.client_state == WebSocketState.CONNECTED:
        o = await ws.receive_text()
        r.publish('chan:recept', o)

t1 = asyncio.ensure_future(loopPubSub())
t2 = asyncio.ensure_future(loopWS())
await asyncio.wait([t1, t2])
```

It works like a charm!
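One possible refinement: `get_message()` plus a 1 ms sleep is a polling loop. With the asyncio Redis client (redis-py's `redis.asyncio`), the receive side could block on `listen()` instead; `apubsub` below is assumed to be a PubSub object already subscribed to `chan:recept`:

```python
async def loopPubSub():
    async for message in apubsub.listen():   # blocks until a message arrives
        if ws.client_state != WebSocketState.CONNECTED:
            break
        if message["type"] == "message":
            await ws.send_text(message["data"].decode())
```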
Wonderful, thanks for the update. :)
BTW, I really think that uvicorn should provide a Redis-like (an in-memory DB) ...
Noted. Will consider all this when I get onto #133. |
Python comes with the multiprocessing.connection module.
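For the single-host case, here is a rough sketch of what a broadcast hub built only on `multiprocessing.connection` could look like; the address, auth key, and process layout are made up for illustration. Note that `Connection.send()` pickles its payload, so only picklable messages (text, not the WebSocket objects themselves) go over the wire:

```python
import threading
from multiprocessing.connection import Listener, Client

ADDRESS = ("localhost", 6000)   # illustrative
AUTHKEY = b"change-me"          # illustrative

def hub():
    # Runs in one dedicated process: accepts a connection from every worker
    # and re-broadcasts each received message to all connected workers.
    conns, lock = [], threading.Lock()

    def serve(conn):
        while True:
            try:
                msg = conn.recv()
            except EOFError:              # worker went away
                with lock:
                    conns.remove(conn)
                return
            with lock:
                for c in conns:
                    c.send(msg)

    with Listener(ADDRESS, authkey=AUTHKEY) as listener:
        while True:
            conn = listener.accept()
            with lock:
                conns.append(conn)
            threading.Thread(target=serve, args=(conn,), daemon=True).start()

# In each worker process:
#   conn = Client(ADDRESS, authkey=AUTHKEY)
#   conn.send("hello")    # publish to every worker
#   msg = conn.recv()     # blocking receive of broadcasts
```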
BTW, I have created a POC ... my vision of a simple Redis-like: redys (asyncio compliant).
Is a pub/sub system still the recommended way of dealing with the "list of all WS clients" problem in a multi-process application? As long as WebSocket isn't picklable, there would be no way to create a shared cache, right?
I am still researching ways to create a global cache. I haven't found one yet.
Currently, the WebSocket is not picklable, so it's impossible to share WebSocket clients across workers in a multi-worker environment.