Automatic Event Tracking #17
Conversation
I'm unsure if this is the right place to discuss... But I'm currently working on an implementation of a semi-lockless event queue (fast lock-free read; write under lock). https://github.com/tower120/lockless_event_queue/blob/master/src/event.rs It is in a somewhat draft state now. It is a linked list of fixed-size chunks. Each chunk has … Each EventReader is not thread-safe (they're cheap to get, and you should just have one for each thread/job). Event is thread-safe. I didn't really dig into the bevy guts yet, but I think that it is possible to integrate … The caveats that I have encountered so far with that design:
P.S. It is also possible to speed up writes with …
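The chunked design described above can be sketched roughly as follows. This is an illustrative simplification, not the crate's actual code: to stay in safe Rust it takes the lock on reads as well (the real design keeps reads lock-free), and the names `EventQueue`, `EventReader`, and `CHUNK_CAP` are assumptions for the sketch.

```rust
use std::sync::Mutex;

const CHUNK_CAP: usize = 4; // fixed chunk size; illustrative value

// Sketch of the queue: a list of fixed-size chunks. Writers append
// under the lock; in the real design readers avoid the lock entirely,
// but here they take it too for simplicity.
struct EventQueue<T> {
    chunks: Mutex<Vec<Vec<T>>>,
}

// Per-thread cursor into the chunk list. Cheap to create; one per
// thread/job, as described above -- it is not itself thread-safe.
struct EventReader {
    chunk: usize,
    index: usize,
}

impl<T: Clone> EventQueue<T> {
    fn new() -> Self {
        EventQueue {
            chunks: Mutex::new(vec![Vec::with_capacity(CHUNK_CAP)]),
        }
    }

    // Append an event, starting a new chunk when the last one is full.
    fn push(&self, value: T) {
        let mut chunks = self.chunks.lock().unwrap();
        if chunks.last().unwrap().len() == CHUNK_CAP {
            chunks.push(Vec::with_capacity(CHUNK_CAP));
        }
        chunks.last_mut().unwrap().push(value);
    }

    fn reader(&self) -> EventReader {
        EventReader { chunk: 0, index: 0 }
    }

    // Return every event this reader has not yet seen, advancing its cursor.
    fn read(&self, reader: &mut EventReader) -> Vec<T> {
        let chunks = self.chunks.lock().unwrap();
        let mut out = Vec::new();
        while reader.chunk < chunks.len() {
            let chunk = &chunks[reader.chunk];
            while reader.index < chunk.len() {
                out.push(chunk[reader.index].clone());
                reader.index += 1;
            }
            if reader.index == CHUNK_CAP {
                // Finished a full chunk; move to the next one.
                reader.chunk += 1;
                reader.index = 0;
            } else {
                break; // caught up inside a partially filled chunk
            }
        }
        out
    }
}

fn main() {
    let queue = EventQueue::new();
    let mut reader = queue.reader();
    for i in 0..10 {
        queue.push(i);
    }
    assert_eq!(queue.read(&mut reader), (0..10).collect::<Vec<i32>>());
    queue.push(10);
    assert_eq!(queue.read(&mut reader), vec![10]); // only the unseen event
    println!("ok");
}
```

Because each reader only tracks a (chunk, index) cursor, adding more readers costs nothing on the write path; the write lock is the single point of contention.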
I made some progress with my EventQueue. According to benchmarks, read performance is stunning https://github.com/tower120/lockless_event_queue/blob/91383baff8aeda56bdd80ab60984daae6f3fd992/benches/read_bench.rs#L8 :
The benchmark is single-threaded, to show overhead. Since there are no locks on read operations, in the MT case it is as fast as a vec/deque read in MT (namely, memory-bandwidth bound). I think I found how to deal with the clean operation without additional locks. Thus, the reader (within read-session quanta) always works with the actual queue. So, I see it as follows:
So.... What do you say? For me, it looks like it deals with both the performance and memory issues, while being able to pass messages reliably between frames.
@tower120 Are you sure your units are correct? Numbers in the tens of milliseconds seem outrageously high for the time per operation. Unless this is total time taken? If so, I'm sorry, I'm not sure I understand the benchmark. Can you please explain it further?
@colepoirier That's the total time to read 100,000 items. Look at Vec as the baseline; nothing can be faster than that.
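To make the units concrete: the quoted number is the total time to read 100,000 items once, not the time per item. A minimal timing sketch of the Vec baseline, using `std::time::Instant` rather than the criterion harness in the linked benchmark (illustrative only):

```rust
use std::time::Instant;

fn main() {
    const N: u64 = 100_000;
    let data: Vec<u64> = (0..N).collect();

    // Measure the total time to read all N items once -- the unit the
    // benchmark reports -- not the time per single item.
    let start = Instant::now();
    let sum: u64 = data.iter().sum();
    let elapsed = start.elapsed();

    // Sanity check that the read actually happened and was not optimized away.
    assert_eq!(sum, N * (N - 1) / 2);
    println!("read {} items in {:?} (total, not per item)", N, elapsed);
}
```

A plain `Vec` traversal like this is memory-bandwidth bound, which is why it serves as the lower bound the queue's read path is compared against.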
@tower120 Ah, thanks, that makes more sense. I would suggest you add information like this and some further context, because as it stands right now your explanation is lacking in what makes this a good solution, and relies on the reader to properly understand your benchmark and implementation.
@tower120 Impressive! I'd be interested in seeing a dedicated RFC (or PR with nice clear explanatory text) for this. This type of tool may be very useful to have in place as we tackle UI next, so I'd love a proper write-up.
I would like to make this …
Awesome, sounds like a great choice. It's always nice to help grow the whole Rust ecosystem.
Rendered
Implementation PR