
[mono][interp] Fix concurrency issues with publishing transformed imethod #87555

Merged 3 commits on Jun 16, 2023

Conversation

BrzVlad
Member

@BrzVlad BrzVlad commented Jun 14, 2023

When publishing a transformed `InterpMethod*`, we first set all relevant fields (such as `code` and `alloca_size`), then execute a write barrier, and finally set the `transformed` flag. On weakly ordered architectures we also need a read barrier on the consumer side, since there is no data dependency between `transformed` and the other fields of `InterpMethod`.

On arm this change emits a full barrier (a load-acquire would suffice, but we haven't yet added support for emitting one in the runtime). Still, this doesn't seem to introduce a heavy perf penalty (at least on my arm64 M1), and we can revisit if necessary. On x86/amd64 this is just a compiler barrier, so it should have no impact. WASM is single-threaded for now.
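The publish/consume pattern described above can be sketched with C11 atomics. This is an illustrative model, not the Mono code itself: `InterpMethodSketch` and the function names are hypothetical, with `atomic_thread_fence` standing in for Mono's write/read barriers.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for the fields of InterpMethod that matter here. */
typedef struct {
    void *code;          /* transformed method body */
    size_t alloca_size;  /* stack size computed during transform */
    atomic_bool transformed;
} InterpMethodSketch;

/* Producer: fill in the fields, then run a write barrier (release fence)
 * before setting the transformed flag. */
static void publish (InterpMethodSketch *im, void *code, size_t alloca_size)
{
    im->code = code;
    im->alloca_size = alloca_size;
    atomic_thread_fence (memory_order_release);   /* the write barrier */
    atomic_store_explicit (&im->transformed, true, memory_order_relaxed);
}

/* Consumer: check the flag, then run a read barrier (acquire fence)
 * before touching the other fields — this is the barrier the PR adds. */
static bool try_consume (InterpMethodSketch *im, void **code_out)
{
    if (!atomic_load_explicit (&im->transformed, memory_order_relaxed))
        return false;
    atomic_thread_fence (memory_order_acquire);   /* the read barrier */
    *code_out = im->code;
    return true;
}
```

Without the acquire fence, a weakly ordered CPU could observe `transformed == true` yet still read a stale `code`, since nothing ties the two loads together.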

Fixes #87271

@BrzVlad
Member Author

BrzVlad commented Jun 14, 2023

This change additionally turns `mono_memory_write_barrier` and `mono_memory_read_barrier` into compiler barriers on x86/amd64, instead of a full `mfence`. This is because on these architectures stores are not reordered with other stores, and likewise for loads.

Before this change they performed a full memory barrier regardless of architecture. We already had the beginnings of a more precise implementation via the `*_FENCE` defines; `mono_memory_write_barrier` and `mono_memory_read_barrier` are now implemented in terms of these defines instead.

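The per-architecture split can be sketched as follows. This is a hypothetical illustration of the idea, not Mono's actual headers: the macro names are invented, and the real `*_FENCE` defines live in the runtime's memory-model headers.

```c
/* Sketch of arch-specific barrier selection (names are illustrative).
 * x86/amd64 already orders store-store and load-load in hardware, so the
 * corresponding fences only need to stop the COMPILER from reordering. */
#if defined(__x86_64__) || defined(__i386__)
  /* compiler barrier only: no instruction emitted */
  #define STORE_STORE_FENCE_SKETCH __asm__ __volatile__ ("" : : : "memory")
  #define LOAD_LOAD_FENCE_SKETCH   __asm__ __volatile__ ("" : : : "memory")
#elif defined(__aarch64__)
  /* arm is weakly ordered: a hardware fence is required */
  #define STORE_STORE_FENCE_SKETCH __asm__ __volatile__ ("dmb ishst" : : : "memory")
  #define LOAD_LOAD_FENCE_SKETCH   __asm__ __volatile__ ("dmb ishld" : : : "memory")
#else
  /* conservative fallback: full barrier */
  #define STORE_STORE_FENCE_SKETCH __sync_synchronize ()
  #define LOAD_LOAD_FENCE_SKETCH   __sync_synchronize ()
#endif

#define mono_memory_write_barrier_sketch() STORE_STORE_FENCE_SKETCH
#define mono_memory_read_barrier_sketch()  LOAD_LOAD_FENCE_SKETCH
```

On x86/amd64 the barriers compile down to nothing at all; the `"memory"` clobber merely prevents the compiler from caching or reordering memory accesses across the point.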
@lateralusX
Member

lateralusX commented Jun 14, 2023

> This change additionally transforms mono_memory_write_barrier and mono_memory_read_barrier to compiler barriers on x86/amd64, instead of a full mfence. This is because on these arches, stores are not reordered with other stores, and same for loads.

One thing to consider is the store buffer on x86/amd64, which delays stores from becoming visible to other cores (they remain visible to the current core while in the store buffer). I'm not sure what semantics we want for `mono_memory_write_barrier`, but if the store buffer should be drained as part of it, then we would still need a hardware barrier on x86/amd64. If we only care about the ordering of stores (with no relation to loads), a compiler barrier is enough.
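The distinction being raised is the classic store-load case. The following sketch (not code from this PR) shows the one pattern where x86's store buffer bites: a thread stores a flag and then loads another thread's flag. A compiler barrier between them is not enough, because the store can still be sitting in the store buffer when the load executes; only a full fence (`mfence` / seq_cst) drains it.

```c
#include <stdatomic.h>

/* Dekker-style mutual exclusion flags. Each thread sets its own flag,
 * then checks the other's. Without a store-load fence, BOTH threads can
 * read 0 on x86, because each store may still be in its core's store
 * buffer when the other core's load runs. */
atomic_int flag0, flag1;

int thread0_can_enter (void)
{
    atomic_store_explicit (&flag0, 1, memory_order_relaxed);
    /* Full fence: drains the store buffer; a compiler barrier or a
     * store-store fence would NOT be sufficient here. */
    atomic_thread_fence (memory_order_seq_cst);
    return atomic_load_explicit (&flag1, memory_order_relaxed) == 0;
}
```

This is exactly the case `mono_memory_write_barrier` does not cover after the change; per the reply below, the runtime's uses only depend on store ordering, not store-load ordering.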

@BrzVlad
Member Author

BrzVlad commented Jun 16, 2023

@lateralusX In all the cases I've seen where we use `mono_memory_write_barrier`, the main concern is the order in which the stores become visible to other threads. I would expect the slight delay in publication of data not to cause any problems, but we can revisit later if any weird behavior shows up.

Successfully merging this pull request may close these issues.

Random crashes in System.Net.Mail test suites with mono interpreter