What should happen on memory.unmap? Should a subsequent memory access trap? #11
Comments
One use case for unmapping is to catch bugs in programs (e.g. null-pointer dereferences). For that use case, zero-filling isn't useful. Trapping might be OK, since it's fine for the access to be fatal and not catchable inside wasm (modulo implementing a debugger or virtualization inside of wasm). Another use case is implementing null-pointer exceptions in Java and .NET: rather than instrumenting memory references, a common implementation strategy is to unmap the zero page, catch the OS signal raised by the faulting memory reference, and resume from it. Just having memory.unmap might not be enough to implement this today; we could imagine allowing memory references to throw, but resumption would still be a problem. Maybe once we get stack switching we could combine unmapping with some sort of resumable continuation or signal to be handled on a separate stack. I wouldn't want to preclude that here, but having a trap now could probably be generalized later.
But to attempt to actually shed some light on the question: don't we already trap in the middle of memory? We already have the combination of memory references that can cross page boundaries, bulk memory instructions, and atomics, and those can all race with memory growth, trap, and leave various kinds of observable effects behind.
I'm not personally clear on how much trapping we allow during races on shared memories, but it would certainly help to clarify a lot of design issues. As for the original question, I am strongly in favor of trapping in mappable memories. I think trapping is generally more desirable for applications (crashing on a bad access instead of continuing), and certainly far easier to implement. (Generally, I imagine that requiring zeroes would require us to fill pages with zeroes on demand in a signal handler, since we cannot just commit an entire large memory without hitting commit limits on Windows. Not only would this be a pain, I imagine it would be rather slow.) I also like the possibilities @dschuff mentioned as far as null pointer exceptions and such. I think that's a good aspirational goal for memory control features in general, and I don't see any way to make them happen if we go with zeroes.
As for I would generally suggest excluding
Phrasing the question slightly differently: should inaccessible memory be treated differently? For bulk memory instructions and for atomics, the traps are on a size mismatch, i.e. the memory operation runs past the end of the target memory, or, in the case of atomic operations, on accesses that violate natural alignment. But in both cases, all linear memory is still contiguously accessible with a valid memory access. In the case of a data race with memory.unmap, we render parts of the memory inaccessible. So it violates a little bit of what the memory model in the threads proposal has as the text: with trapping on previously accessible memory we are in the fully undefined bucket, not the defined-but-nondeterministic bucket, or at least that's how I'm parsing it.
We've previously considered zero filling on discard and unmap, but there are several reasons this doesn't seem like the right option.
We've previously decided against this to avoid trapping in the middle of memory; what are the implications of doing so? Opening this issue as a catch-all for discussing trapping vs. zero filling.