Better stack limits #554
Changes from all commits:

- 654dd11
- 3bbe29c
- d65b809
- a48a034
- 1ad79bf
```diff
@@ -124,8 +124,8 @@ impl NetworkConfig {
     pub fn new(network_version: NetworkVersion) -> Self {
         NetworkConfig {
             network_version,
-            max_call_depth: 4096,
-            max_wasm_stack: 64 * 1024,
+            max_call_depth: 1024,
+            max_wasm_stack: 2048,
             actor_debugging: false,
             builtin_actors_override: None,
             price_list: price_list_by_network_version(network_version),
```

Review comments on this hunk:

> To be technically correct, we should set 1024 on nv16 and above, only.

> So, @magik6k pointed out that we'll hit the Rust stack limit at 4096. The only real solution would be to increase the thread stack size. But, in practice, it's unclear whether one can actually make a message that gets past 1024 calls on mainnet. By "unclear", I mean "not remotely possible": @magik6k only got to 1025 by making a custom actor that did nothing but recursively send. On mainnet, one would have to make the recursive call through a multisig actor, where each recursive call would be significantly more expensive (running out of block gas before anything else).

> Tradeoffs:
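The nv16 gating suggested in review could look roughly like this. This is a sketch, not the PR's code: the `NetworkVersion` and `NetworkConfig` types here are simplified stand-ins for the real `fvm_shared` types, and only the two limits under discussion are modeled.

```rust
// Sketch: apply the tighter limits only from network version 16 onward,
// keeping the old limits for earlier versions, as suggested in review.
// Types are simplified stand-ins, not the real fvm_shared definitions.
#[derive(Clone, Copy, PartialEq, PartialOrd, Debug)]
struct NetworkVersion(u32);

struct NetworkConfig {
    network_version: NetworkVersion,
    max_call_depth: u32,
    max_wasm_stack: u32,
}

impl NetworkConfig {
    fn new(network_version: NetworkVersion) -> Self {
        // Old limits below nv16; the PR's new limits from nv16 on.
        let (max_call_depth, max_wasm_stack) = if network_version >= NetworkVersion(16) {
            (1024, 2048)
        } else {
            (4096, 64 * 1024)
        };
        NetworkConfig {
            network_version,
            max_call_depth,
            max_wasm_stack,
        }
    }
}

fn main() {
    let old = NetworkConfig::new(NetworkVersion(15));
    let new = NetworkConfig::new(NetworkVersion(16));
    println!("{} {}", old.max_call_depth, new.max_call_depth); // 4096 1024
}
```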
New file (the test actor's Cargo manifest, `@@ -0,0 +1,13 @@`):

```toml
[package]
name = "fil_stack_overflow_actor"
version = "0.1.0"
edition = "2021"

[dependencies]
fvm_sdk = { version = "0.6.1", path = "../../../../sdk" }
fvm_shared = { version = "0.6.1", path = "../../../../shared" }

[build-dependencies]
wasm-builder = "3.0.1"
wasmtime = "0.33.0"
```
New file (the actor's build script, `@@ -0,0 +1,12 @@`):

```rust
fn main() {
    use wasm_builder::WasmBuilder;
    WasmBuilder::new()
        .with_current_project()
        .import_memory()
        .append_to_rust_flags("-Ctarget-feature=+crt-static")
        .append_to_rust_flags("-Cpanic=abort")
        .append_to_rust_flags("-Coverflow-checks=true")
        .append_to_rust_flags("-Clto=true")
        .append_to_rust_flags("-Copt-level=z")
        .build()
}
```
New file (toolchain pin, `@@ -0,0 +1 @@`):

```
nightly
```
New file (the test actor's source, `@@ -0,0 +1,64 @@`):

```rust
use fvm_sdk as sdk;
use fvm_shared::address::Address;
use fvm_shared::error::ExitCode;

#[no_mangle]
pub fn invoke(_: u32) -> u32 {
    let m = sdk::message::method_number();
    // If we start with method 1, we'll be over the recursive send limit;
    // starting with method 2 should be fine.
    if m > 1026 {
        sdk::vm::abort(0x42, None);
    }

    if m == 1 {
        // with method 1, we want to run out of stack
        recurse(m, 1000)
    } else {
        // 5 stack elems per level (wasm-instrument charges for the highest use
        // in the function) + some overhead mean that with the 2048-element wasm
        // limit we can do 396 recursive calls while still being able to do a
        // send at that depth
        recurse(m, 396)
    }
}

// we need two recurse functions; just one gets optimized into a wasm loop

#[inline(never)]
pub fn recurse(m: u64, n: u64) -> u32 {
    if n > 0 {
        call_extern();
        return recurse2(m, n - 1);
    }
    do_send(m)
}

#[inline(never)]
pub fn recurse2(m: u64, n: u64) -> u32 {
    if n > 0 {
        call_extern();
        return recurse(m, n - 1);
    }
    do_send(m)
}

// external call to prevent the compiler from doing smart things
#[inline(never)]
pub fn call_extern() {
    let _ = sdk::message::method_number();
}

#[inline(never)]
pub fn do_send(m: u64) -> u32 {
    let r = sdk::send::send(&Address::new_id(10000), m + 1, Vec::new().into(), 0.into());
    match r {
        Ok(rec) => match rec.exit_code {
            ExitCode::OK => 0,
            e => sdk::vm::abort(e.value() | 0x80000000, None),
        },
        Err(e) => sdk::vm::abort((e as u32) | 0xc0000000, None),
    }
}
```
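The "396 recursive calls" figure in the comment above comes from simple stack budgeting: with a 2048-element wasm value-stack limit and roughly 5 elements charged per recursion frame, some headroom must remain so a send still fits at maximum depth. A minimal sketch of that arithmetic, where the 68-element headroom is back-solved from the PR's numbers rather than taken from the code:

```rust
// Stack budget behind the actor's recursion depth: reserve headroom for
// entry + the final send, then divide the rest by the per-frame cost.
// per_frame = 5 and headroom = 68 are assumptions derived from the PR's
// stated numbers (2048 limit, 396 calls), not measured values.
fn max_recursion_depth(stack_limit: u64, per_frame: u64, headroom: u64) -> u64 {
    (stack_limit - headroom) / per_frame
}

fn main() {
    println!("{}", max_recursion_depth(2048, 5, 68)); // 396
}
```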
Review discussion on the stack-pool sizing:

> This is an arbitrary number; just for chain-sync we probably don't need more than 1/2, but some users may want this to be higher, e.g. to get internal sends in a bunch of historic tipsets.

> How about using NUM CPUs?

> Some people have a lot of CPUs. We're already reserving half a gig for stacks here. We'll actually likely want to shrink this.

> I.e., 128 cores = 8 GiB of memory just sitting there.
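The "128 cores = 8 GiB" objection above is linear scaling of a fixed per-thread stack reservation. The 64 MiB per-thread figure below is inferred from the comment's numbers (half a GiB for a small pool, 8 GiB at 128 threads), not taken from the PR's code:

```rust
// If each executor thread reserves a fixed stack, total memory set aside
// grows linearly with pool size. 64 MiB/thread is an assumption inferred
// from the review comments, not a value from the code.
const STACK_PER_THREAD_MIB: u64 = 64;

fn total_stack_mib(threads: u64) -> u64 {
    threads * STACK_PER_THREAD_MIB
}

fn main() {
    println!("{}", total_stack_mib(8)); // 512 (half a GiB)
    println!("{}", total_stack_mib(128)); // 8192 (8 GiB)
}
```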