
Schedule v2 #1021

Merged
merged 3 commits into from
Dec 13, 2020

Conversation

cart
Member

@cart cart commented Dec 7, 2020

Draft Note

This is a draft because I'm looking for feedback on this api. Please let me know if you can think of improvements or gaps.

Bevy Schedule V2

Bevy's old Schedule was simple, easy to read, and easy to use. But it also had significant limitations:

  • Only one Schedule allowed
  • Very static: you are limited to using the tools we gave you (stages are lists of systems, you can add stages to schedules)
  • Couldn't switch between schedules at runtime
  • Couldn't easily support "fixed timestep" scenarios

V2 of Bevy Schedule aims to solve these problems while still maintaining the ergonomics we all love:

Stage Trait

Stage is now a trait. You can implement your own stage types!

struct MyStage;

impl Stage for MyStage {
    fn run(&mut self, world: &mut World, resources: &mut Resources) {
        // custom stage logic goes here
    }
}

There are now multiple built-in Stage types:

SystemStage

// runs systems in parallel
let parallel_stage =
    SystemStage::parallel()
        .with_system(a)
        .with_system(b);

// runs systems serially (in registration order)
let serial_stage =
    SystemStage::serial()
        .with_system(a)
        .with_system(b);

// you can also write your own custom SystemStageExecutor
let custom_executor_stage =
    SystemStage::new(MyCustomExecutor::new())
        .with_system(a)
        .with_system(b);

StateStage<T>

Bevy now supports states. More on this below!

Schedule

You read that right! Schedules are also stages, which means you can nest Schedules

let schedule = Schedule::default()
    .with_stage("update", SystemStage::parallel()
        .with_system(a)
        .with_system(b)
    )
    .with_stage("nested",
        Schedule::default()
            .with_stage("nested_stage", SystemStage::serial()
                .with_system(b)
            )
    );

// schedule stages can be downcast
let mut update_stage = schedule.get_stage_mut::<SystemStage>("update").unwrap();
update_stage.add_system(something_new);

States

By popular demand, we now support States!

  • Each state value has its own "enter", "update", and "exit" schedule
  • You can queue up state changes from any system
  • When a StateStage runs, it will dequeue all state changes and run through each state's lifecycle
  • If at the end of a StateStage, new states have been queued, they will immediately be applied. This means moving between states will not be delayed across frames.

The new state.rs example is the best illustration of this feature. It shows how to transition between a Menu state and an InGame state. The texture_atlas.rs example has been adapted to use states to transition between a Loading state and a Finished state.

This enables much more elaborate app models:

#[derive(Clone, PartialEq, Eq, Hash)]
enum AppState {
    Loading,
    InGame,
} 

App::build()
    // This initializes the state (adds the State<AppState> resource and adds StateStage<T> to the schedule)
    // State stages are added right after UPDATE by default, but you can also manually add StateStage<T> anywhere
    .add_state(AppState::Loading)
    // A state's "enter" schedule is run once when the state is entered
    .state_enter(AppState::Loading, SystemStage::parallel()
        .with_system(setup)     
        .with_system(load_textures)     
    )
    // A state's "update" schedule is run once on every app update
    // Note: Systems implement IntoStage, so you can do this:
    .state_update(AppState::Loading, check_asset_loads)
    // A state's "exit" schedule is run once when the state is exited 
    .state_exit(AppState::Loading, setup_world)
    .state_update(AppState::InGame, SystemStage::parallel()
        .with_system(movement)
        .with_system(collision)
    )
    // You can of course still compose your schedule "normally"
    .add_system(do_thing)
    // add_system_to_stage assumes that the stage is a SystemStage
    .add_system_to_stage(stage::POST_UPDATE, do_another_thing)

// this system checks to see if assets are loaded and transitions to the InGame state when they are finished 
fn check_asset_loads(mut state: ResMut<State<AppState>>, asset_server: Res<AssetServer>) {
    if assets_finished_loading(&asset_server) {
        // state changes are put into a queue, which the StateStage consumes during execution
        state.queue(AppState::InGame)
    }
}

fn setup_world(commands: &mut Commands, state: Res<State<AppState>>, textures: Res<Assets<Textures>>) {
    // This system only runs after check_asset_loads has checked that all assets have loaded
    // This means we can now freely access asset data
    let texture = textures.get(SOME_HANDLE).unwrap();

    commands
        .spawn(Camera2dBundle::default())
        // spawn more things here
        .spawn(SpriteBundle::default());
}

Run Criteria

Criteria driven stages (and schedules): only run stages or schedules when a certain criteria is met.

app
    .add_stage_after(stage::UPDATE, "only_on_10_stage", SystemStage::parallel()
        .with_run_criteria(|value: Res<usize>| if *value == 10 { ShouldRun::Yes } else { ShouldRun::No } )
        .with_system(a)
    )
    .add_stage_after(stage::RUN_ONCE, "one_and_done", Schedule::default()
        .with_run_criteria(RunOnce::default())
        .with_system(a)
    )

Fixed Timestep:

app.add_stage_after(stage::UPDATE, "fixed_update", SystemStage::parallel()
    .with_run_criteria(FixedTimestep::steps_per_second(40.0))
    .with_system(a)
)

Schedule Building

Adding stages now takes a Stage value:

App::build()
    .add_stage_after(stage::UPDATE, "my_stage", SystemStage::parallel())

Typed stage building with nesting:

app
    .stage("my_stage", |my_stage: &mut Schedule|
        my_stage
            .add_stage_after("substage1", "substage2", SystemStage::parallel()
                .with_system(some_system)
            )
            .add_system_to_stage("substage2", some_other_system)
            .stage("a_2", |a_2: &mut Schedule|
                a_2.add_stage("a_2_1", StateStage::<MyState>::default())
            )
    )
    .add_stage("b", SystemStage::serial())

Unified Schedule

No more separate "startup" schedule! It has been moved into the main schedule with a RunOnce run criteria.

startup_stage::STARTUP (and variants) have been removed in favor of this:

app
    // this:
    .add_startup_system(setup)
    // is equivalent to this:
    .stage(stage::STARTUP, |startup: &mut Schedule| {
        startup.add_system_to_stage(startup_stage::STARTUP, setup)
    }) 
    // choose whichever one works for you!

This is a non-breaking change: you can continue using the AppBuilder .add_startup_system() shorthand.

Discussion Topics

  • General API thoughts: What do you like? What do you dislike?
  • Do States behave as expected? Are they missing anything?
  • Does FixedTimestep behave as expected?
  • I added support for "owned builders" and "borrowed builders" for most schedule/stage building:
    // borrowed (add_x and in some cases set_x)
    app
        .add_stage("my_stage", Schedule::default())
        .stage("my_stage", |my_stage: &mut Schedule|
            my_stage
                .add_stage("a", SystemStage::parallel())
                .add_system_to_stage("a", some_system)
        )
    
    // owned (with_x)
    app
        .add_stage("my_stage", Schedule::default()
            .with_stage("a", SystemStage::parallel())
            .with_system_in_stage("a", some_system)
        )
    • Does this make sense? We could remove with_x in favor of borrowed add_x in most cases. This would reduce the api surface, but it would mean slightly more cumbersome initialization. We also definitely want with_x in some cases (such as stage.with_run_criteria())

Next steps:

  • (Maybe) Support queuing up App changes (which will be applied right before the next update):
    commands.app_change(|app: &mut App| {app.schedule = Schedule::default();})
  • (Maybe) Event driven stages
    app.add_stage_after(stage::UPDATE, EventStage::<SomeEvent>::default().with_system(a))
    • These could easily build on top of the existing schedule features. It might be worth letting people experiment with their own implementations for a bit.
    • We could also somehow try to work in "system inputs" to this. Aka when an event comes in, pass it in to each system in the schedule as input.

@cart cart added A-ECS Entities, components, systems, and events C-Feature A new feature, making something new possible labels Dec 7, 2020
@cart cart marked this pull request as draft December 7, 2020 06:16
@smokku
Member

smokku commented Dec 7, 2020

  • Does FixedTimestep behave as expected?

Behaves - yes. :-) Great work.

It lacks a crucial feature, though.
In the context of a fixed timestep, it is much more interesting to know how much of the frame time the scheduler slipped (as a percent of the frame) than how long it took to reach the current frame. See https://github.com/smokku/bevy_contrib_schedules/blob/master/src/lib.rs#L75 for an example.
This allows "overshooting" the current simulated frame without knowing how often the system is being run. Without this information, the system needs some out-of-band knowledge of how often it is being run, kept in sync with FixedTimestep.step, and has to compute it from last_time: Local<f64>.

@smokku
Member

smokku commented Dec 7, 2020

Is there a way of accessing the current Scheduler from a running system to obtain its state?

fn main() {
    let fixed_timestep_scheduler = SystemStage::parallel()
        .with_run_criteria(FixedTimestep::step(2.0))
        .with_system(fixed_update);

    App::build()
        .add_plugins(DefaultPlugins)
        // this system will run once every update (it should match your screen's refresh rate)
        .add_system(update)
        // add a new stage that runs every two seconds
        .add_stage("fixed_update", fixed_timestep_scheduler)
        .add_resource(fixed_timestep_scheduler)
        .run();
}

but that does not build (as expected)

error[E0382]: use of moved value: `fixed_timestep_scheduler`
  --> examples/ecs/fixed_timestep.rs:14:23
   |
4  |     let fixed_timestep_scheduler = SystemStage::parallel()
   |         ------------------------ move occurs because `fixed_timestep_scheduler` has type `bevy::prelude::SystemStage`, which does not implement the `Copy` trait
...
13 |         .add_stage("fixed_update", fixed_timestep_scheduler)
   |                                    ------------------------ value moved here
14 |         .add_resource(fixed_timestep_scheduler)
   |                       ^^^^^^^^^^^^^^^^^^^^^^^^ value used here after move

@cart
Member Author

cart commented Dec 7, 2020

In the context of a fixed timestep, it is much more interesting to know how much of the frame time the scheduler slipped (as a percent of the frame) than how long it took to reach the current frame. See https://github.com/smokku/bevy_contrib_schedules/blob/master/src/lib.rs#L75 for an example.

Hmm yeah passing both FixedTimestep::step and "slippage percent" to systems sounds useful. I can think of a number of options:

  1. For a static value like step it could just be inserted into a resource during schedule creation. It doesn't "need" to be in sync. That doesn't solve the "slippage percent" problem
  2. There could be shared state: Arc<RwLock<FixedTimestepState>>, which could be cloned and inserted into a resource. This is a bit cumbersome and notably doesn't handle multiple FixedTimestep stages nicely (they would need different resources)
  3. Create a FixedTimestepState resource, which each FixedTimestep criteria modifies on each run. Because only one stage runs at a time, there would be no contention here. So it would work equally well across multiple fixed timestep stages.

(3) feels the nicest to me
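A rough sketch of what option (3) could look like, in plain Rust with assumed names (FixedTimestepState, tick, and overstep_percentage are illustrative, not the PR's actual API):

```rust
// Hypothetical sketch of option (3): a resource that each FixedTimestep run
// criteria updates just before its stage runs. Names are assumptions, not the
// PR's actual API.
pub struct FixedTimestepState {
    pub step: f64,        // seconds per fixed update
    pub accumulator: f64, // unspent time carried between runs
}

impl FixedTimestepState {
    /// Accumulate `delta` seconds and return how many fixed steps should run.
    pub fn tick(&mut self, delta: f64) -> u32 {
        self.accumulator += delta;
        let mut steps = 0;
        while self.accumulator >= self.step {
            self.accumulator -= self.step;
            steps += 1;
        }
        steps
    }

    /// How far we have "overshot" into the next fixed frame, as a fraction of a step.
    pub fn overstep_percentage(&self) -> f64 {
        self.accumulator / self.step
    }
}
```

Because only one stage runs at a time, each FixedTimestep criteria could write its own step and accumulator into this single resource before its stage runs, so systems inside the stage always observe the matching values, even with multiple fixed timestep stages.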

@smokku
Member

smokku commented Dec 7, 2020

I like (3) too.

@BorisBoutillier
Contributor

To give some feedback on the state part of this new scheduling system, I have updated my Kataster mini-game, replacing its custom state management system with the new bevy state of this PR.

First I'd like to say that I have been able to get rid of my home-made state system and replace it with bevy state without any hacks. This saved ~5% of the lines of code and is globally clearer/cleaner.

Now for the feedback, I see 4 points of discussion:

  1. state.queue
    Multiple queuing support seems a bit weird to me. If two systems queue state changes from the same current state, they are not aware that the other system has queued one, so they are probably making an incorrect decision. And if this happens while these systems are parallelized, the final state is non-deterministic.
    I feel like when the change queue is longer than 1, this is a game design bug, but I may have missed use cases here.

  2. sub-state.
    In the case of Kataster, I have a main AppState with StartMenu/Game and a sub-state for Game which has Running/Paused/GameOver.
    Some entities/systems have a lifespan associated with the full Game state, while others are restricted to only the Game/Paused state, for example.
    I have solved this by having two States in my App, and this mostly works.
    I don't have a real proposal, and I am not sure we need to provide one for sub-states.
    Checking the synchronization between the two states can be handled by adding an Invalid entry to the sub-state, as it is now simple to add a debug system checking the coherency (when the main state is not Game, the sub-state must be Invalid).

  3. entity despawning on state change.
    Should we have an 'official' way to associate an Entity with a given state or list of states?
    Meaning that when we enter a state that the entity is not associated with, we despawn it.
    In the state.rs example you have stored the Entities associated with the menu in a global resource.
    In Kataster I have implemented a ForState vec Component that I attach to the created entities, and a despawning system that I add to every state_exit. This system iterates over all entities with a ForState component and despawns those that don't have the current state (which is already the next one in state_exit calls) in the vec of their ForState component.
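The ForState pattern described above could be sketched like this, using plain-Rust stand-ins for the ECS pieces (all names and the entity-id placeholder are illustrative, not Kataster's or Bevy's actual code):

```rust
// Illustrative sketch of the ForState despawning pattern.
// Plain-Rust stand-ins; not Bevy's actual ECS types.
#[derive(Clone, Copy, PartialEq, Eq)]
enum AppState {
    StartMenu,
    Game,
}

/// Attached to entities whose lifespan is tied to a set of states.
struct ForState {
    states: Vec<AppState>,
}

/// Run on every state_exit: return the ids of entities that do not belong to
/// the state being entered (at exit time, the current state is already the
/// next one), so the caller can despawn them.
fn entities_to_despawn(tagged: &[(u64, ForState)], next: AppState) -> Vec<u64> {
    tagged
        .iter()
        .filter(|(_, for_state)| !for_state.states.contains(&next))
        .map(|(id, _)| *id)
        .collect()
}
```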

  4. state value during exit/enter
    As far as I have seen, if a system runs during the exit phase, the Res value will already be the next_state.
    I wonder if we should have state API with:
    previous()
    get() or current()
    next()
    Where previous()/next() return None if called outside of the enter/exit phase respectively (or when first entering, for previous)

Again, as is, I have found the state implementation to be fully usable.
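The previous()/current()/next() API from point 4 could be shaped roughly like this (a hypothetical sketch, not the PR's actual State<T> implementation):

```rust
// Hypothetical shape for point 4 above; not the PR's State<T> implementation.
struct State<T> {
    previous: Option<T>,
    current: T,
    next: Option<T>,
}

impl<T> State<T> {
    fn current(&self) -> &T {
        &self.current
    }

    /// Some(..) only once a transition has happened (e.g. during "enter").
    fn previous(&self) -> Option<&T> {
        self.previous.as_ref()
    }

    /// Some(..) only while a transition is pending (e.g. during "exit").
    fn next(&self) -> Option<&T> {
        self.next.as_ref()
    }

    fn queue(&mut self, next: T) {
        self.next = Some(next);
    }

    /// Would be called by the StateStage after the exit schedule finishes.
    fn apply_next(&mut self) {
        if let Some(next) = self.next.take() {
            self.previous = Some(std::mem::replace(&mut self.current, next));
        }
    }
}
```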

@ndarilek
Contributor

ndarilek commented Dec 7, 2020 via email

@BorisBoutillier
Contributor

In fact, my ForState takes a vec!, as some entities' lifespans can extend across multiple states.

It was named ForStates in my custom version, but as I migrated gradually and needed the two versions at the same time, I named the one for bevy state ForState :). It should become ForStates when I turn it into a plugin.

@cart
Member Author

cart commented Dec 7, 2020

@Bobox214

  1. state.queue

Hmm yeah that's a good point. I do think we generally want state change order to be well defined. We could:

  1. Leave it as is (resolve states in the order they come in, which assumes the order either doesn't matter or was controlled by the user)
  2. Replace the Vec<T> with Option<T>. Multiple queued items overwrite each other
  3. Replace the Vec<T> with Option<T>. Return an error when multiple items are queued.
  2. sub-state

Yeah I think that sort of "cross-state coherency" is probably best left as an exercise for the user. If common patterns emerge we could consider adding built in features for them, but it seems too early to consider that.

  3. entity despawning on state change.

I also think this falls into the "too early to commit to something" category. There's a ton of different ways to track entities (especially across states), and it would be hard to build something that meets everyone's constraints. I'd rather just wait and see if common patterns emerge.

I also hope that scenes (or just entities spawned under a root entity) will occupy this niche: ex: spawn in a complex scene for a given state, then despawn it when you're done.

  4. state value during exit/enter

That's a really good idea. I'm sure there will be cases where knowing what state is being transitioned from/to would be very useful.

@BorisBoutillier
Contributor

For 1), I personally would prefer 3. And between 2 and 1 I don't know; 2 avoids the 'glitch' of entering/exiting a state instantaneously, without an update, but this can make it difficult to debug if the overwrite was not intended.

For 2) and 3), fully agree, these features can be proposed as plugins.

4) The fact that get() returns the next state when calling the exit system is a bit strange to me, so a more complete/clearer API would be better and useful.

@alice-i-cecile
Member

I would really like to see the stages used in e.g. add_system_to_stage move away from stringly typed stages, and into a proper enum of some sort.

Reasons to make this change:

  1. Better discoverability of available stages through exploring the enums and IDE autocomplete.
  2. Adding systems to a stage that doesn't exist fails at compile time, rather than runtime.
  3. Better clarity on what type of argument is expected, rather than relying on magic constants.
  4. No collisions between user-defined stages and those in other libraries that they're using. This is particularly bad: if GraphicsCrateA defines a 'postprocessing' stage and GraphicsCrateB defines a different 'postprocessing' stage, trying to use them both at once will fail in a catastrophic way that's very hard to debug, and basically impossible for the end user to fix.

Possible approaches for implementation:

  1. Change the internal representation to a pub enum, with a UserDefined variant that stores a string, and then handle those separately.
  2. Require users to composite all of the Stages in the crates they're using, then supply that back to bevy. Combine that with a derive macro to impl e.g. From<Lib1::Stage> into user_crate::Stage, then call `.into()` within bevy itself. Leading to an interface like so:
#[derive(BevyStages)]
enum MyStages {
  Lib1Stage(lib1::Stage),
  Lib2Stage(lib2::Stage),
  /* ... */
}

Internally, make your enum of stages something like

enum StageKey<T> {
  Foo,
  Bar,
  Baz,
  State(TypeId),
  User(T),
}

allowing you to use type signatures like: add_stage<S: Copy + Eq + Hash, T: Into<S>>(key: T).

  3. Allow users to construct their own enums of stages, then consume them using trait bounds wherever we're currently using strings for trait names.

1 is a partial solution, but basically all upside. 2 is apparently the most idiomatic but I worry about the end user burden. 3 would be ideal and gets all of the benefits without any extra boilerplate, but I'm not experienced enough to be sure that this is feasible throughout the code base.

@alice-i-cecile
Member

alice-i-cecile commented Dec 7, 2020

As an extension to approach 1 above, the Stage enum could also namespace third-party library stages by their crate, which would help avoid conflicts tremendously. If we nest the built-in stages as their own enums, this gives us:

enum Stage {
    Startup(StartupStage),
    Core(CoreStage),
    User(String),
    Lib(String, String),
}

This would be much simpler to implement than either 2 or 3, and should capture most of their benefits. The syntax for user defined stages stays nice and simple, and the third-party stages are disambiguated nicely without too much boilerplate :)

Of the benefits, we get 3 and 4, and half of 1 and 2. My largest complaint is that you might still get runtime failures, but that'll occur if you attempt to add things to a valid stage that hasn't been added no matter what we do. I think that this is a pretty clear win, and should make the internal code easier to follow and refactor too.

@danieljpetersen
Copy link

This looks awesome, I was hoping you'd do something this expressive.

Are we able to change the run criteria for stages during runtime? I'd like to be able to do things like change the fixed timestep for a given state on the fly. Think of something like Sim City or The Sims where the user can slow down or speed up the simulation speed.

It's not entirely clear to me what nested stages do. If I have two stages, A and B, and B is nested inside of A, does this mean that B runs during stage A? If so I imagine this is really only useful when combined with run criteria to optionally run some logic? How does parallel vs sequential execution work in regard to nesting?

On a final note, this is likely approaching the realm of unreasonable but I'd also like to be able to compose states from other states. For a game I'm currently working on I have an API which is somewhat similar to this, and I'm able to call .withProgramState(ProgramStateId); This is nice because it allows me to do things like have the core simulation (minus player input) in one state, and then use this state in various other contexts. So I have another state which is the core_simulation + input + gui (for the actual game), and then another state which is the main_menu + core_simulation + input_playback where with input_playback I'm feeding in pre-recorded input into the simulation. In other words, behind the main menu a pre-recorded gameplay demo takes place. The concept is similar to how mario 3 on the original NES played out a pre-recorded segment on the start screen.

It looks like with the current setup you can already accomplish this sort of composability by segregating the state build calls into separate functions, and then just reusing those functions, perhaps passing in a different state name for use in the build call?

Thanks for the work you're doing, bevy is shaping up to be amazing.

@cart
Member Author

cart commented Dec 8, 2020

@alice-i-cecile: Thanks for exploring the "stage label" design space a bit. Enough people are interested in non-string stage labels that I do think it's worth discussing (although unless we reach an immediate consensus, I won't block this PR on this topic, as it is largely orthogonal to most of the changes in this PR).

My criteria

First lets enumerate my personal requirements for stage labels:

  1. Low boilerplate
    • labels are not complicated enough to justify significant abstraction or typing. new users shouldn't need to think too much about the "right way" to define stage labels.
  2. Easy to consume and extend
    • stage symbols should be discoverable / autocomplete-friendly
    • available stages should be easily extensible by downstream plugins (at least when this is desired by the stage author)
  3. Convertible to human-friendly strings for things like logging, profiling, and schedule visualization
  4. Easy to read and understand (I consider "consistency" across stage declarations to inform this)
    • an example of a design that violates this rule: some plugins using integers for stage labels, some using strings, and others using enums. The additional "choice" this design gives consumers doesn't accomplish much and seriously complicates the "definition" of labels.
  5. Resilient to simple human errors

The current state of things

I consider the pattern we use today to reasonably cover 1-5:

// bevy internally sets up some stages
mod bevy {
    mod stage {
        const UPDATE: &str = "update";
        const RENDER: &str = "render";
    }

    app
        .add_stage(stage::UPDATE, SystemStage::parallel())
        .add_stage(stage::RENDER, SystemStage::parallel())
}

// user then builds on top

mod app_stage {
    const CUSTOM: &str = "custom";
}
app
    .add_stage_after(bevy::stage::UPDATE, app_stage::CUSTOM, SystemStage::parallel())
    .add_system_to_stage(bevy::stage::UPDATE, system_a)
    .add_system_to_stage(app_stage::CUSTOM, system_b)
  1. Low boilerplate
    • no derives, just define a constant in your module of choice (by convention in a module made specifically for stages)
  2. Easy to consume and extend
    • IDEs can autocomplete the items in a module. Modules have their own scoped docs generated
    • just insert your newly defined stage anywhere relative to other stages
  3. Convertible to human-friendly strings for things like logging, profiling, and schedule visualization
    • trivially true
  4. Easy to read and understand (I consider "consistency" across stage declarations to inform this)
    • trivially true
  5. Resilient to simple human errors
    • This is true, provided the user follows the conventions above. Inline strings are possible, but they should be considered an anti-pattern.

Your reasons to make the change

You might notice that my criteria lines up pretty similarly to your reasons to change.

  1. Better discoverability of available stages through exploring the enums and IDE autocomplete.

I honestly don't think there is a huge problem here (in the context of following the pattern defined above). IDEs already autocomplete stage names. Stage names are nicely scoped and discoverable in api docs. I think there are almost no relevant functional differences between enums and the "mod/const" pattern. Of course users aren't forced to follow the mod/const pattern.

  2. Adding systems to a stage that doesn't exist fails at compile time, rather than runtime.

This is also not a problem provided you follow the mod/const pattern.

  3. Better clarity on what type of argument is expected, rather than relying on magic constants.

This is a good point, and imo one of the strongest arguments to adopt a strict SystemStage<Label> design (where Label is a single enum).

However I'd like to point out that this strength is also the biggest weakness: each stage enum is (1) completely immutable by default and (2) only extensible manually via author-defined "custom" variants, which requires throwing away the static strictness in that context.

  4. No collisions between user-defined stages and those in other libraries that they're using. This is particularly bad: if GraphicsCrateA defines a 'postprocessing' stage and GraphicsCrateB defines a different 'postprocessing' stage, trying to use them both at once will fail in a catastrophic way that's very hard to debug, and basically impossible for the end user to fix.

This is really the "big one". It's currently easy for plugin developers to step on each other's toes. Bevy is intended to be modular to a high degree, so protecting against cases like this is important.

The designs suggested

First, to satisfy my "criteria 3" (and to perform the baseline functionality), each of these designs would need the following derives:

#[derive(Debug, Eq, PartialEq, Hash, Clone)]
  • Debug (for friendly string conversion)
  • Eq (for comparing stage values)
  • Hash (for storing / looking up stage values)
  • Clone (for ergo)

Notably this boilerplate produces functionality that we get "for free" by using strings.

  1. Change the internal representation to a pub enum, with a UserDefined variant that stores a string, and then handle those separately.

This is a partial solve to the problem, but it comes at the cost of consistency. I personally don't like that "downstream" user-defined stages are defined as Custom(String), whereas "upstream" stages have their own enum variants.

  2. Require users to composite all of the Stages in the crates they're using, then supply that back to bevy. Combine that with a derive macro to impl e.g. From<Lib1::Stage> into user_crate::Stage, then call `.into()` within bevy itself. Leading to an interface like so:

The nesting and proc-macro feel like way too much abstraction for what we get (according to my own taste of course). I don't think people should need to go through that much "pomp and circumstance" just for stage labels. Unless I'm missing something, it also seems like it would force the App developer to fully construct the global schedule instead of incrementally constructing it via plugins, which puts too much burden on the user to understand the mechanics of their upstream dependencies.

  3. Allow users to construct their own enums of stages, then consume them using trait bounds wherever we're currently using strings for trait names.

This feels like the most viable alternative to me. It preserves the overall Bevy plugin structure and allows for consistent declarations of upstream vs downstream sibling stages. This is also the solution @Ratysz suggested (and provided sample code for here).

This approach does notably solve the "plugins stepping on each other" problem (when using user-defined enums) because it includes the TypeId in the hash.
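A std-only sketch of why including the TypeId in the hash prevents collisions (this is illustrative, not the actual code from the linked gist):

```rust
use std::any::TypeId;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Sketch: hashing the label's TypeId alongside its value means two different
// label types can never collide, even if their variants share a name.
fn stage_key<T: Hash + 'static>(label: &T) -> u64 {
    let mut hasher = DefaultHasher::new();
    TypeId::of::<T>().hash(&mut hasher); // namespace the key by concrete type...
    label.hash(&mut hasher);             // ...then by the label's value
    hasher.finish()
}

#[derive(Hash)]
enum CrateAStage {
    PostProcessing,
}

#[derive(Hash)]
enum CrateBStage {
    PostProcessing,
}
```

Two plugins can each define a PostProcessing stage without clashing, because stage_key(&CrateAStage::PostProcessing) and stage_key(&CrateBStage::PostProcessing) differ, whereas two string keys "postprocessing" would collide.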

Unfortunately it has some major downsides:

  1. By using traits instead of concrete types, we must allow any custom type. The bounds in the gist support basically anything: ints, floats, enums, structs, strings, etc.

This by itself is enough to make me hesitant to use it. We lose the "consistency" and "resilient to human error" criteria because it supports literally every possible value. By convention, we could encourage enums, but if we're back to "safety by convention", why not just use the "mod/const" convention?

We could constrain the values by only supporting types that implement a given trait (ex: StageKey), but that comes at the cost of additional abstraction / more derives the user needs to know about.

  2. The "constrained value space" benefit we get from enums goes away

Any value/type can be added anywhere

  3. Complex implementation

We're adding another 160 lines to bevy for what I consider to be marginal gains.

Collecting my thoughts

I have yet to encounter a perfect answer to this problem. Strings are simple to define and consume, but they come at the cost of "safety by convention", collision risk, and inducing a general "ew strings" response from seasoned developers (which is a healthy initial reaction, even if I consider it to be a bit of a false positive in this case). Every enum-based solution either:

  1. achieves "strict static safety" at the cost of simplicity, extensibility, and/or ergonomics
  2. does not achieve "strict static safety", which leaves us in basically the same boat as we were before

In the face of decisions like this, I generally bias toward the simpler solution (which in my mind is strings). If we roll with an alternative, the only one that seems viable to me (so far) is @Ratysz's "any key" implementation (with a StageLabel trait used as a constraint to protect against people using things like ints), which would end up looking like this:

// bevy internally sets up some stages
mod bevy {
    #[derive(Debug, Hash, Eq, PartialEq, Clone, StageLabel)]
    enum Stage {
        Update,
        Render,
    }

    app
        .add_stage(bevy::Stage::Update, SystemStage::parallel())
        .add_stage(bevy::Stage::Render, SystemStage::parallel())
}

// user then builds on top
#[derive(Debug, Hash, Eq, PartialEq, Clone, StageLabel)]
enum AppStage {
    Custom
}

app
    .add_stage_after(bevy::Stage::Update, AppStage::Custom, SystemStage::parallel())
    .add_system_to_stage(bevy::Stage::Update, system_a)
    .add_system_to_stage(AppStage::Custom, system_b)

From a "typo safety" perspective I consider this to be almost identical to strings. There is still nothing stopping users that don't know/care from doing this:

#[derive(Debug, Hash, Eq, PartialEq, Clone, StageLabel)]
struct Label(String);

But in practice that's harder than lazily dropping in a string.

From a "plugin devs stepping on each other" perspective this is strictly better than strings because of TypeId hashing.

From a UX perspective I might actually give it to "any key". It requires more typing (30% more non-whitespace characters), but the stage names themselves are easier to read and the StageLabel derive helps encode intent.

So I guess the biggest cost is internal complexity, which is important, but takes a secondary role to UX.

Dang, I guess "any key + type constraints + derives" is now in first place in my head. That's not where my head was at when I started writing this.

@cart
Member Author

cart commented Dec 8, 2020

@danieljpetersen

Are we able to change the run criteria for stages during runtime?

Not yet, but in the future I'd like to add the ability to queue up app changes somehow (see the "next steps" section in the description)

It's not entirely clear to me what nested stages do

To be clear, stages don't nest by default. You can build a stage that supports nesting and define the execution behavior arbitrarily. Currently the only Stage types that support nesting are:

  • Schedule: contains a list of named stages that run in order
  • StateStage<T>: contains a mapping from state to stage. Runs the current stage that matches the current state value.
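To make the nesting behavior concrete, here's a minimal self-contained sketch (stub `World`/`Resources` types stand in for bevy's, and the `Schedule` shown is a simplification of the real one):

```rust
use std::collections::HashMap;

// Stubs standing in for bevy's World and Resources.
pub struct World;
pub struct Resources;

pub trait Stage {
    fn run(&mut self, world: &mut World, resources: &mut Resources);
}

// A Schedule is itself a Stage, so schedules nest to arbitrary depth.
#[derive(Default)]
pub struct Schedule {
    order: Vec<String>,
    stages: HashMap<String, Box<dyn Stage>>,
}

impl Schedule {
    pub fn add_stage(&mut self, name: &str, stage: Box<dyn Stage>) {
        self.order.push(name.to_string());
        self.stages.insert(name.to_string(), stage);
    }
}

impl Stage for Schedule {
    fn run(&mut self, world: &mut World, resources: &mut Resources) {
        // Run named child stages in registration order; a child that is
        // itself a Schedule recurses into its own children here.
        for name in &self.order {
            self.stages.get_mut(name).unwrap().run(world, resources);
        }
    }
}
```

Adding a nested `Schedule` with `add_stage` then behaves exactly like adding any other stage: it runs at its slot in the parent's order.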

It looks like with the current setup you can already accomplish this sort of composability by segregating the state build calls into separate functions, and then just reusing those functions, maybe passing in a different state name for use in the build call?

Yup that should work!

Thanks for the work you're doing, bevy is shaping up to be amazing.

❤️

@alice-i-cecile
Member

@cart I agree with your analysis: Option 1 is verbose and doesn't offer enough benefits, while option 2 is a huge amount of overhead. I'm thrilled to see convergent thinking on @Ratysz's "any key + type constraints + derives" implementation, especially with some work fleshing out how it can work under the hood.

I agree that this work is largely orthogonal to this PR: I think it would be best to spin up a new issue and then PR to implement those changes and focus on the other elements of this PR here.

@Ratysz
Contributor

Ratysz commented Dec 8, 2020

... I may have forgotten why exactly I wanted to push for this when we talked on Discord earlier. Downstream string collisions were the chief reason; I'm glad this was brought up!

I have to clarify: while I did have the idea to try and use "any key" thing here, I didn't come up with the original implementation - I found it in this reddit comment.

I've tinkered with it some more, pulling in more snippets from other half-baked prototypes, and landed on an implementation that looks like this in use. Any + Eq + Hash + Clone + Debug is required and is sufficient for StageLabel to be implementable.

Here is the trait itself; it should be trivial to add a derive macro for it, and the whole CloneStageLabel bit can be cleanly removed if we don't need to clone the labels on Bevy's side. Side note here: I implemented this trait object cloning before checking if anyone's done it before - apparently, yes, someone did.

The methods of the schedule mockup used in the example have delightfully simple bounds now: just impl StageLabel versus that impl Any + Eq + Hash + Debug mouthful in the gist. A note on the schedule: it's a bit inverted compared to the one in this PR - boxed stages are stored in a vector, so ::run() has less indirection; naturally, it should be benchmarked - I wouldn't put it past the compiler to optimize it down to this anyway.
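Since the gist isn't inline, here's a rough self-contained sketch of the "any key" pattern (my own reconstruction, not the linked code; a blanket impl plays the role the derive macro would):

```rust
use std::any::Any;
use std::fmt::Debug;
use std::hash::{Hash, Hasher};

// Anything that is Any + Eq + Hash + Clone + Debug can be a stage label.
pub trait StageLabel: Any + Debug {
    fn dyn_eq(&self, other: &dyn StageLabel) -> bool;
    fn dyn_hash(&self, state: &mut dyn Hasher);
    fn dyn_clone(&self) -> Box<dyn StageLabel>;
    fn as_any(&self) -> &dyn Any;
}

impl<T: Any + Eq + Hash + Clone + Debug> StageLabel for T {
    fn dyn_eq(&self, other: &dyn StageLabel) -> bool {
        // Labels of different concrete types are never equal.
        other.as_any().downcast_ref::<T>().map_or(false, |o| self == o)
    }
    fn dyn_hash(&self, mut state: &mut dyn Hasher) {
        // Hash the TypeId too, so identical-looking values from
        // different plugins' enums can't collide.
        self.type_id().hash(&mut state);
        self.hash(&mut state);
    }
    fn dyn_clone(&self) -> Box<dyn StageLabel> {
        Box::new(self.clone())
    }
    fn as_any(&self) -> &dyn Any {
        self
    }
}

// Forward the object-safe methods so Box<dyn StageLabel> works as a
// HashMap key.
impl PartialEq for dyn StageLabel {
    fn eq(&self, other: &Self) -> bool {
        self.dyn_eq(other)
    }
}
impl Eq for dyn StageLabel {}
impl Hash for dyn StageLabel {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.dyn_hash(state)
    }
}
```

With this, a schedule can store stages in a `HashMap<Box<dyn StageLabel>, _>` while users pass plain derived enums.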

@inodentry
Contributor

inodentry commented Dec 8, 2020

I am concerned about the performance implications of fragmenting bevy apps into so many stages. This new API encourages users to have lots of stages.

System parallelism / parallel execution can only happen within a schedule. By fragmenting into more stages (smaller schedules with fewer systems in each), bevy apps would become more and more serialized.

This is contrary to one of the main benefits/motivations for using ECS, which is performance and scalability to easily run in parallel, to take advantage of modern CPUs with many cores.

I have noticed that Bevy's parallelism tends to be fairly limited in practice. In my experience, there is usually one thread running at 100% cpu and the others are in the single digits at best. I blame this on bevy's stage-focused design, as stages effectively form "global barriers" for parallel execution. I am worried that this new API could make this even worse.

Each state value has its own "enter", "update", and "exit" schedule

This really does not make me happy. It means that, to make use of states, game logic must become fragmented into many small schedules, which means they will have to run sequentially and only have parallelism within them.

I see a lot of potential for using states (specifically, multiple state types) to control the execution of different systems. It would be a shame if doing this comes with the caveat of big performance implications.

Why can't states just enable/disable systems within one big parallel schedule?

P.S: BTW, this is also why (some time ago) I was proposing adding "system dependencies" as an alternative to stages, to move away from relying on stages as much as possible and keep stages coarse-grained. It needs to be possible and ergonomic to get the right sequential execution where it matters, without ruining parallelism for everything else. In order to have as much parallel execution as possible (with the current paradigms), means having as many systems as possible in the same schedule/stage.

I understand why it's nice to have pre-update/update/post-update stages, to separate, say, rendering from update logic. But i don't like the trend of encouraging users to split up their update logic into finer-grained stages.

@Ratysz
Contributor

Ratysz commented Dec 8, 2020

I am concerned about the performance implications of fragmenting bevy apps into so many stages. This new API encourages users to have lots of stages.

I wouldn't say so. Stages are largely optional, to be utilized by users when they actually need them - when they have a group of systems that has to be executed before or after another group of systems. Fleshing out the API to be more expressive and flexible is not encouragement to use the API to solve every problem. Although, how Bevy's own systems are split between stages is another matter.

By fragmenting into more stages [...]

Again, this is optional. If someone does that that's on them.

This is contrary to one of the main benefits/motivations for using ECS [...]

Not really. ECS itself has nothing to do with schedules or stages or, amusingly, systems (in the way they're implemented in Bevy). It's purely a data storage structure with certain efficient access patterns - Bevy's structuring of logic via systems, stages, and schedules is orthogonal to that. ECS benefits of compositional design, columnar access, and access parallelization are still fully utilized within a stage, where systems actually live.

I have noticed that Bevy's parallelism tends to be fairly limited in practice. [...] I blame this on bevy's stage-focused design [...]

This is not the cause (at least not the primary cause) - it's the overzealous-by-default system dependency inference mechanism of the executor (which I assume migrated to this new API intact?). Implementing more stages, with different scheduling algorithms and ways to specify/infer dependencies, and then hand-tuning the default schedule by shifting systems around said stages, will most definitely result in better utilization. This would've been impractical to do with the old API, and is something that should be done in a follow-up PR rather than here.

[re: states] [...] game logic must become fragmented into many small schedules [...]

That's unavoidable if the user's state transitions require execution of code. It's a concept identical to that of the startup schedule from the old API. Having a neat and centralized way to do that, instead of multitudes of conditional systems, is good.

Why can't states just enable/disable systems within one big parallel schedule?

I would actually prefer some version of that. A state should be able to add or remove systems to and from existing stages, not just specify its own fully-formed update stage. (Maybe there's a way to do that with this API already that I haven't seen yet.)

P.S [...]

You're not the only one. I even wrote an entire article on the options. And, again, this should be a follow-up PR - this one seems focused on getting the schedule's API right.

@inodentry
Contributor

inodentry commented Dec 8, 2020

Yeah I agree with you that stages are optional and it is up to users to decide how much they want to use them. Also, that we should have nice and ergonomic APIs (of course).

The core of my argument really was my reaction to this part of the API proposal:

Each state value has its own "enter", "update", and "exit" schedule

I don't like that the proposed API requires you to fragment your systems into schedules in order to use states. Why should my state-specific systems have to run separately from my state-agnostic systems?

That's what I was talking about when I said that the new API encourages (even requires) unnecessary fragmentation, hurting parallelism.

As you said, fragmenting the app into multiple schedules should be the user's choice. So it is important that the design of engine features (such as the new states) does not require users to do it.

To provide another example: imagine how awful it would have been if you had to separate the sending and receiving systems into separate stages in order to use events.

Why can't states just enable/disable systems within one big parallel schedule?

I would actually prefer some version of that. A state should be able to add or remove systems to and from existing stages, not just specify its own fully-formed update stage. (Maybe there's a way to do that with this API already that I haven't seen yet.)

So we agree that this should be addressed.

Not really. ECS itself has nothing to do [...]

Yes, ECS is, first and foremost, an efficient, elegant, and expressive way to structure data and logic. However, easy parallelism is also one of the major advantages that makes it so attractive.

ECS benefits of compositional design, columnar access, and access parallelization are still fully utilized within a stage, where systems actually live.

Of course, this is exactly why it is important to not fragment more than necessary (and why I was motivated to make my post). Having many small schedules takes away from those benefits.

@alice-i-cecile
Member

Why should my state-specific systems have to run separately from my state-agnostic systems?

I think @jamadazi makes an excellent point. I think we can decouple this behaviour from stages entirely.

Sample use cases for conditional systems

  • pausing game
  • menu interactions
  • optional game rules
  • support for different system settings (i.e. alternate rendering backends)
  • periodic clean-up work

In each of these cases, there's no reason to wait for all of the other systems to catch up, and serious parallelization issues with doing so. This really seems to be a separate feature, and should go into a distinct pull request.

Alternate proposal for conditional systems

The idea of enter, update and exit is a good way to frame the discussion of states: when moving between them, there will often be initialization or cleanup steps that only need to happen once.

But we can make this fully orthogonal to the idea of stages. Instead of a group of update systems, designate a group of recurring systems, that repeat each frame in whichever stage they're added to, like usual. Initialize systems in this way through
add_system_to_state(state, my_system), add_system_to_state_to_stage(state, stage, my_system), add_enter_system_to_state(state, my_system) or add_exit_system_to_state(state, my_system).

At the beginning of each frame, we can check whether the run_criteria of the state is met. If this changed to true, we schedule those enter systems before any of the recurring systems can be run. If it changed to false, then we schedule those exit systems. Then, if it's true, we hand off the recurring systems to be run in their appropriate frames.

Benefits:

  1. Provides a clear conceptual separation between stages (determines what needs to be complete before we can move on) from states (internal game logic flags that change the rules).
  2. Prevents unnecessary blocking of parallelism, and avoids proliferating stages.
  3. Unifies conditional and unconditional systems. "Unconditional systems" can just be constructed as conditional systems with a condition that is always true, rather than having to special-case either. And startup systems become the enter systems of the unconditional group of systems :)
  4. Allows us to directly access resources and components when checking for state, rather than using another unique sort of state, allowing for more straightforward logic flow and feature flags.
  5. Unifies states and .with_run_criteria, allowing users to start off with simple .with_run_criteria systems, then smoothly migrate them to coherent states once they have several systems that share a criteria (or complex enter / exit logic).
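The per-frame dispatch described above boils down to comparing the run criterion's value now against its value last frame. A tiny sketch (names are illustrative, not a proposed API):

```rust
// Which system groups of a state run this frame, given whether its run
// criterion held last frame and whether it holds now.
#[derive(Debug, PartialEq)]
pub enum Group {
    Enter,
    Recurring,
    Exit,
}

pub fn groups_to_run(was_active: bool, active: bool) -> Vec<Group> {
    match (was_active, active) {
        // Criterion just became true: run enter systems, then recurring.
        (false, true) => vec![Group::Enter, Group::Recurring],
        // Still true: only the recurring systems repeat.
        (true, true) => vec![Group::Recurring],
        // Just became false: run exit systems once.
        (true, false) => vec![Group::Exit],
        // Still false: nothing runs.
        (false, false) => vec![],
    }
}
```

An "unconditional" group is then simply `active == true` every frame, and startup systems fall out naturally as its enter group.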

@cart
Member Author

cart commented Dec 9, 2020

It sounds like in general we are all in agreement:

@Ratysz

Ooh nice. The patterns you're suggesting all seem like a good fit. I'm down to use the "vec for schedules" approach. In practice it might not actually matter, but in theory it does (and encodes that intent).

apparently, yes, someone did.

Haha we "kind of" did too in bevy_reflect:

fn clone_value(&self) -> Box<dyn Reflect>;

@jamadazi

I agree with @Ratysz's responses. To quickly parrot: stages are both "optional" and a useful (and often required) part of building a scheduler. They are "hard sync points" inserted to enable users to make logical guarantees (but should generally be kept to a minimum) and for "stage" logic to ensure it has unique access (which enables it to parallelize safely).

I absolutely intend to include manual system dependencies (and some form of @Ratysz's improved parallel scheduling approach) which should largely resolve the issues you are bringing up. However this pr intentionally doesn't solve the "parallel scheduling algorithm" problem. Instead it aims to encapsulate it and enable people to build custom strategies / combine multiple strategies within the same "schedule". It kind of sounds like you're proposing a "global parallel scheduling framework" that allows custom logic to be inserted inside. That sounds desirable, but it still fits into the system in this pr (it just needs to be implemented as a Stage).

Parallel State Execution (and other such things)

@jamadazi brought up (and @alice-i-cecile offered solutions to) the issue that "Schedule V2 + parallel scheduler with manual system dependencies" doesn't inherently solve the "running the systems of multiple States in parallel within a given stage" problem. First: it's a reasonable ask. Unrelated states should be able to run their logic in parallel.

I'm not totally convinced we need to solve that now because:

  1. In general the new apis allow us to build whatever "parallel state execution system" we want on top.
  2. The current "enter" "exit" and "update" state pattern seems relatively uncontroversial from a high level design perspective. At the very least, we could hard code a special case into a new parallel scheduler that uses the same type of pattern, but assigns sets of systems to each lifecycle event instead of stages (because stages can't be run in parallel).

That being said, it's worth trying to plan out some next steps here (which might inform whether or not we drop the current State implementation in this pr before merging).

I think @alice-i-cecile is on the right track here. However ideally states aren't 'special cased' when we solve the problem: the "sometimes a group of systems should run within a given stage and sometimes it shouldn't" problem should be solved generically. I can think of a few ways we could take it (roughly inspired by @alice-i-cecile's thoughts):

  1. A fully dynamic "run this system once in this stage on the next update" api:
// this system would run at some arbitrary point in the schedule. Its purpose is to select if / when a system should be scheduled 
fn my_state_system(schedule_queue: Res<ScheduleQueue>, state: Res<State<MyState>>) {
    let stage = schedule_queue.get_stage::<ParallelStage>("update").unwrap();
    match *state {
        MyState::A => stage.queue_systems(a_update_systems),
        MyState::B => stage.queue_systems(b_update_systems),
    }
    // also do the same sort of thing for "enter" and "exit" systems
}
  • this is "hyper flexible", which is good in some ways and bad in others. It would make it very hard to reason about (or visualize) a schedule.
  • this could be broken up into queue_this_update(system) or queue_next_update(system) to support scenarios like "running multiple state transitions in the same update"
  • we would need to be able to define dependencies here to ensure that "state lifecycle events" are run in the correct order when they are queued.
  2. A "system set" concept, which could be added to SystemStages much like normal systems are added. These could have custom run criteria:
app.add_stage(Stage::Update, SystemStage::parallel()
    // MyState::A update
    .add_system_set(SystemSet::new()
        .with_run_criteria(|state: Res<State<MyState>>| state.should_run_update(MyState::A))
        .add_system(a_update_system_1)
        .add_system(a_update_system_2)
    )
    // MyState::B enter
    .add_system_set(SystemSet::new()
        .with_run_criteria(|state: Res<State<MyState>>| state.should_run_enter(MyState::B))
        .add_system(b_enter_system)
    )
)
  • I like this because it means schedules are still statically defined (with the pros and cons of such an approach)
  • by itself this doesn't really solve the "run multiple state transitions in the same update" problem or the "transition execution order" problem
  3. Special-case parallel State evaluation within SystemStage. This would be much easier to design and implement, but only solves the problem for States (which would still be a decent win).

@inodentry
Contributor

I like the idea of having systems and schedules statically-defined. Adding dynamism here seems like a major potential source of complexity and cognitive overhead, with questionable benefit. One of the big reasons why we have stages, schedules, states, etc. is to help clearly reason about the logic.

Therefore, I strongly prefer proposal (2) over proposal (1).

(3) would also be fine, if you would prefer to keep things easier.

@inodentry
Contributor

Also BTW, I want to bring up another point: it seems like it would be very useful to have multiple state types for controlling different parts of an application. For example, UI state, network state, asset loading state, game state.

It might seem obvious, but I just wanted to point out this use case, just to make sure we are all aware of it.

This is one reason why I was concerned about fragmentation and reduction in parallelism. It would quickly compound in a large application using multiple state types with different sets of systems.

@alice-i-cecile
Member

My preference is definitely towards proposal 2, system sets.

The syntax is clean, and I really like the way that it encourages bundling of systems, rather than the chaos of proposal 1.

You keep the nice "with_run_criteria", and can keep schedules static, like you said. In games that need it, you could instead treat the schedule as dynamic.

it seems like it would be very useful to have multiple state types for controlling different parts of an application. For example, UI state, network state, asset loading state, game state.

Partly for this reason, I don't think that it makes sense to have a special State type (or trait). Storing system state in a resource seems much more natural to me, because it gives us that global behavior without needing to special case things and integrates well with run criteria.

But the need to properly handle exiting and entering different states is important. I don't understand

"run multiple state transitions in the same update" problem or the "transition execution order" problem

well enough to see how the original proposal addressed it smoothly. What do we lose out on if we just add a system set in FIRST to check if we should enter the state, and then a second system set in LAST to check if we should leave the state? Use something like Res<MyState> plus Previous<Res<MyState>> in order to ensure that the information gets passed along.

I can see that the solution in the PR allows you to transition between states more quickly, after each stage. I don't see the practical benefit of doing so; do you expect sub-frame responsiveness to be critical for some use cases?

This also adds more burden to their use as you then need to ensure that each of your state-dependent systems works properly when started at any point in the frame, rather than relying on the fact that we enabled the state at the start of the frame, and that each of the systems in previous stages that depend on the state have already run.

@cart
Member Author

cart commented Dec 10, 2020

I also prefer (2) by a pretty wide margin. The only issue is making it work well for multiple state transitions.

I don't see the practical benefit of doing so, do you expect sub-frame responsiveness to be critical for some use cases?

Yes exactly that. The "frames required to evaluate state changes" shouldn't get bigger as the size of the state transition queue increases. I think many people would consider "deferring updates to the next frame" like this to be hugely wasteful (I certainly do).

Consider the enum State { SetupA, SetupB, SetupC, InGame } case.
If the SetupA, SetupB, and SetupC transitions can all be finished on the first frame, I don't want to waste 3 frames moving through the state graph before I start running my Update stage. That type of latency is tangible. Ideally we don't need to wait any number of frames.

If someone encounters that problem (and cares about that latency), the solution shouldn't be "merge states" or "dont use states".

Partly for this reason, I don't think that it makes sense to have a special State type (or trait). Storing system state in a resource seems much more natural to me, because it gives us that global behavior without needing to special case things and integrates well with run criteria.

State<MyState> is a resource and can already be used in run_criteria. The wrapper type exists to allow for queuing multiple state changes in the same update (see the above rationale).
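To illustrate the rationale, here's a toy sketch of that queueing behavior (my own illustration, not the PR's actual State type):

```rust
use std::collections::VecDeque;

// The wrapper holds the current value plus a queue of pending
// transitions, so several can resolve within a single update instead of
// one per frame.
pub struct State<T> {
    current: T,
    queued: VecDeque<T>,
}

impl<T: Clone + PartialEq> State<T> {
    pub fn new(initial: T) -> Self {
        State { current: initial, queued: VecDeque::new() }
    }

    // Queue a transition; several can be queued in the same update.
    pub fn set_next(&mut self, next: T) {
        self.queued.push_back(next);
    }

    // Resolve every queued transition now, returning (from, to) pairs so
    // exit/enter lifecycle systems can run for each hop without waiting
    // a frame per transition.
    pub fn apply_queued(&mut self) -> Vec<(T, T)> {
        let mut transitions = Vec::new();
        while let Some(next) = self.queued.pop_front() {
            if next != self.current {
                transitions.push((self.current.clone(), next.clone()));
                self.current = next;
            }
        }
        transitions
    }

    pub fn current(&self) -> &T {
        &self.current
    }
}
```

With the SetupA -> SetupB -> SetupC -> InGame example above, all three hops resolve in one update, and each hop still produces an exit/enter pair.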

@alec-mccormick

alec-mccormick commented Dec 10, 2020

I think the move toward states & stages is a great one, and the syntax is very elegant and easy to understand. Proposal 2 certainly makes the most sense to me for expressing a system set, as it is not only easy to wrap your head around, but it also encourages organizing your project into separate system sets.

I took a small crack at implementing EventStage.

How it functions is essentially equivalent to this:

SystemStage::parallel()
  .with_run_criteria(run_criteria::<T>)
  .with_system(next_event.chain(event_handler));

fn run_criteria<T>(mut reader: Local<EventReader<T>>, events: Res<Events<T>>) -> ShouldRun {
    if reader.earliest(&events).is_some() {
        ShouldRun::YesAndLoop
    } else {
        ShouldRun::No
    }
}

fn next_event<T>(mut reader: Local<EventReader<T>>, events: Res<Events<T>>) -> T {
    reader.earliest(&events).unwrap().clone()
}
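The YesAndLoop mechanics this relies on can be sketched as a simple executor loop (illustrative stand-ins, not bevy's actual executor):

```rust
pub enum ShouldRun {
    Yes,        // run the stage once, then move on
    No,         // skip the stage
    YesAndLoop, // run the stage, then re-check the criterion
}

// Re-evaluate the criterion around each pass of the stage; an
// EventStage's criterion keeps answering YesAndLoop until its event
// queue is drained.
pub fn run_stage(
    mut criterion: impl FnMut() -> ShouldRun,
    mut stage: impl FnMut(),
) {
    loop {
        match criterion() {
            ShouldRun::No => break,
            ShouldRun::Yes => {
                stage();
                break;
            }
            ShouldRun::YesAndLoop => stage(),
        }
    }
}
```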

I apologize I am not the most experienced Rust developer, however here are a few of my thoughts on the implementation:

  1. As I understand it, system inputs are required to be 'static, which causes obvious problems with attempting to pass a reference to an event from one system to another. I attempted to solve this by storing the events internally as Arc<T> and forcing systems to take In<Arc<T>>, however this did not mesh well with Events.drain(), so I imagine this is not quite the right solution. I opted to just add a Clone restriction on the event types for now to get the EventStage functional, but I'm sure there's a better solution.

  2. Currently every system added to EventStage is added as next_event_system.chain(event_handler_system), however this creates a new reader for every system added. I imagine it would be slightly more efficient to read the event once from EventStage and pass it into the executor, however SystemStageExecutor does not currently allow any input to be passed to its systems.

  3. I think it would be very useful to be able to build event stages that are composed of multiple events. I included an implementation of a stage named AnyEventStage which runs systems when any of its events has fired. It is written manually to work with 3 event types, however this is certainly something that would be taken care of with a macro. Event stages could be written to compose event streams in different ways like retaining the latest value for each stream and firing them all together.

@cart
Member Author

cart commented Dec 12, 2020

I think I've addressed the relevant feedback so I'm transitioning out of draft mode.

@cart cart marked this pull request as ready for review December 12, 2020 04:20
.with_run_criteria(
    FixedTimestep::step(2.0)
        // labels are optional. they provide a way to access the current FixedTimestep state from within a system
        .with_label(LABEL),
Member

Is there a duplication here? We have to invent a name for the Stage and a label for SystemStage.
Can these be unified?

Member Author

I don't think we should unify them:

  1. A StateStage has different "substages" for each State lifecycle handler. That means there can be a 1->Many relationship between stage name and run criteria.
  2. Not all stages have names (only stages that live inside a Schedule)
  3. The upcoming SystemSets will also have run criteria, and they definitely won't have an inferable name.

@smokku
Member

smokku commented Dec 12, 2020

RunCriteria gets initialized on the first run of a Stage. This means that if I try accessing the FixedTimestepState of a schedule that hasn't run yet, it panics on unwrap().

It is reproducible if you change fixed_timestep example as follows:

fn update(mut last_time: Local<f64>, time: Res<Time>, fixed_timesteps: Res<FixedTimesteps>) {
    let fixed_timestep = fixed_timesteps.get(LABEL).unwrap();

Of course I can add if let Some(... to every system, but this is unnecessary boilerplate after initialization - and easy to miss, since the official example does not need it.

It used to be that initialize was a separate step, completed before run.
We even have a dangling comment now:

// in below app.initialize() have access to them.

which makes me worry whether it reintroduces the #916 bug.

@cart
Member Author

cart commented Dec 12, 2020

This means if I try accessing FixedTimestepState of a schedule that didn't run yet, it panics on unwrap()

We could fix this by adding a Stage::initialize function, then calling Schedule::initialize recursively before calling the root Schedule::run.

The downsides:

  1. slightly more complicated api (not a huge problem imo)
  2. treats "initialization" as an atomic unit. The current approach allows each stage to initialize according to the results from a previous stage (although adding the "atomic" initialization doesn't stop non-atomic initialization in Stage::run)

I guess that's reasonable. I'll start adding it (but feel free to discuss alternatives)

which makes me worry whether it introduces #916 bug again.

Fortunately it's not a problem, because we still handle window creation events before running the startup systems (they are now part of the main schedule). I just tested it on this branch and we can access windows from startup systems. It even works if we remove the line in question from the bevy_winit lib.rs! (which I think we should do)
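The shape of the initialize fix can be sketched as a toy (a `Vec<String>` log stands in here for World/Resources): Schedule forwards initialize to every child recursively, so all run-criteria state exists before the root run begins.

```rust
pub trait Stage {
    fn initialize(&mut self, log: &mut Vec<String>);
    fn run(&mut self, log: &mut Vec<String>);
}

pub struct Schedule(pub Vec<Box<dyn Stage>>);

impl Stage for Schedule {
    // Recurses through nested schedules, so every stage (however deeply
    // nested) is initialized before the root `run` starts.
    fn initialize(&mut self, log: &mut Vec<String>) {
        for stage in &mut self.0 {
            stage.initialize(log);
        }
    }
    fn run(&mut self, log: &mut Vec<String>) {
        for stage in &mut self.0 {
            stage.run(log);
        }
    }
}
```

Calling `initialize` then `run` on the root guarantees that no stage's `run` observes an uninitialized sibling, which is exactly the unwrap() panic scenario above.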

@cart
Member Author

cart commented Dec 12, 2020

@smokku : alrighty I made the changes mentioned above. The changes you made to the "fixed timestep" now work.

@cart
Member Author

cart commented Dec 12, 2020

It increases code complexity more than I'd like, but it does cut down on user-facing surprises, so I guess it's worth it.

@smokku
Member

smokku commented Dec 12, 2020

@smokku : alrighty I made the changes mentioned above. The changes you made to the "fixed timestep" now work.

I tested my bv_soldank branch updated for schedule-v2, and with these changes it no longer crashes either.
Thanks. :)

@cart
Member Author

cart commented Dec 13, 2020

Alrighty I'm going to merge this so we can get some testing before the 0.4 release. Thanks everyone!

@cart cart merged commit 509b138 into bevyengine:master Dec 13, 2020
@smokku
Member

smokku commented Dec 13, 2020

Just realized that there was also #690, which needed a separate app.initialize() call.
I just tested current master with this scenario and it works just fine. 🎉 Great work!

@alec-deason
Member

While testing this I've run across behavior that I find surprising and don't see mentioned in the discussion (though I may have missed it). on_state_update and related functions replace the behavior of the state rather than adding to it. This means that I can't add systems to the update stage of a state from multiple plugins. That's both unexpected and very awkward for my application which uses a lot of plugins.

It seems like we could add a second family of functions like add_system_to_state_* which do add to the state's lifecycle schedules rather than replacing. That keeps everything in the surface API of the AppBuilder which is consistent but maybe getting a bit crowded. Another option would be to allow the developer to retrieve a mutable reference to the state's lifecycle schedules.

@sapir
Contributor

sapir commented Dec 13, 2020

While testing this I've run across behavior that I find surprising and don't see mentioned in the discussion (though I may have missed it). on_state_update and related functions replace the behavior of the state rather than adding to it. This means that I can't add systems to the update stage of a state from multiple plugins. That's both unexpected and very awkward for my application which uses a lot of plugins.

It seems like we could add a second family of functions like add_system_to_state_* which do add to the state's lifecycle schedules rather than replacing. That keeps everything in the surface API of the AppBuilder which is consistent but maybe getting a bit crowded. Another option would be to allow the developer to retrieve a mutable reference to the state's lifecycle schedules.

I did the latter in #1053 and then implemented the add_system_to_state_* functions in my own code.

@cart
Member Author

cart commented Dec 13, 2020

@sapir @alec-deason

Hmm yeah that is definitely problematic. Given that the other "add system" APIs are all additive, the on_state_X methods are kind of "foot-guns" right now.

The main issue is that on_state_X can't be an additive API, because it takes an IntoStage value (and stages aren't additive).

I'll need to think about this for a bit 😄

DJMcNab added a commit to DJMcNab/bevy that referenced this pull request Dec 21, 2020
Extends bevyengine#1021
Fixes bevyengine#1117
This also allows avoiding the Clone bound on state

Possible future work:
- Make state use Eq instead
@cart cart mentioned this pull request Dec 23, 2020
@fopsdev fopsdev mentioned this pull request Jan 24, 2021
Labels
A-ECS Entities, components, systems, and events C-Feature A new feature, making something new possible