
Add polylines and spheres for handling large worlds #152

Open · wants to merge 44 commits into base: main
Conversation


@arjo129 arjo129 commented Jul 18, 2023

Addresses #134

This PR adds fixed-size meshes for points and lanes, along with the logic required to select objects at a large distance. This means that no matter how far away you are, you will always be able to manipulate anchors and lanes. We rely on bevy_polyline and bevy_points for drawing lanes and anchors in the world. Selection was challenging with these, so I've put together a color-based selection system that assigns a unique color to each lane or anchor and draws the world to an offscreen buffer. This offscreen buffer can be seen by putting the system in debug mode (press d on the keyboard); the offscreen buffers are saved to two files called picking_layer_6.png and picking_layer_7.png. This color selection buffer handles only the anchors and lanes drawn with bevy_points and bevy_polyline.
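A minimal sketch of the color-based selection idea described above, assuming a 24-bit picking ID packed one byte per RGB channel; the function names and the packing scheme are illustrative assumptions, not the PR's actual code:

```rust
/// Pack a picking ID into an RGB color and back (illustrative sketch).
/// Assumes at most 2^24 selectable entities, one byte per channel,
/// rendered into an offscreen RGB buffer that is later read back.
fn id_to_color(id: u32) -> [u8; 3] {
    [
        ((id >> 16) & 0xff) as u8,
        ((id >> 8) & 0xff) as u8,
        (id & 0xff) as u8,
    ]
}

fn color_to_id(rgb: [u8; 3]) -> u32 {
    ((rgb[0] as u32) << 16) | ((rgb[1] as u32) << 8) | (rgb[2] as u32)
}

fn main() {
    // Round-tripping an ID through the color channel recovers it exactly.
    let id = 0x012345;
    let rgb = id_to_color(id);
    assert_eq!(color_to_id(rgb), id);
    println!("id {:#08x} -> rgb {:?}", id, rgb);
}
```

On mouse click, the pixel under the cursor is read from the offscreen buffer and decoded back into the entity's ID, which is why every lane/anchor must get a unique color.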

For items that need to be scaled, such as the drag arrows, I added a system that scales the cue based on its distance from the camera, here: https://github.com/open-rmf/rmf_site/blob/ae4b634ac63e94ebfe91d1237278e7e4ca2bc1d2/rmf_site_editor/src/interaction/scale_factor_limiting.rs
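A rough sketch of what such distance-based scaling could compute; the function name, the reference distance, and the linear model are illustrative assumptions, not the actual contents of scale_factor_limiting.rs:

```rust
/// Sketch of distance-based cue scaling (illustrative, not the PR's code).
/// A cue at distance `d` from the camera keeps its natural size up to a
/// reference distance; beyond that it is scaled by `d / reference_distance`,
/// so its apparent on-screen size stops shrinking as the camera moves away.
fn cue_scale(distance_to_camera: f32, reference_distance: f32) -> f32 {
    if distance_to_camera <= reference_distance {
        1.0 // close enough: render at natural size
    } else {
        distance_to_camera / reference_distance
    }
}

fn main() {
    assert_eq!(cue_scale(5.0, 10.0), 1.0);  // nearby cue is unscaled
    assert_eq!(cue_scale(50.0, 10.0), 5.0); // far cue is blown up 5x
    println!("ok");
}
```

In a Bevy system this factor would be applied to the cue's `Transform::scale` each frame using the camera's and cue's global translations.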

luca-della-vedova and others added 15 commits June 13, 2023 15:46
This is useful in a variety of scenarios including color based picking.

Signed-off-by: Arjo Chakravarty <[email protected]>
Each polyline is drawn to a new render layer with a unique color per
entity. The color selection buffer can be viewed in debug mode.
(Press "d" on the keyboard).

Todos:
[ ] Add support for remapping mouse events to the selection buffer.
[ ] Integrate meshes.
[ ] Rework selection system.

Signed-off-by: Arjo Chakravarty <[email protected]>
This commit resolves various inconsistencies that had occurred during
resizing. It also gets the mouse position in the logical camera
viewport.

Todo:
[ ] Figure out why the color mapping is incorrect.

Signed-off-by: Arjo Chakravarty <[email protected]>
The color picking shader now somewhat works: we are able to detect which
polyline the mouse is hovering over. We still have to integrate this
with our selection mechanism.
However, it might be worth spinning this off into its own
`bevy_mod_picking` backend.

Signed-off-by: Arjo Chakravarty <[email protected]>
Signed-off-by: Arjo Chakravarty <[email protected]>
Signed-off-by: Arjo Chakravarty <[email protected]>
Signed-off-by: Arjo Chakravarty <[email protected]>
Signed-off-by: Arjo Chakravarty <[email protected]>
Signed-off-by: Arjo Chakravarty <[email protected]>
This commit makes anchors appear in the picking buffer as well.

Signed-off-by: Arjo Chakravarty <[email protected]>
Signed-off-by: Arjo Chakravarty <[email protected]>
Signed-off-by: Arjo Chakravarty <[email protected]>
Signed-off-by: Arjo Chakravarty <[email protected]>
Signed-off-by: Arjo Chakravarty <[email protected]>
Signed-off-by: Arjo Chakravarty <[email protected]>
Signed-off-by: Arjo Chakravarty <[email protected]>
@arjo129 arjo129 changed the title from "Add polylines and spheres for" to "Add polylines and spheres for handling large worlds" Jul 28, 2023
Signed-off-by: Arjo Chakravarty <[email protected]>
Signed-off-by: Arjo Chakravarty <[email protected]>
Signed-off-by: Arjo Chakravarty <[email protected]>
Signed-off-by: Arjo Chakravarty <[email protected]>
Signed-off-by: Arjo Chakravarty <[email protected]>
Signed-off-by: Arjo Chakravarty <[email protected]>
@arjo129 arjo129 marked this pull request as ready for review August 3, 2023 06:10

xiyuoh commented Aug 4, 2023

I tried this out briefly with the demo world, and I'm finding it a little difficult to manipulate anchors when zoomed out further. When I select an anchor, it sometimes only lets me move a different anchor.

site_editor_selection.mp4

Signed-off-by: Arjo Chakravarty <[email protected]>
Signed-off-by: Arjo Chakravarty <[email protected]>
Y-axis was flipped.

Signed-off-by: Arjo Chakravarty <[email protected]>
Signed-off-by: Arjo Chakravarty <[email protected]>
Signed-off-by: Arjo Chakravarty <[email protected]>
Signed-off-by: Arjo Chakravarty <[email protected]>
Signed-off-by: Arjo Chakravarty <[email protected]>

arjo129 commented Oct 6, 2023

This probably is ready for another round of review.

@luca-della-vedova

I was worried that all this copying back and forth from CPU buffers might impact performance, so I decided to quantify it to get some proper data.
For reference, I have a laptop with an 11th Gen Intel(R) Core(TM) i7-1185G7 with an integrated GPU.

I used the instructions in the cheatbook to add a printout of the app FPS and ran the demo world.
Before the PR:

2023-10-06T08:35:23.399101Z  INFO bevy diagnostic: frame_count                     : 529.000000
2023-10-06T08:35:24.387194Z  INFO bevy diagnostic: frame_time                      :   25.348438ms (avg 25.126539ms)
2023-10-06T08:35:24.387227Z  INFO bevy diagnostic: fps                             :   39.457123   (avg 39.808914)

After the PR:

2023-10-06T08:42:15.563158Z  INFO bevy diagnostic: frame_count                     : 524.000000
2023-10-06T08:42:16.502928Z  INFO bevy diagnostic: frame_time                      :  119.953480ms (avg 136.121408ms)
2023-10-06T08:42:16.502951Z  INFO bevy diagnostic: fps                             :    8.336565   (avg 7.384954)

It seems the FPS dropped by approximately 80%.
I have some other feedback about the UX, but I thought this is probably a large enough issue that it would be hard to bring this to main unless it's either optimized a fair bit or hidden behind a feature flag / off by default.
What do you think?


arjo129 commented Oct 6, 2023 via email

Signed-off-by: Arjo Chakravarty <[email protected]>
Signed-off-by: Arjo Chakravarty <[email protected]>
Signed-off-by: Arjo Chakravarty <[email protected]>

@luca-della-vedova luca-della-vedova left a comment


I did a first round of review; it's not super thorough, but I tried to address the high-level points rather than dig too much into implementation details.

Other than what's in the review itself, I found a few other points:

I noticed that when zooming out a lot in the office demo, a bunch of other anchors appear, as below:

[screenshot]

This doesn't actually seem to be related to this PR, but I'll try to keep a mental note to figure out what is happening. The vertices do seem to exist, but they are probably not being scaled correctly when calculating the pixel-to-Cartesian transform.

I also find it a bit hard at times to pick items, but it's not easy to reproduce: sometimes picking seems to "just work", while other times it's very hard to do so without clicking right in the middle of the object. I'm not sure how to reliably reproduce this, since it only happens occasionally.

I would recommend matching the color of the polyline to the color of the lane. Right now it is hardcoded to red, so users don't have a visual cue to tell which graph a lane belongs to.

It would also be nice if there were some sort of visual cue showing that an anchor or lane is being hovered or is selected, for example an outline. I don't know if this is possible with polylines or points; if it's not, I suppose we could just change their color?

Drag and drop works for normal anchors but doesn't seem to be implemented for points; can we add that back?

It also seems that when zooming out a lot, anchors are selected even if I click outside of them, as in the video:
vokoscreenNG-2023-10-06_14-28-28.webm

These are the main points I found for now. I'll do a more thorough review of the implementation itself once the largest points are addressed, since some of them might require a non-negligible amount of code change.

};

if let Ok((camera, _)) = cameras.get(view_cam_entity) {
for _e in resize_event.iter() {

We don't really need to iterate over each resize event, since we don't need any information from the event, only to know whether one happened. This could just become `if resize_event.iter().last().is_some()`.

Comment on lines +53 to +66
for (_handle, entity) in handles.iter() {
//if handle.0 == camera_entity.image {
commands.entity(entity).despawn();
//}
}

if let Some(copier) = render_buffer_details.copier_entity {
commands.entity(copier).despawn();
}

if let Some(image_buffer) = render_buffer_details.image_parameters {
commands.entity(image_buffer).despawn();
}
}

I'll throw it out there: do you think it's possible to just access the components via an &mut reference and change the properties (i.e. the image size) in place, rather than fully despawning and respawning them?

Comment on lines +22 to +29
.add_system_to_stage(
CoreStage::PostUpdate,
buffer_to_selection::<POINT_PICKING_LAYER>,
)
.add_system_to_stage(
CoreStage::PostUpdate,
buffer_to_selection::<LINE_PICKING_LAYER>,
)

I'm trying to understand the code, so forgive me if my comments are off.

I noticed that it can sometimes be tricky to select anchors, and I wonder if it's a system-ordering issue that could be fixed by "unifying" the buffers.
Specifically, I see that you run two systems in parallel here: one checks whether a point was clicked, the other checks whether a line was clicked. Each of these systems sends a GPUPickItem event to be processed upstream.

This puts us in a pickle wherever there is both a point and a line, which happens very often at the anchors placed at the ends of lanes.
Specifically, if I understand correctly, in such scenarios each of these systems would fire an event, and the final outcome would just be to select whichever one happened to fire last, which is highly non-deterministic since they run in the same stage.
I wonder if we could instead try a unified picking approach with a single image, where:

  • Lines are written to an empty image.
  • Points are written to the same image, if they overlap they will automatically overwrite the pixels that were assigned to a line.
  • There is a single system in charge of picking that looks at the resulting image and sends a single picking event.

This should reduce the ambiguity and possibly simplify the code and improve performance, since we would only have to deal with a single image.
Does this make sense?
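The unified single-image proposal could be sketched roughly as below; the `PickBuffer` type, its layout, and the "0 means nothing" convention are illustrative assumptions, not code from the PR:

```rust
/// Sketch of a unified picking buffer (illustrative, not the PR's code).
/// Picking IDs are written into one flat buffer: lines first, points second,
/// so a point drawn over a line simply overwrites its pixels.
/// An ID of 0 means "nothing here".
struct PickBuffer {
    width: usize,
    ids: Vec<u32>,
}

impl PickBuffer {
    fn new(width: usize, height: usize) -> Self {
        Self { width, ids: vec![0; width * height] }
    }

    fn write(&mut self, x: usize, y: usize, id: u32) {
        self.ids[y * self.width + x] = id;
    }

    /// A single system reads this and emits one, unambiguous pick event.
    fn pick(&self, x: usize, y: usize) -> Option<u32> {
        match self.ids[y * self.width + x] {
            0 => None,
            id => Some(id),
        }
    }
}

fn main() {
    let mut buf = PickBuffer::new(4, 4);
    buf.write(1, 1, 7);  // a line fragment lands on this pixel first
    buf.write(1, 1, 42); // a point overwrites the same pixel
    assert_eq!(buf.pick(1, 1), Some(42)); // points deterministically win
    assert_eq!(buf.pick(0, 0), None);
    println!("ok");
}
```

Because the priority is resolved at write time inside one buffer, only one system needs to read it and the point-vs-line race between two parallel systems disappears.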


image_copier.buffer.unmap();
}
.block_on();

I'm slightly concerned about this but, to be fair, I'm not sure the alternative I'll propose is worth the additional effort, or even whether this is a big bottleneck at all!
From what I understand, this system polls the GPU for the buffer, copies it to the CPU, and then copies it into an image, so at least one GPU-to-CPU copy (usually quite expensive) and two CPU copies (expensive, but not as much).
The block_on() call makes this system return only once all of this has happened. My proposal would be to remove the blocking and either:

  • Leave everything else unchanged; it seems to do something sensible on my machine, but honestly I'm not sure, since picking is a bit fickle.
  • Keep the sender in this system but put the receiver in another system, so they are fully decoupled. The receiving system would just poll the queue and update the image whenever it receives one. It could still block if we want to be sure we process one picking image per frame, but if we place it in a later stage, at least the background task of copying the image to the CPU would have time to run without us having to block on it.
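The decoupled sender/receiver option can be sketched with a plain channel; in Bevy the channel endpoints would live in resources, and the names here are illustrative assumptions rather than the PR's code:

```rust
use std::sync::mpsc;

/// Drain the channel and keep only the most recent readback, without blocking.
/// Returns None if no new image has arrived this frame.
fn latest_image(rx: &mpsc::Receiver<Vec<u8>>) -> Option<Vec<u8>> {
    rx.try_iter().last()
}

fn main() {
    let (tx, rx) = mpsc::channel::<Vec<u8>>();

    // Pretend the async GPU-readback task finished two frames' worth of copies.
    tx.send(vec![1, 2, 3]).unwrap();
    tx.send(vec![4, 5, 6]).unwrap();

    // The picking system polls and keeps only the newest copy...
    assert_eq!(latest_image(&rx), Some(vec![4, 5, 6]));

    // ...and simply does nothing on a frame with no new image.
    assert_eq!(latest_image(&rx), None);
    println!("ok");
}
```

With this shape, the readback task can take as long as it needs; the picking system just consumes whatever image was most recently completed instead of blocking the frame on the GPU.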

Comment on lines +121 to +123
commands.insert_resource(ImageCopiers(
image_copiers.iter().cloned().collect::<Vec<ImageCopier>>(),
));

I'll admit I don't know much about the rendering systems: why do we need to do this? Is it because render graph nodes can't actually get data from the world, only from resources?

Comment on lines +214 to +217
Transform::IDENTITY,
GlobalTransform::IDENTITY,
ComputedVisibility::default(),
Visibility::VISIBLE,

Do we need all of this? It seems to already be contained in PolylineBundle.

RenderLayers::layer(LINE_PICKING_LAYER),
))
.id();
let picker = polyline.clone();

What's the difference between polyline and picker and do we need both?

/// but also ensures that the item is visible no matter how far
/// the user is from the mesh. This is really useful for things
/// like gizmos and tool tips.
pub fn limit_size(

What's the purpose of this system? I understand the docs, but I wonder if it somewhat duplicates the idea of color-based picking through lines/points.

From what I understand, this could have helped both with making sure anchors don't disappear when zooming out too much, and with making sure that editing gizmos that are children of an anchor are also scaled up when the anchor transform is scaled up. The first point shouldn't be relevant anymore, since points are now used and we are no longer scaling anchors. I wonder if rendering gizmos in screen space as well would remove the need for this scaling entirely?

Member Author


No, this handles the arrows: it scales them with distance. Otherwise the arrows remain tiny.


@luca-della-vedova luca-della-vedova Oct 12, 2023


Yeah, by "gizmos" I really meant arrows in this case. So I wonder if a solution similar to what was done to make the anchor size scale-invariant could also be applied to the arrows, keeping them in screen space? To be honest, if we are scaling up based on distance, we might as well ditch the whole screen-space selection solution and just change the scale of all anchors as we zoom?

Successfully merging this pull request may close these issues.

When working with large worlds (Kilometer Scale) default hard coded anchor and lane sizes don't work well