Add polylines and spheres for handling large worlds #152

Open: wants to merge 44 commits into main.

Commits (44)

b67b072  Preliminary spawning of polylines for lanes (luca-della-vedova, Jun 13, 2023)
c3ae7fb  Adds support for offscreen rendering. (arjo129, Jun 21, 2023)
2f5b6dc  Able to retrieve and debug a color selection buffer. (arjo129, Jun 23, 2023)
a78996c  Get selection camera to follow main camera. (arjo129, Jun 23, 2023)
983feac  Fix various resizing related issues. (arjo129, Jun 23, 2023)
d91eb55  Adds a simple color picking shader. (arjo129, Jun 26, 2023)
e48b0bb  Comments (arjo129, Jun 27, 2023)
efa32f2  Clean ups (arjo129, Jul 7, 2023)
6e35e8d  Minor fixes. (arjo129, Jul 11, 2023)
96472de  Playing with bevy_points (arjo129, Jul 11, 2023)
14418f5  Got fixed sized anchors working (arjo129, Jul 14, 2023)
4929e7f  Make anchors pickable in screen space as well (arjo129, Jul 17, 2023)
87f2994  Lane selecting works (arjo129, Jul 18, 2023)
e9b1ccb  Rename (arjo129, Jul 18, 2023)
6e00b78  Style (arjo129, Jul 18, 2023)
5430079  Merge branch 'main' of github.com:open-rmf/rmf_site into arjo/feat/of… (arjo129, Jul 21, 2023)
77198bd  Clean ups (arjo129, Jul 21, 2023)
4c0b3ce  Get arrows to scale as well (arjo129, Jul 24, 2023)
4596c76  style (arjo129, Jul 24, 2023)
e53ea42  Refactoring out the scale factor (arjo129, Jul 28, 2023)
cd75fc9  Refactoring out the scale factor (arjo129, Jul 28, 2023)
4efd873  Refactoring the selection system out (arjo129, Jul 28, 2023)
c0cbe57  Renaming files to make things more readable (arjo129, Jul 28, 2023)
df93732  More refactors (arjo129, Jul 28, 2023)
0c6179a  style (arjo129, Jul 28, 2023)
4ec4d20  Yet another rename (arjo129, Jul 28, 2023)
ae4b634  Remove expensive re-spawning that was being done. (arjo129, Aug 3, 2023)
366adce  More refactoring (arjo129, Aug 3, 2023)
795ca6e  Merge branch 'main' into arjo/feat/offscreen_rendering (arjo129, Aug 4, 2023)
54e421e  Disable color picking (arjo129, Sep 29, 2023)
23cfeae  Merge remote-tracking branch 'origin' into arjo/feat/offscreen_rendering (arjo129, Sep 29, 2023)
6466fa0  Broke stuff (arjo129, Sep 29, 2023)
425889d  Discovered why selection was not working. (arjo129, Oct 2, 2023)
d5dbd97  ctually commit the sign flip code. (arjo129, Oct 2, 2023)
a4083bf  Second attempt at integrating into the system (arjo129, Oct 2, 2023)
33d2015  GPU picking seems to work for anchors (arjo129, Oct 2, 2023)
dac8788  Style (arjo129, Oct 2, 2023)
22ab07c  Style (arjo129, Oct 2, 2023)
efaef70  Lane picking should work now (arjo129, Oct 3, 2023)
667522e  Merge branch 'main' into arjo/feat/offscreen_rendering (arjo129, Oct 6, 2023)
ef4d256  Clean ups (arjo129, Oct 6, 2023)
0f1c1fc  Double perf with smaller texture (arjo129, Oct 6, 2023)
c8dc723  Slight perf improvement (arjo129, Oct 6, 2023)
3487b85  cargo fmt (arjo129, Oct 6, 2023)

Files changed

rmf_site_editor/Cargo.toml (7 additions, 1 deletion)

@@ -19,7 +19,7 @@ path = "examples/extending_menu.rs"
bevy_egui = "0.19"
bevy_mod_picking = "0.11"
bevy_mod_raycast = "0.7"
bevy_mod_outline = "0.3.3"
bevy_mod_outline = { git = "https://github.com/arjo129/bevy_mod_outline.git", branch= "09_compatibility"}
bevy_infinite_grid = "0.6"
bevy_polyline = "0.4"
bevy_stl = "0.7.0"
@@ -49,6 +49,12 @@ rfd = "0.11"
 urdf-rs = "0.7"
 utm = "0.1.6"
 sdformat_rs = { git = "https://github.com/open-rmf/sdf_rust_experimental", rev = "f86344f"}
+pollster = "0.3.0"
+futures-intrusive = "0.5.0"
+wgpu="0.14.2"
+image = "0.24.6"
+rand = "0.8.5"
+bevy_points = "0.1.3"
 gz-fuel = { git = "https://github.com/open-rmf/gz-fuel-rs", branch = "first_implementation" }
 pathdiff = "*"

rmf_site_editor/src/interaction/anchor.rs (13 additions, 1 deletion)

@@ -20,7 +20,7 @@ use crate::{
     interaction::IntersectGroundPlaneParams,
     interaction::*,
     keyboard::DebugMode,
-    site::{Anchor, Category, Delete, Dependents, SiteAssets, Subordinate},
+    site::{Anchor, Category, Delete, Dependents, PointAsset, SiteAssets, Subordinate},
 };
 use bevy::prelude::*;

@@ -38,6 +38,8 @@ pub fn add_anchor_visual_cues(
     >,
     categories: Query<&Category>,
     site_assets: Res<SiteAssets>,
+    point_assets: Res<PointAsset>,
+    interaction_assets: Res<InteractionAssets>,
 ) {
     for (e, parent, subordinate, anchor) in &new_anchors {
         let body_mesh = match categories.get(parent.get()).unwrap() {
@@ -47,13 +49,23 @@
         };

         let mut entity_commands = commands.entity(e);
+        entity_commands.insert(LimitScaleFactor {
+            distance_to_start_scaling: 10.0,
+            original_scale: 1.0,
+        });
         let body = entity_commands.add_children(|parent| {
             let mut body = parent.spawn(PbrBundle {
                 mesh: body_mesh,
                 material: site_assets.passive_anchor_material.clone(),
                 ..default()
             });
             body.insert(Selectable::new(e));
+            body.insert(MaterialMeshBundle {
+                mesh: point_assets.bevy_point_mesh.clone(),
+                material: point_assets.bevy_point_material.clone(),
+                ..default()
+            });
+            body.insert(ScreenSpaceSelection::Point(e));
             if subordinate.is_none() {
                 body.insert(DragPlaneBundle::new(e, Vec3::Z));
             }
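
The `LimitScaleFactor` component added above carries a `distance_to_start_scaling` and an `original_scale`, which (together with the "fixed sized anchors" commits) suggests anchors keep their normal scale up close and grow with camera distance past a threshold so they stay visible and pickable in large worlds. Below is a minimal sketch of a system that could consume such a component, assuming Bevy 0.9-era APIs; the component is re-declared only to keep the sketch self-contained, and the actual system in this PR may compute the scale differently.

```rust
use bevy::prelude::*;

/// Re-declared here for the sketch; the PR inserts a component with these fields.
#[derive(Component)]
pub struct LimitScaleFactor {
    pub distance_to_start_scaling: f32,
    pub original_scale: f32,
}

/// Keep distant markers readable: use the original scale while the camera is
/// within the threshold, then grow linearly with distance beyond it.
fn limit_scale_factor(
    cameras: Query<&GlobalTransform, With<Camera3d>>,
    mut limited: Query<(&GlobalTransform, &LimitScaleFactor, &mut Transform)>,
) {
    let Ok(camera_tf) = cameras.get_single() else { return };
    for (global_tf, limit, mut local_tf) in &mut limited {
        let distance = camera_tf.translation().distance(global_tf.translation());
        let scale = if distance > limit.distance_to_start_scaling {
            limit.original_scale * distance / limit.distance_to_start_scaling
        } else {
            limit.original_scale
        };
        local_tf.scale = Vec3::splat(scale);
    }
}
```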

rmf_site_editor/src/interaction/camera_controls.rs (9 additions, 3 deletions)

@@ -49,13 +49,19 @@ pub const HOVERED_OUTLINE_LAYER: u8 = 4;
 /// The X-Ray layer is used to show visual cues that need to be rendered
 /// above anything that would be obstructing them.
 pub const XRAY_RENDER_LAYER: u8 = 5;
+/// The Line Picking layer contains an entity based color map for picking
+/// polylines which are drawn in screen space.
+pub const LINE_PICKING_LAYER: u8 = 6;
+/// The Point Picking layer contains an entity based color map for picking
+/// points which are drawn in screen space.
+pub const POINT_PICKING_LAYER: u8 = 7;
 /// The Model Preview layer is used by model previews to spawn and render
 /// models in the engine without having them being visible to general cameras
-pub const MODEL_PREVIEW_LAYER: u8 = 6;
+pub const MODEL_PREVIEW_LAYER: u8 = 8;

 #[derive(Resource)]
-struct MouseLocation {
-    previous: Vec2,
+pub struct MouseLocation {
+    pub previous: Vec2,
 }

 impl Default for MouseLocation {
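
The new constants are Bevy render-layer indices: color-ID geometry for GPU picking is drawn on its own layers so the off-screen picking cameras see it while the regular viewport cameras do not (and `MODEL_PREVIEW_LAYER` moves to 8 to make room). Here is a rough sketch of how a layer like `LINE_PICKING_LAYER` could be wired up, assuming Bevy 0.9 APIs; the marker component and system names are hypothetical, and the PR's real picking camera renders to an off-screen image rather than the window.

```rust
use bevy::prelude::*;
use bevy::render::view::RenderLayers;

// Matches the constant introduced in the diff above.
const LINE_PICKING_LAYER: u8 = 6;

/// Hypothetical marker for meshes that encode entity IDs as flat colors.
#[derive(Component)]
struct LanePickingMesh;

/// A dedicated camera that only renders the line-picking layer, keeping the
/// color-ID meshes out of the normal viewport cameras.
fn setup_line_picking_camera(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        RenderLayers::layer(LINE_PICKING_LAYER),
    ));
}

/// Newly spawned color-ID meshes go on the same layer, so only the picking
/// camera ever draws them.
fn assign_line_picking_layer(
    mut commands: Commands,
    new_meshes: Query<Entity, Added<LanePickingMesh>>,
) {
    for e in &new_meshes {
        commands
            .entity(e)
            .insert(RenderLayers::layer(LINE_PICKING_LAYER));
    }
}
```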

rmf_site_editor/src/interaction/color_based_picker/camera_capture.rs (new file, 184 additions)

@@ -0,0 +1,184 @@
// Taken from https://github.com/bevyengine/bevy/pull/5550/files
// Might be useful to move this to a utils folder.
use std::sync::Arc;

use bevy::prelude::*;
use bevy::render::render_asset::RenderAssets;
use bevy::render::render_graph::{self, NodeRunError, RenderGraph, RenderGraphContext};
use bevy::render::renderer::{RenderContext, RenderDevice, RenderQueue};
use bevy::render::{Extract, RenderApp, RenderStage};

use bevy::render::render_resource::{
    Buffer, BufferDescriptor, BufferUsages, CommandEncoderDescriptor, Extent3d, ImageCopyBuffer,
    ImageDataLayout,
};
use pollster::FutureExt;

use std::sync::atomic::{AtomicBool, Ordering};

pub fn receive_images(
    image_copiers: Query<&ImageCopier>,
    mut images: ResMut<Assets<Image>>,
    render_device: Res<RenderDevice>,
) {
    for image_copier in image_copiers.iter() {
        if !image_copier.enabled() {
            continue;
        }
        // Derived from: https://sotrh.github.io/learn-wgpu/showcase/windowless/#a-triangle-without-a-window
        // We need to scope the mapping variables so that we can
        // unmap the buffer
        async {
            let buffer_slice = image_copier.buffer.slice(..);

            // NOTE: We have to create the mapping THEN device.poll() before await
            // the future. Otherwise the application will freeze.
            let (tx, rx) = futures_intrusive::channel::shared::oneshot_channel();
            buffer_slice.map_async(wgpu::MapMode::Read, move |result| {
                tx.send(result).unwrap();
            });
            render_device.poll(wgpu::Maintain::Wait);
            rx.receive().await.unwrap().unwrap();
            if let Some(mut image) = images.get_mut(&image_copier.dst_image) {
                image.data = buffer_slice.get_mapped_range().to_vec();
            }

            image_copier.buffer.unmap();
        }
        .block_on();

[Inline review comment from a project member on the `.block_on()` call above; a rough sketch of the decoupled approach described in the second bullet follows this file listing.]

I'm slightly concerned about this but, to be fair, I'm not sure the alternative I'll propose is worth the additional effort, or even whether this is a big bottleneck at all.
From what I understand, this system polls the GPU for the buffer, copies it to the CPU, then copies it into an image, so at least one GPU-to-CPU copy (usually quite expensive) and two CPU copies (expensive, but not as much).
The block_on() call makes this system return only once all of that has happened. My proposal is to remove the blocking and either:

  • Leave everything else unchanged; it seems to do something sensible on my machine, but honestly I'm not sure, since picking is a bit fickle.
  • Keep the sender in this system but put the receiver in another system, so they are fully decoupled. That system would just poll the queue and update the image if it received one. It could still block if we want to be sure that we process a picking image every frame, but if we place it in a later stage, at least the background task of copying the image to the CPU would have time to run without us having to block on it.

    }
}

pub const IMAGE_COPY: &str = "image_copy";

pub struct ImageCopyPlugin;
impl Plugin for ImageCopyPlugin {
    fn build(&self, app: &mut App) {
        let render_app = app.add_system(receive_images).sub_app_mut(RenderApp);

        render_app.add_system_to_stage(RenderStage::Extract, image_copy_extract);

        let mut graph = render_app.world.get_resource_mut::<RenderGraph>().unwrap();

        graph.add_node(IMAGE_COPY, ImageCopyDriver::default());

        graph
            .add_node_edge(IMAGE_COPY, bevy::render::main_graph::node::CAMERA_DRIVER)
            .unwrap();
    }
}

#[derive(Clone, Default, Resource, Deref, DerefMut)]
pub struct ImageCopiers(pub Vec<ImageCopier>);

#[derive(Clone, Component)]
pub struct ImageCopier {
    buffer: Buffer,
    enabled: Arc<AtomicBool>,
    src_image: Handle<Image>,
    dst_image: Handle<Image>,
}

impl ImageCopier {
    pub fn new(
        src_image: Handle<Image>,
        dst_image: Handle<Image>,
        size: Extent3d,
        render_device: &RenderDevice,
    ) -> ImageCopier {
        let padded_bytes_per_row =
            RenderDevice::align_copy_bytes_per_row((size.width) as usize) * 4;

        let cpu_buffer = render_device.create_buffer(&BufferDescriptor {
            label: None,
            size: padded_bytes_per_row as u64 * size.height as u64,
            usage: BufferUsages::MAP_READ | BufferUsages::COPY_DST,
            mapped_at_creation: false,
        });

        ImageCopier {
            buffer: cpu_buffer,
            src_image,
            dst_image,
            enabled: Arc::new(AtomicBool::new(true)),
        }
    }

    pub fn enable(&self) {
        self.enabled.store(true, Ordering::Relaxed);
    }

    pub fn disable(&self) {
        self.enabled.store(false, Ordering::Relaxed);
    }

    pub fn enabled(&self) -> bool {
        self.enabled.load(Ordering::Relaxed)
    }
}

pub fn image_copy_extract(mut commands: Commands, image_copiers: Extract<Query<&ImageCopier>>) {
    commands.insert_resource(ImageCopiers(
        image_copiers.iter().cloned().collect::<Vec<ImageCopier>>(),
    ));

[Inline review comment from a project member on lines +121 to +123:]

I'll admit to not knowing much about the rendering systems: why do we need to do this? Is it because render graph nodes can't actually get data from the world, but only from resources?

}

#[derive(Default)]
pub struct ImageCopyDriver;

impl render_graph::Node for ImageCopyDriver {
    fn run(
        &self,
        _graph: &mut RenderGraphContext,
        render_context: &mut RenderContext,
        world: &World,
    ) -> Result<(), NodeRunError> {
        let image_copiers = world.get_resource::<ImageCopiers>().unwrap();
        let gpu_images = world.get_resource::<RenderAssets<Image>>().unwrap();

        for image_copier in image_copiers.iter() {
            if !image_copier.enabled() {
                continue;
            }

            let src_image = gpu_images.get(&image_copier.src_image).unwrap();

            let mut encoder = render_context
                .render_device
                .create_command_encoder(&CommandEncoderDescriptor::default());

            let format = src_image.texture_format.describe();

            let padded_bytes_per_row = RenderDevice::align_copy_bytes_per_row(
                (src_image.size.x as usize / format.block_dimensions.0 as usize)
                    * format.block_size as usize,
            );

            let texture_extent = Extent3d {
                width: src_image.size.x as u32,
                height: src_image.size.y as u32,
                depth_or_array_layers: 1,
            };

            encoder.copy_texture_to_buffer(
                src_image.texture.as_image_copy(),
                ImageCopyBuffer {
                    buffer: &image_copier.buffer,
                    layout: ImageDataLayout {
                        offset: 0,
                        bytes_per_row: Some(
                            std::num::NonZeroU32::new(padded_bytes_per_row as u32).unwrap(),
                        ),
                        rows_per_image: None,
                    },
                },
                texture_extent,
            );

            let render_queue = world.get_resource::<RenderQueue>().unwrap();
            render_queue.submit(std::iter::once(encoder.finish()));
        }

        Ok(())
    }
}
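
The decoupling proposed in the inline review comment on `.block_on()` above could look roughly like the sketch below: the system that starts the copy keeps a sender, and a separate system drains whatever readbacks have completed each frame. Everything here is an assumption for illustration; the names (`PickingReadback`, `apply_finished_readbacks`) are hypothetical, and the wgpu side that actually maps the buffer and produces the bytes is elided.

```rust
use bevy::prelude::*;
use std::sync::mpsc::{channel, Receiver, Sender};
use std::sync::Mutex;

/// Hypothetical channel resource: whatever task finishes a GPU-to-CPU copy
/// sends `(destination image handle, bytes)` here instead of the copying
/// system blocking on the result.
#[derive(Resource)]
struct PickingReadback {
    // Cloned (after locking) into whatever completes the buffer readback.
    sender: Mutex<Sender<(Handle<Image>, Vec<u8>)>>,
    receiver: Mutex<Receiver<(Handle<Image>, Vec<u8>)>>,
}

impl Default for PickingReadback {
    fn default() -> Self {
        let (sender, receiver) = channel();
        Self {
            sender: Mutex::new(sender),
            receiver: Mutex::new(receiver),
        }
    }
}

/// Drains the readbacks that completed since the last frame and writes them
/// into their destination images, without ever waiting on the GPU.
fn apply_finished_readbacks(
    readback: Res<PickingReadback>,
    mut images: ResMut<Assets<Image>>,
) {
    let receiver = readback.receiver.lock().unwrap();
    for (dst_image, data) in receiver.try_iter() {
        if let Some(image) = images.get_mut(&dst_image) {
            image.data = data;
        }
    }
}
```

The trade-off is that the selection buffer may lag the camera by a frame or two, which is usually acceptable for picking.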