Add handling for more `naga` capabilities #9000
Conversation
Force-pushed the branch from 5edca59 to 84a2584.
Resolved changes with #5703
Just saw this PR because I hit the MULTISAMPLED_SHADING issue. The prepass example wasn't really meant to show this, but at the same time I'm not sure how to have an example that only shows this feature, so I guess it's fine to have it there.
I haven't tested it yet, but the code looks good to me.
@IceSentry Thanks for the response - I've just fixed the merge conflict.
https://github.com/bevyengine/bevy/assets/2632925/e046205e-3317-47c3-9959-fc94c529f7e0

# Objective

- Adds per-object motion blur to the core 3D pipeline. This is a common effect used in games and other simulations.
- Partially resolves #4710

## Solution

- This is a post-process effect that uses the depth and motion vector buffers to estimate per-object motion blur. The implementation combines knowledge from multiple papers and articles. The approach itself and the shader are quite simple; most of the effort was in wiring up the Bevy rendering plumbing and properly specializing for HDR and MSAA.
- To work with MSAA, the MULTISAMPLED_SHADING wgpu capability is required. I've extracted this code from #9000. This is because the prepass buffers are multisampled and must be accessed with `textureLoad`, as opposed to the widely compatible `textureSample`.
- Added an example to demonstrate the effect of motion blur parameters.

## Future Improvements

- While this approach has limitations, it's one of the most commonly used and is much better than camera motion blur, which does not consider object velocity. For example, this implementation allows a dolly to track an object, and that object will remain unblurred while the background is blurred. The biggest issue with this implementation is that blur is constrained to the boundaries of objects, which results in hard edges. There are solutions to this, either by dilating the object or the motion vector buffer, or by taking a different approach such as https://casual-effects.com/research/McGuire2012Blur/index.html
- I'm using a noise PRNG function to jitter samples. This could be replaced with a blue noise texture lookup or similar; however, after playing with the parameters, it gives quite nice results with 4 samples and is significantly better than the artifacts generated when not jittering.

---

## Changelog

- Added: per-object motion blur. This can be enabled and configured by adding the `MotionBlurBundle` to a camera entity.

---------

Co-authored-by: Torstein Grindvik <[email protected]>
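A minimal sketch of the changelog entry above, i.e. enabling the effect on a camera. The module path and the use of default settings are assumptions, not confirmed by this comment:

```rust
use bevy::prelude::*;
// Assumed path: the bundle is described as living in the core 3D pipeline.
use bevy::core_pipeline::motion_blur::MotionBlurBundle;

fn setup_camera(mut commands: Commands) {
    // Adding the bundle to a camera entity enables per-object motion blur
    // with its default parameters.
    commands.spawn((Camera3dBundle::default(), MotionBlurBundle::default()));
}
```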
This will be superseded by gfx-rs/wgpu#5606 once we upgrade to wgpu 0.20 (#13186).
@JMS55 should we pursue this or simply close it out in favor of the linked PR?
Close it in favor of the wgpu update PR, but we have to remember to make the change in that PR.
Objective
Following on from #4824, this PR adds handling for the following `naga` capabilities:
- MULTISAMPLED_SHADING
- TEXTURE_FORMAT_16BIT_NORM
- MULTIVIEW
- EARLY_DEPTH_TEST
Solution
The `RenderAdapter` is passed down, and downlevel flags are then translated into capabilities that are passed into `wgpu`. The `shader_prepass` example has been changed to show that multisampled shading now works, by adding a controller for MSAA. The text color changes have also been removed, as otherwise the text couldn't be seen on the motion vectors screen.
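A minimal sketch of the translation step, assuming current wgpu/naga naming (`RenderAdapter` derefs to `wgpu::Adapter` in Bevy). The function name is hypothetical, and the mapping of TEXTURE_FORMAT_16BIT_NORM to naga's `STORAGE_TEXTURE_16BIT_NORM_FORMATS` is an inference from the capability list above rather than the PR's exact code:

```rust
use naga::valid::Capabilities;
use wgpu::{Adapter, DownlevelFlags, Features};

// Sketch only: derive naga validator capabilities from what the adapter supports.
fn naga_capabilities(adapter: &Adapter) -> Capabilities {
    let features = adapter.features();
    let downlevel = adapter.get_downlevel_capabilities().flags;

    let mut caps = Capabilities::empty();
    caps.set(
        Capabilities::MULTISAMPLED_SHADING,
        downlevel.contains(DownlevelFlags::MULTISAMPLED_SHADING),
    );
    caps.set(
        // Assumed to correspond to wgpu's TEXTURE_FORMAT_16BIT_NORM feature.
        Capabilities::STORAGE_TEXTURE_16BIT_NORM_FORMATS,
        features.contains(Features::TEXTURE_FORMAT_16BIT_NORM),
    );
    caps.set(Capabilities::MULTIVIEW, features.contains(Features::MULTIVIEW));
    caps.set(
        Capabilities::EARLY_DEPTH_TEST,
        features.contains(Features::SHADER_EARLY_DEPTH_TEST),
    );
    caps
}
```

The resulting `Capabilities` would then be handed to the shader validator when pipelines are created, which is why the `PipelineCache` needs access to the adapter (see the Migration Guide below).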
Changelog
Added support for more `naga` capabilities.
Migration Guide
`PipelineCache::new` now takes a `RenderAdapter`.
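A hedged before/after sketch for anyone constructing the cache manually; the variable names and argument order are illustrative:

```rust
// Before this PR:
// let cache = PipelineCache::new(render_device);

// After: the RenderAdapter is also required, so the cache can
// derive shader capabilities from the adapter.
let cache = PipelineCache::new(render_device, render_adapter);
```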