
Pipeline layouts never freed at runtime, leaking memory #2546

Closed
BinaryWarlock opened this issue Mar 21, 2022 · 2 comments
Labels: area: correctness, type: bug

Comments

@BinaryWarlock

Description
If you create new render pipelines and pipeline layouts at runtime, they're never freed/destroyed, which causes a reliable memory leak.

Notably, even though the render pipeline and pipeline layouts are dropped, destroy_pipeline_layout is never called.

Since my actual program creates a new pipeline and pipeline layout frequently, this leaks roughly 1 GB of RAM in about 30 seconds, making wgpu completely unusable.
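The pattern boils down to creating a fresh pipeline layout every frame and letting the previous one drop. A minimal sketch of the allocation that never gets released (the function name and the empty layout are just for illustration, not taken from my program):

fn recreate_layout(device: &wgpu::Device) -> wgpu::PipelineLayout {
    // Dropping the layout returned on the previous frame should eventually
    // reach destroy_pipeline_layout in wgpu-hal, but it never does.
    device.create_pipeline_layout(&wgpu::PipelineLayoutDescriptor {
        label: Some("per-frame layout"),
        bind_group_layouts: &[],
        push_constant_ranges: &[],
    })
}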

Repro steps
I modified the cube example to showcase this issue. This patch makes it recreate the pipeline(s) and pipeline layouts every frame:

diff --git a/wgpu/examples/cube/main.rs b/wgpu/examples/cube/main.rs
index 60a4954..24b104d 100644
--- a/wgpu/examples/cube/main.rs
+++ b/wgpu/examples/cube/main.rs
@@ -108,6 +108,7 @@ struct Example {
     uniform_buf: wgpu::Buffer,
     pipeline: wgpu::RenderPipeline,
     pipeline_wire: Option<wgpu::RenderPipeline>,
+    config: wgpu::SurfaceConfiguration,
 }
 
 impl Example {
@@ -329,6 +330,7 @@ impl framework::Example for Example {
             uniform_buf,
             pipeline,
             pipeline_wire,
+            config: config.clone(),
         }
     }
 
@@ -354,6 +356,8 @@ impl framework::Example for Example {
         queue: &wgpu::Queue,
         spawner: &framework::Spawner,
     ) {
+        *self = Self::init(&self.config, unsafe { &*std::ptr::NonNull::dangling().as_ptr() }, device, queue);
+
         device.push_error_scope(wgpu::ErrorFilter::Validation);
         let mut encoder =
             device.create_command_encoder(&wgpu::CommandEncoderDescriptor { label: None });

This is very similar to what my real program is doing, and exhibits the exact same issue.

Expected vs observed behavior
It should drop the render pipeline and pipeline layouts, and eventually free the backing descriptors. It should not leak memory over time.

Instead we see a reliable memory leak of about 1 MB per second.

If we run a heap profiler, we see create_pipeline_layout is leaking huge amounts of memory.

If we set a breakpoint on destroy_pipeline_layout:

b wgpu_hal::vulkan::device::<impl wgpu_hal::Device<wgpu_hal::vulkan::Api> for wgpu_hal::vulkan::Device>::destroy_pipeline_layout

the breakpoint is never hit while the example is running; it only triggers once the example exits.

This seems related to #582 (cc @kvark); I suspect the refcounts may never be reaching 1 for some reason.

Platform
wgpu v0.12 on Linux with the proprietary NVIDIA driver

@jimblandy (Member)

@BinaryWarlock I think #2565 may fix this. Could you give it a try?

@kvark added the type: bug and area: correctness labels on Apr 1, 2022
@jimblandy (Member)

No reply from reporter, and I do think we fixed this.
