Reverb is intrinsically dependent on spatial data, so this probably needs to be baked into `Spatial`.
One effective strategy is to use a "feedback delay network" where sound reflections are streamed into many buffers representing spatial regions or directions that do not rotate with the viewer. Playback then samples from both the original source (direct) and the buffers (indirect), and the buffers loop back into each other continuously to support ∞-order reflections.
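To make the shape concrete, here's a minimal FDN sketch in Rust (purely illustrative; the line count, delay lengths, and feedback gain are placeholders): a handful of delay lines stand in for the fixed, world-oriented buffers, and a Householder matrix mixes every line's output back into every line's input so reflections recirculate indefinitely while decaying.

```rust
// Minimal FDN sketch (illustrative only; sizes and names are assumptions).
// Four delay lines play the role of fixed, world-oriented buffers. Each
// sample, their outputs are mixed through a Householder matrix and fed back
// into the lines, so reflections recirculate (infinite-order) while
// `feedback` keeps the tail decaying.

const LINES: usize = 4;

struct DelayLine {
    buf: Vec<f32>,
    pos: usize,
}

impl DelayLine {
    fn new(len: usize) -> Self {
        Self { buf: vec![0.0; len], pos: 0 }
    }
    /// The oldest stored sample, i.e. this tick's output.
    fn peek(&self) -> f32 {
        self.buf[self.pos]
    }
    /// Overwrite the oldest sample and advance the write head.
    fn push(&mut self, x: f32) {
        self.buf[self.pos] = x;
        self.pos = (self.pos + 1) % self.buf.len();
    }
}

struct Fdn {
    lines: [DelayLine; LINES],
    feedback: f32, // < 1.0 so energy decays over time
}

impl Fdn {
    fn new() -> Self {
        // Mutually prime lengths (in samples) avoid obvious periodicity.
        Self {
            lines: [1031, 1327, 1523, 1871].map(DelayLine::new),
            feedback: 0.85,
        }
    }

    /// Feed one dry sample in; get one wet (indirect) sample out.
    fn process(&mut self, dry: f32) -> f32 {
        let outs: Vec<f32> = self.lines.iter().map(DelayLine::peek).collect();
        let sum: f32 = outs.iter().sum();
        for (line, &out) in self.lines.iter_mut().zip(&outs) {
            // Householder matrix (I - 2/N * ones): an energy-preserving mix
            // of every line's output into every line's input.
            let mixed = out - 2.0 * sum / LINES as f32;
            line.push(dry + self.feedback * mixed);
        }
        sum / LINES as f32
    }
}
```

Playback would then mix the dry source signal with this wet output, panned according to the buffers' fixed directions.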
For efficiency, only one such buffer network should be allocated, allowing reverb processing to be O(buffers) rather than O(buffers * sources). This will require either some sort of sharing mechanism between `Spatial` instances, or refactoring `Spatial` into an abstraction that itself owns and mixes sources. I'm leaning towards the former in the hope of avoiding the complexity inherent in an additional case of `Worker`-like source ownership, though care will be necessary to support shared mutable state without UB if a user tries to run multiple workers concurrently.
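Very roughly, the sharing side could look something like the following (hypothetical names, and a plain `Mutex` standing in for whatever realtime-safe scheme would actually be used): sources only accumulate into shared, world-oriented send buses, and the mixer drains them into the single FDN once per sample.

```rust
// Hypothetical sharing sketch (not an existing API). Sources only *write*
// their contribution into shared, world-oriented send buses; the mixer
// drains them once per sample into the single FDN, keeping reverb cost
// O(buffers) rather than O(buffers * sources).

use std::sync::{Arc, Mutex};

const DIRECTIONS: usize = 4;

#[derive(Default)]
struct ReverbBus {
    /// Per-direction accumulators for the current sample.
    send: [f32; DIRECTIONS],
}

/// Cheaply cloneable handle held by every spatialized source.
#[derive(Clone)]
struct ReverbHandle(Arc<Mutex<ReverbBus>>);

impl ReverbHandle {
    fn new() -> Self {
        Self(Arc::new(Mutex::new(ReverbBus::default())))
    }

    /// Called per source: add its sample, weighted by how strongly it
    /// projects onto each fixed direction.
    fn send(&self, sample: f32, weights: [f32; DIRECTIONS]) {
        let mut bus = self.0.lock().unwrap();
        for (acc, w) in bus.send.iter_mut().zip(weights) {
            *acc += sample * w;
        }
    }

    /// Called once per output sample by the mixer: take the accumulated
    /// sends (resetting them to zero) to feed the shared FDN.
    fn drain(&self) -> [f32; DIRECTIONS] {
        std::mem::take(&mut self.0.lock().unwrap().send)
    }
}
```

The `Mutex` is only there to keep the sketch sound; a real implementation would want a lock-free or single-writer arrangement, which is exactly where the concurrent-workers concern above bites.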
Scene-dependent reverb is also interesting, though potentially complex. For small, hand-authored scenes, an FDN could be defined with buffers at manually placed points with precomputed interreflections, in the spirit of real-time radiance. This is toil-intensive, however, and scales poorly to large scenes. One interesting possibility is a hierarchy of toroidally addressed buffers (clipmap style) whose interconnections could be derived from real-time geometry queries. The initial implementation should focus on something much simpler, but there's fertile ground for exploration, perhaps motivating making the whole reverb pipeline pluggable to support application-layer experimentation.
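If pluggability does turn out to be worthwhile, the extension point might be as small as a single trait driven by the mixer; the names below are made up, just to sketch the shape.

```rust
// Hypothetical extension point (illustrative names only). The spatial mixer
// hands each sample's world-oriented sends to whatever model the application
// supplies, so scene-aware schemes (hand-placed FDN nodes, clipmap-style
// hierarchies, ...) can be experimented with outside the library.

trait ReverbModel: Send {
    /// Consume this sample's per-direction sends and write the wet output.
    fn process(&mut self, sends: &[f32], wet: &mut [f32]);
}

/// Trivial built-in default: no reflections at all.
struct NoReverb;

impl ReverbModel for NoReverb {
    fn process(&mut self, _sends: &[f32], wet: &mut [f32]) {
        wet.fill(0.0);
    }
}
```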
A promising reference: https://signalsmith-audio.co.uk/writing/2021/lets-write-a-reverb/