Pull audio waveform/frequency data into WebGL for realtime audio visualisation!
This is a wrapper around the more generic/flexible web-audio-analyser that makes it easier to just drop into a new WebGL project and get up and running quickly.
You can pull `gl-audio-analyser` into your project using browserify. See `demo/index.js` for a full usage example.
Creates a new audio analyser given the following arguments:

- `gl` is the WebGL context you want to use.
- `audio` is an audio node to analyse, be that an `<audio>` tag, a `MediaStream` or an `AudioSourceNode`.
- `ctx` is an (optional) `AudioContext` instance to use. Note, however, that there may only be one instance of this per page; if one is not supplied, it will be created for you.
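
A minimal sketch of the creation step, assuming the module's main export is the analyser constructor and using placeholder element IDs:

```js
var createAnalyser = require('gl-audio-analyser')

// Placeholder elements for this sketch: swap in your own canvas and audio source.
var canvas = document.querySelector('#canvas')
var gl = canvas.getContext('webgl')
var audio = document.querySelector('#track')

// The AudioContext argument is optional; one is created for you when omitted.
var analyser = createAnalyser(gl, audio)

audio.play()
```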
Once created, you should then bind the waveform and/or frequency textures once per frame:
Uploads the audio's waveform data to a specific texture `index`, which defaults to 0, returning the bound GL texture index.

Uploads the audio's frequency data to a specific texture `index`, which defaults to 0, returning the bound GL texture index.
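
For example, a render loop might bind the waveform once per frame and point the shader's sampler at the returned texture index. This is only a sketch: the method names `bindWaveform`/`bindFrequencies` and the `program` variable (your compiled WebGL program) are assumptions here.

```js
// Assumes `analyser` from the snippet above and `program`, a compiled/linked
// WebGLProgram whose fragment shader declares `uniform sampler2D audioTexture`.
var audioTextureLocation = gl.getUniformLocation(program, 'audioTexture')

function render () {
  // Upload this frame's waveform data to texture unit 0; the call returns
  // the texture index it bound (0 here). bindFrequencies works the same way
  // for frequency data.
  var textureIndex = analyser.bindWaveform(0)

  // Point the sampler2D uniform at that texture unit, then draw.
  gl.useProgram(program)
  gl.uniform1i(audioTextureLocation, textureIndex)
  // ... issue your draw call here ...

  window.requestAnimationFrame(render)
}

window.requestAnimationFrame(render)
```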
You can then read the audio data from a texture. For your convenience, this module can also be used with glslify, though the GLSL module source is relatively short. See `demo/index.frag` for an example:
```glsl
precision mediump float; // fragment shaders need a default float precision

uniform sampler2D audioTexture;

#pragma glslify: analyse = require(gl-audio-analyser)

void main() {
  float amplitude = analyse(audioTexture);
  gl_FragColor = vec4(vec3(amplitude), 1.0);
}
```
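
On the JavaScript side, that `#pragma` is typically resolved at build time by running browserify with the glslify transform. A sketch, assuming the fragment shader lives next to your entry file as `index.frag`:

```js
var glslify = require('glslify')

// With the glslify browserify transform enabled, this call is replaced at
// build time with the bundled shader source, #pragma requires resolved.
// The './index.frag' path is just an assumption about where the file lives.
var fragSrc = glslify.file('./index.frag')
```

From there you can compile `fragSrc` with your usual shader setup, for example gl-shader or raw `gl.createShader` calls.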
See stackgl/contributing for details.
MIT. See LICENSE.md for details.