Uncanny! I was literally just about to post this here on HN ;)
timdaub, do you know if it's possible to generate the sound file in a second thread and store the blob in a SharedArrayBuffer? I'm planning to run my own experiments, but I was wondering if you've tried anything like what I'm describing for real-time sound synthesis? Thanks ;)
Glad that you're interested! Actually, I've never done anything with real-time sound synthesis recording myself.

To your question about recording the audio or storing it as a blob:
WASM-Synth continuously calls the audio generation code within the AudioWorklet [1].
To keep it simple, you could just send each `sample` to the main thread and store it there somehow. There are many examples of how to pass data between the AudioWorklet and the main thread [2].
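On your SharedArrayBuffer idea: one common pattern is a lock-free ring buffer over a SharedArrayBuffer, so the audio thread can hand samples to another thread without going through `postMessage` on every block. This is just a minimal sketch of that idea, not code from WASM-Synth — the `RingBuffer` class and all names in it are my own illustration:

```javascript
// Minimal single-producer/single-consumer ring buffer over a
// SharedArrayBuffer. One thread (e.g. an AudioWorkletProcessor)
// pushes samples; another thread pops them for recording.
// Illustrative sketch only — not from WASM-Synth.
class RingBuffer {
  constructor(sab, capacity) {
    // Two Int32 slots at the front hold the read/write counters,
    // accessed with Atomics so both threads see consistent values.
    this.indices = new Int32Array(sab, 0, 2); // [0] = read, [1] = write
    this.samples = new Float32Array(sab, 8, capacity);
    this.capacity = capacity;
  }

  // How many bytes the SharedArrayBuffer must have for `capacity` samples.
  static bytesNeeded(capacity) {
    return 8 + capacity * Float32Array.BYTES_PER_ELEMENT;
  }

  // Append as many of `values` as fit; returns the number written.
  push(values) {
    const write = Atomics.load(this.indices, 1);
    const read = Atomics.load(this.indices, 0);
    const free = this.capacity - (write - read);
    const n = Math.min(values.length, free);
    for (let i = 0; i < n; i++) {
      this.samples[(write + i) % this.capacity] = values[i];
    }
    Atomics.store(this.indices, 1, write + n);
    return n;
  }

  // Drain up to `out.length` samples into `out`; returns the number read.
  pop(out) {
    const read = Atomics.load(this.indices, 0);
    const write = Atomics.load(this.indices, 1);
    const n = Math.min(out.length, write - read);
    for (let i = 0; i < n; i++) {
      out[i] = this.samples[(read + i) % this.capacity];
    }
    Atomics.store(this.indices, 0, read + n);
    return n;
  }
}

// Usage: both threads construct a RingBuffer over the same SharedArrayBuffer.
const sab = new SharedArrayBuffer(RingBuffer.bytesNeeded(1024));
const rb = new RingBuffer(sab, 1024);
rb.push(Float32Array.from([0.1, 0.2, 0.3])); // producer side
const out = new Float32Array(3);
rb.pop(out); // consumer side: out now holds the three samples
```

Note the counters grow monotonically, so a real implementation would wrap them; for a sketch this keeps the index math easy to read.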
WASM-Synth is open source [3]. I'd be happy to merge a PR that integrates a "recording" functionality! Hit me up on GitHub or by email; you can find my contact details on my website [4].
One specialty of the UI components is that they're interactive SVGs. Especially for building the ADSR graph, I found working with an approximation function that "drew itself" much easier than working with canvas or WebGL.
But I do have plans to build a wavetable view (similar to the one in Ableton Live) in WebGL.
I also wrote a long blog post about my journey building it here: https://timdaub.github.io/2020/02/19/wasm-synth/
Have fun jamming :)