Loop Supreme, part 5: Record and loop a track

2022-11-12

This is part 5 in a series about building a browser-based live looper.

Goal

Be able to record audio from my local “media” (i.e. microphone or other input device), then loop the playback infinitely along with the beat of the metronome.

Also clean up some styling to keep things looking semi-consistent. I’m planning a larger styling refactor later, once more of the essential functionality works as expected, but it feels nice to work on something that doesn’t look like total garbage.

Implementation

This was the most challenging task I’ve completed thus far (and I don’t think it’s getting easier anytime soon!). It required reading a lot of documentation and piecing together parts that the docs never explicitly cover.

The easiest part of this implementation was getting access to the user’s media devices. This is done with the navigator.mediaDevices.getUserMedia() method, which prompts the user for microphone access. Assuming the user grants permission, the returned Promise resolves with a MediaStream.
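
A minimal sketch of that first step (the helper name is mine, and the bare `{ audio: true }` constraint is just the simplest option):

```ts
// Sketch: request microphone access. The helper name and the bare
// { audio: true } constraint are illustrative, not the app's exact code.
async function getMicrophoneStream(): Promise<MediaStream> {
  // Prompts the user for permission; the Promise rejects if they deny it
  return navigator.mediaDevices.getUserMedia({ audio: true });
}
```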

Eventually I learned that the MediaStream needs to be attached to a MediaRecorder to actually capture the audio. But here is where things got weird: it turns out the only way to capture the data from a MediaRecorder is to listen for its dataavailable event and push each event’s data (a Blob) onto an array.
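
Roughly, the recording side ends up looking like this (a sketch; the `chunks` array and helper name are illustrative):

```ts
// Sketch: collect the recorder's output as Blob chunks via the
// dataavailable event. Names here are illustrative.
const chunks: Blob[] = [];

function startRecording(stream: MediaStream): MediaRecorder {
  const recorder = new MediaRecorder(stream);
  // Fires periodically while recording, and one final time on stop
  recorder.addEventListener('dataavailable', (event: BlobEvent) => {
    chunks.push(event.data);
  });
  recorder.start();
  return recorder;
}
```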

The purpose of this app, of course, is to loop the audio indefinitely. For that I was pretty confident I needed an AudioBufferSourceNode, which supports a loop property and seemed perfect for this case. However, converting an array of Blobs into an AudioBuffer is not a straightforward process! It requires a slightly odd conversion chain, and the main problem I ran into is that there was no definitive documentation for this step.

The BaseAudioContext.decodeAudioData documentation assumes you are fetching the data over an HTTP request, in which case it naturally arrives as an ArrayBuffer. But for audio recorded locally, it was not intuitive to me that I had to concatenate the array of Blobs into a single Blob, get the underlying ArrayBuffer from that Blob, and then pass it to decodeAudioData. It felt like a weird pattern, but once I hooked it up it seemed to work fine.
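
Put together, the chain looks roughly like this (a sketch that assumes an AudioContext already exists; the helper name is illustrative):

```ts
// Sketch of the conversion chain: Blob[] -> Blob -> ArrayBuffer ->
// AudioBuffer -> looping AudioBufferSourceNode. Assumes `audioContext`
// is an AudioContext the app already created.
async function loopRecording(
  audioContext: AudioContext,
  chunks: Blob[]
): Promise<AudioBufferSourceNode> {
  // Concatenate the recorded chunks into a single Blob...
  const blob = new Blob(chunks);
  // ...get its underlying bytes...
  const arrayBuffer = await blob.arrayBuffer();
  // ...and decode those bytes into an AudioBuffer
  const audioBuffer = await audioContext.decodeAudioData(arrayBuffer);

  // An AudioBufferSourceNode with loop = true plays back indefinitely
  const source = audioContext.createBufferSource();
  source.buffer = audioBuffer;
  source.loop = true;
  source.connect(audioContext.destination);
  source.start();
  return source;
}
```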

Learnings (and issues)

State of the app

Recording a track

Next steps

  1. I need to make the clock more resilient. More research is needed on exactly how to achieve this.
  2. I’m not convinced that doing this inside a React app is the right approach. So many of the Web Audio API interfaces require mutability that doesn’t feel natural to plug into React. I’ll be thinking carefully about this in the coming sessions and considering a framework-less approach. I’m reluctant to go that route, since it would mean redoing the dev/build pipeline and rewriting a bunch of code, but it might be necessary.
  3. Improve the recording quality. I don’t know why so many samples were dropped, but the current recorder is straight garbage.

Time log