 

WebAudioApi StreamSource

I'd like to use the Web Audio API with streams. Pre-listening is very important and isn't feasible when I have to wait for each audio file to finish downloading.

Downloading the entire audio file is not what I want, but it's the only way I can get it to work at the moment:

var audioCtx = new AudioContext();
var request = new XMLHttpRequest();
request.open('GET', src, true);
request.responseType = 'arraybuffer';
request.onload = function() {
  var audioData = request.response;
  // audioData is the entire downloaded audio file, which decodeAudioData requires anyway
  audioCtx.decodeAudioData(audioData, function(buffer) {
    var source = audioCtx.createBufferSource();
    source.buffer = buffer;
    source.connect(audioCtx.destination);
    source.loop = true;
    source.start();
  }, function(e) {
    console.error('Error with decoding audio data', e);
  });
};
request.send();

I found a way to use a stream when requesting it from navigator.mediaDevices:

navigator.mediaDevices.getUserMedia({ audio: true, video: true })
  .then(function(stream) {
    var audioCtx = new AudioContext();
    var source = audioCtx.createMediaStreamSource(stream);
    // a MediaStreamAudioSourceNode has no play()/start(); it just feeds the graph
    source.connect(audioCtx.destination);
  });

Is it possible to use XHR (or fetch) instead of navigator.mediaDevices to get the stream?

 //fetch does accept a Range header, though seeking within a stream still seems tricky
 fetch(src).then(response => {
    const reader = response.body.getReader();
    //a ReadableStream is not a MediaStream...
    const stream = new ReadableStream({...})
    var audioCtx = new AudioContext();
    //...so this call throws: createMediaStreamSource expects a MediaStream
    var source = audioCtx.createMediaStreamSource(stream);
 });

It doesn't work, because createMediaStreamSource expects a MediaStream, not a ReadableStream.

My first step is to reproduce the functionality of the HTML audio element, including seeking. Is there any way to get an XHR stream and feed it into an AudioContext?
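One route I've been considering is to combine fetch with Media Source Extensions: feed the response chunks into a SourceBuffer backing an audio element, then tap the element into the AudioContext with createMediaElementSource(). This is an untested sketch; the function name is my own, and it assumes the server returns a container/codec MediaSource supports (e.g. 'audio/mpeg'):

```javascript
// Sketch: stream a URL into an <audio> element via Media Source Extensions,
// then route the element through the AudioContext graph.
// mimeType must be a type MediaSource.isTypeSupported() accepts.
function streamIntoAudioContext(src, mimeType) {
  var audioCtx = new AudioContext();
  var audio = new Audio();
  var mediaSource = new MediaSource();
  audio.src = URL.createObjectURL(mediaSource);

  mediaSource.addEventListener('sourceopen', function() {
    var sourceBuffer = mediaSource.addSourceBuffer(mimeType);
    fetch(src).then(function(response) {
      var reader = response.body.getReader();
      function pump() {
        reader.read().then(function(result) {
          if (result.done) {
            mediaSource.endOfStream();
            return;
          }
          // append one chunk, then wait for the buffer before reading the next
          sourceBuffer.appendBuffer(result.value);
          sourceBuffer.addEventListener('updateend', pump, { once: true });
        });
      }
      pump();
    });
  });

  var node = audioCtx.createMediaElementSource(audio);
  node.connect(audioCtx.destination);
  audio.play(); // playback starts as soon as enough data is buffered
  return { audioCtx: audioCtx, audio: audio, node: node };
}
```

Seeking would then come for free via audio.currentTime, since the element manages its own buffering.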

The final goal is to create a single-track audio editor with fades, cutting, pre-listening, mixing and export functionality.

EDIT:

Another attempt was to use an HTML audio element and create a source node from it:

var audio = new Audio();
audio.src = src;
var source = audioCtx.createMediaElementSource(audio);
source.connect(audioCtx.destination);
//a MediaElementAudioSourceNode has no start() method;
//playback is controlled by the media element, not the context's scheduler
source.mediaElement.play();

The audio element supports streaming, but it cannot be handled by the context's scheduler. That matters for an audio editor with pre-listening functionality.

It would be great to point a standard source node's buffer at the audio element's internal buffer, but I couldn't find a way to connect them.
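For the scheduler side of the editor, my current plan is to compute explicit start() times for each decoded chunk so the context can play them back-to-back. A small pure helper (the name and shape are my own) could look like:

```javascript
// Given the sample counts of decoded chunks and the context sample rate,
// compute the AudioContext time at which each chunk's
// AudioBufferSourceNode.start() should fire for gapless playback.
// startAt is the context time of the first chunk (e.g. audioCtx.currentTime).
function chunkStartTimes(chunkSampleCounts, sampleRate, startAt) {
  var times = [];
  var t = startAt;
  for (var i = 0; i < chunkSampleCounts.length; i++) {
    times.push(t);
    t += chunkSampleCounts[i] / sampleRate;
  }
  return times;
}

// Example: three chunks of 44100 samples at 44.1 kHz, starting at t=0
// → [0, 1, 2]
```

Scheduling against audioCtx.currentTime this way is what the internal scheduler gives BufferSourceNodes but the media element lacks.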

marcel asked Apr 22 '26 01:04

1 Answer

I experienced this problem before and have been working on a demo solution below to stream audio in chunks with the Streams API. Seeking is not currently implemented, but it could be derived. Because bypassing decodeAudioData() is currently required, custom decoders must be provided that allow for chunk-based decoding:

https://github.com/AnthumChris/fetch-stream-audio
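Once a custom decoder yields raw PCM chunks, scheduling them gaplessly on the context is straightforward. A rough sketch (this is not the repo's actual code; it assumes each chunk is a Float32Array of mono samples):

```javascript
// Sketch: play decoded PCM chunks back-to-back on an AudioContext.
// Each chunk is assumed to be a Float32Array of mono samples produced
// by a custom chunk-based decoder.
function schedulePcmChunks(audioCtx, pcmChunks, sampleRate) {
  var t = audioCtx.currentTime;
  pcmChunks.forEach(function(samples) {
    var buffer = audioCtx.createBuffer(1, samples.length, sampleRate);
    buffer.copyToChannel(samples, 0);
    var node = audioCtx.createBufferSource();
    node.buffer = buffer;
    node.connect(audioCtx.destination);
    node.start(t); // precise scheduling: no gaps between chunks
    t += samples.length / sampleRate;
  });
  return t; // context time at which the last chunk ends
}
```

Because each chunk becomes its own AudioBufferSourceNode with an exact start time, playback can begin after the first chunk decodes instead of after the whole file downloads.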

AnthumChris answered Apr 23 '26 13:04

