Getting L/R data with AnalyserNode and ChannelSplitter

I've been stuck on this all day. I'm trying to split the source from getUserMedia and visualize the left and right channels separately. No matter what I do, each visualiser is stuck in mono. The source I'm using is stereo (if I listen to it in Windows, it's clearly stereo). The minimum code required to replicate is below.

        var audioContext = new AudioContext();

        navigator.getUserMedia({audio: true}, analyse, function(e) {
            alert('Error getting audio');
            console.log(e);
        });

        function analyse(stream){
            window.stream = stream;

            // Build the graph: source -> splitter -> one analyser per channel
            var input = audioContext.createMediaStreamSource(stream),
                splitter = audioContext.createChannelSplitter(2),
                lAnalyser = audioContext.createAnalyser(),
                rAnalyser = audioContext.createAnalyser();
            input.connect(splitter);
            splitter.connect(lAnalyser, 0, 0); // splitter output 0 = left
            splitter.connect(rAnalyser, 1, 0); // splitter output 1 = right
            var lArray = new Uint8Array(lAnalyser.frequencyBinCount),
                rArray = new Uint8Array(rAnalyser.frequencyBinCount);
            updateAnalyser();
            function updateAnalyser(){
                requestAnimationFrame(updateAnalyser);
                lAnalyser.getByteFrequencyData(lArray);
                rAnalyser.getByteFrequencyData(rArray);
            }
        }

lArray and rArray are always identical, even if I mute the left or right channel. Am I doing something wrong? I've also tried input -> splitter -> left merger / right merger -> left analyser / right analyser.

http://www.smartjava.org/content/exploring-html5-web-audio-visualizing-sound is the closest example I can find, but it doesn't use user input and deals with audio buffers instead.
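As a sanity check of the splitter/analyser wiring itself, here is a minimal sketch (not from the original post) that feeds the same graph a synthetic stereo signal: an oscillator panned hard left with a StereoPannerNode. The oscillator/panner names are illustrative and not part of the original graph.

        // Sanity check: synthetic stereo source -> splitter -> two analysers.
        // If the wiring is correct, the left analyser should show energy
        // while the right one stays near zero.
        var ctx = new AudioContext(); // may need ctx.resume() after a user gesture in modern browsers
        var osc = ctx.createOscillator();
        var panner = ctx.createStereoPanner();
        panner.pan.value = -1; // pan hard left

        var splitter = ctx.createChannelSplitter(2);
        var lAnalyser = ctx.createAnalyser();
        var rAnalyser = ctx.createAnalyser();

        osc.connect(panner);
        panner.connect(splitter);
        splitter.connect(lAnalyser, 0, 0); // output 0 = left channel
        splitter.connect(rAnalyser, 1, 0); // output 1 = right channel
        osc.start();

        var lData = new Uint8Array(lAnalyser.frequencyBinCount);
        var rData = new Uint8Array(rAnalyser.frequencyBinCount);
        setTimeout(function () {
            lAnalyser.getByteFrequencyData(lData);
            rAnalyser.getByteFrequencyData(rData);
            console.log('left max:', Math.max.apply(null, lData));
            console.log('right max:', Math.max.apply(null, rData));
        }, 500);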

Asked by Shadaez


1 Answer

According to https://code.google.com/p/chromium/issues/detail?id=387737

This behaviour is expected. In M37, we moved the audio processing from the peer connection to getUserMedia, and the audio processing is turned on by default unless you specify "echoCancellation: false" in the getUserMedia constraints. Since the audio processing only supports mono, we have to down-sample the audio to mono before passing the data on for processing.

If you want to avoid the down-sampling, pass a constraint to getUserMedia, for example: var constraints = {audio: { mandatory: { echoCancellation: false, googAudioMirroring: true } }}; getUserMedia(constraints, gotStream, gotStreamFailed);

Setting the constraints to {audio: { mandatory: { echoCancellation: false } } } stops the input being downmixed.
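A minimal sketch of the fix applied to the question's graph, using the legacy Chrome "mandatory" constraint syntax quoted above (the callback names are illustrative):

        // Disable Chrome's built-in audio processing so the stream stays stereo.
        var constraints = { audio: { mandatory: { echoCancellation: false } } };

        navigator.getUserMedia(constraints, function (stream) {
            var audioContext = new AudioContext();
            var input = audioContext.createMediaStreamSource(stream);
            var splitter = audioContext.createChannelSplitter(2);
            var lAnalyser = audioContext.createAnalyser();
            var rAnalyser = audioContext.createAnalyser();

            input.connect(splitter);
            splitter.connect(lAnalyser, 0, 0); // left channel
            splitter.connect(rAnalyser, 1, 0); // right channel
            // lAnalyser and rAnalyser now report independent L and R data.
        }, function (e) {
            console.log('Error getting audio', e);
        });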

Answered by Shadaez


