I'm investigating the aurioTouch2 sample code, and I noticed that when we analyze the audio data, we only ever use the first buffer of that data and never the other buffers.
In the function void FFTBufferManager::GrabAudioData(AudioBufferList *inBL):

    UInt32 bytesToCopy = min(inBL->mBuffers[0].mDataByteSize, mAudioBufferSize - mAudioBufferCurrentIndex * sizeof(Float32));
    memcpy(mAudioBuffer + mAudioBufferCurrentIndex, inBL->mBuffers[0].mData, bytesToCopy);
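For comparison, here is a minimal sketch (not the sample's actual code) of how GrabAudioData could fold every buffer of the AudioBufferList into the analysis buffer instead of reading only mBuffers[0]. The function name GrabAllChannels and the mono-averaging loop are my assumptions; the buffers are assumed to hold deinterleaved Float32 samples, as the sizeof(Float32) arithmetic in the sample suggests.

    #include <AudioToolbox/AudioToolbox.h>
    #include <algorithm>

    // Hypothetical variant of GrabAudioData: averages the same frame across
    // every channel buffer instead of copying only mBuffers[0].
    static void GrabAllChannels(AudioBufferList *inBL, Float32 *audioBuffer,
                                UInt32 audioBufferSize, UInt32 currentIndex)
    {
        UInt32 bytesToCopy = std::min(inBL->mBuffers[0].mDataByteSize,
                                      audioBufferSize - currentIndex * (UInt32)sizeof(Float32));
        UInt32 framesToCopy = bytesToCopy / sizeof(Float32);
        Float32 *dst = audioBuffer + currentIndex;

        for (UInt32 frame = 0; frame < framesToCopy; ++frame) {
            Float32 sum = 0.0f;
            for (UInt32 ch = 0; ch < inBL->mNumberBuffers; ++ch)
                sum += ((Float32 *)inBL->mBuffers[ch].mData)[frame];
            dst[frame] = sum / (Float32)inBL->mNumberBuffers; // mono mixdown
        }
    }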
In the function PerformThru:

    static OSStatus PerformThru(void *inRefCon,
                                AudioUnitRenderActionFlags *ioActionFlags,
                                const AudioTimeStamp *inTimeStamp,
                                UInt32 inBusNumber,
                                UInt32 inNumberFrames,
                                AudioBufferList *ioData)

    if (THIS->displayMode == aurioTouchDisplayModeOscilloscopeWaveform)
    {
        AudioConverterConvertComplexBuffer(THIS->audioConverter, inNumberFrames, ioData, THIS->drawABL);
        SInt8 *data_ptr = (SInt8 *)(THIS->drawABL->mBuffers[0].mData);
    }
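If you want to see what actually arrives in each buffer, a quick way is to log the AudioBufferList from inside PerformThru. The helper dumpBufferList below is hypothetical, and printing from a render callback is unsafe for production audio, so treat it as one-off debugging only.

    #include <AudioToolbox/AudioToolbox.h>
    #include <cstdio>

    // Hypothetical debug helper: prints the layout of an AudioBufferList.
    // A two-buffer list with 1 channel per buffer means deinterleaved stereo,
    // i.e. mBuffers[0] = left, mBuffers[1] = right.
    static void dumpBufferList(const AudioBufferList *abl)
    {
        printf("mNumberBuffers = %u\n", (unsigned)abl->mNumberBuffers);
        for (UInt32 i = 0; i < abl->mNumberBuffers; ++i) {
            const AudioBuffer &b = abl->mBuffers[i];
            printf("  buffer %u: %u channel(s), %u bytes, data = %p\n",
                   (unsigned)i, (unsigned)b.mNumberChannels,
                   (unsigned)b.mDataByteSize, b.mData);
        }
    }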
The question is: why do we ignore the data in inBL->mBuffers[1].mData?
Because the iPhone has only one microphone, the samples in the two stereo channel buffers (left and right) are identical. The second buffer is therefore redundant (or, in some configurations, empty), so that data doesn't need to be analyzed again.
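You can verify this claim directly. The check below is my own sketch, not part of aurioTouch: it compares the first two buffers of a deinterleaved stereo AudioBufferList byte for byte, and on a single-mic device it should report them identical.

    #include <AudioToolbox/AudioToolbox.h>
    #include <cstring>

    // Hypothetical check: returns true if the L and R channel buffers of a
    // deinterleaved stereo AudioBufferList carry the same samples.
    static bool channelsIdentical(const AudioBufferList *abl)
    {
        if (abl->mNumberBuffers < 2)
            return true; // mono: nothing to compare
        const AudioBuffer &l = abl->mBuffers[0];
        const AudioBuffer &r = abl->mBuffers[1];
        return l.mDataByteSize == r.mDataByteSize &&
               memcmp(l.mData, r.mData, l.mDataByteSize) == 0;
    }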