In the aurioTouch sample app the RemoteIO audio unit is configured for 2-channel non-interleaved LPCM in the 8.24 fixed-point format. This is the preferred format on the iOS platform, and I assume that's what the hardware ADC is emitting. They even made a comment about this (source):
// set our required format - Canonical AU format: LPCM non-interleaved 8.24 fixed point
outFormat.SetAUCanonical(2, false);
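For context, that format then gets applied to the RemoteIO unit with AudioUnitSetProperty. A rough sketch of that step (the function name, the scope/element choices, and the hard-coded sample rate are my assumptions about the aurioTouch setup, not quoted from it):

    #include <AudioUnit/AudioUnit.h>
    #include "CAStreamBasicDescription.h"

    // Sketch: apply the canonical format to both sides of an
    // already-created RemoteIO AudioUnit.
    static OSStatus SetCanonicalFormat(AudioUnit rioUnit)
    {
        CAStreamBasicDescription outFormat;
        outFormat.SetAUCanonical(2, false);
        outFormat.mSampleRate = 44100.0; // assumed; aurioTouch uses the hardware rate

        // Output scope of element 1: the format mic data is delivered in.
        OSStatus err = AudioUnitSetProperty(rioUnit, kAudioUnitProperty_StreamFormat,
                                            kAudioUnitScope_Output, 1,
                                            &outFormat, sizeof(outFormat));
        if (err) return err;
        // Input scope of element 0: the format the speaker side consumes.
        return AudioUnitSetProperty(rioUnit, kAudioUnitProperty_StreamFormat,
                                    kAudioUnitScope_Input, 0,
                                    &outFormat, sizeof(outFormat));
    }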
So I would expect that when the application later receives an audio buffer it will have data for two channels packed in its mData member in some order. Something like this:
mData = [L1, L2, L3, L4, R1, R2, R3, R4];
Where L and R represent data from the left and right channels of a stereo microphone. Only it seems that cannot be the case, because SetAUCanonical() doesn't set up enough memory to hold the additional channel:
void SetAUCanonical(UInt32 nChannels, bool interleaved)
{
    mFormatID = kAudioFormatLinearPCM;
#if CA_PREFER_FIXED_POINT
    mFormatFlags = kAudioFormatFlagsCanonical | (kAudioUnitSampleFractionBits << kLinearPCMFormatFlagsSampleFractionShift);
#else
    mFormatFlags = kAudioFormatFlagsCanonical;
#endif
    mChannelsPerFrame = nChannels;
    mFramesPerPacket = 1;
    mBitsPerChannel = 8 * sizeof(AudioUnitSampleType);
    if (interleaved)
        mBytesPerPacket = mBytesPerFrame = nChannels * sizeof(AudioUnitSampleType);
    else {
        mBytesPerPacket = mBytesPerFrame = sizeof(AudioUnitSampleType);
        mFormatFlags |= kAudioFormatFlagIsNonInterleaved;
    }
}
If interleaved is false, it doesn't multiply mBytesPerPacket and mBytesPerFrame by the number of channels, so there won't be enough bytes in a frame to store the extra channel.
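To make the sizes concrete, here is a tiny sketch of what the two variants produce (assuming sizeof(AudioUnitSampleType) is 4, as it is for the 8.24 fixed-point type):

    #include <cstdio>
    #include "CAStreamBasicDescription.h"

    int main()
    {
        CAStreamBasicDescription interleaved, nonInterleaved;
        interleaved.SetAUCanonical(2, true);
        nonInterleaved.SetAUCanonical(2, false);

        // 2 * sizeof(AudioUnitSampleType) = 8 bytes per frame.
        printf("interleaved:     %u bytes/frame, %u channels\n",
               (unsigned)interleaved.mBytesPerFrame,
               (unsigned)interleaved.mChannelsPerFrame);

        // sizeof(AudioUnitSampleType) = 4 bytes per frame, yet still 2 channels.
        printf("non-interleaved: %u bytes/frame, %u channels\n",
               (unsigned)nonInterleaved.mBytesPerFrame,
               (unsigned)nonInterleaved.mChannelsPerFrame);
        return 0;
    }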
So is the sample code just slightly misleading when it asks for 2 channels? Should it be asking for 1 channel instead, since that's what it's going to get back anyway:
outFormat.SetAUCanonical(1, false);
Can I just 'fix' SetAUCanonical() like this to make things clearer?
mChannelsPerFrame = nChannels;
if (!interleaved) {
    mChannelsPerFrame = 1;
    mFormatFlags |= kAudioFormatFlagIsNonInterleaved;
}
mFramesPerPacket = 1;
mBitsPerChannel = 8 * sizeof(AudioUnitSampleType);
mBytesPerPacket = mBytesPerFrame = nChannels * sizeof(AudioUnitSampleType);
Or is there some other reason why you would ask for 2 channels? I don't even think the built-in microphone is a stereo mic.
The built-in mic and headset mic input are both mono.
The Camera Connection Kit may have allowed stereo audio input from some USB mics on some newer iOS devices running certain previous OS versions, but I haven't seen any reports of this working with the current OS release.
Also, check whether a 2-channel (stereo) non-interleaved format might return 2 buffers to the RemoteIO callback, instead of concatenated data in 1 buffer.
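Something like this inside the render callback would show the actual layout (this follows the aurioTouch pattern of pulling input from bus 1; the callback and variable names are illustrative, not from the sample):

    #include <AudioUnit/AudioUnit.h>
    #include <cstdio>

    // Sketch: log how many buffers arrive, assuming the RemoteIO unit was
    // passed as inRefCon when the render callback was registered.
    static OSStatus InspectBuffers(void *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp *inTimeStamp,
                                   UInt32 inBusNumber,
                                   UInt32 inNumberFrames,
                                   AudioBufferList *ioData)
    {
        AudioUnit rioUnit = (AudioUnit)inRefCon;
        OSStatus err = AudioUnitRender(rioUnit, ioActionFlags, inTimeStamp,
                                       1 /* input bus */, inNumberFrames, ioData);
        if (err) return err;

        // For 2-channel non-interleaved LPCM, expect mNumberBuffers == 2:
        // one AudioBuffer per channel rather than one concatenated buffer.
        printf("buffers: %u\n", (unsigned)ioData->mNumberBuffers);
        for (UInt32 i = 0; i < ioData->mNumberBuffers; ++i) {
            const AudioBuffer &b = ioData->mBuffers[i];
            printf("  buffer %u: %u channel(s), %u bytes\n",
                   (unsigned)i, (unsigned)b.mNumberChannels, (unsigned)b.mDataByteSize);
        }
        return noErr;
    }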
I think you're confusing "interleaved" vs. "non-interleaved" with how Core Audio gives you that data in AudioBufferLists (ABLs). SetAUCanonical() is doing the right thing. An ABL has a variable-length array of buffers, and in the non-interleaved case each buffer holds the data for only a single channel. That's why mBytesPerPacket and mBytesPerFrame aren't multiplied by the channel count: they describe one buffer, not all channels of a frame combined.
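A minimal sketch of walking such a list (the function name is mine, not from the sample; on iOS, AudioUnitSampleType is the 32-bit 8.24 fixed-point type):

    #include <AudioUnit/AudioUnit.h>

    // Sketch: in a non-interleaved stereo ABL, buffer 0 holds the left channel
    // and buffer 1 the right, each as a contiguous run of 8.24 samples.
    static void ProcessNonInterleaved(const AudioBufferList *abl, UInt32 inNumberFrames)
    {
        for (UInt32 ch = 0; ch < abl->mNumberBuffers; ++ch) {
            const AudioUnitSampleType *samples =
                (const AudioUnitSampleType *)abl->mBuffers[ch].mData;
            for (UInt32 f = 0; f < inNumberFrames; ++f) {
                AudioUnitSampleType s = samples[f]; // sample f of channel ch
                (void)s; // ... process here ...
            }
        }
    }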