iOS has a range of audio frameworks, from high-level ones that let you simply play a specified file, to low-level ones that give you access to the raw PCM data, and everything in between. For our app we only need to play external files (WAV, AIFF, MP3), but we need to do so in response to a button press, and we need the latency to be as small as possible. (It's for cueing in live productions.)
Now, AVAudioPlayer and the like work for playing simple file assets (via their URLs), but the latency before the sound actually starts is too great. With files longer than about five minutes, the delay before playback begins can exceed a second, which renders it all but useless for timing in a live performance.
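(For anyone with less demanding timing needs: much of AVAudioPlayer's start-up cost can be paid ahead of time with prepareToPlay(), so that only the actual start remains at trigger time. A minimal sketch; the file name is a placeholder:)

```swift
import AVFoundation

// Sketch: pre-buffer up front so the button press itself does minimal work.
// "MyBeddingTrack.mp3" is a placeholder for a bundled asset.
let url = Bundle.main.url(forResource: "MyBeddingTrack", withExtension: "mp3")!
let player = try! AVAudioPlayer(contentsOf: url)
player.prepareToPlay()   // loads the audio hardware and fills buffers early

// Later, on the button press:
player.play()
```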
Now, I know something like OpenAL can be used for very-low-latency playback, but then you're waist-deep in audio buffers, audio sources, listeners, and so on.
That said, does anyone know of any frameworks that work at a higher level (i.e., play 'MyBeddingTrack.mp3') with very low latency? Pre-buffering is fine; it's just the trigger that has to be fast.
Bonus points if we can do things like set the start and end points of playback within the file, change the volume, or even perform ducking.
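(Some of these "bonus" items are already covered by AVAudioPlayer's stock API. A hedged sketch; `player` is assumed to be an already-loaded AVAudioPlayer, and the numbers are arbitrary:)

```swift
import AVFoundation

// Assumes `player` is an AVAudioPlayer already loaded with the track.
func applyCueSettings(to player: AVAudioPlayer) {
    player.currentTime = 12.5                 // start point within the file, in seconds
    player.volume = 0.8                       // overall level
    player.setVolume(0.2, fadeDuration: 1.0)  // crude ducking: fade to 20% over 1 s
    // There is no built-in end point: stopping at a given offset needs a
    // timer, or a lower-level API that schedules playback regions explicitly.
}
```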
Although the Audio Queue framework is relatively easy to use, it packs a lot of DSP heavy lifting behind the scenes. For example, if you supply it with VBR/compressed audio, it automatically converts it to PCM before playing it on the speaker, and it also handles a lot of threading issues opaquely for you. That's good news for someone writing a lightweight, non-real-time application.
You mentioned that you need it for cueing in live productions. I'm not sure whether that means your app is real-time, because if it is, Audio Queues will struggle to meet your needs. A good article to read about this is Ross Bencina's. The takeaway is that you can't afford to let third-party frameworks or libraries do anything potentially expensive behind the scenes, like thread locking or allocating/freeing memory; that's simply too expensive and risky when developing real-time audio apps.
That's where the Audio Unit framework comes in. Audio Queues are actually built on top of Audio Units (which automate much of their work), but Audio Units bring you as close to the metal as it gets on iOS. They're as responsive as you want them to be and can easily handle a real-time app. Audio Units have a huge learning curve, though. There are some open-source wrappers that simplify them (see novocaine).
If I were you, I'd at least skim through Learning Core Audio; it's the go-to book for any iOS Core Audio developer. It covers Audio Queues, Audio Units, and more in detail, and has excellent code examples.
From my own experience: I worked on a real-time audio app with some intensive audio requirements. I found the Audio Queue framework and thought it was too good to be true. My app worked when I prototyped it under light constraints, but it simply choked under stress testing. That's when I had to dive deep into Audio Units and change the architecture (it wasn't pretty). My advice: work with Audio Queues at least as an introduction to Audio Units. Stick with them if they meet your needs, but don't be afraid to move to Audio Units if it becomes clear that Audio Queues no longer meet your app's demands.
The lowest latency you can get is with Audio Units, specifically the RemoteIO unit.
Remote I/O Unit
The Remote I/O unit (subtype kAudioUnitSubType_RemoteIO) connects to device hardware for input, output, or simultaneous input and output. Use it for playback, recording, or low-latency simultaneous input and output where echo cancelation is not needed.
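To make that concrete, here is a hedged sketch of instantiating the Remote I/O unit and attaching a render callback; stream-format setup and the actual sample-copying are omitted, and the helper name is my own:

```swift
import AudioToolbox

// Render callback: runs on the real-time audio thread. Copy pre-buffered
// PCM samples into ioData here -- no locks, no allocation, no file I/O.
let renderCallback: AURenderCallback = { _, _, _, _, inNumberFrames, ioData in
    // (fill ioData with inNumberFrames frames of audio)
    return noErr
}

func makeRemoteIOUnit() -> AudioUnit? {
    var desc = AudioComponentDescription(
        componentType: kAudioUnitType_Output,
        componentSubType: kAudioUnitSubType_RemoteIO,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0,
        componentFlagsMask: 0)
    guard let component = AudioComponentFindNext(nil, &desc) else { return nil }

    var unit: AudioUnit?
    guard AudioComponentInstanceNew(component, &unit) == noErr,
          let unit = unit else { return nil }

    var callback = AURenderCallbackStruct(inputProc: renderCallback,
                                          inputProcRefCon: nil)
    AudioUnitSetProperty(unit, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0,   // bus 0 = output
                         &callback,
                         UInt32(MemoryLayout<AURenderCallbackStruct>.size))

    guard AudioUnitInitialize(unit) == noErr else { return nil }
    AudioOutputUnitStart(unit)
    return unit
}
```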
Take a look at these tutorials:
http://atastypixel.com/blog/using-remoteio-audio-unit/
http://atastypixel.com/blog/playing-audio-in-time-using-remote-io/