I have a large project built with The Amazing Audio Engine 2. I have struggled to get Inter-App Audio integrated and would like to migrate to AudioKit 3.
By "struggled" I mean it integrates, but as soon as I select it as a generator the rendering just stops and the engine ends up in a disabled state.
What are the main differences between the two audio systems? TAAE2 uses modules, each with a render block, that push and pop audio buffers from a render stack.
How does AudioKit render audio? At a high level, what would be involved in migrating AEModules to AudioKit objects?
For the audio rendering, AudioKit is essentially a wrapper around AVAudioEngine, AUAudioUnit, AVAudioUnit, and AVAudioNode. It's conceptualized as a render chain rather than a stack, but the end result is the same. You can use the system audio units, or you can register your own by creating an AUAudioUnit subclass.
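Not AudioKit-specific, but here is a minimal sketch of what such a render chain looks like with plain AVFoundation types (the choice of player and delay nodes is just illustrative):

```swift
import AVFoundation

// A minimal render chain: player -> delay effect -> main mixer -> output.
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let delay = AVAudioUnitDelay()

engine.attach(player)
engine.attach(delay)

// Each connection is one link in the chain; the engine pulls samples
// down the chain whenever the output node needs them.
engine.connect(player, to: delay, format: nil)
engine.connect(delay, to: engine.mainMixerNode, format: nil)

do {
    try engine.start()
    // You would schedule a file or buffer on the player before calling play().
    player.play()
} catch {
    print("Engine failed to start: \(error)")
}
```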
The render chain works much the same way, but with a block-based API. Instead of subclassing AEAudioUnitModule and setting the processFunction to a C function where you pull buffer lists and timestamps from your renderer, you subclass AUAudioUnit and implement internalRenderBlock, returning a block that gets called with the buffers and timestamps as arguments. This block is where you could do most of your porting.
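A minimal sketch of that shape, with a placeholder generator class that just outputs silence (the class name, sample rate, and bus setup are assumptions for illustration):

```swift
import AVFoundation
import AudioToolbox

// Hypothetical generator unit; only the render-block shape matters here.
class MyGeneratorAudioUnit: AUAudioUnit {
    private var outputBusArray: AUAudioUnitBusArray!

    override init(componentDescription: AudioComponentDescription,
                  options: AudioComponentInstantiationOptions = []) throws {
        try super.init(componentDescription: componentDescription, options: options)
        let format = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 2)!
        let bus = try AUAudioUnitBus(format: format)
        outputBusArray = AUAudioUnitBusArray(audioUnit: self,
                                             busType: .output,
                                             busses: [bus])
    }

    override var outputBusses: AUAudioUnitBusArray { outputBusArray }

    // The engine calls this block on the real-time render thread, handing
    // you the timestamp, frame count, and buffer list to fill -- roughly
    // where the work of a TAAE2 processFunction would move to.
    override var internalRenderBlock: AUInternalRenderBlock {
        return { _, timestamp, frameCount, outputBusNumber, outputData, _, _ in
            let buffers = UnsafeMutableAudioBufferListPointer(outputData)
            for buffer in buffers {
                // Write your samples here; this placeholder just clears to silence.
                memset(buffer.mData, 0, Int(buffer.mDataByteSize))
            }
            return noErr
        }
    }
}
```

You would then register the subclass with AUAudioUnit.registerSubclass(_:as:name:version:) and instantiate it with AVAudioUnit.instantiate(with:options:completionHandler:) before attaching it to the engine.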