I am working on an Android app with native code that handles audio files and audio processing. I need to send audio data (raw audio frames) to WebRTC, but I cannot find any WebRTC API for adding a custom audio source (as opposed to the default source, the microphone).
I understand that I need to add an AudioTrackInterface, and that for this I need an AudioSourceInterface.
This approach applies to version 66 of WebRTC. It is not simple and maybe not entirely clear, but it really works. I will try to explain the main idea:
I inherit from webrtc::AudioDeviceModule and override several methods to emulate a "virtual audio device" for virtual playout and recording. In the overrides I mostly call the standard AudioDeviceModule base methods, with some modifications:
int16_t PlayoutDevices() => call the base method, but return base + 1
int16_t RecordingDevices() => call the base method, but return base + 1
int32_t PlayoutDeviceName => return my virtual device name and GUID
int32_t RecordingDeviceName => return my virtual device name and GUID
void SendFrameP => return my virtual device's audio data
void ReceiveFrameP => use the received audio data in my virtual device
etc. methods => just look at the webrtc::AudioDeviceModule implementation. A sketch of the device-enumeration overrides follows this list.
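Here is a minimal sketch of what those overrides can look like against the ~M66 headers. The class name VirtualAudioDeviceModule, the delegation to a wrapped real_adm_ (instead of calling inherited base methods), and the device names/GUIDs are illustrative assumptions, and every remaining AudioDeviceModule method still has to be implemented (typically forwarded) before this compiles:

```cpp
// Sketch of a "virtual device" ADM (header paths as of roughly M66).
#include <cstring>

#include "modules/audio_device/include/audio_device.h"
#include "rtc_base/scoped_ref_ptr.h"

class VirtualAudioDeviceModule : public webrtc::AudioDeviceModule {
 public:
  explicit VirtualAudioDeviceModule(
      rtc::scoped_refptr<webrtc::AudioDeviceModule> real_adm)
      : real_adm_(real_adm) {}

  // Report one extra device: the virtual one, at the last index.
  int16_t PlayoutDevices() override {
    return real_adm_->PlayoutDevices() + 1;
  }
  int16_t RecordingDevices() override {
    return real_adm_->RecordingDevices() + 1;
  }

  int32_t PlayoutDeviceName(uint16_t index,
                            char name[webrtc::kAdmMaxDeviceNameSize],
                            char guid[webrtc::kAdmMaxGuidSize]) override {
    if (index == static_cast<uint16_t>(real_adm_->PlayoutDevices())) {
      // The extra index maps to our virtual device; name/GUID are made up.
      std::strncpy(name, "Virtual playout", webrtc::kAdmMaxDeviceNameSize);
      std::strncpy(guid, "virtual-playout-guid", webrtc::kAdmMaxGuidSize);
      return 0;
    }
    return real_adm_->PlayoutDeviceName(index, name, guid);
  }

  int32_t RecordingDeviceName(uint16_t index,
                              char name[webrtc::kAdmMaxDeviceNameSize],
                              char guid[webrtc::kAdmMaxGuidSize]) override {
    if (index == static_cast<uint16_t>(real_adm_->RecordingDevices())) {
      std::strncpy(name, "Virtual recording", webrtc::kAdmMaxDeviceNameSize);
      std::strncpy(guid, "virtual-recording-guid", webrtc::kAdmMaxGuidSize);
      return 0;
    }
    return real_adm_->RecordingDeviceName(index, name, guid);
  }

  // ... forward all remaining AudioDeviceModule methods to real_adm_,
  // and hook the recording path to feed your raw frames (SendFrameP in
  // the description above) and the playout path to consume received
  // frames (ReceiveFrameP).

 private:
  rtc::scoped_refptr<webrtc::AudioDeviceModule> real_adm_;
};
```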
Then you can pass your own AudioDeviceModule as a parameter to the webrtc::CreatePeerConnectionFactory function, provide audio data through the virtual recording device, and receive data through the virtual playout device.
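For example, with the ~M66 overload of webrtc::CreatePeerConnectionFactory that accepts a default ADM, the wiring can look like the sketch below. CreateFactory, real_adm, and the thread parameters are illustrative names (the threads are created and started with the usual native-API boilerplate), and depending on the concrete ADM the Init/device-selection calls may need to run on the worker thread:

```cpp
#include "api/audio_codecs/builtin_audio_decoder_factory.h"
#include "api/audio_codecs/builtin_audio_encoder_factory.h"
#include "api/peerconnectioninterface.h"
#include "rtc_base/refcountedobject.h"

rtc::scoped_refptr<webrtc::PeerConnectionFactoryInterface> CreateFactory(
    rtc::scoped_refptr<webrtc::AudioDeviceModule> real_adm,
    rtc::Thread* network_thread,
    rtc::Thread* worker_thread,
    rtc::Thread* signaling_thread) {
  rtc::scoped_refptr<VirtualAudioDeviceModule> adm(
      new rtc::RefCountedObject<VirtualAudioDeviceModule>(real_adm));

  auto factory = webrtc::CreatePeerConnectionFactory(
      network_thread, worker_thread, signaling_thread,
      adm.get(),  // our ADM instead of the platform default
      webrtc::CreateBuiltinAudioEncoderFactory(),
      webrtc::CreateBuiltinAudioDecoderFactory());

  // Select the virtual device (the extra, last index) so WebRTC pulls
  // audio from it rather than from the microphone. The voice engine may
  // also touch device selection during factory creation, so selecting
  // after the factory exists (as here) is a hedge, not a guarantee.
  adm->Init();  // may be a no-op if the factory already initialized it
  adm->SetRecordingDevice(adm->RecordingDevices() - 1);
  adm->SetPlayoutDevice(adm->PlayoutDevices() - 1);
  return factory;
}
```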