Now you understand why it could only handle writing one audio track. An audio sample buffer is not like a video sample buffer, which you can upload to the GPU, render with a shader, then pull back down and write to disk. In other words, there is no way to merge two audio sample buffers to create a "combined" effect the way you can with video frames.

So basically we moved the asset initialization code out into loadAsset, and now startProcessing only needs to kick off processAsset (the asset-reading part).

You may wonder what the parameter in loadAsset is for. It is quite simple: since GPUImageMovie loads an asset's tracks asynchronously, we need a way to notify others that the asset is fully loaded and that we are ready to kick off the asset-reading part.
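Since the actual listing isn't shown here, a minimal sketch of what the split might look like. The completion-block signature and the property names are my assumptions, not GPUImage's actual API; only the AVFoundation calls are standard:

```objc
// Hypothetical shape of the loadAsset / startProcessing split.
- (void)loadAssetWithCompletion:(void (^)(void))completion
{
    NSDictionary *options = @{AVURLAssetPreferPreciseDurationAndTimingKey: @YES};
    AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:self.url options:options];

    // Track loading is asynchronous, so readiness is signaled via the block.
    [asset loadValuesAsynchronouslyForKeys:@[@"tracks"] completionHandler:^{
        NSError *error = nil;
        if ([asset statusOfValueForKey:@"tracks" error:&error] == AVKeyValueStatusLoaded) {
            self.asset = asset;
            if (completion) completion();
        }
    }];
}

- (void)startProcessing
{
    // Initialization already happened in loadAsset; just kick off reading.
    [self processAsset];
}
```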

GPUImageMovieWriter

Let’s move to GPUImageMovieWriter class.

There are more changes here than in GPUImageMovie, but they are still quite simple. We start by adding two variables:

Again, nothing special here. One thing to note: we don't call the asset writer's finish-writing method when the audio writing is done, because we still need to check on the video writing part. That means we also need a new method for handling the video writing finish.

We use a dispatch group to make sure all the GPUImageMovies have finished their initialization, at which point all assets are ready. Then we grab all the audio tracks from the movies for later use.
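The loading synchronization described above might be sketched like this. The group name, the movies array, and loadAssetWithCompletion: are illustrative assumptions; the dispatch-group and AVAsset calls themselves are standard:

```objc
// Wait for every GPUImageMovie to finish loading its asset, then
// collect all audio tracks for the audio-writing pass.
dispatch_group_t assetLoadingGroup = dispatch_group_create();
NSMutableArray *audioTracks = [NSMutableArray array];

for (GPUImageMovie *movie in self.movies) {
    dispatch_group_enter(assetLoadingGroup);
    [movie loadAssetWithCompletion:^{
        dispatch_group_leave(assetLoadingGroup);
    }];
}

dispatch_group_notify(assetLoadingGroup, dispatch_get_main_queue(), ^{
    // All assets are loaded; pull out the audio tracks for later use.
    for (GPUImageMovie *movie in self.movies) {
        [audioTracks addObjectsFromArray:
            [movie.asset tracksWithMediaType:AVMediaTypeAudio]];
    }
    // Safe to kick off the reading/writing work now.
});
```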

This is the video writing part. Be careful to use finishVideoRecordingWithCompletionHandler here: it is totally wrong to call the original finishRecordingWithCompletionHandler or finishRecording, because we now control when and how the movie writer should finish.
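A sketch of what the video-only finish could look like. Everything beyond the method name mentioned above is my assumption about the implementation, not the post's actual code:

```objc
// Hypothetical body: finish only the video side, deferring the
// writer's real finish until audio is done too.
- (void)finishVideoRecordingWithCompletionHandler:(void (^)(void))handler
{
    // Only mark the video input as finished. We deliberately do NOT call
    // finishRecordingWithCompletionHandler here, because the writer must
    // stay open while the audio side is still appending sample buffers.
    [assetWriterVideoInput markAsFinished];
    if (handler) handler();
    // Signal the syncing group; the final finish happens in the
    // dispatch_group_notify block once audio is done as well.
    dispatch_group_leave(self.recordSyncingDispatchGroup);
}
```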

Audio handling is pretty much the same as video. (P.S. I really should rename those methods to be more consistent. For example, startAudioRecording should actually be renamed to startAudioReading, and startAudioWritingWithComplectionBlock should be finishAudioWritingWithComplectionBlock.)
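For reference, the audio read/write pump typically looks like the following. The reader/writer property names and the queue are illustrative; the AVAssetWriterInput/AVAssetReaderOutput calls are the standard AVFoundation pattern:

```objc
// Hypothetical audio pump: pull sample buffers from the reader output
// and append them to the writer input until the source is exhausted.
- (void)startAudioReading
{
    [self.audioWriterInput requestMediaDataWhenReadyOnQueue:self.audioQueue
                                                 usingBlock:^{
        while ([self.audioWriterInput isReadyForMoreMediaData]) {
            CMSampleBufferRef buffer =
                [self.audioReaderOutput copyNextSampleBuffer];
            if (buffer) {
                [self.audioWriterInput appendSampleBuffer:buffer];
                CFRelease(buffer);
            } else {
                // No more audio: mark the input finished and signal the
                // syncing group, but do not finish the writer itself here.
                [self.audioWriterInput markAsFinished];
                dispatch_group_leave(self.recordSyncingDispatchGroup);
                break;
            }
        }
    }];
}
```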

The last part is the final completion code, which runs once both the video and the audio have finished writing:

```objc
dispatch_group_notify(self.recordSyncingDispatchGroup, dispatch_get_main_queue(), ^{
    NSLog(@"video and audio writing are both done-----------------");
    [self.movieWriter finishRecordingWithCompletionHandler:^{
        NSLog(@"final clean up is done :)");
        // You could fire some UI work on the main queue here.
    }];
});
```

That's it! Give it a shot in Xcode, and you should see what you expected. :)

Conclusion

Man, working with low-level threading and AVFoundation is a challenge, and doing it inside a very complex existing codebase is a super ultra challenge.

I hope someone can give some suggestions to make this library even better :)