GPUImage is a great library for processing images and video on iOS. Recently I used it to merge and blend two videos by applying a chroma key filter. The idea is to create two GPUImageMovie objects, one GPUImageChromaKeyBlendFilter, and finally a GPUImageMovieWriter object. The code looks like the following:
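(What follows is a minimal sketch of that pipeline rather than my exact project code; the file URLs, output size, and key color are placeholders you would swap for your own.)

```objectivec
#import "GPUImage.h"

// NOTE: keep strong references (e.g. properties) to the movies, the filter and
// the writer for the duration of processing, or ARC will release them early.

NSURL *foregroundURL = [[NSBundle mainBundle] URLForResource:@"greenscreen" withExtension:@"mov"];
NSURL *backgroundURL = [[NSBundle mainBundle] URLForResource:@"background" withExtension:@"mov"];

GPUImageMovie *foregroundMovie = [[GPUImageMovie alloc] initWithURL:foregroundURL];
GPUImageMovie *backgroundMovie = [[GPUImageMovie alloc] initWithURL:backgroundURL];
foregroundMovie.playAtActualSpeed = NO;
backgroundMovie.playAtActualSpeed = NO;

GPUImageChromaKeyBlendFilter *chromaKeyFilter = [[GPUImageChromaKeyBlendFilter alloc] init];
[chromaKeyFilter setColorToReplaceRed:0.0 green:1.0 blue:0.0]; // key out pure green
chromaKeyFilter.thresholdSensitivity = 0.4;

// The first input gets keyed; the second shows through the keyed-out areas.
[foregroundMovie addTarget:chromaKeyFilter];
[backgroundMovie addTarget:chromaKeyFilter];

NSString *outputPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"merged.m4v"];
[[NSFileManager defaultManager] removeItemAtPath:outputPath error:nil];

GPUImageMovieWriter *movieWriter =
    [[GPUImageMovieWriter alloc] initWithMovieURL:[NSURL fileURLWithPath:outputPath]
                                             size:CGSizeMake(640.0, 480.0)];
[chromaKeyFilter addTarget:movieWriter];

[movieWriter startRecording];
[foregroundMovie startProcessing];
[backgroundMovie startProcessing];

__weak GPUImageMovieWriter *weakWriter = movieWriter;
[movieWriter setCompletionBlock:^{
    [chromaKeyFilter removeTarget:weakWriter];
    [weakWriter finishRecording];
}];
```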
The code is pretty straightforward. There is one problem with GPUImage movie merging: it cannot merge the two audio tracks into the final output video. But that's not the point of this blog.
So when you run the project and compare the video output with the original video, you will find that the exposure and white balance get a little weird, kind of like the following picture:
A bit of digging turned up this explanation from the library's author: "That's most likely due to a slight difference in the way that I convert YUV sources to RGB when loading from movies and video sources. Look at the matrix applied in the kGPUImageYUVVideoRangeConversionForLAFragmentShaderString and the like. I had been using Apple's standard YUV conversion, then a couple of people changed it, saying that it didn't match what it should be. Perhaps they were wrong, and the color matrix still needs to be adjusted here."
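To see why a different matrix shows up as an exposure and white balance shift, compare the two textbook BT.601 variants below. These are the standard published coefficients, not necessarily the exact numbers in any particular GPUImage revision: video-range footage stores luma in [16/255, 235/255], so decoding it as if it were full range leaves blacks lifted and contrast flattened, which is pretty much the washed-out look above.

```glsl
// Two textbook ways to turn the sampled YUV into RGB -- the matrix GPUImage
// applies lives in kGPUImageYUVVideoRangeConversionForLAFragmentShaderString.
// (yuv.yz already has vec2(0.5) subtracted at this point.)

// (a) Full-range BT.601: assumes Y already spans [0, 1]
rgb = mat3(1.0,    1.0,    1.0,
           0.0,   -0.344,  1.772,
           1.402, -0.714,  0.0) * yuv;

// (b) Video-range BT.601: Y is stored in [16/255, 235/255],
//     so it first needs an offset and a 1.164 gain
// yuv.x = yuv.x - (16.0 / 255.0);
// rgb = mat3(1.164,  1.164,  1.164,
//            0.0,   -0.392,  2.017,
//            1.596, -0.813,  0.0) * yuv;
```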
Then hmmm, I figured I should look up this kGPUImageYUVVideoRangeConversionForLAFragmentShaderString in the library, only to find that it is defined in the GPUImageVideoCamera.m class.
My first move was to revert the commit that changed it, but that failed because I had already made big changes to this class in order to make GPUImageMovie support audio playback. So I just changed the definition of kGPUImageYUVVideoRangeConversionForLAFragmentShaderString in GPUImageVideoCamera.m back to the original one:
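For reference, this is roughly what the restored definition looks like, using the classic Apple-style video-range BT.601 conversion (luma offset by 16/255 and scaled by 1.164). The exact coefficients depend on which revision of GPUImage you are on, so treat this as a sketch and diff it against your own copy of GPUImageVideoCamera.m rather than pasting it in blindly:

```objectivec
NSString *const kGPUImageYUVVideoRangeConversionForLAFragmentShaderString = SHADER_STRING
(
 varying highp vec2 textureCoordinate;

 uniform sampler2D luminanceTexture;
 uniform sampler2D chrominanceTexture;

 void main()
 {
     mediump vec3 yuv;
     lowp vec3 rgb;

     // Video-range luminance: re-expand [16/255, 235/255] back toward [0, 1]
     yuv.x = texture2D(luminanceTexture, textureCoordinate).r - (16.0/255.0);
     yuv.yz = texture2D(chrominanceTexture, textureCoordinate).ra - vec2(0.5, 0.5);

     // BT.601 video-range YUV -> RGB
     rgb = mat3(1.164,  1.164, 1.164,
                0.0,   -0.392, 2.017,
                1.596, -0.813, 0.0) * yuv;

     gl_FragColor = vec4(rgb, 1.0);
 }
);
```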
I am not sure whether just replacing the shader string is entirely okay, but at least I tried it out and the exposure problem is solved.
Hopefully this sheds some light for others using this library.