I have an MTKView running a simple CIFilter live on the camera feed, and this works fine.
On older devices' selfie cameras, however, such as the iPhone 5 and iPad Air, the feed gets drawn on only a smaller area of the view. UPDATE: I found out that the CMSampleBuffer fed to the MTKView is smaller in size when this happens. I guess the texture needs to be scaled up on each update?
import UIKit
import MetalPerformanceShaders
import MetalKit
import AVFoundation

final class MetalObject: NSObject, MTKViewDelegate {
    private var metalBufferView    : MTKView?
    private var metalDevice        = MTLCreateSystemDefaultDevice()
    private var metalCommandQueue  : MTLCommandQueue!
    private var metalSourceTexture : MTLTexture?
    private var context            : CIContext?
    private var filter             : CIFilter?               // created from filterType (not shown)
    private var orientationNumber  : Int32 = 6               // EXIF orientation applied to each frame
    private var showFilter         = true                    // whether the CIFilter pass runs
    private var colorSpace         : CGColorSpace? = CGColorSpaceCreateDeviceRGB()

    init(with frame: CGRect, filterType: Int, scaledUp: Bool) {
        super.init()
        self.metalCommandQueue = self.metalDevice!.makeCommandQueue()
        self.metalBufferView = MTKView(frame: frame, device: self.metalDevice)
        self.metalBufferView!.framebufferOnly = false
        self.metalBufferView!.isPaused = true
        self.metalBufferView!.contentScaleFactor = UIScreen.main.nativeScale
        self.metalBufferView!.delegate = self
        self.context = CIContext()
    }

    final func update(sampleBuffer: CMSampleBuffer) {
        // Note: the texture cache is recreated on every frame here; creating it
        // once and reusing it would avoid the per-frame setup cost.
        var textureCache: CVMetalTextureCache?
        CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, self.metalDevice!, nil, &textureCache)
        var cameraTexture: CVMetalTexture?
        guard
            let cameraTextureCache = textureCache,
            let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
                return
        }
        let cameraTextureWidth = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0)
        let cameraTextureHeight = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0)
        CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                  cameraTextureCache,
                                                  pixelBuffer,
                                                  nil,
                                                  MTLPixelFormat.bgra8Unorm,
                                                  cameraTextureWidth,
                                                  cameraTextureHeight,
                                                  0,
                                                  &cameraTexture)
        if let cameraTexture = cameraTexture,
            let metalTexture = CVMetalTextureGetTexture(cameraTexture) {
            self.metalSourceTexture = metalTexture
            self.metalBufferView!.draw()
        }
    }

    // MARK: - Metal View Delegate
    final func draw(in view: MTKView) {
        guard let currentDrawable = self.metalBufferView!.currentDrawable,
            let sourceTexture = self.metalSourceTexture
            else { return }
        let commandBuffer = self.metalCommandQueue.makeCommandBuffer()
        var inputImage = CIImage(mtlTexture: sourceTexture)!.applyingOrientation(self.orientationNumber)
        if self.showFilter {
            self.filter!.setValue(inputImage, forKey: kCIInputImageKey)
            inputImage = filter!.outputImage!
        }
        self.context!.render(inputImage,
                             to: currentDrawable.texture,
                             commandBuffer: commandBuffer,
                             bounds: inputImage.extent,
                             colorSpace: self.colorSpace!)
        commandBuffer.present(currentDrawable)
        commandBuffer.commit()
    }

    final func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {
    }
}
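For reference, I feed the sample buffers in from an AVCaptureVideoDataOutput delegate along these lines (simplified; session configuration checks and error handling trimmed):

final class CameraFeed: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()
    private let metalObject: MetalObject

    init(metalObject: MetalObject) {
        self.metalObject = metalObject
        super.init()
        // Front camera, since that's where the problem shows up.
        let camera = AVCaptureDevice.defaultDevice(withDeviceType: .builtInWideAngleCamera,
                                                   mediaType: AVMediaTypeVideo,
                                                   position: .front)!
        session.addInput(try! AVCaptureDeviceInput(device: camera))

        let output = AVCaptureVideoDataOutput()
        // BGRA output matches the bgra8Unorm texture created in update(sampleBuffer:).
        output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String:
                                NSNumber(value: kCVPixelFormatType_32BGRA)]
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera.frames"))
        session.addOutput(output)
        session.startRunning()
    }

    func captureOutput(_ captureOutput: AVCaptureOutput!,
                       didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
                       from connection: AVCaptureConnection!) {
        // Hand each frame to the Metal pipeline above.
        metalObject.update(sampleBuffer: sampleBuffer)
    }
}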
It looks like the resolution of the front (selfie) camera on older devices is lower, so you'll need to scale the video up if you want it to fill the view's full width or height. Since you're already using CIContext and Metal, you can control exactly which rectangle of the drawable the image is rendered into.
In your draw method, you execute
self.context!.render(inputImage,
                     to: currentDrawable.texture,
                     commandBuffer: commandBuffer,
                     bounds: inputImage.extent,
                     colorSpace: self.colorSpace!)
The bounds argument is the destination rectangle in which the image is rendered, expressed in the coordinate space of the destination texture. Currently you pass the image's own extent, so the image is drawn at its native size and never scaled. Core Image renders 1:1 into the texture, so to scale the video up you scale the image itself to the target rectangle and then render into that rectangle.
To fill the view, use the drawable's full rect as the target. (Use view.drawableSize rather than metalBufferView.bounds here: the drawable texture is sized in pixels, while bounds is in points, and you've set contentScaleFactor to nativeScale.) You'll end up with
let drawableRect = CGRect(origin: .zero, size: view.drawableSize)
let scaled = inputImage.applying(CGAffineTransform(scaleX: drawableRect.width / inputImage.extent.width,
                                                   y: drawableRect.height / inputImage.extent.height))
self.context!.render(scaled,
                     to: currentDrawable.texture,
                     commandBuffer: commandBuffer,
                     bounds: drawableRect,
                     colorSpace: self.colorSpace!)
If the image and the view have different aspect ratios (aspect ratio being width divided by height), that transform will stretch the image. To preserve the image's aspect ratio, compute an aspect-fit destination rectangle instead, with code like this:
var dest = CGRect(origin: .zero, size: view.drawableSize)
let imageSize = inputImage.extent.size
let imageAspect = imageSize.width / imageSize.height
let viewAspect = dest.width / dest.height
if imageAspect > viewAspect {
    // The image is wider than the view: fit to width and reduce the height.
    dest.size.height = dest.width / imageAspect
    // Center the wide image vertically.
    dest.origin.y = (view.drawableSize.height - dest.height) / 2
} else {
    // The image is taller than the view: fit to height and reduce the width.
    dest.size.width = imageAspect * dest.height
    // Center the tall image horizontally.
    dest.origin.x = (view.drawableSize.width - dest.width) / 2
}
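Putting it together, a version of your draw(in:) along these lines scales each frame into the aspect-fit rectangle. This is a sketch, assuming the same properties as your class (orientationNumber, showFilter, filter, colorSpace) and that the oriented image's extent has a (0, 0) origin, which holds for camera frames:

final func draw(in view: MTKView) {
    guard let currentDrawable = view.currentDrawable,
        let sourceTexture = self.metalSourceTexture
        else { return }
    let commandBuffer = self.metalCommandQueue.makeCommandBuffer()
    var inputImage = CIImage(mtlTexture: sourceTexture)!.applyingOrientation(self.orientationNumber)
    if self.showFilter {
        self.filter!.setValue(inputImage, forKey: kCIInputImageKey)
        inputImage = filter!.outputImage!
    }

    // Aspect-fit destination rect in the drawable's pixel coordinate space.
    var dest = CGRect(origin: .zero, size: view.drawableSize)
    let imageAspect = inputImage.extent.width / inputImage.extent.height
    let viewAspect = dest.width / dest.height
    if imageAspect > viewAspect {
        dest.size.height = dest.width / imageAspect
        dest.origin.y = (view.drawableSize.height - dest.height) / 2  // center vertically
    } else {
        dest.size.width = imageAspect * dest.height
        dest.origin.x = (view.drawableSize.width - dest.width) / 2    // center horizontally
    }

    // Core Image renders 1:1, so scale the image itself into the fit rect,
    // then translate it to the rect's origin so it lands centered.
    let scale = dest.width / inputImage.extent.width
    let scaled = inputImage
        .applying(CGAffineTransform(scaleX: scale, y: scale))
        .applying(CGAffineTransform(translationX: dest.origin.x, y: dest.origin.y))

    self.context!.render(scaled,
                         to: currentDrawable.texture,
                         commandBuffer: commandBuffer,
                         bounds: CGRect(origin: .zero, size: view.drawableSize),
                         colorSpace: self.colorSpace!)
    commandBuffer.present(currentDrawable)
    commandBuffer.commit()
}

Because the scale factor is uniform, dest.width / inputImage.extent.width fills the fit rectangle exactly in both branches.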
Hope this is useful; please let me know if anything doesn't work or if clarification would be helpful.