I am trying to stitch images together to create panoramas. My approach so far has been to warp the first image, align the second image with it, and repeat for n images. That seems to work fine, but when I concatenate the two images using a binary black-and-white mask created with NumPy slicing, there is a definite seam where the two images meet.
I am thinking that if I had either a feathered mask with a transition region where black meets white, or even just a linear gradient mask cross-fading from black on the left side of the image to white on the right, the seams would blend in better. I tried using Gaussian blur to soften the boundaries of my binary mask, experimenting with different kernel sizes, but it made the situation worse: the border of the mask started showing up in the images.
I just can't figure out a way to create such a mask and blend the images using NumPy and OpenCV. I would even be happy to create a mask as shown below so I can use it to blend the images and improve the results. Any suggestions would be appreciated.
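For reference, this is roughly the blurred-binary-mask attempt described above (the file names and kernel size are just placeholders):
import cv2
import numpy as np
# Two already warped/aligned images of the same size (placeholder file names)
left = cv2.imread('left_warped.png').astype(np.float64)
right = cv2.imread('right_warped.png').astype(np.float64)
h, w = left.shape[:2]
# Binary mask: take the left half from one image, the right half from the other
mask = np.zeros((h, w), dtype=np.float64)
mask[:, w // 2:] = 1.0
# Attempted feathering: blur the hard edge of the mask (kernel size chosen arbitrarily)
mask = cv2.GaussianBlur(mask, (51, 51), 0)
mask = cv2.merge([mask, mask, mask])
# Cross-fade the two images with the blurred mask
blended = np.uint8(left * (1 - mask) + right * mask)
cv2.imwrite('blended.png', blended)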
So, I had much the same idea as fmw42 mentions in the comments, but instead of alpha blending, I was thinking of plain linear blending using appropriate "blend masks" (which are the inverted masks you would use for alpha blending).
For the sake of simplicity, I assume two images of identical size here. As fmw42 mentioned, you should only use the "interesting" image parts, for example obtained by cropping. Let's have a look at the code:
import cv2
import numpy as np
# Some input images
img1 = cv2.resize(cv2.imread('path/to/your/image1.png'), (400, 300))
img2 = cv2.resize(cv2.imread('path/to/your/image2.png'), (400, 300))
# Generate blend masks, here: linear, horizontal fading from 1 to 0 and from 0 to 1
mask1 = np.repeat(np.tile(np.linspace(1, 0, img1.shape[1]), (img1.shape[0], 1))[:, :, np.newaxis], 3, axis=2)
mask2 = np.repeat(np.tile(np.linspace(0, 1, img2.shape[1]), (img2.shape[0], 1))[:, :, np.newaxis], 3, axis=2)
# Generate output by linear blending
final = np.uint8(img1 * mask1 + img2 * mask2)
# Outputs
cv2.imshow('img1', img1)
cv2.imshow('img2', img2)
cv2.imshow('mask1', mask1)
cv2.imshow('mask2', mask2)
cv2.imshow('final', final)
cv2.waitKey(0)
cv2.destroyAllWindows()
These are the inputs and masks:
This would be the output:
The linear "blend masks" are created by NumPy's linspace
method, and some repeating of the vector by NumPy's tile
and repeat
methods. Maybe, that part can be further optimized.
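As one possible simplification (an assumption on my part, not tested beyond this snippet), the same ramp can be applied via NumPy broadcasting without tile or repeat:
# Alternative mask generation using broadcasting: a ramp of shape (1, width, 1)
# multiplies directly against the (height, width, 3) images
ramp = np.linspace(1, 0, img1.shape[1])[np.newaxis, :, np.newaxis]
final = np.uint8(img1 * ramp + img2 * (1 - ramp))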
Caveat: At least for the presented linear blending, ensure that for every pixel you generate by
mask1[y, x] * img1[y, x] + mask2[y, x] * img2[y, x]
the masks satisfy
mask1[y, x] + mask2[y, x] <= 1
or you might get some "over-exposure" for these pixels.
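A quick way to sanity-check that condition for the masks above (just a suggestion, with a small tolerance for floating-point error):
# Verify that the blend masks never sum to more than 1 per pixel
assert np.all(mask1 + mask2 <= 1.0 + 1e-9), 'masks overlap too much; expect over-exposure'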
Hope that helps!