Let's suppose I have this distorted image, taken with a fisheye camera with a 185° FoV.
Image taken from Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1541-1545, Shanghai, China, March 2016.

I want to undistort it using the FoV model described in Frederic Devernay, Olivier Faugeras, "Straight lines have to be straight: automatic calibration and removal of distortion from scenes of structured environments", Machine Vision and Applications, Springer Verlag, 2001, 13 (1), pp. 14-24, specifically equations 13 and 14:
rd = (1/ω) * arctan(2 * ru * tan(ω/2))    // Equation 13
ru = tan(rd * ω) / (2 * tan(ω/2))         // Equation 14
I've implemented it in OpenCV, but I can't get it to work. I interpret rd as the distorted distance of a point from the optical center, and ru as the new undistorted distance.
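As a quick sanity check (a snippet of my own, not from the paper), the two equations should be exact inverses of each other, since tan(arctan(z)) = z:

#include <cmath>
#include <cstdio>

int main()
{
    const double w  = 1.0;  // arbitrary FoV parameter in radians, just for the check
    const double ru = 0.5;  // arbitrary undistorted radius

    const double rd   = (1.0 / w) * atan(2.0 * ru * tan(w / 2.0)); // Equation 13
    const double back = tan(rd * w) / (2.0 * tan(w / 2.0));        // Equation 14

    printf("ru = %f, round trip = %f\n", ru, back); // both print 0.500000
}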
Here is a full minimal project:
#include <opencv2/opencv.hpp>

#define W (185*CV_PI/180)

cv::Mat undistortFishEye(const cv::Mat &distorted, const float w)
{
    cv::Mat map_x, map_y;
    map_x.create(distorted.size(), CV_32FC1);
    map_y.create(distorted.size(), CV_32FC1);

    int Cx = distorted.cols/2;
    int Cy = distorted.rows/2;

    for (int x = -Cx; x < Cx; ++x) {
        for (int y = -Cy; y < Cy; ++y) {
            double rd = sqrt(x*x + y*y);
            double ru = tan(rd*w) / (2*tan(w/2));
            map_x.at<float>(y+Cy, x+Cx) = ru/rd * x + Cx;
            map_y.at<float>(y+Cy, x+Cx) = ru/rd * y + Cy;
        }
    }

    cv::Mat undistorted;
    remap(distorted, undistorted, map_x, map_y, CV_INTER_LINEAR);
    return undistorted;
}

int main(int argc, char **argv)
{
    cv::Mat im_d = cv::imread(<your_image_path>, CV_LOAD_IMAGE_GRAYSCALE);
    cv::imshow("Image distorted", im_d);

    cv::Mat im_u = undistortFishEye(im_d, W);
    cv::imshow("Image undistorted", im_u);

    cv::waitKey(0);
}
I only skimmed through the paper you linked, so I am not sure I got it right, but it looks like three things are wrong with your implementation:
1. You should use only half of your FoV angle as the W parameter. The algorithm operates in radial coordinates, measuring the distance from the image center, so the angle should also be measured from the center, which gives half the full angle (the concrete value is worked out right after this list).
2. You calculate ru and rd the wrong way around: ru should be the distance, and rd should then follow from Eq. (13). That is because you do inverse mapping: you create an empty target image, and for every (x, y) point of it you have to pick a color from the distorted image. You do that by distorting (x, y), looking up where it lands in the distorted image, and mapping that color back onto the undistorted (x, y) coordinates. Direct mapping (moving every (x, y) of the distorted image to its calculated location in the undistorted image) gives visual artifacts, because not all target pixels are necessarily covered.
3. You forgot to normalize the radial coordinates: divide them by Cx and Cy respectively, do the transform, then de-normalize by multiplying back.
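For the concrete numbers in this question (my arithmetic): w = 185°/2 = 92.5° ≈ 1.614 rad. Note that the W macro has to compute this in floating point, i.e. 185.0/2, because the integer division 185/2 silently truncates to 92.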
There could also be some implicit conversion of double to int going on; I can never remember the exact rules for that, so I just try not to mix ints and doubles in the same expression. Feel free to convert Cx and Cy back to int if that works for you. Anyhow, this seems to work (both versions of the undistortFishEye function give the same result, so use whichever you like better):
#include <opencv2/opencv.hpp>

#define W (185.0/2*CV_PI/180)

cv::Mat undistortFishEye(const cv::Mat &distorted, const float w)
{
    cv::Mat map_x, map_y;
    map_x.create(distorted.size(), CV_32FC1);
    map_y.create(distorted.size(), CV_32FC1);

    double Cx = distorted.cols / 2.0;
    double Cy = distorted.rows / 2.0;

    // inverse mapping: for every normalized target coordinate in [-1, 1),
    // compute where to sample the distorted source image
    for (double x = -1.0; x < 1.0; x += 1.0/Cx) {
        for (double y = -1.0; y < 1.0; y += 1.0/Cy) {
            double ru = sqrt(x*x + y*y);
            double rd = (1.0 / w)*atan(2.0*ru*tan(w / 2.0)); // Eq. (13)

            map_x.at<float>(y*Cy + Cy, x*Cx + Cx) = rd/ru * x*Cx + Cx;
            map_y.at<float>(y*Cy + Cy, x*Cx + Cx) = rd/ru * y*Cy + Cy;
        }
    }

    cv::Mat undistorted;
    remap(distorted, undistorted, map_x, map_y, CV_INTER_LINEAR);
    return undistorted;
}
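// The variant below computes exactly the same mapping, but iterates over
// integer pixel indices and normalizes inside the loop. That avoids the
// floating-point loop counters and the implicit double-to-int conversions
// used above when indexing map_x and map_y.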
cv::Mat undistortFishEye2(const cv::Mat &distorted, const float w)
{
    cv::Mat map_x, map_y;
    map_x.create(distorted.size(), CV_32FC1);
    map_y.create(distorted.size(), CV_32FC1);

    double cx = distorted.cols / 2.0;
    double cy = distorted.rows / 2.0;

    for (int x = 0; x < distorted.cols; ++x)
    {
        for (int y = 0; y < distorted.rows; ++y)
        {
            // normalize the radial coordinates to [-1, 1)
            double rx = (x - cx) / cx;
            double ry = (y - cy) / cy;
            double ru = sqrt(rx*rx + ry*ry);

            // guard against division by zero at the image center;
            // the limit of rd/ru as ru -> 0 is 2*tan(w/2)/w
            double coeff = (ru > 1e-9)
                ? (1.0 / w)*atan(2.0*ru*tan(w/2.0)) / ru  // rd/ru, Eq. (13)
                : 2.0*tan(w/2.0) / w;
            rx *= coeff;
            ry *= coeff;

            // de-normalize back to pixel coordinates
            map_x.at<float>(y, x) = rx*cx + cx;
            map_y.at<float>(y, x) = ry*cy + cy;
        }
    }

    cv::Mat undistorted;
    remap(distorted, undistorted, map_x, map_y, CV_INTER_LINEAR);
    return undistorted;
}
int main(int argc, char **argv)
{
    cv::Mat im_d = cv::imread("C:/projects/test_images/fisheye/library.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    cv::imshow("Image distorted", im_d);

    cv::Mat im_u = undistortFishEye(im_d, W);
    cv::imshow("Image undistorted", im_u);

    cv::waitKey(0);
}
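A side note (my addition, not part of the original answer): CV_LOAD_IMAGE_GRAYSCALE and CV_INTER_LINEAR are OpenCV 2.x-era constants. Against OpenCV 3.x or 4.x you would use the namespaced equivalents:

cv::Mat im_d = cv::imread("C:/projects/test_images/fisheye/library.jpg", cv::IMREAD_GRAYSCALE);
cv::remap(distorted, undistorted, map_x, map_y, cv::INTER_LINEAR);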

Big parts of the original image are lost during the transformation. Is it supposed to be like that, or should the algorithm map them somewhere as well? I tried transforming onto a bigger target image, and it got really stretched at the edges:

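For reference, here is a sketch of how the bigger target image can be produced (my own variant; the scale parameter and the name undistortFishEyeScaled are assumptions, not from the answer above). The idea is to normalize the target coordinates against the original half-size, so border pixels of the big canvas probe radii beyond 1:

cv::Mat undistortFishEyeScaled(const cv::Mat &distorted, const float w, const double scale)
{
    // target maps are `scale` times larger than the source image
    cv::Mat map_x(static_cast<int>(distorted.rows * scale), static_cast<int>(distorted.cols * scale), CV_32FC1);
    cv::Mat map_y(map_x.size(), CV_32FC1);

    double cx = distorted.cols / 2.0;
    double cy = distorted.rows / 2.0;

    for (int x = 0; x < map_x.cols; ++x) {
        for (int y = 0; y < map_x.rows; ++y) {
            // normalize against the *original* half-size, so pixels near the
            // border of the big canvas get radii beyond 1.0
            double rx = (x - scale*cx) / cx;
            double ry = (y - scale*cy) / cy;
            double ru = sqrt(rx*rx + ry*ry);

            double coeff = (ru > 1e-9)
                ? (1.0 / w)*atan(2.0*ru*tan(w/2.0)) / ru   // rd/ru, Eq. (13)
                : 2.0*tan(w/2.0) / w;                      // limit as ru -> 0

            map_x.at<float>(y, x) = static_cast<float>(coeff*rx*cx + cx);
            map_y.at<float>(y, x) = static_cast<float>(coeff*ry*cy + cy);
        }
    }

    cv::Mat undistorted;
    remap(distorted, undistorted, map_x, map_y, CV_INTER_LINEAR);
    return undistorted;
}

Since arctan saturates, rd stays below π/(2w) (about 0.97 for w ≈ 1.614 rad) no matter how large ru gets, so the outer rings of the bigger canvas all sample from a thin band near the fisheye border; that is the stretching visible at the edges.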