I would like some help continuing my code, which uses the OpenCV library, in order to find the depth values of objects seen by the cameras.
I have already done the calibration and computed the disparity map, but I can't find clear guidance on how to calculate the depth value of each pixel seen in the two photos taken by the cameras.
Can anyone help me? Thank you.
You can use the following formulas to calculate the 3D point-cloud coordinates:
Z = fB/D
X = (col-w/2)*Z/f
Y = (h/2-row)*Z/f
where X, Y, Z are world coordinates; f is the focal length of the camera in pixels (obtained from calibration); B is the baseline, i.e. the camera separation; and D is the disparity. col and row are the column and row coordinates of a pixel in an image of height h and width w.
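As a minimal sketch, the three formulas above can be applied to a whole disparity map with NumPy. The focal length, baseline, and disparity values below are made-up numbers for illustration, not values from your calibration:

```python
import numpy as np

def disparity_to_point_cloud(disparity, f, B):
    """Convert a disparity map (in pixels) to X, Y, Z world coordinates.

    f -- focal length in pixels (from calibration)
    B -- baseline (camera separation), in the units you want Z in
    """
    h, w = disparity.shape
    rows, cols = np.indices((h, w))

    # Guard against division by zero: disparity <= 0 marks invalid pixels
    valid = disparity > 0
    Z = np.zeros_like(disparity, dtype=np.float64)
    Z[valid] = f * B / disparity[valid]      # Z = fB/D

    X = (cols - w / 2.0) * Z / f             # X = (col - w/2) * Z / f
    Y = (h / 2.0 - rows) * Z / f             # Y = (h/2 - row) * Z / f
    return X, Y, Z

# Hypothetical example: f = 700 px, B = 0.1 m, uniform disparity of 7 px,
# so every valid pixel lies at depth Z = 700 * 0.1 / 7 ≈ 10 m.
disp = np.full((4, 4), 7.0)
X, Y, Z = disparity_to_point_cloud(disp, f=700.0, B=0.1)
```

Note that OpenCV can also do this reprojection for you: `cv2.reprojectImageTo3D` takes the disparity map together with the Q matrix returned by `cv2.stereoRectify`, which encodes f, B, and the principal point from your calibration.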
However, if you have managed to calibrate your cameras and obtain a disparity map, you should already know this: calibration and disparity-map computation are an order of magnitude more complex than the calculations above.