I want to detect objects in an image and measure the distance between them. This works as long as the objects do not come too close. Unfortunately the lighting in the image is not optimal, so objects appear to be touching although they are not. I am trying to determine the distance with the help of a line that represents each object. The problem is that as soon as the object contours merge, I cannot determine the lines that represent the objects, so no distance can be calculated.
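For context, once two separate contours are available the gap can also be measured directly as the minimum point-to-point distance between them. A minimal sketch (pure NumPy; the `(N, 1, 2)` shape follows the convention of `cv2.findContours`, and `min_contour_distance` is a hypothetical helper name):

```python
import numpy as np

def min_contour_distance(c1, c2):
    """Smallest pixel distance between two contours.

    c1, c2 are arrays shaped (N, 1, 2) as returned by cv2.findContours.
    """
    p1 = c1.reshape(-1, 2).astype(float)
    p2 = c2.reshape(-1, 2).astype(float)
    # pairwise distances via broadcasting: (N, 1, 2) - (1, M, 2) -> (N, M)
    d = np.linalg.norm(p1[:, None, :] - p2[None, :, :], axis=2)
    return d.min()
```

Note this is the contour-to-contour gap, not the line-to-line distance the code below computes, but it is robust to object orientation.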
Input Image:

Code:
import cv2
import numpy as np
#import image
img = cv2.imread('img.png', 0)
#Thresh
_, thresh = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY)
#Find the contours in the image (OpenCV 3 signature; OpenCV 4 returns only (contours, hierarchy))
_, contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
#Convert img to RGB and draw contour
img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)
cv2.drawContours(img, contours, -1, (0,0,255), 2)
#Object1
v = np.matrix([[0], [1]])
rect = cv2.minAreaRect(contours[0])
#determine angle
if rect[1][0] > rect[1][1]:
    ang = (rect[2] + 90)* np.pi / 180
else:
    ang = rect[2]* np.pi / 180
rot = np.matrix([[np.cos(ang), -np.sin(ang)],[np.sin(ang), np.cos(ang)]])
rv = rot*v
#draw angle line
lineSize = max(rect[1])*0.45                #length of line
p1 = tuple(np.array(rect[0] - lineSize*rv.T)[0].astype(int))
p2 = tuple(np.array(rect[0] + lineSize*rv.T)[0].astype(int))
cv2.line(img, p1, p2, (255,0,0), 2)
#Object2
if len(contours) > 1:
    rect = cv2.minAreaRect(contours[1])
    #determine angle
    if rect[1][0] > rect[1][1]:
        ang = (rect[2] + 90)* np.pi / 180
    else:
        ang = rect[2]* np.pi / 180
    rot = np.matrix([[np.cos(ang), -np.sin(ang)],[np.sin(ang), np.cos(ang)]])
    rv = rot*v
    #draw angle line
    lineSize = max(rect[1])*0.45                #length of line
    p1 = tuple(np.array(rect[0] - lineSize*rv.T)[0].astype(int))
    p2 = tuple(np.array(rect[0] + lineSize*rv.T)[0].astype(int))
    cv2.line(img, p1, p2, (255,0,0), 2)
#save output img
cv2.imwrite('output_img.png', img)
Output Image:

This works fine, but as soon as I use an image with joined contours this happens:
 

Is there a way to divide contours or maybe a workaround?
Edit
Thanks to the suggestion of B.M. I tested whether erosion is a solution, but unfortunately new problems arise: it does not seem possible to find a balance between erosion and thresholding/contour detection.
Examples:
 
 
 

How about if you first search for contours and check whether there are in fact two? If there is only one, you could loop: erode and search for contours on the eroded image until you get two contours. When that happens, draw a black bounding box, enlarged by the amount of kernel used on the eroded image, onto the original image; this physically divides the blob and creates two contours. Then apply your code to the resulting image. Maybe you can upload the images you have the most difficulty with before processing? Hope it helps a bit or gives you a new idea. Cheers!
Example code:
import cv2
import numpy as np
img = cv2.imread('cont.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, threshold = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
_, contours, hierarchy = cv2.findContours(threshold, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)  # OpenCV 3 signature; OpenCV 4 returns only (contours, hierarchy)
k = 2
if len(contours) == 1:
    for i in range(1000):
        kernel = np.ones((1, k), np.uint8)
        erosion = cv2.erode(threshold, kernel, iterations=1)
        _, contours, hierarchy = cv2.findContours(erosion, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
        if len(contours) == 1:
            k += 1          # widen the kernel and try again
        elif len(contours) == 2:
            break
        else:
            print('more than two contours')
            break
#enlarge the bounding box by the kernel width k and draw it in black to split the joined blob
x, y, w, h = cv2.boundingRect(contours[0])
cv2.rectangle(threshold, (x - k, y - k), (x + w + k, y + h + k), 0, 1)
_, contours, hierarchy = cv2.findContours(threshold,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
cv2.drawContours(img, contours, -1, (0,0,255), 2)
#Object1
v = np.matrix([[0], [1]])
rect = cv2.minAreaRect(contours[0])
#determine angle
if rect[1][0] > rect[1][1]:
    ang = (rect[2] + 90)* np.pi / 180
else:
    ang = rect[2]* np.pi / 180
rot = np.matrix([[np.cos(ang), -np.sin(ang)],[np.sin(ang), np.cos(ang)]])
rv = rot*v
#draw angle line
lineSize = max(rect[1])*0.45                #length of line
p1 = tuple(np.array(rect[0] - lineSize*rv.T)[0].astype(int))
p2 = tuple(np.array(rect[0] + lineSize*rv.T)[0].astype(int))
cv2.line(img, p1, p2, (255,0,0), 2)
#Object2
if len(contours) > 1:
    rect = cv2.minAreaRect(contours[1])
    #determine angle
    if rect[1][0] > rect[1][1]:
        ang = (rect[2] + 90)* np.pi / 180
    else:
        ang = rect[2]* np.pi / 180
    rot = np.matrix([[np.cos(ang), -np.sin(ang)],[np.sin(ang), np.cos(ang)]])
    rv = rot*v
    #draw angle line
    lineSize = max(rect[1])*0.45                #length of line
    p1 = tuple(np.array(rect[0] - lineSize*rv.T)[0].astype(int))
    p2 = tuple(np.array(rect[0] + lineSize*rv.T)[0].astype(int))
    cv2.line(img, p1, p2, (255,0,0), 2)
#show output img
cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:

You can use erosion techniques such as cv2.erode provides. After
from cv2 import erode
import numpy as np
kernel = np.ones((5, 25), dtype=np.uint8)  # this must be tuned
im1 = erode(im0, kernel)
you obtain an image (im0 is your second image) where the bright zones are shrunk:

Now you will be able to measure a distance, even though the effect of the erosion must be taken into account.
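That compensation can be made concrete: eroding with a flat structuring element of width k shrinks each object, so the measured gap grows by k - 1 pixels in that direction. A 1-D sketch with a hand-rolled binary erosion (pure NumPy, left-anchored origin, for illustration only; `erode_1d` and `gap` are hypothetical helpers, not OpenCV functions):

```python
import numpy as np

def erode_1d(row, k):
    """Binary erosion of a 1-D 0/1 array with a flat element of width k."""
    out = np.zeros_like(row)
    for i in range(len(row) - k + 1):
        if row[i:i + k].all():
            out[i] = 1
    return out

def gap(row):
    """Width of the gap between the first two runs of ones."""
    idx = np.flatnonzero(row)
    jumps = np.diff(idx)
    return int(jumps[jumps > 1][0]) - 1

row = np.zeros(20, dtype=np.uint8)
row[2:8] = 1    # object 1
row[12:18] = 1  # object 2, true gap = 4 pixels
k = 3
print(gap(row), gap(erode_1d(row, k)))  # eroded gap is larger by k - 1
```

So a distance measured on the eroded image should have roughly the kernel extent subtracted to recover the true gap.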