I am currently trying to train a facial landmark model on my dataset with OpenCV 4.2.2. I scoured the web, but there are only examples for two parameters, while loadDatasetList in OpenCV 4.2.2 requires four. I did my best to work around this: I tried with an array at first, but loadDatasetList complained that the array was not iterable, so I then proceeded to the code below, with no luck. Any help is appreciated; thank you for your time, and I hope everyone is staying safe and well.
The prior error, raised when passing in an array without iter():
PS E:\MTCNN> python kazemi-train.py
No valid input file was given, please check the given filename.
Traceback (most recent call last):
  File "kazemi-train.py", line 35, in <module>
    status, images_train, landmarks_train = cv2.face.loadDatasetList(args.training_images,args.training_annotations, imageFiles, annotationFiles)
TypeError: cannot unpack non-iterable bool object
The current error is:
PS E:\MTCNN> python kazemi-train.py
Traceback (most recent call last):
  File "kazemi-train.py", line 35, in <module>
    status, images_train, landmarks_train = cv2.face.loadDatasetList(args.training_images,args.training_annotations, iter(imageFiles), iter(annotationFiles))
SystemError: <built-in function loadDatasetList> returned NULL without setting an error
import os
import time
import cv2
import numpy as np
import argparse
if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Training of kazemi facial landmark algorithm.')
    parser.add_argument('--face_cascade', type=str, help="Path to the cascade model file for the face detector",
        default=os.path.join(os.path.dirname(os.path.realpath(__file__)),'models','haarcascade_frontalface_alt2.xml'))
    parser.add_argument('--kazemi_model', type=str, help="Path to save the kazemi trained model file",
        default=os.path.join(os.path.dirname(os.path.realpath(__file__)),'models','face_landmark_model.dat'))
    parser.add_argument('--kazemi_config', type=str, help="Path to the config file for training",
        default=os.path.join(os.path.dirname(os.path.realpath(__file__)),'models','config.xml'))
    parser.add_argument('--training_images', type=str, help="Path of a text file contains the list of paths to all training images",
        default=os.path.join(os.path.dirname(os.path.realpath(__file__)),'train','images_train.txt'))
    parser.add_argument('--training_annotations', type=str, help="Path of a text file contains the list of paths to all training annotation files",
        default=os.path.join(os.path.dirname(os.path.realpath(__file__)),'train','points_train.txt'))
    parser.add_argument('--verbose', action='store_true')
    args = parser.parse_args()

    start = time.time()
    facemark = cv2.face.createFacemarkKazemi()
    if args.verbose:
        print("Creating the facemark took {} seconds".format(time.time()-start))

    start = time.time()
    imageFiles = []
    annotationFiles = []
    for file in os.listdir("./AppendInfo"):
        if file.endswith(".jpg"):
            imageFiles.append(file)
        if file.endswith(".txt"):
            annotationFiles.append(file)
    status, images_train, landmarks_train = cv2.face.loadDatasetList(args.training_images, args.training_annotations, iter(imageFiles), iter(annotationFiles))
    assert(status == True)
    if args.verbose:
        print("Loading the dataset took {} seconds".format(time.time()-start))

    scale = np.array([460.0, 460.0])
    facemark.setParams(args.face_cascade, args.kazemi_model, args.kazemi_config, scale)
    for i in range(len(images_train)):
        start = time.time()
        img = cv2.imread(images_train[i])
        if args.verbose:
            print("Loading the image took {} seconds".format(time.time()-start))
        start = time.time()
        status, facial_points = cv2.face.loadFacePoints(landmarks_train[i])
        assert(status == True)
        if args.verbose:
            print("Loading the facepoints took {} seconds".format(time.time()-start))
        start = time.time()
        facemark.addTrainingSample(img, facial_points)
        assert(status == True)
        if args.verbose:
            print("Adding the training sample took {} seconds".format(time.time()-start))

    start = time.time()
    facemark.training()
    if args.verbose:
        print("Training took {} seconds".format(time.time()-start))
If I only use 2 parameters, this error is raised:
  File "kazemi-train.py", line 37, in <module>
    status, images_train, landmarks_train = cv2.face.loadDatasetList(args.training_images,args.training_annotations)
TypeError: loadDatasetList() missing required argument 'images' (pos 3)
If I try to use 3 parameters, this error is raised:
Traceback (most recent call last):
  File "kazemi-train.py", line 37, in <module>
    status, images_train, landmarks_train = cv2.face.loadDatasetList(args.training_images,args.training_annotations, iter(imagePaths))
TypeError: loadDatasetList() missing required argument 'annotations' (pos 4)
Documentation on loadDatasetList

The figure you provided refers to the C++ API of loadDatasetList(), whose parameters often cannot be mapped directly to those of the Python API. One reason is that a Python function can return multiple values while a C++ function cannot. In the C++ API, the 3rd and 4th parameters are output parameters: they receive the paths of the images read from the text file at imageList, and the paths of the annotations read from the text file at annotationList, respectively.
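As a rough illustration of that convention (not specific to loadDatasetList), most OpenCV Python bindings turn C++ output parameters into extra return values. cv2.threshold, for example, fills a destination array in C++, but the Python binding hands it back as a return value:

    # C++: double threshold(src, dst, thresh, maxval, type) writes the result into dst.
    # Python: the filled result comes back as a second return value instead.
    retval, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

If loadDatasetList followed the same convention, you would expect something like status, images, annotations = cv2.face.loadDatasetList(imageList, annotationList), but as explained below, this particular binding does not behave that way.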
Going back to your question, I cannot find any reference for that function in Python, and I believe the API changed in OpenCV 4. After multiple trials, I am sure cv2.face.loadDatasetList returns only one Boolean value rather than a tuple. That is why you encountered the first error, TypeError: cannot unpack non-iterable bool object, even though you filled in four parameters.
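A minimal probe (with hypothetical list-file paths) makes this visible; based on my trials, the call hands back a single bool, so there is nothing to unpack:

    status = cv2.face.loadDatasetList("train/images_train.txt",   # hypothetical paths
                                      "train/points_train.txt",
                                      [], [])
    print(type(status))   # <class 'bool'> - a single value, not a tuple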
There is no doubt that cv2.face.loadDatasetList should produce two lists of file paths. Therefore, the code for the first part should look something like this:
images_train = []
landmarks_train = []
status = cv2.face.loadDatasetList(args.training_images, args.training_annotations, images_train, landmarks_train)
I expected images_train and landmarks_train to be filled with the file paths of the images and the landmark annotations, but it does not work as expected.
After understanding the whole program, I wrote a new function my_loadDatasetList to replace the (broken) cv2.face.loadDatasetList.
def my_loadDatasetList(text_file_images, text_file_annotations):
    status = False
    image_paths, annotation_paths = [], []
    # Read the image list: one path per line, skipping blank lines
    with open(text_file_images, "r") as a_file:
        for line in a_file:
            line = line.strip()
            if line != "":
                image_paths.append(line)
    # Read the annotation list in the same way
    with open(text_file_annotations, "r") as a_file:
        for line in a_file:
            line = line.strip()
            if line != "":
                annotation_paths.append(line)
    # Succeed only if the two lists pair up one-to-one
    status = len(image_paths) == len(annotation_paths)
    return status, image_paths, annotation_paths
You can now replace
status, images_train, landmarks_train = cv2.face.loadDatasetList(args.training_images,args.training_annotations, iter(imageFiles), iter(annotationFiles))
by
status, images_train, landmarks_train = my_loadDatasetList(args.training_images, args.training_annotations)
I have tested that the entries in images_train and landmarks_train can be loaded by cv2.imread and cv2.face.loadFacePoints respectively, using the data from here.
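For reference, a quick sanity check along those lines (assuming the same argparse setup as in your script) might look like:

    status, images_train, landmarks_train = my_loadDatasetList(args.training_images,
                                                               args.training_annotations)
    assert status
    for img_path, pts_path in zip(images_train, landmarks_train):
        img = cv2.imread(img_path)                       # should yield a valid image array
        ok, points = cv2.face.loadFacePoints(pts_path)   # should yield (True, landmark points)
        assert img is not None and ok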