Is it possible to apply video data augmentation to a dataset using Keras? I know this is possible for images, as explained here, but I couldn't find an equivalent for video clips.
My dataset contains video clips of 500 frames. When I apply a transformation to one frame, it needs to be the same for the 499 following frames.
If you want to use the ImageDataGenerator class in Keras, I think you need to call apply_transform on every frame manually, with the same parameters for the whole clip. For example:
from keras.preprocessing.image import ImageDataGenerator

gen = ImageDataGenerator()
for i in range(length_video):
    # same transform parameters for every frame of the clip
    new_frames[i] = gen.apply_transform(frames[i], {'ty': 100, 'theta': 10})
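If you want random augmentation rather than fixed parameters, you can sample the parameters once per clip with get_random_transform and reuse them for every frame. A minimal sketch, assuming frames is an array of shape (length_video, height, width, channels):

from keras.preprocessing.image import ImageDataGenerator

gen = ImageDataGenerator(rotation_range=10, width_shift_range=0.1)

# draw one random set of transform parameters for the whole clip
params = gen.get_random_transform(frames[0].shape)

# apply the exact same transform to all 500 frames
new_frames = [gen.apply_transform(frame, params) for frame in frames]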
Alternatively, you can try https://github.com/okankop/vidaug, which is made for video augmentation.
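A short sketch based on the project's README (augmentor names like Sequential, RandomRotate and HorizontalFlip come from there; video is assumed to be a list or array of frames for one clip):

import vidaug.augmentors as va

# build a pipeline; each augmentor is applied consistently across the whole clip
seq = va.Sequential([
    va.RandomRotate(degrees=10),
    va.HorizontalFlip(),
])

# video: e.g. an array of shape (500, height, width, channels)
augmented_video = seq(video)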