I just want to know if it's possible to generate captions for thousands of JPEG images stored in the same folder with im2txt. Should I just go with something like this? Generating the captions, then writing the results into a .txt file.
# Directory containing model checkpoints.
CHECKPOINT_DIR="${HOME}/im2txt/model/train"
# Vocabulary file generated by the preprocessing script.
VOCAB_FILE="${HOME}/im2txt/data/mscoco/word_counts.txt"
# Directory containing the JPEG images to caption.
IMAGE_FILE="${HOME}/im2txt/data/lotofimages/"
# Build the inference binary.
bazel build -c opt im2txt/run_inference
# Ignore GPU devices (only necessary if your GPU is currently memory
# constrained, for example, by running the training script).
export CUDA_VISIBLE_DEVICES=""
# Run inference to generate captions, appending the output to 1.txt.
bazel-bin/im2txt/run_inference \
--checkpoint_path=${CHECKPOINT_DIR} \
--vocab_file=${VOCAB_FILE} \
--input_files=${IMAGE_FILE} >> 1.txt
Thanks !
I use the following approach:
IMAGE_FILE="/path/to/image1.jpg,/path/to/image2.jpg,...,/path/to/image1000.jpg"
I put this script into a file script.sh and run it with the command:
./script.sh > captions.txt
The file captions.txt will contain all the captions for all the images.
Maybe there is a better way but this one works fine for me.
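Typing a thousand paths by hand is impractical, so the comma-separated value can be built automatically. A minimal sketch, assuming the images sit in a single flat directory (the throwaway mktemp folder and the a.jpg/b.jpg files below are just stand-ins for the real image folder):

```shell
# Demo setup: a throwaway folder standing in for the real image directory.
IMAGE_DIR=$(mktemp -d)
touch "${IMAGE_DIR}/a.jpg" "${IMAGE_DIR}/b.jpg"

# Build the comma-separated list: find prints one matching path per line,
# sort makes the order deterministic, and paste -sd, joins the lines
# into a single comma-separated string.
IMAGE_FILE=$(find "${IMAGE_DIR}" -maxdepth 1 -name '*.jpg' | sort | paste -sd, -)

echo "${IMAGE_FILE}"
```

The resulting ${IMAGE_FILE} string can then be passed to --input_files exactly as in the script above, without editing it for every batch of images.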