 

inotifywait with a fifo queue in Bash

I wrote a small Bash script which uses inotify-tools and the inotify interface. My problem is that one of the commands in this function can block execution until it finishes, so the function gets stuck.

To solve this I would like to queue up the detected files (reported by the close event) and read the queue from another function. Does anybody have an idea how to do this in Bash?

The variables in the following are simple strings used to locate directories or to build file names.

inotifywait -mrq -e close --format %w%f /some/dir/ | while read -r FILE
do
    NAME="${CAP}_$(date +"%F-%H-%M-%S").pcap"
    logger -i "$FILE was just closed"
    # cp "$FILE" "$DATA/$CAP/$ENV/$NAME"
    rsync -avz --stats --log-file=/root/rsync.log "$FILE" "$DATA/$CAP/$ENV/$NAME" >> /root/rsync_stats.log
    RESULT=$?
    if [ $RESULT -eq 0 ] ; then
        logger -i "Success: $FILE copied to SAN $DATA/$CAP/$ENV/$NAME, code $RESULT"
    else
        logger -i "Fail:    $FILE copy failed to SAN for $DATA/$CAP/$ENV/$NAME, code $RESULT"
    fi

    rm "$FILE"
    RESULT=$?
    if [ $RESULT -eq 0 ] ; then
        logger -i "Success: deletion successfull for $FILE, code $RESULT"
    else
        logger -i "Fail:    deletion failed for $FILE on SSD, code $RESULT"
    fi

    do_something
    logger -i "$NAME was handled"
    # for stdout
    echo "$(date): Moved file"
done

I am copying the files to a SAN volume whose response time sometimes varies, which is why this function can get stuck for a while. I replaced cp with rsync because I need the throughput stats; cp (from coreutils) apparently doesn't provide them.

Asked by wishi

1 Answer

A couple of ideas:

1) You could use a named pipe as a limited-size queue:

mkfifo pipe

your_message_source | while read MSG
do
  # collect files in the pipe
  echo "$MSG" >> pipe
done &

while read MSG
do
  # Do your blocking work here
done < pipe

This will block on echo "$MSG" >> pipe when the pipe's buffer fills up (you can get the size of that buffer with ulimit -p, multiplied by 512). This might be sufficient for some cases.
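For reference, a quick way to compute that figure on the current system (ulimit -p reports the size in 512-byte blocks):

# pipe buffer size in bytes, estimated from ulimit -p (512-byte blocks)
echo "$(( $(ulimit -p) * 512 )) bytes"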

2) You could use a file as a message queue and file-lock it on each operation:

# Feeder part
your_message_source | while read MSG
do
    # open the lock fd with >> so the redirect itself never truncates the queue
    (
        flock 9
        echo "$MSG" >> file_based_queue
    ) 9>> file_based_queue
done &

# Worker part
while :
do
    # Lock the shared queue and cut-and-paste its content into the worker's private queue
    (
        flock 9
        cp file_based_queue workers_queue
        truncate -s0 file_based_queue
    ) 9>> file_based_queue

    # Process the private queue
    while read MSG
    do
        # Do your blocking work here
    done < workers_queue
done

The feeder blocks inotifywait only while the worker is inside its (flock ... ) 9>> file_based_queue subshell and actually holds the lock. You could keep the queues on a RAM disk (/dev/shm) to minimize the time spent there, so that you don't miss out on FS events.
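A minimal sketch of that last suggestion, assuming a tmpfs is mounted at /dev/shm (the directory and file names below are only illustrative):

# keep the queue files on the RAM-backed mount so the locked copy/truncate stays fast
QUEUE_DIR=/dev/shm/inotify_queue      # hypothetical location on tmpfs
mkdir -p "$QUEUE_DIR"
FILE_BASED_QUEUE="$QUEUE_DIR/file_based_queue"
WORKERS_QUEUE="$QUEUE_DIR/workers_queue"

The feeder and worker loops above would then refer to "$FILE_BASED_QUEUE" and "$WORKERS_QUEUE" instead of the bare file names.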

3) Or you could use a Bash interface to (or run scripts in a language that has an interface to) a database-backed message queue or the SysV message queue.
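As one concrete sketch of that idea, here is a variant that uses a Redis list as the queue via redis-cli (this assumes a Redis server is running locally; the queue name file_queue is made up):

# Feeder: push every closed file onto a Redis list
inotifywait -mrq -e close --format %w%f /some/dir/ | while read -r FILE
do
    redis-cli LPUSH file_queue "$FILE" > /dev/null
done &

# Worker: BRPOP blocks until an entry is available, so no polling is needed
while true
do
    FILE=$(redis-cli --raw BRPOP file_queue 0 | tail -n 1)
    # do the blocking rsync/rm work on "$FILE" here
done

BRPOP with a timeout of 0 waits indefinitely, so the worker drains the list at its own pace while inotifywait keeps feeding it without ever blocking.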

Answered by PSkocik