I have two bash scripts.
The first one (begin.sh) receives variables from an external PHP script via SSH2 (they are not visible in this script because they are set dynamically), but they look something like this:
$1 = myfile.mp3
$2 = artwork.jpg
$3 = my title - my artist
Here is the first script (begin.sh):
#!/bin/bash
. ./process.sh
And here is the second (process.sh):
#!/bin/bash
# Download the source mp3 from the remote server
wget -O "/root/incoming/shows/$1" "http://remoteserver.com/files/$1"
# Decode to wav, run the audio processor, then re-encode to mp3 at 128 kbps
lame --decode "/root/incoming/shows/$1" - | /root/incoming/stereo_tool_cmd_64 - - -s /usr/incoming/settings/settings.sts | lame -b 128 - "/root/incoming/processing/$1"
# Move the finished file to the public folder
mv "/root/incoming/processing/$1" "/var/www/html/processed/$1"
# Send an email when the process is complete
recipients="[email protected]"
subject="Podcast Submission"
from="[email protected]"
importLink="http://thisserveraddress.com/processed/$1"
artwork="http://anotherserver.com/podcast-art/$2"
message_txt=$(echo -e "A new podcast has been submitted.\n\nTitle : $3\n\nImport : $importLink")
/usr/sbin/sendmail "$recipients" << EOF
Subject: $subject
From: $from

$message_txt
EOF
The process in the script above is time-consuming (it takes about 8 minutes to complete) and very processor-intensive (it uses about 50% CPU), so I only want to run one of these processes at a time. The trouble is that the entire process can be executed remotely at any time by multiple users, so I need to find a way of running these jobs serially, in the order that they come in.
I thought sourcing the process script would effectively queue the jobs, but it doesn't. If the script is executed again while it's already running, nothing happens.
Any suggestions?
Further explanation of what the process.sh script is doing, for clarity:
First, the host downloads the mp3 file from remoteserver.com.
Then it uses lame to decode the downloaded mp3 to wav; another app performs a whole bunch of audio processing on the file, after which it is re-encoded back to mp3.
When that's done, it moves the new mp3 file to a publicly accessible folder.
Finally, it sends an email to announce that all this has taken place and lists the various links where everything can be downloaded from.
The lock principle could be as follows: when your script starts, the first thing it does is create an empty script.lock file in its working folder, and when it finishes it deletes that script.lock file. EDIT: Better yet, you can create a script.lock DIRECTORY with mkdir, as suggested by Dror Cohen in his comment.
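The reason mkdir works so well here is that it is atomic: it either creates the directory or fails because it already exists, with no window in between for a second caller to slip through (a separate "test, then touch" sequence has exactly that window). A tiny illustration, reusing the script.lock name from above:
if mkdir script.lock 2>/dev/null; then
    echo "lock acquired, safe to proceed"
else
    echo "another job already holds the lock"
fi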
That's the general idea. For this to work, the script must actually start the job only if no script.lock currently exists; if one does exist, it instead creates a new file containing the parameters of the call in a /queue/ folder.
So in the end you would have a begin.sh that works like this:
First, check whether script.lock exists.
- If it does, write a new file into /queue/ and stop
- If not, create script.lock and proceed
At the very end of the script, check whether there are any files in the /queue/ folder.
- If there are none, delete script.lock and stop
- If there is a file in /queue/, take the oldest one, delete it, and start the script again with the parameters saved in that file (see the sketch below)
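Putting the pieces together, begin.sh could look something like the sketch below. The paths, the nanosecond-timestamp file names, and the one-parameter-per-line queue format are all illustrative choices, not something from the original scripts:
#!/bin/bash
LOCK=/root/incoming/script.lock
QUEUE=/root/incoming/queue
mkdir -p "$QUEUE"

# If the lock is taken, save this call's parameters for later and stop.
# Nanosecond timestamps keep the queue files in arrival order.
if ! mkdir "$LOCK" 2>/dev/null; then
    printf '%s\n' "$1" "$2" "$3" > "$QUEUE/$(date +%s%N)"
    exit 0
fi

# We hold the lock: run the actual job.
. ./process.sh

# Job done: if a file is waiting in the queue, take the oldest one,
# delete it, release the lock, and start ourselves again with the
# parameters it contains. (If another caller grabs the lock in the
# meantime, the restarted call simply queues itself again.)
next=$(ls "$QUEUE" | sort | head -n 1)
if [ -n "$next" ]; then
    mapfile -t args < "$QUEUE/$next"
    rm -f "$QUEUE/$next"
    rmdir "$LOCK"
    exec "$0" "${args[@]}"
fi

# Queue empty: release the lock and stop.
rmdir "$LOCK"
Because every queued call re-enters begin.sh through the same lock check, the jobs end up running one at a time, in the order their queue files were created.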