I recently posted a question asking if it was possible to prevent PIDs from being re-used.
So far the answer appears to be no. (Which is fine.)
However, the user Diego Torres Milano added an answer to that question, and my question here is in regard to that answer.
Diego answered,
If you are afraid of reusing PID's, which won't happen if you wait as other answers explain, you can use
echo 4194303 > /proc/sys/kernel/pid_max
to decrease your fear ;-)
I don't actually understand why Diego has used the number 4194303 here, but that's another question.
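(As an aside, and not part of the original question: 4194303 is 2^22 − 1, one below the kernel's compile-time ceiling of 2^22 for pid_max on 64-bit Linux, which is presumably where the number comes from.)

```shell
# 4194303 is 2^22 - 1; on 64-bit Linux pid_max can be raised up to 2^22.
echo $(( (1 << 22) - 1 ))    # prints 4194303
# The current setting can be inspected with: cat /proc/sys/kernel/pid_max
```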
My understanding was that I had a problem with the following code:
for pid in "${PIDS[@]}"
do
wait $pid
done
The problem being that I have multiple PIDs in an array, and that the for loop will run the wait command sequentially with each PID in the array; however, I cannot guarantee that the processes will finish in the same order in which their PIDs are stored in the array.
i.e. the following could happen:

1. wait terminates as the PID in array index 0 exits.
2. The PID wait is currently waiting for never terminates. Perhaps it is the PID of a mail server or something which a system admin has started.
3. wait keeps waiting until the next serious Linux bug is found and the system is rebooted, or there is a power outage.

Diego said:
which won't happen if you wait as other answers explain
ie; that the situation I have described above cannot happen.
Is Diego correct?
Or is Diego not correct?
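A detail worth spelling out (a sketch, not from the original question): bash's wait builtin only accepts PIDs of the current shell's own children, so it can never end up blocking on an unrelated process such as a mail server.

```shell
#!/usr/bin/env bash
# wait rejects PIDs that are not children of this shell.
# PID 1 (init/systemd) is certainly not our child, so wait fails fast
# with "pid 1 is not a child of this shell" rather than blocking.
wait 1 2>/dev/null
echo "wait exited with status $?"    # prints: wait exited with status 127
```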
It has occurred to me that this question might be confusing unless you are aware that the PIDs are the PIDs of processes launched in the background, i.e.:
my_function &
PID="$!"
PIDS+=($PID)
Let's go through your options.
for i in 1 2 3 4 5; do
cmd &
done
wait
This has the benefit of being simple, but you can't keep your machine busy. If you want to start new jobs as old ones complete, you can't. Your machine gets less and less utilized until all the background jobs complete, at which point you can start a new batch of jobs.
Related is the ability to wait for a subset of jobs by passing multiple arguments to wait:
unrelated_job &
for i in 1 2 3 4 5; do
cmd & pids+=($!)
done
wait "${pids[@]}" # Does not wait for unrelated_job, though
for i in 1 2 3 4 5; do
cmd & pids+=($!)
done
for pid in "${pids[@]}"; do
wait "$pid"
# do something when a job completes
done
This has the benefit of letting you do work after a job completes, but
still has the problem that jobs other than $pid might complete first, leaving your machine underutilized until $pid actually completes. You do, however, still get the exit status for each individual job, even if it completes before you actually wait for it.
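That last point can be demonstrated with a minimal sketch (the exit codes here are made up for illustration): the shell retains each child's exit status until you wait for it, so waiting in array order still yields every job's own status, even for jobs that finished long ago.

```shell
#!/usr/bin/env bash
# The shell keeps each child's exit status until it is waited for,
# so waiting in array order still recovers every job's own status.
pids=()
for code in 0 3 7; do
    (exit "$code") &         # a job that exits immediately with $code
    pids+=($!)
done
sleep 1                      # all three jobs are long finished by now
statuses=()
for pid in "${pids[@]}"; do
    wait "$pid"
    statuses+=($?)
done
echo "${statuses[@]}"        # prints: 0 3 7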
wait -n (bash 4.3 or later):
for i in 1 2 3 4 5; do
cmd & pids+=($!)
done
for pid in "${pids[@]}"; do
wait -n
# do something when a job completes
done
Here, you can wait until a job completes, which means you can keep your machine as busy as possible. The only problem is, you don't necessarily know which job completed, without using jobs to get the list of active processes and comparing it to pids.
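A sketch of that keep-the-machine-busy pattern (the job count, concurrency limit, and sleep duration are made up for illustration):

```shell
#!/usr/bin/env bash
# Keep at most 3 jobs running; start a new one whenever any job finishes.
max_jobs=3
for i in {1..10}; do
    while (( $(jobs -rp | wc -l) >= max_jobs )); do
        wait -n              # block until some background job completes
    done
    (sleep 0.2) &            # stand-in for a real job
done
wait                         # drain the remaining jobs
echo "all jobs done"
```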
The shell by itself is not an ideal platform for doing job distribution, which is why there are a multitude of programs designed for managing batch jobs: xargs, parallel, slurm, qsub, etc.
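For instance, xargs alone can cap parallelism with its -P option (a sketch; echo stands in for a real command):

```shell
# Run the five echo jobs at most 4 at a time; xargs does the waiting.
printf '%s\n' 1 2 3 4 5 | xargs -n1 -P4 echo job
```

Note that with -P greater than 1 the jobs run concurrently, so their output lines may interleave in any order.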
Starting with Bash 5.1, there is now an additional way of waiting for and handling multiple background jobs thanks to the introduction of wait -p.
Here's an example:
#!/usr/bin/env bash
# Spawn background jobs
for ((i=0; i < 10; i++)); do
secs=$((RANDOM % 10)); code=$((RANDOM % 256))
(sleep ${secs}; exit ${code}) &
echo "Started background job (pid: $!, sleep: ${secs}s, code: ${code})"
done
# Wait for background jobs to finish
while true; do
wait -n -p pid; code=$?
[[ -z "${pid}" ]] && break
echo "Background job ${pid} finished with exit code ${code}"
done
The novelty here is that you now know exactly which one of the background jobs finished.