Is there a way to ensure that blocked threads get woken up in the same order as they got blocked? I read somewhere that this would be called a "strong lock" but I found no resources on that.
On Mac OS X one can design a FIFO queue that stores all the thread ids of the blocked threads and then use the nifty function pthread_cond_signal_thread_np() to wake up one specific thread - which is obviously non-standard and non-portable.  
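Just to illustrate that Mac-only idea, a rough sketch might look like the following. It assumes the Darwin declaration int pthread_cond_signal_thread_np(pthread_cond_t *, pthread_t) from <pthread.h>; the fifo_lock name and the surrounding scaffolding are made up for the example:
#include <pthread.h>
#include <deque>
class fifo_lock {
    pthread_mutex_t       m;
    pthread_cond_t        c;
    std::deque<pthread_t> waiters;   // blocked thread ids, oldest first
    bool                  locked;
public:
    fifo_lock() : locked(false) {
        pthread_mutex_init(&m, nullptr);
        pthread_cond_init(&c, nullptr);
    }
    void lock() {
        pthread_mutex_lock(&m);
        waiters.push_back(pthread_self());
        // Re-check on every wakeup: proceed only when the lock is free and
        // this thread is the oldest waiter (guards against stray wakeups).
        while (locked || !pthread_equal(waiters.front(), pthread_self()))
            pthread_cond_wait(&c, &m);
        waiters.pop_front();
        locked = true;
        pthread_mutex_unlock(&m);
    }
    void unlock() {
        pthread_mutex_lock(&m);
        locked = false;
        if (!waiters.empty())   // wake only the oldest waiter
            pthread_cond_signal_thread_np(&c, waiters.front());
        pthread_mutex_unlock(&m);
    }
};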
One way I can think of is to use a similar queue and, at the unlock() point, broadcast() to all waiting threads and have each of them check whether it is next in line.
But this would incur a lot of overhead.
A way around the problem would be to push packaged_tasks onto a queue and have a worker process them in order. But that seems more like a workaround to me than a solution.
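Roughly what I have in mind with that variant is a single worker thread draining a FIFO of tasks; just a sketch, with made-up names like task_queue, worker and job:
#include <future>
#include <queue>
#include <mutex>
#include <condition_variable>
#include <thread>
#include <iostream>
int main() {
    std::queue<std::packaged_task<int()>> task_queue;
    std::mutex              m;
    std::condition_variable cv;
    bool done = false;
    // Worker: executes tasks strictly in the order they were enqueued.
    std::thread worker([&] {
        for (;;) {
            std::packaged_task<int()> task;
            {
                std::unique_lock<std::mutex> lk(m);
                cv.wait(lk, [&] { return done || !task_queue.empty(); });
                if (task_queue.empty()) return;   // done and fully drained
                task = std::move(task_queue.front());
                task_queue.pop();
            }
            task();                               // runs outside the lock
        }
    });
    // Producer side: enqueue a task and keep a future for its result.
    std::packaged_task<int()> job([] { return 42; });
    std::future<int> result = job.get_future();
    {
        std::lock_guard<std::mutex> lk(m);
        task_queue.push(std::move(job));
    }
    cv.notify_one();
    std::cout << result.get() << '\n';            // prints 42
    {
        std::lock_guard<std::mutex> lk(m);
        done = true;
    }
    cv.notify_one();
    worker.join();
}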
Edit:
As pointed out in the comments, this question may sound moot, since in principle there is no guaranteed ordering of locking attempts anyway.
As a clarification:  
I have something I call a ConditionLockQueue which is very similar to the NSConditionLock class in the Cocoa library, but it maintains a FIFO queue of blocked threads instead of a more-or-less random pool.
Essentially any thread can "line up" (with or without the requirement of a specific 'condition' - a simple integer value - to be met). The thread is then placed on the queue and blocks until it is the frontmost element in the queue whose condition is met.
This provides a very flexible way of synchronization and I have found it very helpful in my program.
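A heavily simplified sketch of the idea (not my actual implementation; the names condition_lock_queue, lock_when and unlock_with are made up here, and it needs C++17 for std::optional):
#include <condition_variable>
#include <list>
#include <mutex>
#include <optional>
class condition_lock_queue {
    struct waiter {
        std::condition_variable cv;
        std::optional<int>      required;   // empty = no condition needed
    };
    std::mutex          m;
    std::list<waiter *> fifo;       // blocked threads, oldest first
    int                 condition;  // current condition value
    bool                locked = false;
    // Front-most waiter whose condition is currently satisfied, if any.
    waiter *first_eligible() {
        for (waiter *w : fifo)
            if (!w->required || *w->required == condition)
                return w;
        return nullptr;
    }
public:
    explicit condition_lock_queue(int initial = 0) : condition(initial) {}
    // Line up, optionally requiring a specific condition value.
    void lock_when(std::optional<int> required = std::nullopt) {
        std::unique_lock<std::mutex> lk(m);
        waiter w;
        w.required = required;
        fifo.push_back(&w);
        // Proceed only when the lock is free and we are the front-most
        // waiter whose condition is met.
        w.cv.wait(lk, [&] { return !locked && first_eligible() == &w; });
        fifo.remove(&w);
        locked = true;
    }
    // Release the lock and publish a new condition value.
    void unlock_with(int new_condition) {
        std::lock_guard<std::mutex> lk(m);
        condition = new_condition;
        locked = false;
        if (waiter *next = first_eligible())
            next->cv.notify_one();
    }
};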
Now what I really would need is a way to wake up a specific thread with a specific id.
But the two problems are essentially the same.
You can use a fair mutex to solve your task, i.e. a mutex that will guarantee the FIFO order of your operations.
Mutexes are used to prevent multiple threads from causing a data race by accessing the same shared resource at the same time. Sometimes, when several mutexes are involved, threads end up each holding a lock that another one needs, and the program deadlocks.
Queue-based spin locks guarantee fairness by maintaining a queue of waiters and by granting the lock to the first waiter during an unlock operation.
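The simplest member of that family is probably a ticket spinlock, which keeps the queue implicit in a pair of counters rather than in explicit per-waiter nodes. A minimal sketch (the ticket_spinlock name is just illustrative):
#include <atomic>
#include <thread>
class ticket_spinlock {
    std::atomic<unsigned> next_ticket{0};   // next ticket to hand out
    std::atomic<unsigned> now_serving{0};   // ticket currently allowed in
public:
    void lock() {
        unsigned ticket = next_ticket.fetch_add(1, std::memory_order_relaxed);
        while (now_serving.load(std::memory_order_acquire) != ticket)
            std::this_thread::yield();       // spin (politely) until our turn
    }
    void unlock() {
        now_serving.fetch_add(1, std::memory_order_release);
    }
};
Each thread takes a ticket and spins until now_serving reaches it, and unlock simply advances now_serving, so the lock is handed out strictly in arrival order.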
std::lock locks all the objects passed as arguments, blocking the calling thread if necessary. The function locks the objects using an unspecified sequence of calls to their lock, try_lock and unlock members that ensures all arguments are locked on return (without producing any deadlocks).
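For example (a small sketch, unrelated to the fairness question, just showing the deadlock-free acquisition std::lock provides):
#include <mutex>
std::mutex a, b;
void transfer() {
    std::lock(a, b);   // acquires both mutexes without risk of deadlock
    std::lock_guard<std::mutex> la(a, std::adopt_lock);
    std::lock_guard<std::mutex> lb(b, std::adopt_lock);
    // ... touch the shared state protected by a and b ...
}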
It's pretty easy to build a lock object that uses numbered tickets to ensure that it's completely fair (the lock is granted in the order in which threads first tried to acquire it):
#include <mutex>
#include <condition_variable>
class ordered_lock {
    std::condition_variable  cvar;
    std::mutex               cvar_lock;
    unsigned int             next_ticket, counter;  // next ticket to hand out / ticket currently served
public:
    ordered_lock() : next_ticket(0), counter(0) {}
    void lock() {
        std::unique_lock<std::mutex> acquire(cvar_lock);
        unsigned int ticket = next_ticket++;   // take the next ticket in FIFO order
        while (ticket != counter)              // sleep until it is this ticket's turn
            cvar.wait(acquire);
    }
    void unlock() {
        std::unique_lock<std::mutex> acquire(cvar_lock);
        counter++;              // advance to the next ticket
        cvar.notify_all();      // wake everyone; only the holder of that ticket proceeds
    }
};
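Since ordered_lock meets the BasicLockable requirements, it can be dropped into std::lock_guard just like std::mutex. A hypothetical usage example (worker and shared_counter are only for illustration):
#include <thread>
#include <vector>
ordered_lock fair_lock;
int shared_counter = 0;
void worker() {
    for (int i = 0; i < 1000; ++i) {
        std::lock_guard<ordered_lock> guard(fair_lock);
        ++shared_counter;                // threads enter in ticket order
    }
}
int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i) threads.emplace_back(worker);
    for (auto &t : threads) t.join();
    return shared_counter == 4000 ? 0 : 1;
}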
edit
To fix Olaf's suggestion:
#include <mutex>
#include <condition_variable>
#include <queue>
class ordered_lock {
    // One entry per blocked thread; the flag guards against spurious wakeups.
    struct waiter {
        std::condition_variable signal;
        bool                    granted = false;
    };
    std::queue<waiter *> waiters;   // FIFO of blocked threads
    std::mutex           cvar_lock;
    bool                 locked;
public:
    ordered_lock() : locked(false) {}
    void lock() {
        std::unique_lock<std::mutex> acquire(cvar_lock);
        if (locked) {
            waiter w;                          // lives on this thread's stack while it waits
            waiters.push(&w);
            w.signal.wait(acquire, [&] { return w.granted; });
        } else {
            locked = true;
        }
    }
    void unlock() {
        std::unique_lock<std::mutex> acquire(cvar_lock);
        if (waiters.empty()) {
            locked = false;                    // nobody is queued, release the lock outright
        } else {
            waiters.front()->granted = true;   // hand the lock directly to the oldest waiter
            waiters.front()->signal.notify_one();
            waiters.pop();
        }
    }
};