I have a configuration of my application stored in a singleton class, like this (simplified):
class Conf
{
    Conf();
    Conf(const Conf&);
    Conf& operator=(const Conf&);
    ~Conf();

public:
    static Conf& instance()
    {
        static Conf singleton;
        return singleton;
    }

    static void setProperty(const std::string& name,
                            const std::string& value);
    static std::string getProperty(const std::string& name);

private:
    QMutex _mutex;
    std::map<std::string, std::string> _properties;
};
Because the configuration class can be accessed from many threads, I use a mutex for synchronization:
void Conf::setProperty(const std::string& name,
                       const std::string& value)
{
    QMutexLocker locker(&Conf::instance()._mutex);
    Conf::instance()._properties[name] = value;
}

std::string Conf::getProperty(const std::string& name)
{
    QMutexLocker locker(&Conf::instance()._mutex);
    return Conf::instance()._properties[name];
}
Does the Conf::instance() method also need a lock?
I have found a similar question: does a getter function need a mutex?, but in my case there is no setter for the singleton instance itself (let's assume that the instance of the singleton is created before the threads start).
If you're using C++11 or later, the creation of the static singleton is guaranteed to be thread-safe.
If you're still using C++03, you need to provide your own synchronization mechanism.
By request, section 6.7 of the C++11 standard:
such a variable is initialized the first time control passes through its declaration; such a variable is considered initialized upon the completion of its initialization. [...] If control enters the declaration concurrently while the variable is being initialized, the concurrent execution shall wait for completion of the initialization.
Footnote:
The implementation must not introduce any deadlock around execution of the initializer.
In C++11, instance() does not need a mutex; in C++98 it does, as two threads might enter it at once and both start constructing the object. C++11 guarantees single initialization of 'local' static variables.