To perform lock-free and wait-free lazy initialization, I do the following:
private AtomicReference<Foo> instance = new AtomicReference<>(null);

public Foo getInstance() {
    Foo foo = instance.get();
    if (foo == null) {
        foo = new Foo(); // create and initialize actual instance
        if (instance.compareAndSet(null, foo)) // CAS succeeded
            return foo;
        else // CAS failed: another thread set an object
            return instance.get();
    } else {
        return foo;
    }
}
and it works pretty well except for one thing: if two threads see instance as null, they both create a new object, and only one is lucky enough to set it with the CAS operation, which wastes resources.

Can anyone suggest another lock-free lazy initialization pattern that decreases the probability of two concurrent threads creating two expensive objects?
If you want true lock-freedom, you will have to do some spinning. You can have one thread 'win' the creation rights, but the others must spin until the instance is ready.
private AtomicBoolean canWrite = new AtomicBoolean(false);
private volatile Foo foo;

public Foo getInstance() {
    while (foo == null) {
        if (canWrite.compareAndSet(false, true)) {
            foo = new Foo();
        }
    }
    return foo;
}
This obviously has the problem of busy spinning (you can put a sleep or yield in the loop), but I would probably still recommend the initialization-on-demand holder idiom, sketched below.
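For reference, here is a minimal sketch of that idiom, assuming Foo has a no-argument constructor (the wrapper class name FooSupplier is made up for illustration). The JVM's class-initialization guarantees ensure the holder class is loaded lazily and exactly once, so only one Foo is ever constructed, with no explicit locking or CAS:

public final class FooSupplier {

    // The holder class is not loaded until getInstance() first references it,
    // so the expensive Foo is created lazily, exactly once, by the JVM's
    // class-initialization machinery.
    private static class Holder {
        static final Foo INSTANCE = new Foo();
    }

    public static Foo getInstance() {
        return Holder.INSTANCE;
    }
}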
I think you need to have some synchronization for the object creation itself. I would do:
// The atomic reference itself must be final!
private final AtomicReference<Foo> instance = new AtomicReference<>(null);

public Foo getInstance() {
    Foo foo = instance.get();
    if (foo == null) {
        synchronized (instance) {
            // Double-check here in case another thread
            // initialized foo while we were waiting for the lock
            foo = instance.get();
            if (foo == null) {
                foo = new Foo(); // actual initialization
                instance.set(foo);
            }
        }
    }
    return foo;
}
This is a very common pattern, especially for lazy singletons. Double-checked locking minimizes the number of times the synchronized block is actually executed.
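As an illustration of that claim, here is a minimal, hypothetical usage sketch (the class name LazyFoo, the demo class, and the thread count are assumptions, not part of the answer): several threads race to call getInstance(), and after they all finish, every thread has observed the same Foo instance.

import java.util.ArrayList;
import java.util.List;

public class LazyFooDemo {
    public static void main(String[] args) throws InterruptedException {
        LazyFoo lazy = new LazyFoo(); // hypothetical class containing the getInstance() above
        List<Thread> threads = new ArrayList<>();
        Foo[] seen = new Foo[8];

        // Start several threads that race to initialize the instance.
        for (int i = 0; i < seen.length; i++) {
            final int slot = i;
            Thread t = new Thread(() -> seen[slot] = lazy.getInstance());
            threads.add(t);
            t.start();
        }
        for (Thread t : threads) {
            t.join();
        }

        // Every thread should have observed the exact same instance.
        for (Foo foo : seen) {
            if (foo != seen[0]) {
                throw new AssertionError("saw different instances");
            }
        }
        System.out.println("all threads saw the same Foo");
    }
}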