This page examines and gives a very clear example of how to dynamically load and use a class. There is one thing I have a hard time understanding, though:
I understand why the "create" function is needed, but why is a "destroy" function needed? Why isn't declaring the interface destructor as pure virtual enough?
I made an identical example with the exception of:
~polygon() = 0;
The destructor for triangle is:
triangle::~triangle() {
std::cout << "triangle Dtor is called" <<std::endl;
}
then when I use:
delete poly;
the message is indeed shown (GCC 5.4.0 under Linux).
I tried to look for other examples, but they all mention and use the "destroy" function; there were no examples using simply a pure virtual destructor, which makes me believe I'm missing something here. So... what is it?
The background of not wanting to use a destroy function is that I want to use the allocated object in a shared_ptr
and not care later about its lifetime; working with a "destroy" function will be tricky, therefore I need to know whether it's necessary.
Read a little further in the same link:
You must provide both a creation and a destruction function; you must not destroy the instances using delete from inside the executable, but always pass it back to the module. This is due to the fact that in C++ the operators new and delete may be overloaded; this would cause a non-matching new and delete to be called, which could cause anything from nothing to memory leaks and segmentation faults. The same is true if different standard libraries are used to link the module and the executable.
The key point here is that new and delete may be overloaded, and therefore do something different in the code of the caller than in the code of the shared object. If you use delete
from inside the binary, it will call the destructor but deallocate the memory according to the delete operator in the binary, which may not match the delete operator in the shared object. Maybe new
in the shared object did not actually allocate any memory, in which case you risk a segmentation fault; maybe new
in the shared object does more than allocate memory for that object, in which case not calling the matching delete
in the shared object causes a leak. There is also the possibility of different heap handling between the shared object and the binary.
In any event, shared_ptr
can be given a custom deleter fairly easily with a lambda function that calls the module's destroy function. True, it's mildly annoying that shared_ptr
can't include the deleter in its template arguments, but you can write a simple wrapper to make it simpler and less verbose to create one with a consistent deleter in all locations (no compiler available right now, forgive any typos):
std::shared_ptr<triangle> make_shared_triangle(triangle *t) {
    return std::shared_ptr<triangle>(t, [](triangle *t) { destroy_triangle(t); });
}
If you really want to go by the example you linked to, you can use a custom function to be called when the smart pointer should delete its object.
std::shared_ptr<polygon> object(create_object(), // create pointer
    [=](polygon* ptr)
    {
        destroy_object(ptr);
    });
With this, the lambda will be called instead of delete
when the shared pointer should release its object.
Note: I copied the function pointer to the destroy_object
function into the lambda ([=]
does this). As long as you don't call dlclose()
while the shared pointer is still alive, this is valid in the context of dynamic loading. If you call dlclose
first, though, running the deleter will cause errors.