What is wrong with this code? I am trying to remove duplicate elements from a user-defined container.
template <typename Item>
struct TList
{
    typedef std::vector <Item> Type;
};

template <typename Item>
class GenericContainer
{
    protected:
        typename TList <Item>::Type items;
};
There is a method that removes duplicate elements in the container specialized for Item * (i.e. the items are dynamically allocated):
template <typename Item>
template <typename Sort, typename Equal>
void GenericContainer <Item *>::remove ( typename TList <Item *>::Type::iterator it_begin, typename TList <Item *>::Type::iterator it_end, Sort sort, Equal equal )
{
    // Sort items, OK
    std::sort ( it_begin, it_end, sort );

    // Apply unique, OK
    typename TList <Item *>::Type::iterator i_new_end = std::unique ( it_begin, it_end, equal );

    // Delete items after the new end of the container, OK
    for ( typename TList <Item *>::Type::iterator i_item = i_new_end; i_item != it_end; i_item++ )
    {
        delete *i_item; // call destructor
    }

    // Erase duplicate items
    this->items.erase ( i_new_end, it_end );

    // Another sort, Exception: Access to freed memory
    std::sort ( it_begin, i_new_end, sort );
}
I cannot find the problem in the line

    //Another sort, Exception: Access to freed memory
    std::sort ( it_begin, i_new_end, sort );

or in the lines before it...
Error log:
Access in freed memory in process: sw.exe(line 796)
Thanks for your advice.
Updated question:
I compiled the code with another compiler (MSVS 2010) and checked the items of the vector. Here are the results:
Input dataset: 628 items.
A) std::sort ( it_begin, it_end, sort );
   628 items
B) typename TList <Item *>::Type::iterator i_new_end = std::unique ( it_begin, it_end, equal );
   612 unique items
C) for ( typename TList <Item *>::Type::iterator i_item = i_new_end; i_item != it_end; i_item++ )
   {
       delete *i_item; // call destructor
   }
To my surprise, items from item[595] onward have been deleted (why not from item[613]???). I do not understand this strange behavior...
My bet is that not only do some values appear more than once, but some individual objects (the same pointers) also appear more than once in the sequence.
When you delete everything in the duplicate tail, you destroy some of the objects that are still referenced from the kept part of the sequence.
The second sort (which, in your original code, is unnecessary, since unique preserves the relative order of the elements it keeps) dereferences every pointer through the comparator, so it immediately steps on the objects that were just deleted.
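To see the mechanism, here is a minimal, self-contained sketch (the Item type and the equality functor are hypothetical stand-ins for yours): std::unique only copies the survivors forward and leaves the tail in an unspecified state, which for raw pointers typically means the tail still holds copies of pointers that also survive in the kept range.

#include <algorithm>
#include <iostream>
#include <vector>

struct Item { int value; explicit Item ( int v ) : value ( v ) {} };

struct EqualByValue
{
    bool operator() ( const Item *a, const Item *b ) const { return a->value == b->value; }
};

int main()
{
    Item *a = new Item ( 1 ), *b = new Item ( 1 ), *c = new Item ( 2 );

    std::vector <Item *> v;
    v.push_back ( a ); v.push_back ( b ); v.push_back ( c ); // already sorted by value

    // unique keeps one pointer per value; with a typical implementation
    // v becomes { a, c, c } and new_end points at the second c
    std::vector <Item *>::iterator new_end = std::unique ( v.begin (), v.end (), EqualByValue () );

    std::cout << ( new_end - v.begin () ) << " unique pointers kept" << std::endl; // prints 2
    std::cout << ( v[1] == v[2] ) << std::endl; // typically prints 1: the tail aliases a kept pointer

    // deleting [new_end, v.end ()) here would free the object that v[1] still points to
    delete a; delete b; delete c;
    return 0;
}

In your container the delete loop then frees exactly those aliased objects, and the final sort (whose comparator presumably dereferences the pointers) touches them again, which is the access to freed memory you see.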
One possible solution is to sort the pointers (by their addresses) in both of the ranges that unique produces. Then use set_difference ( i_new_end, it_end, it_begin, i_new_end, i_sequence_inserter ) to find the pointers that actually need to be freed (assuming the objects aren't still being used anywhere else).
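If you take the sequence-inserter route, a sketch of a helper in that spirit (the helper name and the temporary vector are mine, not part of your code) might look like this:

#include <algorithm>
#include <iterator>
#include <vector>

// Given the kept range [it_begin, i_new_end) and the duplicate tail
// [i_new_end, it_end) produced by std::unique, free only those pointers
// that do not also appear in the kept range. Both sub-ranges are sorted
// by pointer value here, so the caller must re-sort the kept range with
// its own comparator afterwards.
template <typename Iterator>
void free_unreferenced_duplicates ( Iterator it_begin, Iterator i_new_end, Iterator it_end )
{
    std::sort ( it_begin, i_new_end );
    std::sort ( i_new_end, it_end );

    typedef typename std::iterator_traits <Iterator>::value_type Pointer;
    std::vector <Pointer> to_free;
    std::set_difference ( i_new_end, it_end,     // candidate duplicates
                          it_begin, i_new_end,   // pointers still in use
                          std::back_inserter ( to_free ) );

    for ( typename std::vector <Pointer>::iterator i = to_free.begin (); i != to_free.end (); ++i )
        delete *i;
}

After it returns you still erase the tail from the vector and re-sort the kept range with your own comparator.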
Or, just use smart pointers, or no pointers at all. :v)
See my comment — the best solution is likely to eliminate the use of pointers entirely.
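For illustration, here is a minimal sketch of the smart-pointer route (the Item type and the two functors are hypothetical; it needs std::shared_ptr from C++11, or the TR1 equivalent on MSVS 2010):

#include <algorithm>
#include <memory>
#include <vector>

struct Item { int value; explicit Item ( int v ) : value ( v ) {} };

typedef std::shared_ptr <Item> ItemPtr;

struct SortByValue  { bool operator() ( const ItemPtr &a, const ItemPtr &b ) const { return a->value <  b->value; } };
struct EqualByValue { bool operator() ( const ItemPtr &a, const ItemPtr &b ) const { return a->value == b->value; } };

int main()
{
    std::vector <ItemPtr> items;
    items.push_back ( ItemPtr ( new Item ( 2 ) ) );
    items.push_back ( ItemPtr ( new Item ( 1 ) ) );
    items.push_back ( ItemPtr ( new Item ( 1 ) ) );

    std::sort ( items.begin (), items.end (), SortByValue () );
    std::vector <ItemPtr>::iterator i_new_end =
        std::unique ( items.begin (), items.end (), EqualByValue () );

    // erase just drops the references held by the tail; any object still
    // referenced from the kept range stays alive, the rest are freed automatically
    items.erase ( i_new_end, items.end () );
    return 0;
}

No manual delete is needed anywhere, so the aliasing problem simply disappears.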
Anyway, here is a sample set_difference solution, this one using a custom iterator instead of a sequence inserter.
template <typename Item>
template <typename Sort, typename Equal>
void GenericContainer <Item *>::remove ( typename TList <Item *>::Type::iterator it_begin, typename TList <Item *>::Type::iterator it_end, Sort sort, Equal equal )
{
    // Sort items, OK
    std::sort ( it_begin, it_end, sort );

    // Apply unique, OK
    typename TList <Item *>::Type::iterator i_new_end = std::unique ( it_begin, it_end, equal );

    // Now sort the two sub-ranges by pointer value to identify duplicate pointers
    std::sort ( it_begin, i_new_end );
    std::sort ( i_new_end, it_end );

    // Delete only the pointers that appear in the duplicate range but not in the kept range
    struct deleter
    {
        // minimal output-iterator typedefs (from <iterator>) so iterator_traits <deleter> is valid
        typedef std::output_iterator_tag iterator_category;
        typedef void value_type;
        typedef void difference_type;
        typedef void pointer;
        typedef void reference;

        deleter &operator * ()            { return *this; } // dereferencing yields the iterator itself, so *it = p calls operator= below
        deleter &operator = ( Item *rhs ) { delete rhs; return *this; }
        deleter &operator ++ ()           { return *this; } // increment is a no-op
        deleter &operator ++ ( int )      { return *this; } // increment is a no-op
    }; // note: a local class used as a template argument requires C++11

    std::set_difference ( i_new_end, it_end, it_begin, i_new_end, deleter () );

    // Erase duplicate items
    this->items.erase ( i_new_end, it_end );

    // Re-sort the kept items with the caller's comparator
    // (the pointer-value sorts above changed their order)
    std::sort ( it_begin, i_new_end, sort );
}