5 Answers

Not only is the assignment operator required, you also need to implement a copy constructor. Otherwise the compiler will provide default implementations that result in both copies (after assignment / copy construction) containing pointers to the same longlife instances. The destructors of both copies will then delete these instances, leading to undefined behavior.

z a;
a.hold.push_back( heap_item );
z a2;
a2 = a;

Both a.hold[0] and a2.hold[0] contain a pointer to the same heap_item; thus causing double deletion during destruction.
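A minimal sketch of what the Rule of Three asks for here, assuming a hypothetical longlife with a single int member (the real class is from the question): both special members deep-copy the pointed-to objects so each z owns its own allocations.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical stand-in for the longlife class in the question.
struct longlife {
    int value;
    explicit longlife(int v) : value(v) {}
};

struct z {
    std::vector<longlife*> hold;

    z() {}

    // Copy constructor: clone every pointed-to longlife.
    z(const z& other) {
        for (std::size_t i = 0; i < other.hold.size(); ++i)
            hold.push_back(new longlife(*other.hold[i]));
    }

    // Assignment: release our own allocations, then deep-copy the source.
    z& operator=(const z& other) {
        if (this != &other) {  // guard against self-assignment
            for (std::size_t i = 0; i < hold.size(); ++i)
                delete hold[i];
            hold.clear();
            for (std::size_t i = 0; i < other.hold.size(); ++i)
                hold.push_back(new longlife(*other.hold[i]));
        }
        return *this;
    }

    ~z() {
        for (std::size_t i = 0; i < hold.size(); ++i)
            delete hold[i];
    }
};
```

With this in place, a2 = a leaves a and a2 pointing at distinct heap objects, so each destructor deletes only its own.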

The easy way to avoid having to implement the assignment operator and copy constructor is to use a smart pointer to hold the longlife instances in the vector.

std::vector<std::unique_ptr<longlife>> hold;

Now there's no need to even write a destructor for your class.
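A sketch of that version, again with a hypothetical longlife: the unique_ptr elements own their objects, so no user-written destructor, copy constructor, or assignment operator is needed, and accidental copies of z become compile errors rather than silent double deletes.

```cpp
#include <memory>
#include <utility>
#include <vector>

// Hypothetical stand-in for the longlife class in the question.
struct longlife {
    int value;
    explicit longlife(int v) : value(v) {}
};

struct z {
    std::vector<std::unique_ptr<longlife>> hold;
    // No destructor needed: each unique_ptr deletes its longlife when
    // the vector is destroyed. z is move-only, so the double-delete
    // scenario cannot be written by accident.
};
```

Ownership can still be transferred explicitly with std::move when that is what you mean.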

For C++03, your options are to use std::tr1::shared_ptr (or boost::shared_ptr) instead of unique_ptr, or to use boost::ptr_vector instead of std::vector (the latter is, of course, also an option in C++11).
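A sketch of the shared-ownership variant, shown here with std::shared_ptr for brevity; under C++03 the spelling would be std::tr1::shared_ptr or boost::shared_ptr, with identical usage. The longlife below is a hypothetical stand-in.

```cpp
#include <memory>
#include <vector>

// Hypothetical stand-in for the longlife class in the question.
struct longlife {
    int value;
    explicit longlife(int v) : value(v) {}
};

struct z {
    // In C++03 code, write std::tr1::shared_ptr<longlife> or
    // boost::shared_ptr<longlife> here instead.
    std::vector<std::shared_ptr<longlife>> hold;
    // Copies of z share ownership: the last copy destroyed releases
    // each longlife exactly once, so the compiler-generated copy
    // constructor, assignment operator, and destructor are all safe.
};
```

Note the semantics differ from the deep-copy approach: copies share the same longlife instances rather than cloning them.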

@ViniyoShouta: shared_ptr was introduced with TR1. Your compiler might have it in std::tr1::shared_ptr if it has TR1 support. If it does not (or you prefer to) you can use the boost::shared_ptr from the boost libraries.
– David Rodríguez - dribeas, Oct 10 '12 at 21:05

Because without an assignment operator and a copy constructor you may end up with multiple hold vectors pointing to the same heap item, resulting in undefined behavior upon destruction:

z firstZ;
if (somethingIsTrue) {
z otherZ = firstZ;
// play with otherZ...
// now otherZ gets destructed, along with longlife's of the firstZ
}
// now it's time to destroy the firstZ, but its longlife's are long gone!

Of course you would not have this problem had you used a vector of objects or a vector of "smart pointers", rather than a vector of "plain old" pointers.
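The vector-of-objects option is the simplest of all; a minimal sketch, assuming a hypothetical longlife that is cheap to copy:

```cpp
#include <vector>

// Hypothetical stand-in for the longlife class in the question.
struct longlife {
    int value;
};

struct z {
    std::vector<longlife> hold;  // stores objects by value, no pointers
    // Rule of Zero: with no raw pointers to manage, the compiler-
    // generated copy constructor, assignment operator, and destructor
    // already do the right thing -- copies are fully independent.
};
```

Each copy of z gets its own elements, so destroying one copy never touches the other's data.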

It would cause a double delete (and a crash) on the destructor of a or a2 (whichever is destroyed second), because the compiler-generated assignment operator does a member-wise copy: the vector is copied element by element, pointers included. So the objects a and a2 would end up deleting the exact same memory.

@Xeo, I understand what the rule of three is; the question is mostly *why* it is a rule.
Consider what happens here:

z& operator=( const z& ref )
{
    hold = ref.hold;
    return *this;
}

Let's say you have an instance of z:

z myz;
myz.hold.push_back( new longlife );

...and then you create a copy of this myz:

z my_other_z;
// ...
my_other_z = myz;

The operator= implementation you have provided above simply copies the contents of the vector. If the vector has pointers, it doesn't make copies of whatever's being pointed to -- it just makes a literal copy of the pointer itself.

So after operator= returns, you will have 2 instances of z that have pointers pointing to the same thing. When the first of those zs is destructed, it will delete the pointer:

~z() { for( auto it = hold.begin(); it != hold.end(); ++it ) delete *it; }

When it comes time for the second z to be destroyed, it will try to delete the same pointer a second time. This results in Undefined Behavior.

The solution to this problem is to make deep copies when you assign or copy objects that maintain resources that need to be allocated and deleted. That means providing an assignment operator and a copy constructor.
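One common way to write those two members is the copy-and-swap idiom (an alternative to hand-writing the deep copy twice); a sketch, assuming a hypothetical longlife with an int member:

```cpp
#include <vector>

// Hypothetical stand-in for the longlife class in the question.
struct longlife {
    int value;
    explicit longlife(int v) : value(v) {}
};

class z {
public:
    std::vector<longlife*> hold;

    z() {}

    // Copy constructor: clone every pointed-to longlife.
    z(const z& ref) {
        for (std::vector<longlife*>::const_iterator it = ref.hold.begin();
             it != ref.hold.end(); ++it)
            hold.push_back(new longlife(**it));
    }

    // Copy-and-swap: 'ref' is taken by value, so the copy constructor
    // has already made the deep copy. Swapping hands our old pointers
    // to the temporary, whose destructor deletes them.
    z& operator=(z ref) {
        hold.swap(ref.hold);
        return *this;
    }

    ~z() {
        for (std::vector<longlife*>::iterator it = hold.begin();
             it != hold.end(); ++it)
            delete *it;
    }
};
```

Taking the parameter by value also makes self-assignment safe without an explicit check.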

That is why the rule of three is a RULE.

EDIT:

As others have mentioned, this is best avoided altogether by using value semantics and RAII. Re-engineering your class to follow the Rule of Zero, as others have called it, is a much better approach.

STL containers store objects, not references. In your case the stored object is a pointer, and pointers are simply copied: your line a2 = a; duplicates the pointers in the vector, after which each destructor deletes the same pointers.

A double free is much more dangerous than a memory leak. It causes nasty undefined behavior:

It may turn out that the memory allocator assigns to p2 exactly the same address that was used for p1, because that address is free after the first delete p1. A while later the second delete p1 also goes through without complaint, because the allocator thinks it is a legitimate pointer -- the one it handed out for p2. The problem only surfaces at p2->function();. Looking at the code of thread 2, it is absolutely impossible to understand what went wrong and why. This is extremely difficult to debug, especially in a big system.