using (var dbConn = new DbConnection(connStr)) {
// do stuff with dbConn
}

has the C++ equivalent:

{
DbConnection dbConn(connStr);
// do stuff with dbConn
}

meaning that remembering to enclose the use of resources like DbConnection in a using block is unnecessary in C++! This seems to be a major advantage of C++. This is even more convincing when you consider a class that has an instance member of type DbConnection, for example:
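A minimal sketch of the kind of class the question has in mind (DbConnection and UserRepository are hypothetical names): the member's destructor runs automatically when the owning object is destroyed, with no using block anywhere.

#include <string>

class DbConnection {
public:
    explicit DbConnection(const std::string& connStr) { /* open the connection */ }
    ~DbConnection() { /* close the connection */ }
};

class UserRepository {
    DbConnection dbConn;   // cleaned up automatically along with its owner
public:
    explicit UserRepository(const std::string& connStr) : dbConn(connStr) {}
};

void doWork(const std::string& connStr) {
    UserRepository repo(connStr);
    // do stuff with repo
}   // ~UserRepository runs here, which in turn runs ~DbConnection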



I am not sure what you are talking about with not enclosing resources in C++. The DBConnection object probably handles closing all resources in its destructor.
– maple_shaft♦Nov 7 '11 at 14:54


@maple_shaft, exactly my point! That is the advantage of C++ that I am addressing in this question. In C# you need to enclose resources in "using"... in C++ you do not.
– JoelFanNov 7 '11 at 14:55


My understanding is that RAII, as a strategy, was only understood once C++ compilers were good enough to actually use advanced templating, which is well after Java. The C++ that was actually available for use when Java was created was a very primitive, "C with classes" style, with maybe basic templates, if you were lucky.
– Sean McMillanNov 7 '11 at 17:48


"My understanding is that RAII, as a strategy, was only understood once C++ compilers were good enough to actually use advanced templating, which is well after Java." - That's not really correct. Constructors and destructiors have been core features of C++ since day one, well before widespread use of templates and well before Java.
– Jim In TexasNov 7 '11 at 18:15


@JimInTexas: I think Sean has a basic seed of truth in there somewhere (though not templates but exceptions is the crux). Constructors/destructors were there from the beginning, but their importance and the concept of RAII were not initially (what's the word I am looking for) realized. It took a few years and some time for the compilers to get good before we realized how crucial RAII is.
– Martin YorkNov 7 '11 at 20:15

11 Answers

Now looking at C# and its Java roots I am wondering... did the developers of Java fully appreciate what they were giving up when they abandoned the stack in favor of the heap, thus abandoning RAII?

(Similarly, did Stroustrup fully appreciate the significance of RAII?)

I am pretty sure Gosling did not get the significance of RAII at the time he designed Java. In his interviews he often talked about reasons for leaving out generics and operator overloading, but never mentioned deterministic destructors and RAII.

Funny enough, even Stroustrup wasn't aware of the importance of deterministic destructors at the time he designed them. I can't find the quote, but if you are really into it, you can find it among his interviews here: http://www.stroustrup.com/interviews.html

@maple_shaft: In short, it's not possible. Unless you invented a way to have deterministic garbage collection (which seems impossible in general, and invalidates all the GC optimizations of the last decades in any case), you'd have to introduce stack-allocated objects, but that opens several cans of worms: these objects need their own semantics, the "slicing problem" with subtyping (and hence no polymorphism), dangling pointers unless perhaps you place significant restrictions on them or make massive incompatible type system changes. And that's just off the top of my head.
– user7043Nov 7 '11 at 16:49


@DeadMG: So you suggest we go back to manual memory management. That's a valid approach to programming in general, and of course it allows deterministic destruction. But that doesn't answer this question, which concerns itself with a GC-only setting that wants to provide memory safety and well-defined behaviour even if we all act like idiots. That requires GC for everything and no way to kick off object destruction manually (and all Java code in existence relies at least on the former), so either you make GC deterministic or you're out of luck.
– user7043Nov 7 '11 at 17:10


@delan. I would not call C++ smart pointers manual memory management. They are more like a deterministic, fine-grained, controllable garbage collector. If used correctly, smart pointers are the bee's knees.
– Martin YorkNov 7 '11 at 17:54


@LokiAstari: Well, I'd say they're slightly less automatic than full GC (you have to think about which kind of smartness you actually want), and implementing them as a library requires raw pointers (and hence manual memory management) to build on. Also, I'm not aware of any smart pointer that handles cyclic references automatically, which is a strict requirement for garbage collection in my books. Smart pointers are certainly incredibly cool and useful, but you have to accept that they can't provide some guarantees (whether you consider them useful or not) of a fully and exclusively GC'd language.
– user7043Nov 7 '11 at 18:04


@delan: I have to disagree there. I think they are more automatic than GC as they are deterministic. OK, to be efficient you need to make sure you use the correct one (I will give you that). std::weak_ptr handles cycles perfectly well. Cycles are always trotted out, but in reality they are hardly ever a problem, because the base object is usually stack-based, and when that goes it tidies the rest up. For the rare cases where cycles can be a problem, there is std::weak_ptr.
– Martin YorkNov 7 '11 at 18:14
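A minimal C++ sketch of the std::weak_ptr point above (Parent and Child are hypothetical names): an owning shared_ptr in one direction and a non-owning weak_ptr back-reference avoid the cycle, so both objects are destroyed deterministically when the last outside reference goes away.

#include <memory>

struct Child;

struct Parent {
    std::shared_ptr<Child> child;   // owning reference
};

struct Child {
    std::weak_ptr<Parent> parent;   // non-owning back-reference; does not keep Parent alive
};

int main() {
    auto p = std::make_shared<Parent>();
    p->child = std::make_shared<Child>();
    p->child->parent = p;           // back-reference without creating an owning cycle
}   // p goes out of scope: Parent's count hits zero, then Child's; both destructors run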

First, the idea of different semantics for an object based on whether it's stack- or heap-based is certainly counter to the unifying design goal of both languages, which was to relieve programmers of exactly such issues.

Second, even if you acknowledge that there are advantages, there are significant implementation complexities and inefficiencies involved in the book-keeping. You can't really put stack-like objects on the stack in a managed language. You are left with saying "stack-like semantics," and committing to significant work (value types are already hard enough, think about an object that is an instance of a complex class, with references coming in and going back into managed memory).

Because of that, you don't want deterministic finalization on every object in a programming system where "(almost) everything is an object." So you do have to introduce some kind of programmer-controlled syntax to separate a normally-tracked object from one that has deterministic finalization.

In C#, you have the using keyword, which came in fairly late in the design of what became C# 1.0. The whole IDisposable thing is pretty wretched, and one wonders if it would have been more elegant to have using work with the C++ destructor syntax ~, marking those classes to which the boilerplate IDisposable pattern could be automatically applied.

What about what C++/CLI (.NET) has done, where objects on the managed heap also have a stack-based "handle", which provides RAII?
– JoelFanNov 7 '11 at 18:48


C++/CLI has a very different set of design decisions and constraints. Some of those decisions mean that you can demand more thought about memory allocation and performance implications from the programmers: the whole "give em enough rope to hang themselves" trade-off. And I imagine that the C++/CLI compiler is considerably more complex than that of C# (especially in its early generations).
– Larry OBrienNov 7 '11 at 20:48

@Peter Taylor -- right. But I feel that C#'s non-deterministic destructor is worth very little, since you cannot rely on it to manage any kind of constrained resource. So, in my opinion, it might have been better to make the ~ syntax syntactic sugar for IDisposable.Dispose().
– Larry OBrienNov 7 '11 at 22:59


@Larry: I agree. C++/CLI does use ~ as syntactic sugar for IDisposable.Dispose(), and it's much more convenient than the C# syntax.
– dan04Nov 8 '11 at 1:59

Keep in mind that Java was developed in 1991-1995 when C++ was a much different language. Exceptions (which made RAII necessary) and templates (which made it easier to implement smart pointers) were "new-fangled" features. Most C++ programmers had come from C and were used to doing manual memory management.
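A hedged sketch of why exceptions made RAII necessary (parse() is a hypothetical stand-in for work that may throw): with manual cleanup the fclose is skipped during stack unwinding, while a destructor-based wrapper closes the file on both the normal and the exceptional path.

#include <cstdio>
#include <stdexcept>

void parse() { throw std::runtime_error("bad input"); }   // hypothetical work that throws

void leaky() {
    std::FILE* f = std::fopen("data.txt", "r");
    parse();             // throws: the fclose below never runs, the handle leaks
    std::fclose(f);
}

struct File {            // RAII wrapper: acquisition in the constructor, release in the destructor
    std::FILE* f;
    explicit File(const char* name) : f(std::fopen(name, "r")) {}
    ~File() { if (f) std::fclose(f); }
};

void safe() {
    File f("data.txt");
    parse();             // throws, but ~File() still closes the handle during unwinding
}

int main() {
    try { safe(); } catch (const std::exception&) { /* handled elsewhere */ }
}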

So why use reference semantics instead of value semantics?

There is no need for a syntactic distinction between Foo and Foo* or between foo.bar and foo->bar.

There is no need for overloaded assignment, when all assignment does is copy a pointer.

There is no need for copy constructors. (There is occasionally a need for an explicit copy function like clone(). Many objects just don't need to be copied. For example, immutables don't.)

There is no need to declare private copy constructors and operator= to make a class noncopyable. If you don't want objects of a class copied, you just don't write a function to copy it. (A C++ sketch of this boilerplate follows the list.)

There is no need for swap functions. (Unless you're writing a sort routine.)

There is no need for C++0x-style rvalue references.

There is no need for (N)RVO.

There is no slicing problem.

It's easier for the compiler to determine object layouts, because references have a fixed size.
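For contrast, a hedged C++ sketch of the value-semantics boilerplate the list is describing (Buffer is a hypothetical class); with Java-style reference semantics none of these special members would need to be written.

#include <string>
#include <utility>

class Buffer {
    std::string data;
public:
    explicit Buffer(std::string d) : data(std::move(d)) {}

    Buffer(const Buffer&) = delete;              // "noncopyable": before C++11 this was done
    Buffer& operator=(const Buffer&) = delete;   // with private, undefined declarations

    Buffer(Buffer&& other) noexcept : data(std::move(other.data)) {}   // rvalue reference
    Buffer& operator=(Buffer&& other) noexcept {
        data = std::move(other.data);
        return *this;
    }

    friend void swap(Buffer& a, Buffer& b) noexcept { a.data.swap(b.data); }   // swap function
};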

The main downside to reference semantics is that when every object potentially has multiple references to it, it becomes hard to know when to delete it. You pretty much have to have automatic memory management.

Java chose to use a non-deterministic garbage collector.

Can't GC be deterministic?

Yes, it can. For example, the C implementation of Python uses reference counting, and a tracing GC was later added to handle the cyclic garbage that refcounting misses.

But refcounting is horribly inefficient. Lots of CPU cycles spent updating the counts. Even worse in a multi-threaded environment (like the kind Java was designed for) where those updates need to be synchronized. Much better to use the null garbage collector until you need to switch to another one.
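A rough C++ illustration of the per-reference cost being described (pass_around is a hypothetical function): every copy of a reference-counted handle such as std::shared_ptr performs an atomic increment and decrement, work that a tracing collector does not do on each assignment.

#include <memory>

void pass_around(std::shared_ptr<int> p) { /* use p */ }   // takes a copy of the handle

int main() {
    auto p = std::make_shared<int>(42);
    pass_around(p);   // atomic increment when the copy is made, atomic decrement when it dies
    pass_around(p);   // ...and again for every call, loop iteration, container insertion, etc.
}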

You could say that Java chose to optimize the common case (memory) at the expense of non-fungible resources like files and sockets. Today, in light of the adoption of RAII in C++, this may seem like the wrong choice. But remember that much of the target audience for Java was C (or "C with classes") programmers who were used to explicitly closing these things.

But what about C++/CLI "stack objects"?

They're just syntactic sugar for Dispose (original link), much like C# using. However, they don't solve the general problem of deterministic destruction, because you can create an anonymous gcnew FileStream("filename.ext") and C++/CLI won't auto-Dispose it.

Also, nice links (especially the first one, which is highly relevant to this discussion).
– BlueRaja - Danny PflughoeftNov 8 '11 at 23:11

The using statement handles many cleanup-related problems nicely, but many others remain. I would suggest that the right approach for a language and framework would be to declaratively distinguish between storage locations which "own" a referenced IDisposable from those which do not; overwriting or abandoning a storage location which owns a referenced IDisposable should dispose the target in the absence of a directive to the contrary.
– supercatJul 13 '12 at 1:29


"No need for copy constructors" sounds nice, but fails badly in practice. java.util.Date and Calendar are perhaps the most notorious examples. Nothing lovelier than new Date(oldDate.getTime()).
– kevin clineMay 15 '13 at 3:35


IOW, RAII was not "abandoned"; it simply didn't exist to be abandoned :) As for copy constructors, I've never liked them; they're too easy to get wrong, and they're a constant source of headaches when somewhere deep down someone (else) forgot to make a deep copy, causing resources to be shared between copies that should not be.
– jwentingMay 16 '13 at 8:41

a try statement that declares one or more resources. A resource is an object that must be closed after the program is finished with it. The try-with-resources statement ensures that each resource is closed at the end of the statement. Any object that implements java.lang.AutoCloseable, which includes all objects which implement java.io.Closeable, can be used as a resource...

So I guess they either didn't consciously choose not to implement RAII or they changed their mind meanwhile.

Interesting, but it looks like this only works with objects that implement java.lang.AutoCloseable. Probably not a big deal, but I don't like how this feels somewhat constrained. Maybe I have some other object that should be released automatically, but it's very semantically weird to make it implement AutoCloseable...
– FrustratedWithFormsDesignerNov 7 '11 at 19:03


@Patrick: Er, so? using is not the same as RAII - in one case the caller worries about disposing resources, in the other case the callee handles it.
– BlueRaja - Danny PflughoeftNov 7 '11 at 20:52


+1 I didn't know about try-with-resources; it should be useful in dumping more boilerplate.
– jpreteNov 7 '11 at 22:06


-1 for using/try-with-resources not being the same as RAII.
– Sean McMillanNov 7 '11 at 22:35


@Sean: Agreed. using and its ilk are nowhere near RAII.
– DeadMGNov 8 '11 at 0:29

Java intentionally does not have stack-based objects (aka value-objects). These are necessary to have the object automatically destructed at the end of the method like that.

Because of this and the fact that Java is garbage-collected, deterministic finalization is more-or-less impossible (ex. What if my "local" object became referenced somewhere else? Then when the method ends, we don't want it destructed).

However, this is fine with most of us, because there's almost never a need for deterministic finalization, except when interacting with native (C++) resources!

Why does Java not have stack-based objects?

(Other than primitives..)

Because stack-based objects have different semantics than heap-based references. Imagine the following code in C++; what does it do? (A sketch of a few of these cases follows the list below.)

return myObject;

If myObject is a local stack-based object, the copy-constructor is called (if the result is assigned to something).

If myObject is a local stack-based object and we're returning a reference, the result is undefined.

If myObject is a member/global object, the copy-constructor is called (if the result is assigned to something).

If myObject is a member/global object and we're returning a reference, the reference is returned.

If myObject is a pointer to a local stack-based object, the result is undefined.

If myObject is a pointer to a member/global object, that pointer is returned.

If myObject is a pointer to a heap-based object, that pointer is returned.
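A hedged sketch of three of the cases above (Widget is a hypothetical type): returning a local by value is fine, while returning a reference or pointer to a local is undefined behaviour.

#include <string>

struct Widget { std::string name; };

Widget byValue() {
    Widget w{"local"};
    return w;            // fine: the object is copied (or moved / elided via NRVO)
}

Widget& byReference() {
    Widget w{"local"};
    return w;            // undefined behaviour: the caller gets a reference to a dead object
}

Widget* byPointer() {
    Widget w{"local"};
    return &w;           // undefined behaviour once dereferenced: w is destroyed on return
}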

Now what does the same code do in Java?

return myObject;

The reference to myObject is returned. It doesn't matter if the variable is local, member, or global; and there are no stack-based objects or pointer cases to worry about.

The above shows why stack-based objects are a very common cause of programming errors in C++. Because of that, the Java designers took them out; and without them, there is no point in using RAII in Java.

I don't know what you mean by "there is no point in RAII"... I think you mean "there is no ability to provide RAII in Java"... RAII is independent of any language... it does not become "pointless" because 1 particular language does not provide it
– JoelFanNov 7 '11 at 23:36


That's not a valid reason. An object does not have to actually live on the stack to use stack-based RAII. If there is such a thing as a "unique reference", the destructor can be fired once it goes out of scope. See, for instance, how it works in the D programming language: d-programming-language.org/exception-safe.html
– Nemanja TrifunovicNov 8 '11 at 1:44


@Nemanja: An object doesn't have to live on the stack to have stack-based semantics, and I never said it did. But that's not the problem; the problem, as I mentioned, is the stack-based semantics themselves.
– BlueRaja - Danny PflughoeftNov 8 '11 at 17:46


@Aaronaught: The devil is in "almost always" and "most of the time". If you don't close your db connection and leave it to the GC to trigger the finalizer, it will work just fine with your unit tests and break severely when deployed in production. Deterministic cleanup is important regardless of the language.
– Nemanja TrifunovicNov 9 '11 at 13:52


@NemanjaTrifunovic: Why are you unit testing on a live database connection? That's not really a unit test. No, sorry, I'm not buying it. You shouldn't be creating DB connections all over the place anyway, you should be passing them in through constructors or properties, and that means you don't want stack-like auto-destruct semantics. Very few objects that depend on a database connection should actually own it. If non-deterministic cleanup is biting you that often, that hard, then it's because of bad application design, not bad language design.
– AaronaughtNov 9 '11 at 13:56

I agree that RAII is the bee's knees. But the using clause is a great step forward for C# over Java. It does allow deterministic destruction and thus correct resource management (it's not quite as good as RAII, since you need to remember to do it, but it's definitely a good idea).
– Martin YorkNov 7 '11 at 17:58

@KonradRudolph: It is worse than malloc and free. At least in C you don't have exceptions.
– Nemanja TrifunovicNov 7 '11 at 18:52


@Nemanja: Let's be fair, you can free() in the finally.
– DeadMGNov 7 '11 at 19:10


@Loki: The base class problem is much more important as a problem. For example, the original IEnumerable didn't inherit from IDisposable, and there were a bunch of special iterators which could never be implemented as a result.
– DeadMGNov 7 '11 at 19:10

I'm pretty old. I've been there and seen it and banged my head about it many times.

I was at a conference in Hursley Park where the IBM boys were telling us how wonderful this brand new Java language was, when someone asked... why isn't there a destructor for these objects? He didn't mean the thing we know as a destructor in C++, but there was no finaliser either (or it had finalisers but they basically didn't work). This is way back, and we decided Java was a bit of a toy language at that point.

Now they added finalisers to the language spec and Java saw some adoption.

Of course, later everyone was told not to put finalisers on their objects because it slowed the GC down tremendously (it had to not only lock the heap but also move the to-be-finalised objects to a temp area, since finaliser methods could not be called while the GC had paused the app; instead they would be called immediately before the next GC cycle). And worse, sometimes the finaliser would never get called at all when the app was shutting down. Imagine never having your file handle closed.

Then we had C#, and I remember the discussion forum on MSDN where we were told how wonderful this new C# language was. Someone asked why there was no deterministic finalisation and the MS boys told us how we didn't need such things, then told us we needed to change our way of designing apps, then told us how amazing GC was and how all our old apps were rubbish and never worked because of all the circular references. Then they caved in to pressure and told us they'd added this IDispose pattern to the spec that we could use. I thought it was pretty much back to manual memory management for us in C# apps at that point.

Of course, the MS boys later discovered that all they'd told us was... well, they made IDisposable a bit more than just a standard interface, and later added the using statement. W00t! They realised that deterministic finalisation was something missing from the language after all. Of course, you still have to remember to put it in everywhere, so it's still a bit manual, but it's better.

So why did they do it when they could have had using-style semantics automatically placed on each scope block from the start? Probably efficiency, but I like to think that they just didn't realise. Just as they eventually realised you still need smart pointers in .NET (google SafeHandle), they thought that the GC really would solve all problems. They forgot that an object is more than just memory and that GC is primarily designed to handle memory management. They got caught up in the idea that the GC would handle this, and forgot that you put other stuff in there; an object isn't just a blob of memory that doesn't matter if you don't delete it for a while.

But I also think that the lack of a finalise method in the original Java had a bit more to it - that the objects you created were all about memory, and if you wanted to delete something else (like a DB handle or a socket or whatever) then you were expected to do it manually.

Remember Java was designed for embedded environments where people were used to writing C code with lots of manual allocations, so not having automatic free wasn't much of a problem - they never did it before, so why would you need it in Java? The issue wasn't anything to do with threads, or stack/heap, it was probably just there to make memory allocation (and therefore de-alloc) a bit easier. In all, the try/finally statement is probably a better place to handle non-memory resources.

So IMHO, the way .NET simply copied Java's biggest flaw is its biggest weakness. .NET should have been a better C++, not a better Java.

IMHO, things like 'using' blocks are the right approach for deterministic cleanup, but a few more things are needed as well: (1) a means of ensuring that objects get disposed if their destructors throw an exception; (2) a means of auto-generating a routine method to call Dispose on all fields marked with a using directive, and specifying whether IDisposable.Dispose should automatically call it; (3) a directive similar to using, but which would only call Dispose in case of an exception; (4) a variation of IDisposable which would take an Exception parameter, and...
– supercatJul 13 '12 at 1:34

...which would be used automatically by using if appropriate; the parameter would be null if the using block exited normally, or else would indicate what exception was pending if it exited via exception. If such things existed, it would be much easier to manage resources effectively and avoid leaks.
– supercatJul 13 '12 at 1:36

Bruce Eckel, author of "Thinking in Java" and "Thinking in C++" and a member of the C++ Standards Committee, is of the opinion that, in many areas (not just RAII), Gosling and the Java team didn't do their homework.

...To understand how the language can be both unpleasant and complicated, and well designed at the same time, you must keep in mind the primary design decision upon which everything in C++ hung: compatibility with C. Stroustrup decided -- and correctly so, it would appear -- that the way to get the masses of C programmers to move to objects was to make the move transparent: to allow them to compile their C code unchanged under C++. This was a huge constraint, and has always been C++'s greatest strength ... and its bane. It's what made C++ as successful as it was, and as complex as it is.

It also fooled the Java designers who didn't understand C++ well enough. For example, they thought operator overloading was too hard for programmers to use properly. Which is basically true in C++, because C++ has both stack allocation and heap allocation and you must overload your operators to handle all situations and not cause memory leaks. Difficult indeed. Java, however, has a single storage allocation mechanism and a garbage collector, which makes operator overloading trivial -- as was shown in C# (but had already been shown in Python, which predated Java). But for many years, the party line from the Java team was "Operator overloading is too complicated." This and many other decisions where someone clearly didn't do their homework is why I have a reputation for disdaining many of the choices made by Gosling and the Java team.

There are plenty of other examples. Primitives "had to be included for efficiency." The right answer is to stay true to "everything is an object" and provide a trap door to do lower-level activities when efficiency was required (this would also have allowed for the hotspot technologies to transparently make things more efficient, as they eventually would have). Oh, and the fact that you can't use the floating point processor directly to calculate transcendental functions (it's done in software instead). I've written about issues like this as much as I can stand, and the answer I hear has always been some tautological reply to the effect that "this is the Java way."

When I wrote about how badly generics were designed, I got the same response, along with "we must be backwards compatible with previous (bad) decisions made in Java." Lately more and more people have gained enough experience with Generics to see that they really are very hard to use -- indeed, C++ templates are much more powerful and consistent (and much easier to use now that compiler error messages are tolerable). People have even been taking reification seriously -- something that would be helpful but won't put that much of a dent in a design that is crippled by self-imposed constraints.

This sounds like a Java versus C++ answer, rather than focusing on RAII. I think C++ and Java are different languages, each with its strengths and weaknesses. Also the C++ designers didn't do their homework in many areas (KISS principle not applied, simple import mechanism for classes missing, etc). But the focus of the question was RAII: this is missing in Java and you have to program it manually.
– GiorgioNov 7 '11 at 20:15


@Giorgio: The point of the article is, Java seems to have missed the boat on a number of issues, some of which relate directly to RAII. Regarding C++ and its impact on Java, Eckels notes: "You must keep in mind the primary design decision upon which everything in C++ hung: compatibility with C. This was a huge constraint, and has always been C++'s greatest strength... and its bane. It also fooled the Java designers who didn't understand C++ well enough." The design of C++ influenced Java directly, while C# had the opportunity to learn from both. (Whether it did so is another question.)
– GnawmeNov 7 '11 at 20:29


@Giorgio Studying existing languages in a particular paradigm and language family is indeed a part of the homework required for new language development. This is one example where they simply whiffed it with Java. They had C++ and Smalltalk to look at. C++ didn't have Java to look at when it was developed.
– JeremyNov 7 '11 at 20:33


@Gnawme: "Java seems to have missed the boat on a number of issues, some of which relate directly to RAII": can you mention these issues? The article you posted does not mention RAII.
– GiorgioNov 7 '11 at 21:22


@Giorgio Sure, there have been innovations since the development of C++ that account for many of the features you find lacking there. Are any of those features ones that they should have found by looking at languages established before the development of C++? That's the kind of homework we are talking about with Java - there is no reason for them not to consider every C++ feature in the development of Java. Some, like multiple inheritance, they intentionally left out - others, like RAII, they seem to have overlooked.
– JeremyNov 8 '11 at 18:24

Stop and think about that. Keep thinking.... Now, C++ didn't have threads when everyone got so keen on RAII. Even Erlang (separate heaps per thread) gets icky when you pass too many objects around. C++ only got a memory model in C++11; now you can almost reason about concurrency in C++ without having to refer to your compiler's "documentation".

Java was designed from (almost) day one for multiple threads.

I've still got my old copy of "The C++ Programming Language" where Stroustrup assures me I won't need threads.

Java being designed for multiple threads also explains why the GC isn't based on reference counting.
– dan04Nov 8 '11 at 1:01


@NemanjaTrifunovic: You can't compare C++/CLI to Java or C#, it was designed almost for the express purpose of interoperating with unmanaged C/C++ code; it's more like an unmanaged language that happens to give access to the .NET framework than vice versa.
– AaronaughtNov 8 '11 at 20:58


@NemanjaTrifunovic: Yes, C++/CLI is one example of how it can be done in a way that is totally inappropriate for normal applications. It's only useful for C/C++ interop. Not only should normal developers not need to be saddled with a totally irrelevant "stack or heap" decision, but if you ever try to refactor it then it's trivially easy to accidentally create a null pointer/reference error and/or a memory leak. Sorry, but I have to wonder if you've ever actually programmed in Java or C#, because I don't think anyone who has would actually want the semantics used in C++/CLI.
– AaronaughtNov 9 '11 at 14:05


@Aaronaught: I've programmed with both Java (a little) and C# (a lot) and my current project is pretty much all C#. Believe me, I know what I am talking about, and it has nothing to do with "stack vs. heap" - it has everything to do with making sure that all your resources are released as soon as you don't need them. Automatically. If they are not - you will get into trouble.
– Nemanja TrifunovicNov 9 '11 at 14:19


@NemanjaTrifunovic: That's great, really great, but both C# and C++/CLI require you to explicitly state when you want this to happen, they just use a different syntax. Nobody's disputing the essential point that you're currently rambling about (that "resources are released as soon as you don't need them") but you're making a gigantic logical leap to "all managed languages should have automatic-but-only-sort-of call-stack-based deterministic disposal". It just doesn't hold water.
– AaronaughtNov 9 '11 at 15:14

In C++, you use more general-purpose, lower-level language features (destructors automatically called on stack-based objects) to implement a higher-level one (RAII), and this approach is something the C# / Java folks seem not to be too fond of. They'd rather design specific high-level tools for specific needs, and provide them to the programmers ready-made, built into the language. The problem with such specific tools is that they are often impossible to customize (in part that's what makes them so easy to learn). When building from smaller blocks, a better solution may come around with time, while if you only have high-level, built-in constructs, this is less likely.
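As a hedged illustration of building the higher-level tool from the lower-level pieces the answer mentions (ScopeGuard is a hypothetical helper, not a standard facility): a dozen lines of destructor-plus-template code give a reusable, customizable cleanup mechanism.

#include <cstdio>
#include <utility>

template <class F>
class ScopeGuard {
    F cleanup;
public:
    explicit ScopeGuard(F f) : cleanup(std::move(f)) {}
    ~ScopeGuard() { cleanup(); }                  // runs at scope exit, normal or exceptional
    ScopeGuard(const ScopeGuard&) = delete;
    ScopeGuard& operator=(const ScopeGuard&) = delete;
};

int main() {
    std::FILE* f = std::fopen("log.txt", "w");
    ScopeGuard closeFile([f] { if (f) std::fclose(f); });
    // ... use f; no using block or try/finally is needed ...
}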

So yeah, I think (I wasn't actually there...) it was a conscious decision, with the goal of making the languages easier to pick up, but in my opinion, it was a bad decision. Then again, I generally prefer the C++ give-the-programmers-a-chance-to-roll-their-own philosophy, so I'm a bit biased.

The "give-the-programmers-a-chance-to-roll-their-own philosophy" works fine UNTIL to you need to combine libraries written by programmers who each rolled their own string classes and smart pointers.
– dan04Nov 8 '11 at 0:18

@dan04 so the managed languages that give you pre-defined string classes, then allow you to monkey-patch them, which is a recipe for disaster if you're the kind of guy who can't cope with a different own-rolled string class.
– gbjbaanbMar 9 '13 at 18:17

You already called out the rough equivalent to this in C# with the Dispose method. Java also has finalize. NOTE: I realize that Java's finalize is non-deterministic and different from Dispose; I am just pointing out that they both have a method of cleaning up resources alongside the GC.

If anything, C++ becomes more of a pain, though, because an object has to be physically destroyed. In higher-level languages like C# and Java we depend on a garbage collector to clean it up when there are no longer references to it. There is no guarantee that a DBConnection object in C++ doesn't have rogue references or pointers to it.
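A small C++ sketch of the "rogue pointer" risk being described (the names are hypothetical): nothing in the language stops a raw pointer from outliving the scoped object it points to.

struct DbConnection {
    ~DbConnection() { /* close */ }
    void query() { /* ... */ }
};

DbConnection* leaked = nullptr;

void scopeExample() {
    DbConnection conn;   // destroyed deterministically at the end of this scope
    leaked = &conn;      // but a pointer to it escapes
}

int main() {
    scopeExample();
    leaked->query();     // undefined behaviour: the connection has already been destroyed
}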

Yes the C++ code can be more intuitive to read but can be a nightmare to debug because the boundaries and limitations that languages like Java put in place rule out some of the more aggravating and difficult bugs as well as protect other developers from common rookie mistakes.

Perhaps it comes down to preferences: some like the low-level power, control and purity of C++, where others like myself prefer a more sandboxed language that is much more explicit.

First of all Java's "finalize" is non-deterministic... it is not the equivalent of C#'s "dispose" or of C++'s destructors... also, C++ also has a garbage collector if you use .NET
– JoelFanNov 7 '11 at 15:20


@DeadMG: The problem is, you might not be an idiot, but that other guy who just left the company (and who wrote the code that you now maintain) might have been.
– KevinNov 7 '11 at 16:36


That guy is going to write shitty code whatever you do. You can't take a bad programmer and make him write good code. Dangling pointers are the least of my concerns when dealing with idiots. Good coding standards use smart pointers for memory that has to be tracked, so smart management should make it obvious how to safely de-allocate and access memory.
– DeadMGNov 7 '11 at 17:06


What DeadMG said. There are many bad things about C++. But RAII isn't one of them by a long stretch. In fact, the failure of Java and .NET to properly account for resource management (because memory is the only resource, right?) is one of their biggest problems.
– Konrad RudolphNov 7 '11 at 17:57


The finalizer is, in my opinion, a disaster design-wise, as you are forcing the correct usage of an object from the designer onto the user of the object (not in terms of memory management but resource management). In C++ it is the responsibility of the class designer to get resource management correct (done only once). In Java it is the responsibility of the class user to get resource management correct, and thus it must be done each time the class is used. stackoverflow.com/questions/161177/…
– Martin YorkNov 7 '11 at 18:05
