There is also another problem, which really isn't a problem so much as an efficiency problem: the return type is not a pointer (i.e. the object is made twice).

That's not really a problem. Firstly, CString is ref-counted which makes it cheap to copy. Copying it doesn't do a memory allocation. (This is one of many reasons to prefer it over std::string.) Secondly, a good compiler (eg VC8) will optimise away the copy anyway.

That is a misquote - I did not write that; please double-check your quoting next time. However, your remarks are correct (the last optimization would likely be the Return Value Optimization, I believe).

Well, what happens if the CString constructor throws an exception? Better to use a smart pointer instead of a raw char * for path.

Ok, so you already noticed that. All I can do is ditto... and point out boost::scoped_array.

Anonymous:

That's not really a problem. Firstly, CString is ref-counted which makes it cheap to copy. Copying it doesn't do a memory allocation. (This is one of many reasons to prefer it over std::string.) Secondly, a good compiler (eg VC8) will optimise away the copy anyway.

Rubbish! First, std::string can be implemented using ref-counting with copy-on-write as well (see More Exceptional C++, Herb Sutter). Second, it rarely pays off in terms of performance (see same book). Third, even less so in multi-threaded applications. Fourth, a really "smart" implementation will use different policies depending on the length of the string anyway (see Andrei Alexandrescu's treatment of the topic).

Fifth... VC8's a good compiler? They greatly improved standard compliance and fixed many crash bugs for sure, but VC++ is still one freaking big memory hog.

Anonymous:

The real WTF is people still use languages where they have to give a crap about pointers, buffer overruns and orphaned memory.

Nice troll posting... I guess I'll have to write another list.

First, pointers are around in Java and C# as well, they just go by a different name. What's not around, for safety reasons, is pointer arithmetic (except in unsafe C#), which is a whole 'nuther thing (not that you'd understand, of course).

Second, this is C++ and you don't have to care about pointers and buffer overruns if you use strings in C++ (e.g. std::string and CString). The reason why the author had to care about pointers is thus not the language but the Win32 API. You cannot even call this API in Java, whether that's an improvement is unclear to me - you'd have to use JNI, i.e. C, which is no better. In C#, you probably need to use unsafe mode, which is not much better either. Of course you could use replacement APIs which don't require the use of pointers in either C# or Java, but the same can be said about C++, in other words: So fucking what?

Third, with respect to orphaned memory: It was once hard to avoid resource leaks in C++ especially if exception safety is taken into account. Not any more - boost::scoped_ptr, boost::scoped_array, boost::shared_ptr etc. come to the rescue. The nice thing is that these also handle deterministic resource cleanup, i.e. for things other than memory. Whereas in C# or Java, for instance, deterministic cleanup functions, finalization, weak references and multi-threading combine to create a problem complex so confusing that even "gurus" often get it wrong. To say that these interactions are still poorly understood would be more than an understatement. The suggestion that a bad coder would better handle these issues than simple smart pointers is just hilarious to anyone in the know.

Fourth, you're probably also one of the gazillion clueless C#/Java coders who thinks the garbage collector frees memory that is no longer used. This, of course, is very wrong. The garbage collector frees memory that is no longer referenced. And that is why I have yet to see one sufficiently large (50,000 LoC or more) C# or Java project that has never suffered from a memory leak.

The real WTF is people still use languages where they have to give a crap about pointers, buffer overruns and orphaned memory

darin wrote the following post at 11-08-2006 5:59 PM:

Of course they do. Not all systems can afford the initial overhead
of the garbage collectors of Lisp, Smalltalk, or Java. And even
those that can afford it still have operating systems that are written
in C/C++/Pascal/Assembler, where these issues still are important. Or
maybe the real WTF is that there are people who still think that
everyone who doesn't follow their own way of doing things is wrong.

You're right to an extent of course. But you're also the first intelligent response to the original poster.

Other people have responded (with an air of machismo) as if the problem is that people don't know how to use pointers. The truth is, even if you have both the knowledge and intellect to use pointers correctly and to clean up after yourself, you're wasting time.

One of the posters in this thread talked about being able to see the memory leak easily because he was in "Leak detection mode". Hard to say how much time he wastes in this mode, but it's time that he's not adding functionality to his project or tracking down real logic errors.

Every line of code you write that frees a memory structure, every time you scan over a function to make sure that no path leaves memory lying around or frees it more than once, every time you run tools to detect memory leaks, every time you scan your code for buffer overflows that could compromise security, you're wasting time.

If you're writing real-time systems or operating systems, you might not have a choice. You might not be able to use a language that uses garbage collection (although languages like Erlang are designed for real-time environments).

But the truth is that most of us are not writing such systems. How many people out there are writing desktop applications in C++? A lot more than should be. They do it not because it's the right approach, but because it's what they're familiar with. That's the way they did it in the past, and that's the way they're doing it now. They haven't bothered to keep up with language technology and faster processors.

The real WTF is people still use languages where they have to give a crap about pointers, buffer overruns and orphaned memory.

Most other languages use pointers without you knowing. C++ can be used this way as well (i.e. use references instead of pointers).

C/C++ is a high-level, low-level language. It's the lowest level above assembly (excluding HLA) that is currently available, which is why C is used in OSes. (I've seen OSes written in assembly - some without comments; it takes a couple of days just to read the boot loader, and longer for the memory manager - and in Pascal. In fact, I've actually written one myself in C.) Although I hate C for its string-handling functions because of the possible buffer overflows, I love C because I have the power to do whatever I feel like, but still be processor independent.

I think the real WTF is the fact that it is only compatible with the M$ API.

The real WTF is that Microsoft's own example code allocates 192k worth of buffers on the stack.

-Which is EXACTLY why I always tell people to <font color="#ff0000">never</font> use MS/MSDN provided sample code as examples of "the right way" to do things! Just because it is in print (paper or otherwise) does not mean that it is correct, regardless of who wrote it.

Right on. This is a Greek root; that's why it starts with a 'k'. Also, it is singular (Greek ending!) and therefore has no English plural-s, so there's nothing like 'one kudo' (as I've seen already...)

I was wondering if someone would mention that. Does anyone actually check the return value of new? I know we "should", but I have yet to see production code that does so that wasn't intended to run within a very memory limited environment.

And then I'd have to worry about memory fragmentation, not just exhaustion. If it's really important that the program not crash, you have to be more careful than just checking the return from malloc() ... unfortunately.