Posted
by
Unknown Lamer
on Thursday August 18, 2011 @01:31PM
from the natives-are-revolting dept.

snydeq writes with an editorial in InfoWorld about the resurgence of native code. From the article: "Modern programmers have increasingly turned away from native compilation in favor of managed-code environments such as Java and .Net, which shield them from some of the drudgery of memory management and input validation. Others are willing to sacrifice some performance for the syntactic comforts of dynamic languages such as Python, Ruby, and JavaScript. But C++11 arrives at an interesting time. There's a growing sentiment that the pendulum may have swung too far away from native code, and it might be time for it to swing back in the other direction. Thus, C++ may have found itself some unlikely allies."

I can only find this to be true for people coming from certain backgrounds. I don't think anyone without previous programming experience would agree that C++ is easier to understand.
Well, maybe it's easier to reach the point where one thinks one understands; then it's definitely easier to realize that nothing is actually understood, even if somehow things worked. :-)

I tried to teach Java to my son. We quickly got stuck on OO concepts. We went to Python. Life was good. And as you rightly point out, the OO taught by Java is crappy at best. If you really want OOP, Java isn't a good choice for that either.

I think most people view C++ as C with mystical object-y things. C (imho) is really easy to get. C++, that is, *real*, modern C++, is quite complex to use and isn't something "any decent programmer could read a book and understand in a few weeks." Of course, I don't feel that way about Java either, and it appears the /. crowd still views Java as a toy.

I disagree about C being easier to get into. The stuff you do in toy programs, playing with strings and arrays and such, is difficult and alien in C if you've never seen pointers, or manual memory management. In C++ you can start with string and vector, and get toy programs working with just STL stuff, worry about pointers and memory leaks later on.

Electronic engineering students. While C++ is taking over the world, I know of at least one course where understanding microprocessor design (let alone assembly language) is a requirement. Unless the course has changed in the short 2 years since I took it...

Java has its purposes. Write-once, Run-Almost-Anywhere is a good concept.

It is a good concept. Unfortunately, several studies (at least one was covered here on Slashdot) indicate the vast majority of Java development runs on the same platform on which it was written. Furthermore, the vast majority of this Java software cannot run anywhere else without additional code changes, because of programmer shortsightedness or just simple mistakes.

So while it's a nice "have", pragmatically speaking, it doesn't apply to most Java software.

Which means, at the end of the day, the development cycle of something like Java vs. C++ isn't all that different for code which actually is "Write-once, Run-Almost-Anywhere."

There is always something that needs changing. For example, I recently wrote an experimental compression program. Everything is plain, common C. No libraries beyond the common ones. No use of OS-specific APIs. Nothing but stdio and stdlib. But it's still not going to run on Windows without a minor change, because it tries to store a temporary file in /tmp/.

Gimme an algorithm or any other job and I'll implement it in 'C' - I don't need no pussy language that makes parsing text easier (Perl) or web back ends easier (Python) or worry about the mythical write once run everywhere languages like Java.

And back in the day of the old PS2, every goddamn game development house started out their dev cycle by reimplementing mip-maps because it wasn't supported directly by the hardware. Fucking insanity. If there's a tool that has been developed for text parsing, and 99%

It's not only about being easier, but also about being proven. I would rather use a lockless thread-safe multi-reader/writer queue than implement it myself, only to end up with a possible race condition. It would be fun to learn it some day when I have free time, but enterprise code? No way. I'll let engineers with PhDs and tons of testing figure out the hard stuff.

Yes, you get a little bruised when it comes to pointers, but that is very much worth it in the long run.

Students that don't learn about pointers and such early on (e.g. got Java first) tend to have a harder time in the upper classes where lower-level languages are required. So you can either "man up" and learn the hard stuff early, when it won't interfere with the classwork, or have your upper-level classes diverted to teach the stuff that should have been learned earlier, and then not get through as much of the upper-level material as you should.

Modern programmers have increasingly turned away from native compilation in favor of managed-code environments such as Java and .Net, which shield them from some of the drudgery of memory management and input validation.

So if you wanted to be shielded from the drudgery of memory management in C++, why weren't you using the features of the language that provide automatic memory management to make this less of a pain? It's not as if you don't still have to do memory management in, say, .NET, especially since the IDisposable pattern is needed all over the place in user-written code to clean up non-memory resources like file handles, GDI handles, etc. held within your objects.

So if you wanted to be shielded from the drudgery of memory management in C++, why weren't you using the features of the language that provide automatic memory management to make this less of a pain?

Because it doesn't provide full automatic memory management (smart pointers don't handle cycles), and because you have to write reams and reams of angle brackets to use them.

It's not as if you don't still have to do memory management in, say, .NET, especially since the IDisposable pattern is needed all over the place in user-written code to clean up non-memory resources like file handles, GDI handles, etc. held within your objects.

I can't speak for .NET, but in Java you have very little need for it. It's better to close your files and connections, but they'll be garbage-collected for you if you don't.

I can speak for VB.NET (may be true in C#.NET, not sure), but you can leak memory as well as other resources. I bought into the "garbage collection handles it" mindset, until it was rudely pushed into my face that some UI elements have to have Dispose called, or they stay in memory (IIRC, forms will not be collected, so incompetent programmers that put properties on their forms can come along later and get those properties from the form after it is closed, and other things like that).

Because it doesn't provide full automatic memory management (smart pointers don't handle cycles), and because you have to write reams and reams of angle brackets to use them.

If you're creating cycles then you didn't understand RAII. The objects in your cycle don't 'own' each other.

If you're typing lots of angle brackets then maybe you need to learn about typedef.

It's better to close your files and connections, but they'll be garbage-collected for you if you don't.

You seem to be saying that after I save a file I have to wait for the garbage collector to run before I can open the file in another application or move it somewhere else on the disk. How will I know when the garbage collector has run (assuming it runs at all...)? Am I supposed to quit your application to force it to close the file?

Some applications interact with garbage collection by using finalization and weak/soft/phantom references. These features can create performance artifacts at the Java programming language level. An example of this is relying on finalization to close file descriptors, which makes an external resource dependent on garbage collection promptness.

Because it doesn't provide full automatic memory management (smart pointers don't handle cycles), and because you have to write reams and reams of angle brackets to use them.

Then maybe you need to look at shared_ptr from boost and the equivalent introduced in C++0x? The very classes designed for the situation you are talking about? Oh right, it's more fun to make ignorant comments!

You don't have that right, and you're probably writing code with memory leaks. From the Boost documentation: "Because the implementation uses reference counting, cycles of shared_ptr instances will not be reclaimed." You need to specially craft your cycle with a weak_ptr.

Last I checked you could use .NET in C++. So use .NET for the GUI and networking frameworks and use C++ to do hardcore number crunching. Also, there are native data structures in .NET and Java you can use in your program if you need performance. Most amateur programmers never look in the math or collections libraries.

Managed code has been the single biggest disaster at least where I work: stalls, huge memory consumption, unpredictability, the dreaded garbage collection. I am glad we are out of it. And if you fear crashes, you could use C++ exceptions, so you can divide by zero or do other bad stuff and never experience a hard crash... or even better, use a complete threaded sandbox (see the Chromium sandbox). That means C++ is totally safe and the fastest at the same time -- best of both worlds; that is why C++ is used internally by Google, eBay, Oracle, etc.

You should look at the coding guidelines Google uses for C++. They only use a subset of C++. For them, a lot of their guidelines make sense where they don't for 99% of the programming world. There is literally only a handful of companies which have the issues of scale Google must contend with. As such, Google is rarely a good example to look to for anything in general-purpose computing.

I don't recommend that. Anyone reading the Google coding guidelines will get a very wrong picture of how C++ is supposed to be done. Hell, their guidelines specifically disallow the use of RAII which is one of the dumbest things ever.

I just had a falling out with another developer because he told me I was doing things dumbly because they didn't comply with Google's coding guidelines or Linus' irrational hatred of C++, amongst many other insanely stupid and completely irrational justifications. Of course, the falling out was caused by me politely explaining to him that coding standards have significantly progressed in the last several decades, followed by inviting him to join me in at least t

Last I checked you could use .NET in C++. So use .NET for the GUI and networking frameworks and use C++ to do hardcore number crunching. Also, there are native data structures in .NET and Java you can use in your program if you need performance. Most amateur programmers never look in the math or collections libraries.

As to the native data structures in .NET and Java -- you incur performance penalties when you want to use them directly, as you have to thunk back and forth between the managed and unmanaged portions of the system. I think Java is better about it than .NET, but it's a pain nonetheless. It's easier, and more performant, to just stay in either unmanaged or managed code all the time.

Carmack remarked about this on his Twitter account today: "iOS did a lot to 'save' native code on mobile platforms. Prior, there was a sense that only Neanderthals didn’t want a VM."

Apple is even backing down on Cocoa garbage collection with their new Automatic Reference Counting feature, in which the compiler determines object lifetimes and inserts the needed memory management calls. ARC will be the default for new Xcode projects. I think there was a hope that computing power would catch up and make VMs a competitive alternative to native code.

With ARC, there really isn't a need for a garbage collector. I've used both, and the only things that happen under ARC that bite you are things that happen in Java et al.; i.e., you can still dereference a null pointer and such and get an error.

The only place I have been truly surprised is that some of the Foundation stuff can perform weirdly or unexpectedly. That's more that ARC is fully Cocoa-ready and that you need to tread carefully when using toll-free bridged stuff. But then ARC warns you, and then you need to just fol

I think there was a hope that computing power would catch up and make VMs a competitive alternative to native code.

While you're right there's a computing power issue here, the issue is battery life not lack of CPU cycles. VMs add overhead, as you add overhead you'll run longer and burn more power on the CPU. If you want to squeeze all you can out of a limited battery you need to optimize your code and in the end that's going to mean native code with very explicit memory management. VMs just don't play well in embedded environments.

I was under the impression that iOS tablets do not support garbage collection and that you had to manually use retain counts. Are you saying I will not need to use retain counts and can rely on ARC instead?

As far as I understand, objects will still have their retain counts, but the compiler will analyze your code, then add the release calls for you. If you try and make your own release calls, you'll get a compiler error. Crazy stuff.

Apple really did 'save' something, but it was their own ass. We all remember iOS (iPhone OS at the time) with the web apps only; these were limited and terrible. Then the moment came when Jobs & Co. decided to release a real native SDK for iOS... and from that point forward iOS went batman.

I always quit learning Objective-C because I think the syntax is ugly as hell. Smalltalk was also disgusting (especially those if constructs), and Erlang is one yucky language too (Erlangers acknowledge this and even tell you to suck it up on the homepage). One of the best things C and C++ have is a somewhat aesthetic syntax (although there are mess-ups like "=" and "=="). Pascal is really pleasant to read, and so is ALGOL (I've never programmed in it and I can understand it, although the "OD" is awkward). P

I coded for AWT and Swing in the 90s. The app you use to demonstrate looks a lot nicer than anything I recall seeing use those toolkits back then.

Also, honestly, the app you link to looks like it has a reasonable UI for the role it performs. I would love to see current apps that work well for power users*, but which have "sleek" UIs. 90% of "good UI design" is moving 90% of advanced features and behaviors away from the first impression.

...choose the tool that's best for the job, don't choose the job that's best for the tools you know already.

Game developers, for instance, are among the guys who write the most performance sensitive code out there, and they use a mix of C, C++, C#, Lua/Python for the various parts of the game. Usually the inner, tight loop is written in C/C++, higher level modules are written in C# and designer/modder scripts are written in a very high level language such as Lua. There is no best language in general, and whoever says otherwise is often an idiot.

I very much doubt that C++11 heralds any kind of new interest in native code. Rather, native code in general has been getting more attention recently and C++11 just happened to be finalized around the same time. (Disclaimer: C++ is my second-favorite language. I want it to be liked and used, but I'm realistic.)

Nearly off-topic in the article is this gem of a paragraph:

But the most important thing to remember is to always choose the right tool for the job. No one wants to go back to the bad old days

But does anyone know if or when there's going to be a book (you know, one of those paper things that you physically hold in your hands and actually have to turn pages to read, instead of looking glaze-eyed into the glow of a computer monitor) that covers C++11 fairly exhaustively, the way Stroustrup's "The C++ Programming Language" covered the previous standardized version relatively thoroughly?

There probably will be a dead tree book--but I seriously doubt that you will be able to physically hold it in your hands for a significant interval unless you're a world-class bodybuilder. Note that the special edition of The C++ Programming Language is over 1,000 pages, and it's over a decade old!

C++ got a very bad reputation due to the helpless little script kiddies who think memory management is beneath a real programmer's concern. And now you tell me these dime-a-dozen script kiddies are coming back! ARGH! :(

The article perpetuates the myth that native code has to be "unsafe". That's an artifact of C and C++. It's not true of Pascal, Ada, Modula, Delphi, Eiffel, Erlang, or Go.

Nor does subscript and pointer checking have to be expensive. Usually, it can be hoisted out of loops and checked once. Or, for many FOR loops, zero times, if the upper bound is derived from the array size.

One of the sad facts of programming is that there should have been a replacement for C/C++ by now. But nothing ever overcame the legacy code base of the UNIX/Linux world. Every day, millions of programs crash and millions of compromised machines have security breaches because of this.

The crashes are solely due to using C++ as C, i.e. manual memory management, C casts, pointer arithmetic and C arrays. If none of the C features are used, then C++ is as safe as the languages you mention.

Why not just make a new user account that has access only to those resources that a given program may access, and then run the program as that user? That's what iOS for iPhone and IOS for Wii do, I've read.

1: allows both RAII and close-to-bare-metal coding to coexist, as appropriate for each piece of the code in question, without having to resort to the mess of mixing languages
2: has a high-quality FOSS implementation
3: is widely ported

Is there anyone these days writing applications in one language to the exclusion of any other? I'm feeling old. I wrote applications in ASM because it was exclusive. Then I wrote applications in Fortran because it was easy. Then BASICA because it was way easier. Then Pascal because it was the shiznit. But then applications became more complex because of these GUI things and stuff. That is when the OO languages like C++ kicked ass. Nowadays it is so "normal" to write something that communicates with the a

Write the damn code according to the rules and idioms of the language in use, let the language implementation deal with the rest. If you're an application developer and care about *how* your code is being run, you're doing it wrong.

People tend to lump lots of things as if they were all the same thing, but they're really completely independent:

Does the language run in a virtual machine, or is compiled down to native assembly in advance?
Is memory management done explicitly, or is there a garbage collector?
Does it allow direct access to memory (necessary for some parts of system programming)?
Does it check for common errors, such as going past the ends of arrays?

There are garbage collectors for C++. C# runs in a virtual machine, but still permits direct access to memory. STL collections in C++ check for out of bounds indices. So here is how I would categorize different languages, roughly ordered from "most native" to "most managed":

C++: Incredibly complex, lots of bug opportunities, very verbose, very fast, suitable for system programming
D: Somewhat complex, some bug opportunities, somewhat verbose, very fast, suitable for system programming
Objective C: Somewhat complex, some bug opportunities, somewhat verbose, fast, suitable for system programming
C#: Somewhat complex, some bug opportunities, somewhat verbose, fast, suitable for system programming
Java: Somewhat complex, some bug opportunities, somewhat verbose, fast, not suitable for system programming
Scala: Very complex, few bug opportunities, not at all verbose, fast, not suitable for system programming
Python: Fairly simple, some bug opportunities, not at all verbose, slow, not suitable for system programming

This keeps getting brought up, but I've written commercial C++ code for years and I've not had memory management issues. There have been problems with legacy 3rd party libraries, but if you religiously apply the RAII ( https://secure.wikimedia.org/wikipedia/en/wiki/RAII [wikimedia.org] ) idiom you will usually be fine.
I can't remember the last time I worked with a raw pointer and had to new/delete my own memory.

Yep, I don't recall having memory management issues with C++ in the last ten years or so. Smart pointers take care of freeing RAM and the std::vector I use has bounds checking and extensive iterator checking turned on by default (even on operator[]).

Done properly, C++ is as safe as Java; i.e., the only remaining memory error is a null pointer dereference.

Java, OTOH has no stack unwinding for timely release of resources. Garbage collection is useless for anything other than RAM. Want a file or a network connection closed? You have to

I maintain professional (though ancient) C++ code, and actually, the whole new/delete and raw pointers seems rather elegant to me... Or perhaps (horror of horrors), I am being slowly transformed into an ancient Unix nerd...

But I have to admit there is one bug that I am struggling with. Somewhere, someone is deleting an object during shutdown too quickly and causing a crash (and Visual C++ 6.0 doesn't help me trace this easily)... So your way is possibly better...

The corollary of this is that you *need* super hardware to run the latest applications because they are so inefficient.

Instead of dumbing-down programming (which is a false economy anyway as poor devs write poor apps regardless of the ease-of-use of their programming environment), we should be increasing the skills of the developers. That means stopping from making things so easy that my manager can do it, still making a hash of it, only now thinking that any outsourced cheap developer can do my job!

Yes, because managed code has no memory leaks. Please. I work on a mixed C++/Java Android codebase. I haven't found a memory leak on the C++ side in months. The Android framework decides to hold onto random references every new version.

Quite frankly, memory management is not hard. If you don't understand the simple idea of allocate, use, release, then you are a complete incompetent and should not be programming professionally. I'll go so far as to say it's better for a language NOT to automatically manage your memory- in general the first sign of a bad or failing architecture is that object life cycles and memory allocation start to be non-trivial. Managing your own memory catches those architecture bugs and leads to cleaner, easier to understand code. And the cost is absolutely minimal, I doubt I've spent 10 minutes in the past 2 or 3 years actually debugging memory problems in C++.

One could argue that the less deletes you have, more are the chances you have that there is a memory leak...

<grammar_nazi> fewer </grammar_nazi>

One could argue that, it's true, but it would most likely be evidence that the person arguing was utterly ignorant of anything that's happened in C++ since the early nineties. Between the container classes of the STL and Boost, and the RAII model, having very few deletes is generally a very good sign, since it shows you're not resorting to too much error-prone manual memory management. If the few deletes you have are all in destructors, that only makes it more clear. (And even there, using auto_ptr or shared_ptr is generally vastly superior to trying to figure out when you can issue a manual delete.)

In modern C++, you can manage vast amounts of memory properly without ever once using delete. In fact, you're more likely to be managing your memory properly if you don't.

I work on a mixed C++/Java Android codebase. I haven't found a memory leak on the C++ side in months. The Android framework decides to hold onto random references every new version.

Then the android platform is a piece a crap. Or your developers suck. Neither of which is very relevant when discussing whether managed code is or isn't a good thing, as there are plenty of good platforms.

Quite frankly, memory management is not hard.

Not much in programming is hard. But anything you have to think about is bug prone. Including manual memory management.

If you don't understand the simple idea of allocate, use, release, then you are a complete incompetent and should not be programming professionally.

The thing I've found with memory management is that, while it "just works" and you don't have to think about it for relatively straightforward stuff, when you start getting into more complex OO designs, memory management actually becomes more of a pain in some situations!

Memory leaks in a managed language _are_ possible! When you start having references flying left and right, you get into situations where two objects you are finished with are referencing each other (or themselves) and thus trick the garbage collector

>>The thing I've found with memory management is that, while it "just works" and you don't have to think about it for relatively straightforward stuff, when you start getting into more complex OO designs, memory management actually becomes more of a pain in some situations!

Absolutely. When I tested some Java code, it ran correctly on all platforms except one obscure one (WinNT on SGI). On that platform, which was unfortunately something I had to support, it wouldn't deallocate memory correctly; it'd leak ou

New hardware has bought us the ability to use managed code for most (not all) software. Isn't this much better than expecting every programmer to perfectly manage his memory every time? Just wait a couple more years and we won't be feeling the hardware pinch even on phones.

Personally I hope new machines start learning from the past [wikipedia.org] and implementing processor instructions to make GC easier and support things like runtime type checking. ARM has ThumbEE [arm.com] which is definitely a step in the right direction. Basically, I see the proliferation of "Managed" run time environments as a consequence of computer architectures remaining dangerous to write code for — we can pack a lot of transistors onto a die nowadays so why not use some of that space for features people have been impl

Memory management is a red herring; even managed applications require it. Garbage collection will just hide poor application design and the inefficiencies that make it difficult. You can still crash on null pointers, leak references, and most certainly leak external resources quite easily.

C++ just makes you actually have to think about these things. You have to pay attention to your allocations, scratch space, ownership, etc., and quite frankly applications are often better for it. Well..

Just wait a couple more years and we won't be feeling the hardware pinch even on phones.

You will always feel the hardware pinch, because every level of abstraction gets people thinking about how to make new levels of abstraction over it. So something even more bloaty will surely come along to soak up your "faster CPU" or occupy your "more memory."

Phones will always only be as fast as the market will bear. As soon as Joe Sixpack, who has no idea that it should not take ten seconds to save a calendar entry to his personal organizer app, is happy, the phone will ship.

Again as a counterpoint, just look how much code written in .NET has to implement IDisposable, which if not done means that programmers will leak all sorts of non-memory resources such as file handles, etc.

Your counterpoint misses the point.

Pretending the problem doesn't exist doesn't make it go away.

Nobody is pretending the problem of resource management doesn't exist.

Car analogy:

Your argument comes across as saying drivers shouldn't use automatic transmissions to reduce the effort they need to spend to main

C/C++ have always been perfectly usable for doing both system and application programming. And with the inevitable increased emphasis on energy efficiency, we're going to ask "how much energy can we save by converting [insert application of your choice] from a managed or interpreted language to a program compiled in C or C++?"

For some large-scale applications, not only is it the best option, but the only option if you want to ever finish before the next [load of data/batch of requests] comes in.

Not that C++ really ever went away. With its older cousin C, it remains one of the most popular languages for systems programming and for applications that call for performance-intensive native code, such as 3D game engines.

I fail to see any problem with this statement, except perhaps that it implies that C++ was designed only for performance applications, when really it was designed to handle anything you throw at it (of course, other languages are better at certain things, meaning people will pick that language because of their needs, but no language is as good as C++ at absolutely everything) and handle it efficiently.

If your code is to be maintained by junior programmers, then choose a managed code environment.

With that one criterion, you may have completely struck those languages from candidacy for any project whatsoever.

All C and C++ programmers are "junior programmers" while they become proficient with the language, and nobody becomes a "senior programmer" without having learned from their own mistakes. You can't jump to C (or even C++) from languages like Python or PHP without undergoing a set of major paradigm shifts, and I'm not just talking about memory management.

Nintendo dropped the ball big time when they used a string comparison function for evaluating the digital signature hash. String functions terminate early (with success!) for strings whose first byte is null. All an attacker had to do to fakesign a Wii application was brute-force change some random unused byte until it caused the first byte of the hash to be 00.