DevX interviewed Bjarne Stroustrup about C++0x, the new C++ standard that is due in 2009. Bjarne Stroustrup has classified the new features into three categories: Concurrency, Libraries, and Language. The concurrency changes make C++ more standardized and easier to use on multi-core processors. It is good to see that some of the commonly used libraries are becoming standard (e.g. unordered_map and regex).

For years I have struggled to understand where C++ fits into the landscape.

Can someone give me examples of problems where C++ is by far a better solution than other languages like C or Java or Python or Scheme or ...

For low-level coding, sure, C is ideal. For number crunching, Fortran makes good use of the limited scope of certain computational problems. For some practical scenarios, you can't beat a rapid-development, rapid-evolution language like Python. For interactive GUIs, something like Smalltalk can be fun.

Every major piece of commercial software is implemented in C++, as are big open source projects [e.g. KDE].

I really do not know if it is implemented in C++, but I cannot imagine Microsoft Office implemented in C [and still less in Python, Ruby or C#].

I see all the higher-level programming languages as suited to "tailor-made" software for business and enterprise operations, and C++ for the rest; I do not see a world full of commercial software implemented just in C# or Java.

C++'s abstraction is several orders of magnitude higher than C's. Template metaprogramming is the most elegant and powerful way to deal with a lot of problems.

I see all the higher-level programming languages as suited to "tailor-made" software for business and enterprise operations, and C++ for the rest; I do not see a world full of commercial software implemented just in C# or Java.

Hey, as long as Sun's HotSpot Java VM is written in C++, there clearly is a place for C++. For an extensive (and impressive) list of C++ applications see:

Just to give the one example where I have first-hand experience: mathematical software, such as vector/matrix libraries.

Here, one pursues two goals that are generally considered incompatible:

On the one hand, one wants to imitate math notation: "a = b + c*d", etc. This can be achieved in any object-oriented language through operator overloading and method chaining. The problem is that the passing of arguments and return values, the evaluation of temporaries, etc., kill the performance.

On the other hand, one wants not only the best performance but also optimal memory usage. For example, when you have a million small vectors, you really want to make sure that each vector takes the minimum possible number of bytes. This rules out many object-oriented features that are mandatory in other languages. For example, if your small vectors take only 8 bytes each, you definitely can't afford a vtable pointer, which by itself takes 4 or 8 bytes!
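To make the vtable cost concrete, here is a minimal sketch (hypothetical types, not from any library): adding a single virtual function to an 8-byte vector of two floats forces every object to carry a hidden vtable pointer, typically doubling its size on 64-bit platforms.

```cpp
#include <cstddef>

// A plain 2D vector: just two floats, 8 bytes total.
struct Vec2 {
    float x, y;
};

// The same data plus one virtual function: every object now also
// carries a vtable pointer (plus alignment padding).
struct VirtualVec2 {
    float x, y;
    virtual float norm2() const { return x * x + y * y; }
};
```

With a million objects, that per-object pointer alone is several extra megabytes, which is exactly the overhead the post is ruling out.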

This was just one example where C++ shines.

More generally, the full control you get over the generated code, the template metaprogramming (btw, the C++0x spec obviously reflects the increasing importance of that paradigm), and the "you only pay for what you use" design make it a killer language for performance. For math software, it allows optimal performance (like C) with 5% of the lines of code (I'm not making this up; I'm comparing the library I'm co-developing with existing C libraries. See http://eigen.tuxfamily.org/index.php?title=Benchmark)

- because even if you write a C function doing this, like
void foo(float x1, const float *v1, float x2, const float *v2, float *result);
then you will have to redo the work all over again when you want another function that takes not 2 but 3 vectors, returning x1*v1 + x2*v2 + x3*v3

- because C++ template metaprogramming techniques (specifically, expression templates) allow you to have this API,
result = x1*v1+x2*v2+x3*v3;
without the introduction of temporaries, so that the code compiles to completely optimized assembly (in particular, the arrays are traversed only once). I'm not saying that the trivial implementation does this; I'm saying that C++ makes it possible to write a clever implementation.

- because C++ template metaprogramming doesn't stop there: the compile-time metadata you gather about expression types can be used to make explicit use of SSE instructions where appropriate (it works very well in the above example), and to intelligently determine where to introduce temporaries.
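Here is a stripped-down, hypothetical sketch of the expression-template idea described above (all names are made up; this is nothing like Eigen's actual implementation). The operator nodes only record the expression; assignment then evaluates it coefficient by coefficient in a single pass, with no temporary vectors:

```cpp
#include <cstddef>
#include <vector>

struct Vec {
    std::vector<float> data;
    explicit Vec(std::size_t n) : data(n) {}
    float  operator[](std::size_t i) const { return data[i]; }
    float& operator[](std::size_t i)       { return data[i]; }
    std::size_t size() const { return data.size(); }

    // Assigning from any expression walks it once per coefficient:
    // a single pass over the arrays, no temporary Vec objects.
    template <typename Expr>
    Vec& operator=(const Expr& e) {
        for (std::size_t i = 0; i < size(); ++i) data[i] = e[i];
        return *this;
    }
};

// Node for "x * v": records the scalar and a reference to the vector.
struct Scaled {
    float x; const Vec& v;
    float operator[](std::size_t i) const { return x * v[i]; }
};

// Node for "expr + x * v".
template <typename L>
struct Sum {
    const L& l; const Scaled& r;
    float operator[](std::size_t i) const { return l[i] + r[i]; }
};

inline Scaled operator*(float x, const Vec& v) { return Scaled{x, v}; }

template <typename L>
Sum<L> operator+(const L& l, const Scaled& r) { return Sum<L>{l, r}; }
```

With this in place, `result = x1*v1 + x2*v2 + x3*v3;` builds a tiny tree of nodes at compile time and evaluates it in one loop, for any number of terms; the expression nodes live only for the duration of the full assignment expression.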

To see that in action, see our lib (link I gave above; I don't want to give more links as I'd be self-advertising. In my defense, this is LGPL'd software).

Here's one of the reasons why C++ is at a turning point. For a long time, C++ template metaprogramming has been known to be possible, but too heavy for the compiler. Recent compilers change that (e.g. GCC >= 4.2): current C++ front ends are becoming very clever and robust. C++0x will make template metaprogramming much more convenient from the programmer's point of view (concepts, static asserts, compile-time constants, variadic templates, template typedefs, rvalue references... we're spoiled)
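For the curious, here is roughly what two of those conveniences look like in the syntax that eventually shipped (a sketch, assuming a C++11-capable compiler): a variadic template summing any number of arguments, with a static_assert checking the element type at compile time.

```cpp
#include <type_traits>

// Base case: one argument. The static_assert fires at compile time,
// with a readable message, if someone passes a non-numeric type.
template <typename T>
T sum(T x) {
    static_assert(std::is_arithmetic<T>::value, "sum() needs numbers");
    return x;
}

// Recursive case: peel off one argument, recurse on the rest.
// Before variadic templates this required N hand-written overloads.
template <typename T, typename... Rest>
T sum(T x, Rest... rest) {
    return x + sum(rest...);
}
```

The whole recursion unrolls at compile time, so `sum(a, b, c)` compiles to the same code as `a + b + c`.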

Precision is not a language feature; it's a library feature. I don't know whether by precision you mean number of significant digits or numerical stability of algorithms, but in both cases this should go into libraries. The language itself should only provide the bare minimum, i.e. the IEEE 754 floating-point types supported by CPUs.

Fortran indeed used to have by far the best math libraries, but mark my words: there's a new generation of C++ math software coming to challenge that situation, and template metaprogramming will be a huge strength here.

There's a new generation of C++ math software coming to challenge that situation, and template metaprogramming will be a huge strength here.

I think that is kind of missing the point. Template metaprogramming does not bring much: it gives you speed and memory savings, at the cost of totally unreadable and incomprehensible code. The problem with metaprogramming is that it is a really bad tool for what it is being used for (parsing mathematical operations). I agree that handling complex linear algebra without using a lot of memory is still an unsolved problem, but I think using a much higher-level language to parse mathematical expressions is a much better way to solve it (e.g. pseudo-code mathematics parsed by something like Lisp).

For example, FFTW, the reference open source FFT library, although implemented in C, is actually generated by a specialized OCaml generator (the meat of FFTW is in the OCaml code). This is certainly a more elegant, more powerful approach than C++ metaprogramming, which is really just a hack with an awful syntax.

At most, your arguments against C++ template metaprogramming are arguments from the implementer's point of view.

From the user's point of view, I don't think there is any significant drawback! There used to be the problem of overly long compile times, but that changed a lot in recent years as C++ compilers improved.

Coming back to the implementer's point of view... if you want to see how template metaprogramming allowed 2 devs to write only 11 KLOC and surpass 500+ KLOC libraries, see my blog here: http://bjacob.livejournal.com/6723.html

At the end of the day, this "the code is too obscure" argument is old and tired. It may be relatively complicated, but since it allows unmatched results, it's just worth it.

Your argument, that external code generators are preferable, made a lot of sense back when C++ compilers were not powerful enough for metaprogramming, but nowadays this problem is pretty much solved, although it still takes some care to write meta-code that's easy on the compiler.

At most, your arguments against C++ template metaprogramming are arguments from the implementer's point of view.

Not really, although it certainly has practical implications. I never talked about the difficulty for compiler writers.

From the user's point of view, I don't think there is any significant drawback! There used to be the problem of overly long compile times, but that changed a lot in recent years as C++ compilers improved.

Not really; compiling C++ with templates is still incredibly slow (at least with g++ and the Sun Studio compilers). Compiling relatively small, heavily templated source code of a couple of hundred lines takes up to 300 MB of memory with g++ -O2 (4.2). Boost.Python is even worse. That's the only language I am aware of where you have to use special implementation techniques, with design implications, just to make compilation time bearable.

Coming back to the implementer's point of view... if you want to see how template metaprogramming allowed 2 devs to write only 11 KLOC and surpass 500+ KLOC libraries, see my blog here: http://bjacob.livejournal.com/6723.html

I did not know about your project; it looks interesting and I will look at it (being header-only certainly has big implications for deployment in open source projects; it makes it really easy to use, a big plus IMO). But I think it is misleading to compare your library to, say, ATLAS or MKL. First, MKL implements a huge number of features you don't have (FFT, etc...), and a BLAS/LAPACK interface is a must for any serious use in the scientific community (I understand you are interested in less scientific usage, but by comparing yourself to ATLAS/GOTO/MKL and co, that's where you are putting yourself). Also, when you distribute software, the source-only nature of something like eigen2 is a problem: MKL practically has several implementations of the same algorithm depending on the architecture, which is needed for binary-only software (I am only interested in open source software, but on platforms like Windows, binaries are crucial, even for open source software).

I am a bit surprised at your benchmark results: in my experience, ublas is an order of magnitude slower than ATLAS, for example. But then, it depends so much on your CPU, compiler, compiler flags, etc... benchmarking this correctly is extremely difficult.

At the end of the day, this "the code is too obscure" argument is old and tired. It may be relatively complicated, but since it allows unmatched results, it's just worth it.

That's where we disagree, and I guess we'll have to agree to disagree; I think readability beats speed almost all the time (to a certain extent, obviously: I could not afford to be 1000 times slower than C, but 10 is certainly affordable most of the time, just as most people can afford C code that is several times slower than hand-written ASM). Almost nobody cares about a factor of 2-3 in speed anymore. Buying more hardware is just cheaper.

Your argument, that external code generators are preferable, made a lot of sense back when C++ compilers were not powerful enough for metaprogramming

I don't quite understand your argument about compiler implementation. The problem with templates is in the nature of templates, and the fact that they were never designed for what people try to do with them in metaprogramming, not so much in how (incredibly) difficult they are to implement in compilers. g++ error messages are still totally obscure and unreadable (with g++ 4.1, at least), and the syntax is horrible. Templates in C++ are a kind of code generator, and you could argue they are powerful, since they are Turing-complete. But HLLs like Lisp/ML, or even things like Python, are much easier to use for parsing and generating code, and certainly more readable.

I find the code you posted quite painful to read, but you could argue that is personal taste. Objectively, though, most of it is just boilerplate to get around fundamental limitations of the template syntax. In languages like Lisp or OCaml, partly designed for (and certainly used for) compiler/language design, it would be much easier to do, and arguably more elegant.

I wish your project success, but my personal opinion is that it is not where the future of numerical computation lies. If you rewrite some algorithms from, say, Fortran, going to C++ is just not worthwhile from a maintainability POV. Fortran is painful to use, and yet people do not rewrite the algorithms. Again, the fact that C++ still, after 30 years, does not have a matrix concept as found in a 60-year-old language like Fortran does not help.

Ashigabou:
I do not know whether you have an industry affiliation or experience, but your comment does not reflect my experience.

Currently, all oil and gas exploration goes through several steps that are computationally time-consuming. Seismic surveys need to be processed several times, and a turnaround time of 6-12 months is the average (depending on the algorithms that are applied). These algorithms are tweaked and optimized time and time again, and are currently run on clusters of, e.g., 5000 nodes and upwards to achieve acceptable turnaround times.

A performance reduction by a factor of 2-3 (which, as you said yourself, "nobody cares about") would imply that oil and gas exploration worldwide slows down by the same factor per year. That's not tolerable.

In fact: take a look at the plethora of code being implemented on and ported to FPGAs, CUDA/Tesla/GPGPU and other acceleration devices. Jacob is entirely correct in his assumption that the complexity is worth the hassle. Actually, the hassle of writing complex yet high-performance code is probably offset by billions of dollars.

That was oil and gas. Ditto for medical imaging. Lives are saved every day thanks to the clusters, FPGAs and other acceleration devices used for image reconstruction, CAD (computer-aided diagnostics) and other image processing. Again, factors of 2-3 or more aren't acceptable.

At the end of the day, this "the code is too obscure" argument is old and tired. It may be relatively complicated, but since it allows unmatched results, it's just worth it.

That's where we disagree, and I guess we'll have to agree to disagree; I think readability beats speed almost all the time (to a certain extent, obviously: I could not afford to be 1000 times slower than C, but 10 is certainly affordable most of the time, just as most people can afford C code that is several times slower than hand-written ASM). Almost nobody cares about a factor of 2-3 in speed anymore. Buying more hardware is just cheaper.

[.....]

I wish your project success, but my personal opinion is that it is not where the future of numerical computation lies. If you rewrite some algorithms from, say, Fortran, going to C++ is just not worthwhile from a maintainability POV. Fortran is painful to use, and yet people do not rewrite the algorithms. Again, the fact that C++ still, after 30 years, does not have a matrix concept as found in a 60-year-old language like Fortran does not help.

I think you hit the nail on the head: without highly experienced (and hence expensive) developers, C++ code can be very difficult to maintain and develop, whereas C code is much easier, simply because more people can cope with C syntax and semantics.

I've been wondering, however, how many successful open source libraries and applications are actually written in C++. I know that interpreters like Python, Ruby and R are written in C. GTK is written in C. I think most of the successful numerical libraries are written in C, e.g. GSL and the bulk of netlib (with a lot of Fortran). There is of course Boost, but who uses that other than hardcore developers? And in any case I presume Boost can only be used from other C++ apps?

I've tended to stay away from C++ because in the open source community there are very few good C++ programmers, which means less usage by the community and more maintenance headaches for the author.

Finally, if you write a C++ library, what can link to it, other than C++ programs? At least with a C API you can link to anything.

Easy enough... limit what you do with templates and use them primarily to make code more readable.

Yes, it's true that for hard-core mathematical stuff C++ is pretty much where it's at. It combines the ability to be expressive (which C lacks) with the ability to tune memory and performance (which other OO languages lack).

Fortran isn't touched when it comes to precision when doing Numerical Analysis.

Nothing to do with precision. Fortran uses the same formats as C these days on common machines (IEEE floating point); if you implement the same algorithm, you can in theory get exactly the same results (although differences between compilers mean this will never happen in practice). There is a lot of expertise embedded in Fortran numerical code, though, which is why so much Fortran code is still maintained and used today (BLAS, LAPACK, optimization, etc...).

Fortran beats C (and C++; they are the same here) because it does not have pointers, or more exactly does not have the aliasing problem (different pointers pointing at the same memory address), which makes optimization by compilers so difficult. That's why Fortran is still much faster than C or C++ on most benchmarks.
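A small sketch of that aliasing point. In plain C/C++ the compiler must assume the two pointers might overlap, which blocks reordering and vectorization; the `__restrict` keyword (a common compiler extension mirroring C99's `restrict`; whether your compiler exploits it is toolchain-dependent) promises no overlap, recovering Fortran-like freedom:

```cpp
#include <cstddef>

// Without restrict: writing y[i] might change x[i+1] if the arrays
// overlap, so the compiler must be conservative.
void axpy(float* y, const float* x, float a, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        y[i] += a * x[i];
}

// With __restrict: the programmer promises y and x never alias,
// so the compiler is free to unroll and vectorize aggressively.
void axpy_restrict(float* __restrict y, const float* __restrict x,
                   float a, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        y[i] += a * x[i];
}
```

In Fortran, non-aliasing of dummy array arguments is the default assumption, which is exactly the advantage being described.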

It also has an array concept, and the lack of one is part of why C++ is a really poor language IMO for numerical code: everybody has a different, incompatible class for vectors, so the speed you gain from being in C++ is lost converting formats between libraries. This, and the fact that the language is way too complicated IMO. I personally believe that C++ is a niche language: it is still the only "industry" language you can use when you need tens of thousands of objects interacting (heavy GUIs, games), but I would never use C++ if I had another valid choice.

"Fortran beats down C (and C++, they are the same here) because it does not have pointers, or more exactly does not have the aliasing problem (different pointers pointing at the same memory address), which makes optimization by compilers so difficult."

This is not really true. Since Fortran 90, the language has had support for pointers, which basically work as in C but are less extensively used and less powerful: pointer arithmetic is not supported, there are no pointers to pointers, and no function pointers.

So declaring a pointer to a real number would be written in Fortran 90 as:

REAL, POINTER :: p1

Making p1 point to a variable var would give:

p1 => var

So pointers exist in Fortran; they are less powerful than in C, but the Fortran syntax is less complex, and the language adds several nice features: the possibility of testing the association status of a pointer, the possibility of disassociating a pointer from its target, and a TARGET attribute that lets the compiler know that a particular variable may be pointed to and therefore must not be optimized out of existence.

Pointers in Fortran can point to any variable type, to derived data types, and to arrays. They can be used to implement dynamic memory allocation and to construct various kinds of dynamic data structures.

So why is Fortran faster than C in numerical code? Well, I don't think it is a language-related question (C is quite a fast, low-level language); I rather think compilers make all the difference here. Fortran compilers are built and optimized to create the fastest numerical code possible; they are designed to produce only fast numerical code. C and C++ compilers have a larger field of application: they compile very different kinds of applications, where optimization strategies can be very different.

Fortran compilers are number crunching, number crunching, number crunching; they have been built for that for several decades. It's all they know, but they do it very well.

Also, besides the performance question, Fortran is by far the most used language for numerical applications among scientists because it offers a lot of features that matter to this audience: matrix representation, direct matrix multiplication, a better implementation of complex numbers, more flexible manipulation of arrays; it is easier to read, easier to port, easier to maintain. All of this makes the language a very attractive choice for scientists and engineers who write numerical code.

Fortran isn't touched when it comes to precision when doing Numerical Analysis.

Bullshit.

The C library MPFR implements arbitrary-precision math with correct rounding. It supports all the basic operations you expect as well as a bunch of others that I have found useful in my research (e.g. the polylogarithm). Unless you're doing actual mathematics (in which case you want either interval arithmetic (for which there is MPFI) or to work over an algebraic number field (I recommend Sage)), there is nothing more you can ask for in terms of precision. As for performance, I can't complain, though I don't know what Fortran is like.

Off-topic: To be honest, I have neither agreed with nor learned from anything that you ever post here. How do you have 2 fans?

Fortran isn't touched when it comes to precision when doing Numerical Analysis.

Yeah, but its readability is like LISP or Prolog.
The Fortran language itself encourages spaghetti code.
Too many implicit rules, like variables whose names begin with a letter between i and n automatically being integers.

Python 3000 (or Python 2 with "from __future__ import division") is the first language that handles numbers the way I think is most natural. Like Fortran, it lets you create variables on the fly, but it does so in a way that makes sense: if you assign 1 it will be an integer; if you assign 1.0 it will be a float. All divisions are floating-point divisions, which is natural. If you want integer division, you have to use //.

For years I have struggled to understand where C++ fits into the landscape.

One obvious sweet spot is where performance is needed, or where there are constraints (e.g. memory constraints). As you mention, C is good in those departments as well, but there are some large advantages to C++. To give two examples (there are plenty more, but this is for the sake of the argument ):

- RAII (Resource Acquisition Is Initialization): since resources can be associated with objects, they can be acquired in constructors and released in destructors. This makes it possible to bind resource use to object lifetime: e.g. when an object is allocated on the stack and goes out of scope, its resources are automatically released as well. RAII is often not possible in OO languages that use a garbage collector, since there is no guarantee when (if ever) objects will be collected and finalized. In GC'ed languages, resources often have to be released explicitly.
- Templates: templates let you specify algorithms and data structures without one of the three choices required in C: 1. tailor the data structure/algorithm to one type (e.g. a sort routine for strings), 2. use void pointers (usually with some function pointer), or 3. rely on macros. In C++ you can write generic algorithms and data structures with strong typing enforced. The STL (Standard Template Library) and large parts of Boost are great examples of what can be done with template programming (and template metaprogramming).
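The two points above can be sketched in a few lines (hypothetical names; the "resource" here is just a counter standing in for a file handle, lock, or socket):

```cpp
// RAII: acquisition in the constructor, release in the destructor,
// so release is tied to scope exit -- even if the scope is left
// via an exception.
struct Resource {
    static int open_count;   // how many resources are currently "open"
};
int Resource::open_count = 0;

class Guard {
public:
    Guard()  { ++Resource::open_count; }   // acquire
    ~Guard() { --Resource::open_count; }   // release on scope exit
    Guard(const Guard&) = delete;          // two guards must not release
    Guard& operator=(const Guard&) = delete; // the same resource twice
};

// Templates: one strongly-typed generic algorithm, instead of the C
// choices (per-type copies, void* plus function pointers, or macros).
template <typename T>
const T& max_of(const T& a, const T& b) { return b < a ? a : b; }
```

With a Guard declared at the top of a scope, the resource is released on every exit path with no explicit cleanup code; and max_of works for any comparable type while the compiler still checks the types.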

Can someone give me examples of problems where C++ is by far the best solution than other languages like C or Java or Python or Scheme or ...
[...]
For some practical scenarios, you can't beat a rapid development and rapid evolution language like Python.

As a general remark, I'd like to add that good C++ programmers can usually be about as productive as good Java programmers (some studies have shown this; Google should be able to find them).

Additionally, C++ is an excellent language to write code where you need performance and tie it up with Python code (or your favorite dynamic language ). E.g. Boost.Python is excellent for creating Python bindings for C++ code.

Additionally, C++ is an excellent language to write code where you need performance and tie it up with Python code (or your favorite dynamic language ). E.g. Boost.Python is excellent for creating Python bindings for C++ code.

The problem is that it is so slow to compile anything non-trivial with Boost.Python that it is painful to develop with. I agree that in theory it is nice, but I find it quite unusable in practice.

I personally believe the trend is toward doing as much as possible in a scripting language, with some compiled code where needed, and away from scripting on top of something that is essentially C++. Some core things (basic toolkits, etc...) will still be written in compiled languages for quite some time, of course, but environments like Python for scientific programming give you much more flexibility than what you get by just wrapping a huge, inflexible C++ framework. Prototyping speed is a key factor, both in academia and in the private sector (finance, data analysis, etc...), and C++ is just so far behind here that it cannot compete, even with top-notch programmers.

When you want apps that aren't bloated and run fast. Case in point: I downloaded an app several months ago written in Python (I forget the name) that I could use to download podcasts. The thing ate up 22 MB sitting idle in the system tray. I realize that RAM is cheap these days, but 22 MB to sit and do next to nothing.. are you f**king kidding me?

And don't even get me started on Java. C# isn't much better either. I realize that these languages make it easier on developers, but as an end user, I say piss on all of 'em. If you want to write a cross-platform app, figure out how Opera does it and do it like they do.

I downloaded an app several months ago written in Python (I forget the name) that I could use to download podcasts. The thing ate up 22 MB sitting idle in the system tray. I realize that RAM is cheap these days, but 22 MB to sit and do next to nothing.. are you f**king kidding me?

Because it was in the system tray, it was probably using a graphical toolkit like GTK+ or Qt. Process monitors incorrectly inflate the reported RAM usage by including the size of shared libraries that may already be loaded in RAM. Go look for some articles about KDE 4 and memory usage: Linux will report it as using more than KDE 3 even though it actually uses much less.
Also, don't base your opinion of Python on one Python program; it may simply have been written badly. Things can go the other way too: if I based my Java opinion on just two Java programs (Eclipse and Azureus), I might think it was actually a good language and not the pile of crap that it actually is. Okay, I was only partly kidding about the Java part.

Things can go the other way too: if I based my Java opinion on just two Java programs (Eclipse and Azureus), I might think it was actually a good language and not the pile of crap that it actually is. Okay, I was only partly kidding about the Java part.

Actually, Java is not as bad as most people make it seem. It's fairly high-level but still gives very good performance (in my experience, most, but not all, computationally intensive code is 1.5 or 2 times slower than equivalent C++ code).

Additionally, it used to be a very simple and predictable language (arguably the implementation of generics through type erasure has not improved Java in that respect), which made it a good language for teaching programming in comp-sci courses.

Of course, I dislike Swing-based applications as much as the next guy, and think Java is a tad too verbose. Also, a single paradigm may be a bit limiting. But we could have a worse industry-wide language (how about Visual Basic or Delphi).

Your post mentions a few different languages; it would take a novel to explain the differences and the advantages of C++ over all of them. Don't underestimate the power of C++ if you haven't learned the language, and if that is the case, I strongly suggest you learn it, as C++ is pretty much the base of all programming languages.

P.S. C++ is also ideal for low-level coding, and personally I would rather use classes and objects than pure C functions.

Perhaps I did not express myself clearly. When I said base, I did not mean to imply other languages are derivatives of C++. What I meant is that pretty much any language I've dealt with looks like C++ and uses the same concepts, and if you know C++ you will have no trouble learning other languages (there are exceptions). I studied Java after C++ and found it extremely simple to learn. Once I had learned about classes, inheritance, abstract classes and multiple inheritance, Java was a piece of cake. To give the original poster an answer: Java doesn't support multiple inheritance; all you can do in Java is extend one class and implement interfaces. Then I studied some PHP and again had no problems with it. These are just some examples.

Quite often I've found myself in situations where C++ delivers performance comparable to C but is much faster to work with, because you can move abstractions further away from the actual hardware, in a way not entirely unlike some popular scripting languages.

C++ is a good middle ground between C and high-level languages, which makes it great for "high-level" system programming: things like GUIs. A major advantage of C++ here is that it is easy to interface with C, which is the lowest common denominator for other languages to bind to; so if I write a library in C++ and export a C API from it, you can easily use my library from whatever language you like.
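A sketch of that "C++ inside, C API outside" pattern (all names here are invented for illustration): the C surface exposes only an opaque handle and plain functions, while the implementation behind it is free to use std::vector and the rest of C++.

```cpp
// --- the C-visible surface (would normally live in a .h file) ---
#ifdef __cplusplus
extern "C" {
#endif

typedef struct EngineHandle EngineHandle;   // opaque to C callers

EngineHandle* engine_create(void);
int           engine_add(EngineHandle*, int value);  // returns running sum
void          engine_destroy(EngineHandle*);

#ifdef __cplusplus
}
#endif

// --- the C++ implementation behind the C facade ---
#include <numeric>
#include <vector>

struct EngineHandle {                 // a real C++ class, hidden from C
    std::vector<int> values;
};

extern "C" EngineHandle* engine_create(void) { return new EngineHandle; }

extern "C" int engine_add(EngineHandle* e, int value) {
    e->values.push_back(value);
    return std::accumulate(e->values.begin(), e->values.end(), 0);
}

extern "C" void engine_destroy(EngineHandle* e) { delete e; }
```

Because the exported symbols have C linkage and the header contains no C++ types, this library can be called from C, and from anything that can bind to C.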

For years I have struggled to understand where C++ fits into the landscape.
[...]
Where is C++ a compelling solution?

IMHO, C++ is a compelling solution for larger projects in which runtime efficiency matters. C++ is, for example, well suited to embedded software. A more detailed discussion can be found here: "C++ in Embedded Systems: Myth and Reality", http://www.embedded.com/98/9802fe3.htm .

The reason I dislike C++ is that the language has evolved and accumulated many features.

C++ tries to be a high-level language by providing OOP, and it tries to be a low-level language by providing direct memory access, pointers, etc.

And it also adds lots of other features like templates, operator overloading, function overloading, exception handling etc.

IMHO, the way this language has evolved, it has become a mess. I do both levels of programming, i.e. hardware programming and, at times (for fun), user-mode programming, and C++ doesn't fit either. For user-mode code I prefer C#, and for low-level stuff I prefer C.

I think the only place where C++ remains important is user-mode applications where performance is super critical. The trend I see is people implementing all non-performance-critical code in higher-level languages like C# and writing just the performance-critical code in C++.

In the past I also used C++ a lot, but I had one horrible experience where a programmer wrote code with templates nested inside templates nested inside templates. This led to object initialization so complex that when there was a memory leak, it was just impossible to debug.

Another problem was the use of exception handling. With C error handling, when you step over a function call, the debugger breaks on the next instruction. But in C++ you may end up at a totally different place, where the exception is caught, and then you can't inspect any local variables because, guess what, your stack has been unwound.

And to make matters worse, people use auto_ptr and the Boost crap. If this stuff works, it is fine, but once you leak memory through auto_ptr, then god help you find that memory. This is not really C++'s fault, but Boost etc. are specific to C++. C++ was simply not designed with garbage collection in mind, and people shouldn't try to bolt it on.

Anyway, I now use C++ sometimes, but with a much more constrained set of features: C + classes + single inheritance, no templates (other than the STL), no operator overloading, and no exception-handling crap.

C++ basically makes it easy to write bad code due to its numerous features.

Anyway, these days I sometimes use C++, but with a much more constrained set of features: C plus classes plus single inheritance, with no templates (other than the STL), no operator overloading, and no exception handling crap.

I hope you are aware that the STL makes very heavy use of exceptions? If you intend to write secure and robust code using the STL, you are going to need exception handling.

I hope you are aware that the STL makes very heavy use of exceptions? If you intend to write secure and robust code using the STL, you are going to need exception handling.

No. The STL throws exceptions only in exceptional conditions. You can write perfectly secure and robust C++ code, while using the STL, without writing exception handlers.

About the only exception that can happen, if your program doesn't do something that explicitly causes one, is an out-of-memory condition, and often just letting that propagate until it kills the process is the right thing to do anyway.

OK. You can do all of that in C too, and of course in ASM. But C makes life easier than ASM, and C++ makes life easier than C.

...and Python easier than C++.

Yes, but in C++ you can write code that is just as fast as C, or C++ code that is faster than hand-written assembly (I trust most modern compilers more than my own asm). The same can't be said of Python: in some areas it's almost an order of magnitude slower than compiled C++ code.

Instead of Python, which is admittedly very easy, I prefer C#, for desktop applications of course. For Python you need lots of packages: Python itself, py-win, py-sql ..

In C# you only need the .NET Framework, and, apart from pointers, you have more or less all the power of C++ .NET with only a bit less speed than C++. Not to mention how helpful the Visual Studio environment is, and all the documentation you can find in the MSDN Library.

There's also a problem with Python: it uses lots of memory to do the same thing, and users can modify the code (in desktop applications, of course).

Yes, but in C++ you can write code that is just as fast as C, or C++ code that is faster than hand-written assembly (I trust most modern compilers more than my own asm). The same can't be said of Python: in some areas it's almost an order of magnitude slower than compiled C++ code.

Remember, the right tool for every job?

And there are cases where JIT-compiled languages like Python outperform compiled C code.
With JIT compilation you can get optimizations across libraries, such as function inlining, which you cannot get with statically compiled languages where optimization is done at the "compilation unit" level, i.e. a single source file.

And there are cases where JIT-compiled languages like Python outperform compiled C code.
With JIT compilation you can get optimizations across libraries, such as function inlining, which you cannot get with statically compiled languages where optimization is done at the "compilation unit" level, i.e. a single source file.

We are talking about C++ here, and functions can be inlined across compilation units, e.g. with link-time optimization.

In general (I think Java is a better example here) I agree with you: JIT-compiled languages have more opportunities to optimize at runtime, although in practice, across the board, compiled code is usually much faster. The primary exception is often memory management (when no custom allocators are used in C++): memory allocation is often faster in VM languages than in optimized malloc() implementations.

And there are cases where JIT-compiled languages like Python outperform compiled C code.

And then you apply profile-guided optimization (PGO) to the C and C++ code, and then let's talk performance. My personal testing has shown that PGO can speed up C/C++ code by roughly 50% in some cases.

JIT'd languages will never, ever outperform equivalent C++ code. All the benefits of a JIT (virtual-function speculation, dead-code elimination, whole-program optimization, etc.) can be had in native languages too, particularly if you enable profile-guided optimization. GCC, MSVC, and Intel's compiler all support this feature, so it's really down to developers to use it.

And there are cases where JIT-compiled languages like Python outperform compiled C code.
With JIT compilation you can get optimizations across libraries, such as function inlining, which you cannot get with statically compiled languages where optimization is done at the "compilation unit" level, i.e. a single source file.

There is no such thing as a "JIT-compiled language" or a "statically compiled language" anymore. These are properties of particular implementations, not of the languages themselves.

You can statically compile Java to native code with GCJ or Excelsior JET, and the LLVM project enables you to compile C/C++ to low-level VM code for post-install optimization.

Yes, but in C++ you can write code that is just as fast as C, or C++ code that is faster than hand-written assembly (I trust most modern compilers more than my own asm). The same can't be said of Python: in some areas it's almost an order of magnitude slower than compiled C++ code.

Remember, the right tool for every job?

Choosing the right tool for the job doesn't mean you have to choose only one tool.

Let's put some facts on the table: C is a much more elegant language than C++; Python is a much more elegant language than C++; Python is slower than both C and C++; C has fewer features than C++.

OK, so you need performance and you choose C++ because you want high-level features. The problem is, C++ also brings to the table a truckload of byzantine bugs and debugging headaches due to its overbloated syntax and overcomplex compilers.

Performance is mostly driven by algorithms. And programmers' brains are much more limited than the machines they are programming... In a higher-level language you have much more freedom to choose better algorithms and to organize your code so you can identify the performance-critical sections and reimplement them in an (elegant) lower-level language like C.

There is a reason why Python is becoming popular in the scientific community, you know... (BTW, there's also a reason why Python succeeds where Java failed: non-performance-critical code doesn't have to mean code that eats tons of memory for no reason.)

Performance is mostly driven by algorithms. And programmers' brains are much more limited than the machines they are programming... In a higher-level language you have much more freedom to choose better algorithms and to organize your code so you can identify the performance-critical sections and reimplement them in an (elegant) lower-level language like C.

There is a reason why Python is becoming popular in the scientific community, you know... (BTW, there's also a reason why Python succeeds where Java failed: non-performance-critical code doesn't have to mean code that eats tons of memory for no reason.)

When you say "scientific community", I'd ask you to qualify which one. I know there are quite a few bioinformatics packages for Python, which makes it popular in that circle. Add the fact that Python lets you get away without object-oriented programming (unlike Java/C#, where you are forced to think and code in objects), and it's definitely a hit with the non-programming sciency types.

However, for any heavy lifting, no one seriously considers Python. For my work on neural networks, I always described the difference between C++ and Python like this: would you rather spend 3 hours writing code that takes 3 days to run, or 3 days writing code that takes 3 hours to run? Keep in mind that well-written code gets reused, and it is not uncommon for code to be rerun millions of times over the course of your research.

It's a lot faster to write code in Python, but your code runs a lot faster when written in C++. So the trade-off I settled on was: if I was writing a very small throwaway script that would be used once or twice, I'd use Python. For pretty much anything else, something else is better. Even Matlab.

[q]
However, for any heavy lifting, no one seriously considers Python. For my work on neural networks, I always described the difference between C++ and Python like this: would you rather spend 3 hours writing code that takes 3 days to run, or 3 days writing code that takes 3 hours to run? Keep in mind that well-written code gets reused, and it is not uncommon for code to be rerun millions of times over the course of your research.
[/q]

It really depends on the task/field/conditions; you cannot make big bold statements in the abstract. But in scientific communities, Matlab, Mathematica, etc. (depending on the field) are huge, and are used more than C++ most of the time, more exactly any time you can get away with it (which again depends on the field: in video processing, you will have a hard time doing much in pure Python, for sure).

I find your example unconvincing, because especially in an academic environment you keep changing the requirements; you are in a constant prototyping loop. The 3-hours-run / 3-days-coding scenario almost never makes sense: code reuse in academic science hardly happens once you get above a certain level of abstraction, and any non-trivial project involves a lot of non-processing work. You have all the problems around the processing itself, such as data I/O, visualization, etc., which are a big PITA to do in low-level languages (that's also a problem in environments like Matlab, BTW). So you start using a scripting language for that, and you realize you can use it more and more. You don't want to parse XML in C++; you don't want to write your visualization in C++. Sure, you will not implement your core sparse-matrix handling in Python, but that is already done in Fortran anyway. Fortran libraries, for numerical work, are good enough.

The point is that nobody wants to use C++ anymore: you still need C-like speed for some things, but you really want to get away from it as fast as possible. It's not the default choice anymore.

For my work on neural networks, I always described the difference between C++ and Python like this: would you rather spend 3 hours writing code that takes 3 days to run, or 3 days writing code that takes 3 hours to run?

Then, if you get your Python application running right after it's finished, both will be completed at the same time.

Except that with those extra hours on your hands while your program is running (instead of coding C++), you can write another piece of code that will push your science forward.

Most of the time, when a Python application takes considerably longer to execute, it's because there is one special block of code that can't be optimized. You can implement that block in C, and execution time drops drastically without adding much development time.

What do you mean, "even Matlab"?
Last time I checked, Matlab was specifically geared towards scientific computing, with tons of carefully optimized functions for things like inverting a matrix. Once you get past those, it's not really fast, granted. But I still think that some people choosing Matlab over Python doesn't mean Python's bad. On the contrary, the fact that the comparison even pops up means Python is a damn fine general-purpose language.

If I need performance, I usually turn to Psyco.
It doesn't make Python as fast as C, but it can provide a pretty decent speedup if your code is not I/O-bound and doesn't use too many of Python's dynamic features.

There is a reason why Python is becoming popular in the scientific community, you know... (BTW, there's also a reason why Python succeeds where Java failed: non-performance-critical code doesn't have to mean code that eats tons of memory for no reason.)

Languages are (fortunately) nearly always chosen for pragmatic reasons. I am a student in computational linguistics; in our field, Prolog, Perl, C, and C++ are quite popular. Prolog is often used because it is often far easier to implement natural-language parsers in Prolog than in most other languages (nearly all modern Prolog implementations even provide a special rule syntax for writing definite clause grammars). Perl is popular because it has syntactic sugar for regular expressions and is a natural fit with other UNIX tools (although Python seems to be gaining ground). C and C++ are used because they allow for tight and fast code. Sure, you can write a finite-state dictionary library in Java or Python, but you will lose some of its advantages, because nearly everything is a reference and you can't tightly pack data in structs. Similarly, some computationally intensive analysis of a large corpus can take a few hours even with fast C++ code, where an equivalent Python program would require a day or two. Then it's often more than worth it to invest a bit of extra time and write it properly in a low-level language. Some people will argue that faster machines will solve this, but in practice people will want to analyze larger data sets as hardware allows, rather than upgrading to a different language.

Developers and pseudo-developers hate C++ in favor of their own pet language (usually Python, Java, or C#). End users, on the other hand, try to stay as far away as possible from programs written in those languages, in favor of software written in C and C++. For reference, see Beagle vs. Tracker for an apples-to-apples comparison.

The majority of end-user-facing software is written in C++. You get small executables, fast execution, and minimal dependencies.

And note that your list includes all the professional applications, i.e. ones with organizations behind them that put in a lot of resources (Mozilla turns over at least 50 million a year). Try constructing the same list using non-professionally-supported open source, and you'll find most of it is in C. The reason: it's cheaper to develop in C than in C++, because good C++ programmers are expensive.

While it's easy to agree with many of the C++ comments, I find the performance of my C/C++ code has almost nothing to do with how I express the code at the syntactic level and far more to do with how it matches the caches (or doesn't). If your data set fits in cache, then C/C++ and most compiled languages can do wonders; otherwise, all languages get beaten up rotten by the Memory Wall.

Two counter-examples.
The classic JPEG library, and most media codecs, achieve great performance because they perform a great number of basic register CPU operations on a very small data set, keeping the CPU busy for a few microseconds on 64-int blocks (the DCT tile). Lots of muls, and even some divs, fly through the core. This isn't really anything that can't be done by any decent compiled systems language in the Algol/C family, and asm doesn't run any faster unless you get into very specific CPU extensions. This used to be the realm of asm-only performance, so at least those days are gone.

At the opposite end of the performance spectrum, a memory scanner:
I have a heap manager that scans 2 GB of memory holding millions of objects, looking for holes or dead objects. It boils down to 7+ asm instructions that simply can't be improved upon by any rewriting of the for loop.

Here the only thing of interest is that the pointer jumps upwards through the 32-bit address space over each object, and this loop averages 70 ns a pop; in other words, my 2.66 GHz multi-issue D805 is giving me only 100 MIPS, nothing like several thousand MIPS. It gets another 2.5x slower if I scan the heap entirely out of order with ptr = table[i], where the table is in completely random order. Now I am down to 170 ns a pop for slightly more instructions.

Ironically, I previously walked the tree to find and mark all leaf nodes in use; that executed in exactly the same amount of time but did 10x more work in the process. The locality was really the same, so lots more instructions got executed at no extra cost. It was still held up by the Memory Wall, but that was less obvious then.

At this point, no improvements to C++ (or any language, for that matter) are worth diddly-squat as long as CPUs have this Memory Wall, and it can only get worse as clock speeds get further away from DRAM random-access times. Making the caches bigger helps a little, or helps a lot, but costs a lot too.

This sort of explains why bling-bling special effects in OS X, Vista Aero, and Compiz are now so easy to do: they keep the processor busy while doing very little useful work. Now try randomly scanning the entire memory and they don't look so fast anymore. I have to assume the many pregnant pauses I experience in Windows and OS X are mostly due to the Memory Wall popping up all the time; worse, it could be the Disk Wall.

At the opposite end of the performance spectrum, a memory scanner:
I have a heap manager that scans 2 GB of memory holding millions of objects, looking for holes or dead objects. It boils down to 7+ asm instructions that simply can't be improved upon by any rewriting of the for loop.

Which is why you optimize for locality of reference.

Even so, as you have noted, there are areas where you are essentially I/O-bound, and this is one of them. However, there are many other tasks that are not I/O-bound, and that is where you get huge performance gains with C++.

Wow, that is a lot of repetitive comments. Let me see if I can summarize this in layman's terms:
1. C++ is used to write all the top apps, games, and OS components. All your bases belong to C++.
2. Python is better because my brain and eyes don't hurt when I use it. C++ gives me migraines.
3. C#/Java rules because it carries a humongous framework, unlike C++.
4. Use the right tool for the right job. C++ pwns all tools for all jobs.
5. C is faster and simpler than C++, but is C++'s ugly older sister.
6. VM-based languages are faster than C++, provided you have gazillions of bytes of memory to run them.