
Given a collection of developers that write difficult to understand, difficult to maintain and sloppy type unsafe code, going to C++ may not help. The previous problems are problems with the developers not the language. C++ just enables such developers to write even worse code. Hopefully they are also introducing new coding style guidelines, and are willing to enforce such guidelines. If so I'd be more optimistic.

I'd also be more optimistic if by using classes and templates they were really referring to using STL, not writing their own.

Or maybe they just want to use C++ style comments and won't really use classes and templates much. :-)

Bad code is bad code, and you can write it in any language, yes, even visual basic.net.

So the point is not so much "how useless are those lousy GCC devs who will write crappy code", but "how good are those GCC devs now they have a more powerful tool in their hands".

I'd hope they start to discover the STL too, and use the standard containers at the very least - no need to use custom ones unless you either continue to use the existing C-based ones, or you have some very specific performance issues that you absolutely cannot fix any other way (and generally, you don't have this problem with the STL)

Now, sure, I hope they don't discover cool new features like STL algorithms and start to litter the code with lambda-d functors.

or you have some very specific performance issues that you absolutely cannot fix any other way (and generally, you don't have this problem with the STL)

On a platform with no swap file, such as a handheld or set-top device, one of the more common "very specific performance issues" is the ability to handle allocator failures. A program in such an environment is supposed to free cached or otherwise purgeable resources instead of just letting main() catch std::bad_alloc and abort. What are best practices for using the C++ containers if virtual memory is not available?

Is he referring to using containers as part of the target application or as part of the compiler itself? The compiler internals might be much cleaner or there may be less redundant code for the compiler if it used STL rather than alternative or custom containers etc. Your target application would still only be using STL if you wrote code to use it...

You write a damn exception handler block every time you have a "new". If you're using Linux and you run out of memory, it just starts killing processes until it has the memory. Its "optimistic" allocator doesn't throw std::bad_alloc -- which is some really scary shit.

It could be worse though, in .NET if you run out of stack, you don't even get the exception - it just exits.

You do get a StackOverflowException [microsoft.com], actually. The catch - pardon the pun - is that it's a "magic" exception type that cannot be caught by user code, since .NET 2.0. So in practice it's only there for debuggers.

Swap itself didn't; virtual memory allocation techniques allowing overcommit did. It's practically useless to check the result of a malloc on a modern VM-equipped OS, except for very large buffers (where you typically also have an obvious failure path, e.g. "screw it, this image is too big"). Your program can get OOM-killed after all allocations have succeeded.

Even in environments with honest-to-god memory allocation, implementing proper OOM safety requires prohibitively thorough testing, where you need to sim

There are some cases where lambda-d functions improve clarity. For example, lambdas make for very clear, concise threading of simple tasks using the new C++11 threading operations.

Really, many of the new features play so beautifully together. For example, you can write a simple packet reader/parser which:

* Loops indefinitely
* Waits until data can be read
* Spawns the processing of that data in its own thread and resumes holding and processing packets, while in the spawned thread:
* Proper type of packet is

The Linux Kernel used the C++ compiler for a while. I believe it was during the 0.99.x era. The goal was to improve the code quality by leveraging C++ compiler features like function name mangling while only using C language features. This, however, looks like they want to use a limited set of C++ language features that would be very handy for experienced C programmers.

If you look at the way the Linux kernel uses macros combined with GCC extensions like typeof(x), it is obvious that they are actually writing templates. And many of their struct definitions reproduce inheritance and virtual method calls.

Given a collection of developers that write difficult to understand, difficult to maintain and sloppy type unsafe code, going to C++ may not help.

It is very difficult to write easy-to-understand, type-safe code in C.

The reason being that C requires so much micro-management that you end up with the code for that mixed in with the actual interesting algorithms. C++ basically makes the compiler do an awful lot of what you have to do in C anyway and does it for you automatically while keeping the details neatly out of the way.

It's also very hard to write type safe code properly in C. Just look at the classic example of the unsafe qsort versus the safer and faster std::sort.

I'd also be more optimistic if by using classes and templates they were really referring to using STL, not writing their own.

What on earth is wrong with writing your own classes and templates? They almost certainly already have a healthy collection of structs with function pointers and macros (linux certainly does have poor reimplementations of half of C++ using macros). These are best replaced with classes and templates on the whole.

That's the point. C++ formalises what everyone was doing in C anyway, making it much more uniform, easier to read, shorter and therefore much less prone to bugs.

It's also very hard to write type safe code properly in C. Just look at the classic example of the unsafe qsort versus the safer and faster std::sort.

You can do all kinds of nifty stuff with macros and gcc/clang extensions to provide type safety to C. Yeah, if you don't already have a library for that it can be a bit difficult to write one (or find one you like). But once you have the library it's very easy to write (mostly) type safe code with C. For example I have a type safe array_sort() in C.

You can do all kinds of nifty stuff with macros and gcc/clang extensions to provide type safety to C

Yes, I know.

You can write a GENERATE_SORT(Type, Comparator) macro which generates a sort function to work on an array of Type, using the specified comparator, and has no name collisions and is type safe using liberal amounts of ## and so on.

The point is not that you can't do them in C (you can), but the methods for doing it are ad-hoc. By moving the functionality into the compiler, C++ provides a regularity of syntax for such things that C lacks.

For you, I suggest two-pronged approach to C++. First of all you have libraries and frameworks. They take full advantage of C++ features, but hide them. Then you have the actual application code that uses these libraries, and is much simpler, ideally readable by somebody who only knows Java or C#.

The difference to doing the same in C is, in C you'll use macros and have poor type safety, ugly-looking code, and obscure macro-related errors when you put bad stuff as macro arguments.

In short, the trick with C++ is, you don't use most of the features, unless you really have to. Note that you can write your current C code as C++ code, except use whatever subset of C++ features you think will make your C-like code better, and only when it actually makes it better. Limiting yourself to pure C is, IMNSHO, just stupid, unless you're coding for a small embedded system and don't want to include C++ runtime in it.

If that fails, and you find yourself on a system with only a K&R C compiler, you bootstrap to an ANSI C compiler by going to gcc 2.95.n or something like that. Then you use that to get to a fairly recent C-based gcc and finally use the resulting C++ compiler to compile the final version of gcc.

I've read their guidelines, and they're doing much like I've been doing recently with moving from C to C++ for embedded systems programming, which is to avoid the really crazy shit that you can do in C++. In particular, exceptions and RTTI are absolutely verboten. They're even planning a compiler switch that turns off the features that will be outlawed in the compiler source. Any templates outside of STL are also forbidden ("template hell" sucks), and I won't even use STL myself because I can't count on having a heap. Even iostreams are being frowned on except maybe possibly in debug dump code where no text language translations are needed.

C++ can really tidy up C code that uses any sort of function pointer hooks or object dispatch style switch statements, replacing them with virtual methods. A class can become a mini-API, and even be used as a sort of device driver, as in the mbed [mbed.org] libraries. Doing this has really helped improve encapsulation in my own code.

Actually the correct comparison is to not only disable exceptions in the compiler, but in addition adding hand-crafted error handling to the code. Because manual error handling also costs performance. And without error handling, your application is broken, period (and yes, I have been bitten by applications doing improper error handling. And yes, that included data loss. Loss of data stored on the hard disk, because the application didn't do any error checking when replacing the file with a new version. Fortunately I could get back most of it from the nightly backup).

I have incorporated your correction about templates into my article. Thank you.

Programmers can lose track of for how many different type combinations they have instantiated a template, causing code size to balloon. There is a common extension called extern template allowing for explicit instantiation, but it's not in C++98, and not all compilers support it.

[This] is extremely misleading. Your point of contention is in no way specific to C++ nor templates. It equally applies to any language which supports structures and/or classes. That's not a C++- or template-specific issue.

I still don't understand what you're trying to say about my point about "different type combinations" being wrong. I was referring to the fact that a lot of compilers instantiate templates by duplicating the object code one for each specialization, and you get one specialization for each combination of types, not just for each type.

They're even planning a compiler switch that turns off the features that will be outlawed in the compiler source.

That is an interesting avenue. As languages like C++ and perl can attest, languages can evolve by adding features but it's almost impossible to take them back. Having a compiler flag to enforce a coding standard is a way to do that less coercively.

It's a strange mindset to see run-time type information, available by standard in widespread languages such as Java and C#, as 'really crazy shit that you can do in C++'. It carries a runtime cost and the only unusual thing about C++ is that you don't have to pay that cost if you don't use it.
There is indeed crazy shit like compile-time Turing-complete template metaprogramming (the 'Vogon liver' that has grown way beyond its original intended purpose) but it's important to distinguish between that and la

When I realized that my array of Boolean objects in C++ was an order of magnitude more memory intensive than the bit-arrays I could create in C

Dude, std::vector<bool>. All of the iterable, dynamically-resizable, type-safe goodness of a real array type with very nearly all the efficiency (time and space) of hand management of packed bit arrays. The only downside is that you do have a little extra bookkeeping info (an int) to support the dynamic resizing. If you need to avoid even that, there's also std::bitset, which has a length fixed at compile time. Odds are that code using std::bitset will be more efficient than what you'd write, and you don't have to waste brain cycles on "keeping track of the fact that it's a pointer to a bunch of bits".

There are some reasons to prefer C over C++, but your example is decidedly not one of them. In fact, it strongly favors C++.

Ignoring RTTI is fine, but forbidding exceptions requires a dangerous sort of doublethink. The language itself, as well as the STL, is defined to generate and use exceptions. By ignoring their existence, you banish yourself to a nonstandard purgatory.

For example, every new now must become new(std::nothrow). For every STL container type, you have to provide a custom allocator that doesn't throw. That's a bit unwieldy.

By denying exceptions, you force everyone to use error-prone idioms. For example, the only way a constructor can signal a failure is to throw an exception. If you forbid exceptions, then all constructors must always be failure-proof. And then you have to provide an extra initializer method to do the real initialization that can fail. Every user of the class must reliably call the init method after construction, which gets cumbersome when classes are nested or when you're putting instances into an STL container. It also means that objects can have a zombie state--that period of time between construction and initialization. Zombie states add complexity and create an explosion in the number of test cases. Separate initialization means you can't always use const when you should.

Exceptions are necessary to the C++ promise that your object will be fully constructed and later destructed, or it will not be constructed at all. This is the basis of the RAII pattern, which just happens to be the best pattern yet devised for resource management. Without RAII, you will almost certainly have leaks. Worse, you won't be able to write exception-safe code, so you are essentially closing the door to ever using exceptions.

I've gone the opposite direction. Moving more of my C++ code into C by using my own OOP system. Before you say "That's crazy talk", consider that it makes inter-operating with my game's scripting language so much more buttery smooth than in C++ -- It's so nice to just create a new object in either script or C and have them talk to each other without going through a few layers of APIs, passing off dynamically allocated instances and having C free it, or the script GC it automatically.

This could work well. Indeed, there is something to be said for having C with a few extensions being a lot better for application programming than plain C.

On the other hand, a lot of real-world C++ code is as crappy as it is exactly because people write it as if it were C with a few extensions, rather than taking advantage of other C++ features that would make it actually nice to read.

As an example of this, it helped me a lot when I finally realized that, in C++, you can use almost any well-implemented type

The only downside to C++ vs C, IMO, is the C++ learning curve. You have to learn all of both language, and how their respective constructs get translated to machine code and when to use what. However, once you've done that, and internalized it all, you can write highly efficient code at a much higher level of abstraction, making you much more productive. And you can also drop down to low-level bit-twiddling when you need to, wrapping it in a higher-level abstraction or not, as appropriate.

I've never understood the hostility towards OOP. I've always seen it as nothing more than another great tool to use, but so many posters act as if OOP is some false god brainwashing the masses. My theory is they're taking the act of embracing OOP as synonymous with insulting C.

Look at the added java.io.PrintStream.printf() [oracle.com] method that uses a variable argument list. Someone had to be a special kind of asshole to adulterate a strongly-typed OO-language with that bullshit when the obvious OO solution is an array for a second argument. That's the kind of modification made when someone is making a political point, not a design improvement.

Irrelevant? Not quite. For your particular use, maybe. But most Linux distros are still built using GCC, and most embedded platforms provide a GCC-based toolchain. So if, by 'irrelevant', you actually mean, 'the compiler with the most-often executed output code on earth', then yes, I guess you're right.

Most embedded platforms use Keil, assembler and all kinds of various odd proprietary compiler suites that suit their 8-bit and 16-bit nature better. The elitist, narrow though visible niche of 32-bit ARM is using GCC.

I assure you your refrigerator temperature thermostat was not programmed in GCC.

And the AVR I have used relied on a mix of GCC and GNU assembler. I think someone somewhere had an official commercial compiler for it, but that doesn't help if it's not licensed for anyone in the company to use.

I have actually seen cases where companies license one commercial compiler for use in production builds while all the developers use GCC, out of concerns that the commercial compiler is more efficient while being too expensive to license more broadly. Over time there's pressure to dump the commercial compiler because it tends to be difficult to debug when the devs don't have access to the production compiler, and because it turns out the expensive compiler doesn't really generate more efficient code.

Actually a number of the older embedded platforms I've programmed for DID in fact use gcc+patches, usually with proprietary stuff added all around. I believe for most of the microcontrollers supported by Keil, the compiler is based on GCC (often the older 2.x series.)

It's got digital temp settings for fridge/freezer compartments, an optional "super-cool" for fast-freezing the freezer section after putting a load of groceries into it.

One of the coolest fridges I've seen (don't own it, too expensive) actually learned your habits--if you always have breakfast at time X and it generally results in the temperature in the fridge warming by a degree, it will pre-cool by an extra degree at time X-1 so that when you open it for breakfast it will warm up to the desired temperature.

Well, let's see. I personally work with control systems using x86, MIPS, PowerPC and ARM architectures, running Linux, VxWorks, QNX and WinCE (various combinations). They all have GCC toolchains, although we admittedly don't use it for CE.

Now, personally, my refrigerator has an analog thermostat, so, technically, you are right. If it had a thermostat implemented on a CPU, then I'd think there's a very good chance it was compiled with GCC.

What exactly "programmed in GCC" might mean is left for the reader to speculate on.

I also note with curiosity that the one vendor you can actually name with a competing compiler is a development environment aimed primarily at ARM and is, in fact, produced by the "elitist, narrow though visible" ARM.

It took me til about 1990 to realize that C++ was a fundamentally broken and overcomplicated attempt at an object oriented programming language. By attempting too much (OO + C backward compatibility) it achieved, to be kind, something other than safety and elegance.

C++ seems to me like the space shuttle of programming languages; includes a kitchen sink, a tool on board for every purpose, lightning fast, and dangerous as hell.

No. 22 more years has seen Challenger and Columbia blow up, and we've learnt some lessons about things we should do and things we shouldn't do. Just as the Challenger investigation didn't conclude, "Ban O-rings," nobody has decided to ban parts of C++, either.

C++ is in some ways like a human language: It has an enormous range of things you can say in it. Some of them are only appropriate in certain situations. Some of them are never appropriate if you want people to take you seriously. Some of them just plain don't make sense.

So quite a lot of the development over those 22 years has been in the community learning idioms that let you use the power of C++ without hurting yourself.

C++ is awesomely powerful, incredibly fast and resource efficient, and between new high performance applications and existing codebases it will continue to be used for decades.

However, it also has a beastly learning curve and lots of corner cases, and while its execution speed is wonderful it's so complex that compilation times for non-trivial applications are slower than equivalent feature (but slower execution) applications written in most other languages. If you really need performance that only C+

It took me til about 1990 to realize that C++ was a fundamentally broken and overcomplicated attempt at an object oriented programming language. By attempting too much (OO + C backward compatibility) it achieved, to be kind, something other than safety and elegance.

Actually if it only had near-compatibility with C and OO, it would have been a very nice and useful language. But then things went south and they added too many overloadable operators, a nightmarish jumble of rules for typecasting/overload resolution, exceptions that can't be implemented properly in modern application software, but add a whole new dimension of concerns that the programmer should always be aware of... Then they topped it all off with hideously overcomplicated templates. The standard librarie

"So now, real-world projects that use C++ for the useful things it does provide have to maintain coding guidelines to avoid shooting themselves in the foot too often."

How is that not the case for _any_ modern language? Anyone can write terrible code in any language. I've seen some Python that made me want to rip my eyeballs out (it used tons of esoteric functionality... coupled with a design that made me question the author's sanity).

Coding guidelines are a good idea no matter the language. Keep everything consistent and make sure that the code remains maintainable into the future...

You are missing the point. Most languages, if not all, have coding guidelines, but compare guidelines for, say, Java, Python, or even C, with existing coding guidelines in C++. You'll see the difference in how much the latter cut away from what is available in C++.

Most C++ coding guidelines (in particular for systems and mission-critical development) cut away templates, STL, iostreams and exceptions. Boost and RTTI are certainly the most commonly banned of all.

The Java community is working around some of the design flaws in the language related to exceptions. As you probably know, RuntimeExceptions don't have to be declared or explicitly caught with try/catch, unless the developer wants to catch them. So I've seen tons of code in different open source libraries that wraps the core Java libraries with code that does try { doSomethingWithCoreJavaLibrary(); } catch (Exception ex) { throw new RuntimeException(ex); }

I hadn't really looked at it much through the course of my career -- most of my employers wanted C or Java, not C++. Having only recently started with it, I'm finding it to be about as sharp a weapon as C, but with the ability to be far more type safe. It really isn't that difficult to get a grasp on it. You just need to understand its pass-by rules, which are moderately more complex than Java's. You also need to be able to understand the STL and use it effectively. You also need some object oriented design experience if you're doing your own design work.

The third party libraries for it are pretty nice these days, too. I'd rather do threading in C++ with boost::thread than in Java. I've found boost::regex and boost::program_options to be a joy to work with as well. Eigen is also very nice if you need a math library.

Overall I've been quite enjoying working with it. It's not nearly as intimidating as it first appears, and the stuff you really need to know about it is pretty simple and easy to learn.

Division support in C on some platforms (such as ARM) and exception support in C++ rely on libraries called libgcc and libsupc++. These libraries are GPLv3 with an exception. Were it not for the exception, anything compiled with the would either be GPL (because of libgcc and libsupc++) or produce a linker error (because the libraries are called and not present). The exception applies only if the compiler has not been modified to introduce non-free optimization passes performed in an independent process. See GCC Exception FAQ [gnu.org].

Who still believes in GPL cooties? Apple, FreeBSD, 6 year olds, anybody else?

Were it not for the exception, anything compiled with the [gcc compiler?] would either be GPL (because of libgcc and libsupc++) or produce a linker error (because the libraries are called and not present).

I think you mean "linked with libgcc/libsupc++". One can compile code with gcc/g++ without linking against the bundled libgcc. For example, the BSD-licensed libcompiler-rt library produced for the LLVM project is said to be a drop-in replacement for libgcc, and as a bonus, it's even a bit more efficient. If the same is not already true for libsupc++, I'm sure it's only a matter of time.

The LLVM project started in 2000 at the University of Illinois at Urbana–Champaign, under the direction of Vikram Adve and Chris Lattner. LLVM was originally developed as a research infrastructure to investigate dynamic compilation techniques for static and dynamic programming languages.

LLVM was created by FreeBSD due to the continual dropping of support for older hardware by the GCC team. Another issue they had was the optimizations of the software increased the difficulty of debugging things as the optimizations varied every time they compiled the software. Thus LLVM was created with the goal of binary stability that could be easily debugged and that supported the many older pieces of kit that FreeBSD runs on instead of being forced to use GCC 1.2/1.5/2.1/2.2/2.3 and such.

GCC still blows the crap out of LLVM in several benchmarks. LLVM is great for many things as well. GCC needed competition to make sure it didn't get stagnant. Some of us still remember the egcs period of time. Unless corporate entities were modifying the sources of GCC, I'm not sure why it matters.

The GPL doesn't force you to give back. You need to have a read of it, it only forces you to "give forward".

Apple has now fully embraced clang/llvm for a couple of reasons: it was legally very difficult for them to integrate gcc tightly with their IDE (by which I mean they would have to GPL Xcode if they linked directly to gcc); it is technically very difficult to integrate with an IDE - apparently the gcc code base is a complete mess as far as integration with other tools is concerned.

Clang/LLVM is financed by Apple and it is released under an Open Source licence. Call that parasitic if you like but because of Apple (in part) you now have a clean modern compiler toolchain that's a credible open source alternative to gcc. If nothing else, it means that the gcc dev team now have an incentive to improve their product because they have competition.

AIUI GCC is now GPLv3, and the libraries it ships with are GPLv3 with exceptions that allow using them to build non-GPL programs. However they were paranoid about the idea that people would try and save gcc's internal state to disk and then run it through a proprietary backend. So they crafted a complex exception that tries to forbid that while allowing most other combinations of gcc with proprietary tools.

Yeah, that always freaks me out. GCC backends at least are configured using LISP wrapped in C. I hope this is one of the things they clean up, though it won't be straightforward. LISP is quite powerful and fast as a machine language, it just happens to be unparsable by humans.

> LISP is quite powerful and fast as a machine language, it just happens to be unparsable by humans.

What on earth are you talking about? Lisp is extremely trivial to parse. Lisp barely even has syntax.

Now, keeping track of Lisp program flow in your head, that can be a bit tricky and can lead to some substantial maintainability issues, especially when some hotshot programmer starts throwing lambda functions around like there's no tomorrow (or, worse, continuations).

Which is a compiler more likely to be able to optimise: polymorphism that is explicit in the language, or polymorphism that is hacked together by manually creating vtables (which are basically structures full of function pointers)? Which is more likely to have mistakes made that associate the wrong vtable with an object?

C++ has its problems, but it's the only widely supported language that both provides OOP features and yet still allows the writing of tight code where needed.

Doesn't help. STL arrays are allocated on the heap, and that's a much slower and more wasteful form of allocation than the stack.

What is an "STL array"? If you mean std::array, then no, it's allocated on the stack. If you mean std::vector, then that's a dynamically resizable array, and an analogous data structure written in C would still be heap-allocated - you'd just have to do malloc/realloc/free yourself.

Sure, you can use C arrays, but guess what: out go type safety and STL algorithms and C++ idioms.

Again, wrong. Since raw pointers are iterators, you can perfectly well use STL algorithms and other C++ idioms with C arrays. In C++11 it's even easier now that std::begin and std::end are defined as global functions, and overloaded for arrays, so you don't need to muck around with pointers at all. Type safety is still there as well, since C arrays are typed.

If they use the Smarter-C-than-C parts of C++, it's fine. Just don't go overboard with modern C++ style -- bloatware with templates and generics, autopointers, overloaded operators and functions, etc. Use it as C with better type checking and easier modularization and the C diehards will approve.

Yeah. I'm not a fan of C++, though the compiler spends so little time running that this shouldn't pose much of a problem with bloat and clunk. On the other hand, loading C++ stuff is an abomination that takes an eternity due to massive name mangling (a problem Michael Meeks has spent a lot of time trying to mitigate with -Bdirect linking, faster hash algorithms, etc.), and the compiler gets run repeatedly.

I'm not sure mangling is really as much of a problem people make it out to be. It *did* cause problems trying to mix binaries from different compilers but I don't think it was ever really a performance problem. If linking is slower it's because the programs are larger.

OTOH name mangling is a massive benefit to programmers. Writing big programs is a huge pain in the butt if every single function/variable has to have a unique name. Namespaces are one of the reasons C++ programs scale so much better than C programs.

The problem with mangling is C gives you a symbol like "strcpy", which you might compare for the 50,000 links that have to be made during program load, and have to perform 300,000 character comparisons.

In C++ you get _NSstd__IOSTREAM__55STRING_OPEREQ__STRING__STRING__CHARX__ or some crazy thing. You wind up with 100, 150, 250 character long function names for class foo member 'int bar(int, &int)'. To make matters worse, the above hypothetical was ridiculous: you won't do 300,000 character comparisons

Maybe not a big deal on a Linux system with an older G++ already installed, but this could be a serious issue for bootstrapping GCC on non-Linux platforms. Where you might have only needed the native C compiler before, now you will need the native C++ compiler, which may be an expensive product.

Unless they're going to make it a multi-step bootstrap where the first pass is only C code. I highly doubt that.