Lately my Spider-Sense has been tingling a lot, and here and there on the web I've found some discussions about the regulation/licensing of programmers. Now, even though I'm going to (try to) resume my academic career, eventually getting my BS in Computer Science, I still don't like the idea of regulation - in any profession, for that matter. Furthermore, it seems to me that prominent common-law countries (the US and the UK) are on their way to establishing suffocating bureaucracies not unlike the ones found in continental Europe. I also don't think there's a well-defined body of best practices that should be mandatory for practitioners - no, I don't think OO is the only way. I also think that (most) software isn't that similar to a bridge or a nuclear reactor: to me it's more a form of expression, like literature. Am I paranoid, or are the bureaucrats really coming for programmers? What do you think? I also wonder what would happen to the countless senior programmers with no academic qualifications: is the writing on the wall for them?

It's a bizarre idea indeed - however, these days politicians around the world are more convinced than ever that rainy days should be banned by law.

@rouncer: On the regulation, i.e. complete censorship, of the internet: I think recent events have shown it's only a matter of time. My two cents: the EU is already doing the same thing as the NSA, just in a more bureaucratic and grotesque way - it's the EU, after all! And guess what? They're going to say it's required to properly fight tax evasion - these days that's their favourite explanation...

I don't see any evidence there that software engineers are going to be universally required to be licensed anytime soon. Apparently in a few jurisdictions there exist professional software engineer licenses that you can get if you qualify, but it doesn't sound like any of those jurisdictions actually requires you to have such a license to get a job. That Steve McConnell article is from 1999 btw, so people have evidently been talking about this for a long time without much movement.

If we ever get something like that it will come from software developers forming a union, because that's the way that kind of thing works. They put the laws in effect to protect themselves and shut out scabs and whatnot.

P.Eng is a technical term for redirection of liability. The only jobs that require it (and even then I've seen cases where it was optional) are high-risk ones. If something goes wrong and lives are lost, they have someone to blame and throw in jail. Without a P.Eng, you have a get-out-of-jail-free card. The government, of course, loves certification because it gets them off the hook.

@Albertone:

I also don't think there's a well-defined body of best practices

Not everyone thinks the same way or likes to operate under the same principles, but there are practices that vastly improve software quality and productivity. When developers ignore them because they think they're a waste of time, you end up with a messy product littered with issues, lacking documentation, and impossible to sustain. Software engineering is still a young industry, but it needs to grow and become more mainstream. We need to recognize and promote best practices - otherwise, how can we evolve?

Ha ha, I didn't know Linus felt that way about C++ and the STL and Boost. I kind of like them myself, but I suppose I'm not the kind of programmer he'd want around. I've never understood why C programmers have to write their own low-level stuff that's different just to be different, so that no one but the authors knows how to use it. You have to learn how the author did everything before you can proceed. As for object orientation, it's the easiest code to read and maintain I've ever seen. You can probably write readable, maintainable code without it - and you can also write unreadable garbage with it - but object-oriented code tends to be more readable even if you hardly try. C coders seem to always use char arrays instead of strings, and their code is full of holes. Take a look at Ogre some time: that code base is always changing, but it's still readable and relatively fast. Linux might be free and open-source software, but it also looks like a dictatorship to me. Maybe that's the only way something like that can work.

With fast processors and ample memory, ease of maintenance becomes more important, I think.

The thing is that correctly written C++ code should be as fast as C code (assuming, of course, you don't misuse virtual classes, smart pointers, etc.) - so using C over C++ doesn't buy you any speed. Note the word correctly - 90% of C++ developers use it incorrectly (too much abstraction, too many virtual methods, and terribly wrong use of STL stuff).

As for maintenance, I strongly disagree with what you wrote. C code is as easy to maintain as C++ - I'd even say easier, since you don't need a few pages describing syntax, naming, etc.; a single page will do (no interface-specific naming, virtual-class naming conventions, and so on). The C language is also simpler: using dynamic libraries in particular is much easier in C (C++ library interfaces are a bloody mess), and you can't have two functions with the same name (which is a HUGE advantage - you can't accidentally call a different overload), etc. So using C instead of C++ is viable for large-scale projects, and maintenance is as easy as in other languages.

Of course, it strongly depends on the project creator/leader. You can create an unmaintainable bloody mess in C as well as in C++ (and vice versa). The bad thing is that most people are taught (and then believe) that OOP is the only (or best) way to create maintainable code, especially when multiple developers work on a single project. It really depends on the people who created and contribute to the project - it isn't bound to a paradigm or language, it's bound to projects and people.

Sadly, 90% of multi-person projects don't even have a document describing syntax conventions, etc., so for a reader it becomes really hard to figure out what's going on. Most C projects are started by a single guy who doesn't realize someone will read the code after him, so he just uses his own habits.

Ha ha, I didn't know Linus felt that way about C++ and the STL and Boost.

It's not just Linus. I personally hate the STL and Boost too. I don't think they're bad as such, but they try to do everything and end up doing nothing well. Debugging STL or Boost code is a nightmare, and again, 90% of people overuse them. They hurt portability (they seem to behave the same on Windows and Linux, but in the end they differ, which creates a ton of headaches), and more importantly, the amount of memory they take is huge. I still use the STL in some of my projects - it wasn't my idea in the beginning, and since I develop cross-platform applications, we came to consider that decision a terrible mistake that sadly won't be reversed in those projects.

I'm sure a lot of it is just personal preference. I just think OOP is easier to read and maintain. When I use it, I think about the smallest object I want to deal with that can be extended into more complex objects. That helps me construct a better model to which I can easily add new objects without making a big mess, and they can all be dropped into the same container, etc.

I hadn't heard that about the STL. If it's true, it's the fault of the maintainers and they should get to work, because that's the whole idea of standardization: when we use it, we don't have to review every coder's different way of doing things and work from that.

C++ really looks unfinished to me in a lot of ways. You suddenly have to drop back to a C way of doing things, which seems stupid and awkward - it has a split personality. I wouldn't want to see it split from C; I just think there should be more clearly C++ ways of doing things. I haven't used it for a while, so maybe it's better than in my previous experience. It's as if everyone gave up on it as an OO language and worked on C# and Java instead. Still, it probably gets used more than those others, so it must be doing something right.

If I have a choice, I'll take an OO engine any day of the week. Unity uses a component model and I really hate it. Supposedly it's faster, but I feel like I'm constantly starting over every time I need something a little different. I'm sure the engine itself was written OO, but they sealed the game object so it couldn't be extended - you can only add components.

Note that you can actually do OOP in classic C (though doing it in C++ gives you basically the same code with clearer syntax), and this is why I love C++: you can write low-level code with 'extern "C"', keeping good old naming for core procedures (pure C is often simpler than building an object mess around lower-level stuff), and then create a nice class on top of it that can actually be used.

The STL is fine as long as you stick to a single STL implementation; once you switch, the mess begins. I'm glad that at least libc is pretty well standardized on most platforms and the basic functions behave the same. The STL is okay as long as you use just the basic containers without extended types (never specify alignment for the stuff inside containers - switching STL implementations on that code simply results in crashes). A lot of things have undefined or platform-specific behaviour: if you try to get the CPU thread count with the STL, most implementations return 0 (i.e. not-yet-implemented), one or two work correctly, and I've even found one that returned a weird value (I learned my 4-core CPU has 2^32 threads). Maps and sets also differ on each platform (and their timings differ significantly) - I had a resource manager built on a map that was blazingly fast on libc++ in LLVM (Clang), and in MSVC it was terribly slow (more than 400 times slower).

In my opinion they shouldn't focus on adding (what I consider) useless stuff - smart pointers, unique pointers, and the like, which is easy to create on your own - but on standardization, and on forcing each implementer to strictly follow the spec.

this is why I love C++ - you can write low level code ideally with 'extern "C"' ~ keeping good old naming for core procedures (doing pure C is often simpler than doing an object mess around, for lower level stuff), and then create some nice class on top of it, that can actually be used.

On its own, it's fine. Being able to multiply two Matrix values with * is great, but you also want Matrix * Vector to give you a Vector.

So when you see a line of code a = b * c; ... what is it? You have no way of knowing without going back through the code and working out the types of the variables.

Even that isn't too bad, most of the time you know the types of everything so the code is obvious.

But the problem is when someone decides they want to save time and overloads an operator with something that has nothing to do with the operator.

I've seen the ++ operator overloaded with a routine that created a formatted string and printed it to stdout.

As Vilem says though, that's just bad coding rather than a problem with the language.

I've also had major problems with a web browser that was written in C. They wanted an OOP design, so they used a system based on structures full of unions. But when I moved their code onto our compiler, we had alignment issues due to the unions. What should have been a week's work ended up taking three months.

This is a major company who bragged about how portable their code was!

I hate using functions for basic math operations like vector addition or matrix multiplication. It's so much clearer just to write it using standard math syntax.

But I would draw the line at using an operator for dot or cross products. For those I prefer functions, since there are no operators in C++ that look like the mathematical dot and cross symbols.

The C++ iostream system with its overloaded << and >> operators is an abomination, IMO. I understand the desire for a type-safe formatting solution, but it's just gross, so I stick with printf and friends. Clang now type-checks printf arguments against the format string, and it lets you mark your own printf-like functions so it can check them too.

But we have gotten way, way off topic now. To get back to it: I definitely can see the argument for some type of certification for engineers working on safety-critical systems, like we have certification for engineers who build bridges and suchlike. If nothing else, it gives you ammunition for pushing back on bosses who want you to cut corners on such a project. "I can't take shortcuts here; it wouldn't be ethical and I could lose my license."

There are probably just as many ways of building a bridge as there are of building a software project. I don't buy the idea that the lack of a single "best" software design methodology means there can't be a useful professional certification. You can go meta: the test could have a section about different design methodologies and intelligently choosing between them.

I hate using functions for basic math operations like vector addition or matrix multiplication. It's so much clearer just to write it using standard math syntax.

I don't see why it's clearer than vectorMult(first, second) or whatever. I can go to the function and find out what's going on with it that way. If operators were never misused, that would be fine, but a lot of the time they're used for something that isn't even commutative. If it's a function, that's a clue that something non-standard is going on. And if someone is going to do something like that, I want it to be a trusted library, not some guy who thought it would be cool.

Safety-critical systems should be written in a suitable language; certification won't help with that.

Anybody who writes safety-critical code in Java is an idiot. For a virtual machine to be certified, it only has to run for ten hours without the garbage collector crashing. That's it - ten hours.

In fact we had major problems getting ours certified. It ran the test 147 times faster than the Sun JVM, which meant the garbage collector had to survive for the equivalent of 1470 hours (about two months). It usually crashed after 9.5 hours.