
Bjarne Stroustrup on Modern C++


Good article on the state of C++ today and computer science education in general:


I've just finished reading it. It was a very interesting and enthralling read; I was practically glued to the screen, at least until a phone call informed me of a nice delivery to pick up from reception. But thanks, I enjoyed that.

Skimmed over it so far, and it is a great interview. He and I are of one mind regarding the lack of education on modern computer science courses, and I particularly chuckled when he mentioned people unfamiliar with proper practice criticising the decisions made for C++'s structure and what-have-you. You simply wouldn't believe the number of people I have encountered, both in real life and on forums, with that attitude, not just towards C++ but towards software engineering in general, when they have no idea what they are talking about.

I have to say it made a lot of sense to me; I agreed with practically everything he said. I've successfully used C++ for real-time embedded work and for large-scale apps: you use the portions that make sense in each case. He's also right that ditching languages every few years is counterproductive, that education needs to teach the fundamentals, and that learning it all takes 5-6 years. He also mentions the shortcomings of the language: almost every other language has a much better standard library, and I agree that threading, networking and memory-management primitives are very important for C++ to advance.

I read this article when studying OOP in C++. Having been trained in VB.NET using Visual Studio, with everything available as a component, it came as a surprise to have to write my own date class. We also had to write using a plain text editor, which nearly killed me because of the number of mistakes I made. The learning curve for C++ after other high-level languages is quite a steep one, but well worth the effort. It is interesting to note that some of the more troublesome features of C++, such as multiple inheritance and pointer arithmetic, are now either discouraged or forbidden altogether in subsequent languages such as C#.
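For anyone curious what such an exercise looks like, here's a minimal sketch of the kind of date class a course might ask for. This isn't the actual assignment, just an illustration of how much you have to build yourself compared with dropping a component onto a form:

```cpp
// A minimal date class of the sort a C++ course assignment asks for.
// Everything VB.NET gave us for free has to be written by hand here.
class Date {
public:
    Date(int year, int month, int day)
        : year_(year), month_(month), day_(day) {}

    int year() const { return year_; }
    int month() const { return month_; }
    int day() const { return day_; }

    // Gregorian leap-year rule: divisible by 4, except centuries,
    // unless divisible by 400.
    bool is_leap_year() const {
        return (year_ % 4 == 0 && year_ % 100 != 0) || year_ % 400 == 0;
    }

private:
    int year_, month_, day_;
};
```

Even this tiny version forces you to think about validation, representation and invariants, which is exactly the learning the plain-text-editor approach was after.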

It is interesting to note that some of the more troublesome features of C++, such as multiple inheritance and pointer arithmetic, are now either discouraged or forbidden altogether in subsequent languages such as C#.


Multiple inheritance was used because there was no language support for interfaces. Concepts of mixins and duck typing are alive and well, as are traits types. Languages don't have multiple inheritance largely because they use other idioms or have interfaces as first class language constructs.
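For what it's worth, here's a rough C++ sketch of that idiom. An "interface" in C++ is just an abstract class with only pure virtual functions, and inheriting from several of them looks much like a C# class implementing multiple interfaces. The class names here are made up purely for illustration:

```cpp
#include <string>

// An "interface" in C++: an abstract class with only pure
// virtual functions and a virtual destructor.
struct Printable {
    virtual ~Printable() = default;
    virtual std::string print() const = 0;
};

struct Serializable {
    virtual ~Serializable() = default;
    virtual std::string serialize() const = 0;
};

// Multiple inheritance used purely for interfaces -- the same
// shape as "class Invoice : IPrintable, ISerializable" in C#.
class Invoice : public Printable, public Serializable {
public:
    explicit Invoice(int number) : number_(number) {}
    std::string print() const override {
        return "Invoice #" + std::to_string(number_);
    }
    std::string serialize() const override {
        return "{\"number\":" + std::to_string(number_) + "}";
    }
private:
    int number_;
};
```

Used this way, multiple inheritance carries no state from the bases, which is exactly why languages with first-class interfaces could drop the general feature.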

You will never get away from managing memory at some level; it may not be strictly required in high-level languages or for business apps, but it is very useful for systems programming. Platforms like .NET and Java still have things like strong and weak references, and you must still be aware of the memory model when writing advanced code. The introduction of references vastly reduced the need for pointers in modern C++; only small parts of system libraries should need to use them.
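A quick sketch of what that reduction looks like in practice, with pass-by-reference where older C-style code would have passed a raw pointer. The function names are just for illustration:

```cpp
#include <string>
#include <vector>

// Old C-style signature: the caller passes a pointer, the callee
// must check for null, and ownership is ambiguous.
int count_chars_c_style(const std::string* s) {
    return s ? static_cast<int>(s->size()) : 0;
}

// Modern C++ signature: a reference cannot be null and never
// implies ownership, so the check and the ambiguity disappear.
int count_chars(const std::string& s) {
    return static_cast<int>(s.size());
}

// References also keep mutation visible in the signature rather
// than hiding it behind pointer syntax at the call site.
void append_lengths(const std::vector<std::string>& words,
                    std::vector<int>& out) {
    for (const std::string& w : words)
        out.push_back(static_cast<int>(w.size()));
}
```

The pointer version survives only where null really is a meaningful value, which is the small corner of system code the post is talking about.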

Many years ago, the INCITS H2 Database Standards Committee (née ANSI X3H2 Database Standards Committee) had a meeting in Rapid City, South Dakota. We had Mount Rushmore and Bjarne Stroustrup as special attractions. Mr. Stroustrup did his slide show about Bell Labs inventing C++ and OO programming for us, and we got to ask questions.

One of the questions was how we should put OO stuff into SQL. His answer was that Bell Labs, with all their talent, had tried four different approaches to this problem and come to the conclusion that you should not do it. OO was great for programming but deadly for data.

I have watched people try to force OO models into SQL and it falls apart
in about a year. Think in relational terms instead.

Let's get collective or plural names on the tables, since they model sets; make the data element names consistent (why did you have both "name" and "school_name" for the same thing?); and pick sizes that match the USPS rules for address labels.

Doing stuff like having a column take on a name based on which table it is in (e.g. "FOREIGN KEY(schoolname) REFERENCES School(name)") is a horrible design flaw that completely destroys your data dictionary.

If you never built a data dictionary in the first place, try to catch up before you are totally screwed -- or update your resume and run for the hills.

The learning curve for C++ after other high-level languages is quite a steep one, but well worth the effort. It is interesting to note that some of the more troublesome features of C++, such as multiple inheritance and pointer arithmetic, are now either discouraged or forbidden altogether in subsequent languages such as C#.


I'm still a firm believer that it is the programmer rather than the language at fault here. Place C++ in the most expert of hands and it is truly a wonderful language.

Be glad you're not in my job, where we mainly deal with assembly, C++ and reverse-engineering bytecode.

I have watched people try to force OO models into SQL and it falls apart
in about a year. Think in relational terms instead.


Totally agree. Look at ORM and Microsoft's approach with ADO and DataSets; in fact M$ deliberately avoided releasing a pure ORM framework for these very reasons. The relational model is incredibly powerful and backed by years of research by mathematicians; it should be the default approach unless you know what you are doing. Look at EJB: it was a massive foul-up, largely for two reasons:

1. No separation of concerns: there is no need to marry a component model, distributed computing and persistence. The three areas are concerned with solving quite different problems.

2. A very poor ORM technology and poor use of that technology. ORM should be used carefully.

Doing stuff like having a column take on a name based on which table it
is in (e.g. "FOREIGN KEY(schoolname) REFERENCES School(name)") is a
horrible design flaw that completely destroys your data dictionary.

If you never built a data dictionary in the first place, try to catch up
before you are totally screwed -- or update your resume and run for the
hills.


Doesn't seem so bad to me; foreign keys rarely have the same name as the primary key they reference, as there's normally a naming convention of table name + key name. A data dictionary seems to be a rather archaic concept. It is a global glossary, and in a complex system made up of many packages, namespaces, components and classes it's probably pretty useless other than as a key to other technical documentation. And yes, I have done Yourdon, SSADM, UML, RUP and Agile before you ask; they all have their advantages and disadvantages!

“Dictionaries are like watches; the worst is better than none, and the best cannot be expected to go quite true.”

-- Mrs. Piozzi, Anecdotes of Samuel Johnson, 1786

Sometimes I think brain capacity and ego size must be inversely proportional.

I have to admit I usually create the data dictionary last, as part of the project's documentation. But that's because I use a RAD approach rather than a waterfall model. And I'm still not terribly sure what Agile is all about, to be honest... it seems to combine the disadvantages of both if you ask me (never-ending changes and a full dev cycle).

Sounds like Mr Celko is the one with the ego...


I don't see it that way. He's just confident he's right... and he usually is. Guys with big egos will continue down a path they know is wrong just to save face.

And I'm still not terribly sure what Agile is all about, to be honest... seems to combine the disadvantages of both if you ask me (never-ending changes and a full dev cycle).


Well, Agile's not really something that should be applied to an in-production database schema, so the advantages are likely to be lost on most DBAs.

However, anyone who has seen 'analysis paralysis', or a project generate a year's worth of documentation and no code only to be terminated, will understand the benefits.

Some parts are similar to Iterative and RAD: de-risk the project by delivering early and often. Requirements change, the market changes, the requirements may have been wrongly stated, etc. The cost-of-detecting-bugs curve is often used to argue that requirements must be nailed down and 100% right as early as possible; in reality this approach can actually load risk onto a project and lead to over-engineering.

Releasing early and often does produce its own inefficiencies, but you are at least getting real feedback on the viability of your designs. Architects build models, engineers go into wind tunnels, electronics engineers use simulators and create prototypes; these are all ways of reducing risk by getting early feedback.

The idea is to use feedback cycles to produce a virtuous spiral of improvement; look at how athletes 'raise the bar' using sports science.

CertForums.com is not sponsored by, endorsed by or affiliated with Cisco Systems, Inc. Cisco®, Cisco Systems®, CCDA™, CCNA™, CCDP™, CCNP™, CCIE™, CCSI™; the Cisco Systems logo and the CCIE logo are trademarks or registered trademarks of Cisco Systems, Inc. All other trademarks, including those of Microsoft, CompTIA, VMware, Juniper ISC(2), and CWNP are trademarks of their respective owners.