A ten minute compile is not a big code base. Last time I compiled the Qt 5 libraries it took 5 or 6 hours!

...nudges me towards the extra step of solving partial problems in separate projects first, before integrating them into the big project. Especially in the initial phases of tackling a programming problem where I haven't fleshed out yet how my data structures, algorithms and/or API have to look.

This made me giggle.

Whilst deriding long compile times you are saying that they nudged you into doing the correct thing.

Significant chunks of code should be developed, built and tested in isolation before going anywhere near the project code base. This encourages separation of concerns: you won't inadvertently introduce dependencies into the new code that need not be there. It also encourages proper testing of the chunk of code ("unit testing", as it is commonly known) and makes such testing much easier.

When you have your tests in place and they are in good shape then it's time to integrate that code chunk.

When bugs are discovered in your program later, you will be able to run those unit tests and see whether any changes have broken those chunks. That is regression testing.

When bugs are fixed you can update those unit tests to demonstrate that the fix is correct, and to prevent the bug from recurring, as bugs have a habit of doing.

All in all, those long build times are encouraging you to adopt good practices as used by organizations that develop software that needs to work correctly.

If you prefer edit/compile/test cycles of 15 minutes because that supposedly makes your design better because it's not 'designed at the keyboard' (where else? on paper? on a whiteboard? in your head?) ....

If there is no paper or white board or head used in the design process before one starts typing code there is something very wrong.

Of course "white bearded" C programmers know when to use arrays and when to use pointers. They can also tell when people are pontificating about "smart C programmers" without actually being one themselves.

As you are clearly referring to me there I have to say:

White bearded programmers should know better than to stoop to argumentum ad hominem.

I am saddened to find you resort to unnecessary insult to support your ill thought out argument.

Your whole series of posts comes over as insulting. You seem to think that all C programmers are incompetent idiots who can't write correct or secure code. This is clearly nonsense. "Because 'performance' or whatever silly reason." is deeply insulting.

PeterO

Discoverer of the PI2 XENON DEATH FLASH!
Interests: C,Python,PIC,Electronics,Ham Radio (G0DZB),1960s British Computers.
"The primary requirement (as we've always seen in your examples) is that the code is readable. " Dougie Lawson

PeterO wrote: ↑
With edit/compile/test cycles easily taking 15 minutes you quickly learn to plan out your code rather than "designing at the keyboard" which seems to be popular these days.

I must respectfully disagree with this. I have worked on big codebases which indeed took over 10 minutes to compile,

I'm not talking about big code bases, I'm talking about using old/slow machines where even quite small programmes take minutes to compile.

and I found this made me unhappy because such long compilation times take me outside 'the flow'. It makes me hesitant to compile & test often, and nudges me towards the extra step of solving partial problems in separate projects first, before integrating them into the big project. Especially in the initial phases of tackling a programming problem, where I haven't yet fleshed out how my data structures, algorithms and/or API have to look.

If you prefer edit/compile/test cycles of 15 minutes because that supposedly makes your design better because it's not 'designed at the keyboard' (where else? on paper? on a whiteboard? in your head?) it sounds like a mild case of Stockholm Syndrome to me.

You seem to have completely missed the point. When there was (or is) no alternative to a 15 minute cycle, you take great care to (as someone else just wrote) "get it right first time". I find this makes me think much more carefully about what I'm writing. I also find I have far fewer "edit/compile/test" cycles when I'm writing code on old machines than I do when I'm using a PC, even for similar code size/complexity.

PeterO

Discoverer of the PI2 XENON DEATH FLASH!
Interests: C,Python,PIC,Electronics,Ham Radio (G0DZB),1960s British Computers.
"The primary requirement (as we've always seen in your examples) is that the code is readable. " Dougie Lawson

I must respectfully disagree with this. I have worked on big codebases which indeed took over 10 minutes to compile,

When I was learning programming, at first anyway, everything was on punched cards and we had one run per day.
Focuses the mind on getting it right first time.

With 8th I usually write small words and test the SED (stack-effect diagram) and outcome as I write. These small words are then factored to produce a working program (hopefully!).

"Divide and Conquer" is a great technique. On modern machines I rarely add more than a dozen lines of code without testing. On old machines there aren't enough hours in a day to perform all the required compile cycles, so I use an equivalent to your "small words" and develop subroutines in isolation (where possible). One language I use (called HCode) allows a subroutine in an "in core" compiled programme to be cancelled and redefined without recompiling the rest of the source code. It doesn't recover the memory used by the cancelled versions, so eventually the compiler runs out of memory, and at that point a complete recompile is needed.
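In C, one way to get the same isolation (a sketch; the ring-buffer module and its names are invented) is to put a test driver behind a conditional compile, so the subroutine can be built and tested standalone, then linked into the full programme unchanged:

```c
#include <stddef.h>

/* ringbuf.c: a hypothetical chunk developed and tested in isolation. */
#define RB_SIZE 8

static int buf[RB_SIZE];
static size_t head = 0, count = 0;

/* Returns 0 on success, -1 if the buffer is full. */
int rb_put(int v)
{
    if (count == RB_SIZE) return -1;
    buf[(head + count) % RB_SIZE] = v;
    count++;
    return 0;
}

/* Returns 0 and stores the oldest value, or -1 if empty. */
int rb_get(int *out)
{
    if (count == 0) return -1;
    *out = buf[head];
    head = (head + 1) % RB_SIZE;
    count--;
    return 0;
}

#ifdef TEST
/* Standalone build: cc -DTEST ringbuf.c && ./a.out */
#include <assert.h>
#include <stdio.h>
int main(void)
{
    int v;
    assert(rb_put(1) == 0 && rb_put(2) == 0);
    assert(rb_get(&v) == 0 && v == 1);   /* FIFO order preserved */
    assert(rb_get(&v) == 0 && v == 2);
    assert(rb_get(&v) == -1);            /* empty again */
    printf("ringbuf tests passed\n");
    return 0;
}
#endif
```

Compiled without -DTEST the file is just the module; with it, the file is its own little test programme.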

PeterO

Discoverer of the PI2 XENON DEATH FLASH!
Interests: C,Python,PIC,Electronics,Ham Radio (G0DZB),1960s British Computers.
"The primary requirement (as we've always seen in your examples) is that the code is readable. " Dougie Lawson

PeterO wrote: ↑
With edit/compile/test cycles easily taking 15 minutes you quickly learn to plan out your code rather than "designing at the keyboard" which seems to be popular these days.

I must respectfully disagree with this. I have worked on big codebases which indeed took over 10 minutes to compile, and I found this made me unhappy because such long compilation times take me outside 'the flow'. It makes me hesitant to compile & test often, and nudges me towards the extra step of solving partial problems in separate projects first, before integrating them into the big project. Especially in the initial phases of tackling a programming problem, where I haven't yet fleshed out how my data structures, algorithms and/or API have to look.

If you prefer edit/compile/test cycles of 15 minutes because that supposedly makes your design better because it's not 'designed at the keyboard' (where else? on paper? on a whiteboard? in your head?) it sounds like a mild case of Stockholm Syndrome to me.

Speaking of Stockholm... Embarcadero Delphi would compile that MUCH faster... I heard from some developers that it compiles 5 million lines from an HDD in about 30 seconds on an i7.

Linux is like a woman: both want 180% of your time...
You want speed, Java 9.8x? Throw it out of some Window(s)!
My girlfriend is terribly immature - she always sinks my boats in the bathtub!

Actually it was not. It was designed by a team at Honeywell Bull led by Jean Ichbiah.

Yes, and the requirements were specified by a working group of the DoD; it was Bull who won the development contract. (Source: Wikipedia)

History

In the 1970s, the US Department of Defense (DoD) was concerned by the number of different programming languages being used for its embedded computer system projects, many of which were obsolete or hardware-dependent, and none of which supported safe modular programming. In 1975, a working group, the High Order Language Working Group (HOLWG), was formed with the intent to reduce this number by finding or creating a programming language generally suitable for the department's and the UK Ministry of Defence requirements. After many iterations beginning with an original 'Straw man proposal' the eventual programming language was named Ada. The total number of high-level programming languages in use for such projects fell from over 450 in 1983 to 37 by 1996.

The HOLWG working group crafted the Steelman language requirements, a series of documents stating the requirements they felt a programming language should satisfy. Many existing languages were formally reviewed, but the team concluded in 1977 that no existing language met the specifications.

Requests for proposals for a new programming language were issued and four contractors were hired to develop their proposals under the names of Red (Intermetrics led by Benjamin Brosgol), Green (CII Honeywell Bull, led by Jean Ichbiah), Blue (SofTech, led by John Goodenough) and Yellow (SRI International, led by Jay Spitzen). In April 1978, after public scrutiny, the Red and Green proposals passed to the next phase. In May 1979, the Green proposal, designed by Jean Ichbiah at CII Honeywell Bull, was chosen and given the name Ada—after Augusta Ada, Countess of Lovelace. This proposal was influenced by the programming language LIS that Ichbiah and his group had developed in the 1970s. The preliminary Ada reference manual was published in ACM SIGPLAN Notices in June 1979. The Military Standard reference manual was approved on December 10, 1980 (Ada Lovelace's birthday), and given the number MIL-STD-1815 in honor of Ada Lovelace's birth year.

Musketeer wrote: ↑
Embarcadero Delphi would compile that MUCH faster... I heard from some developers 5 million lines on HDD per 30 sec on some i7.

Delphi used to be Borland Delphi, and was based on Borland Turbo Pascal (using a Pascal dialect called Object Pascal). It compiled blindingly fast, thanks to its single-pass compiler and the lack of header files (Pascal believes strongly in defining everything up front in a specific order, with the unfortunate consequence that Pascal programs read from back to front).

Anders Hejlsberg was the original author of Turbo Pascal and the chief architect of Delphi.

Where else do we see Hejlsberg? At Microsoft, where he designed C#, and now TypeScript. The .NET library C# uses looks much like the VCL (Visual Component Library) that Delphi is built on. C++Builder is a Delphi clone with C++ behind it instead of Object Pascal (although you can mix & match). Currently these products are somewhat combined in RAD Studio, owned by Embarcadero. They have a free Community Edition.

"You can't actually make computers run faster, you can only make them do less." - RiderOfGiraffes

Exactly. There is a big difference between specifying the requirements a new language should meet and designing said new language. The former was done by a DoD committee, the latter by Jean Ichbiah and co.

Thirdly, we have just suffered large performance hits on account of mitigations for Spectre and Meltdown. We should be willing to sacrifice a few percent performance for correctness.

I just noticed the lead programmer of FidoBasic has a fur coat which is looking a little grey. Fido barked "get back to work"; almost everyone around here is over 50 in dog years.

The dog developer continued with a growl: by replacing all pointers, even the canine ones, by uint64_t, I've discovered it's much easier to program the memory management units, DMA controllers, GPU registers, Ethernet controllers and programmed IO ports needed in all the Linux device drivers. Since Linux is more than 80 percent device drivers, each of which has 50 percent extra code to deal with hardware bugs, it is now at least possible to eliminate all pointer-related bugs in the Linux kernel without even using FidoBasic, or Rust for that matter.

Moreover, if you further replace JavaScript by FidoBasic when running untrusted code in the web browser, you can safely turn off those Spectre Meltdown mitigations and revert that factor-of-4 speed regression in the Linux system function-call mechanism.

Out of all the code I have ever written, or maintained for others, I have not yet run across a bug that is the result of using pointers. More often bugs come from the lack of sanity checking on values being passed around.

As for data locality, in normal cases you allocate a large array of the structure that you are using in a list, tree or other pointer-linked structure (memory allocation is costly, so it is to be minimized at all costs; the same reason it is a good idea to avoid new in C++), and then only allocate another array when you run out of elements in the one already allocated.
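A minimal sketch of that idea (the names are invented for illustration): allocate nodes in large chunks and hand them out one at a time, so building a list costs one malloc per thousand-odd nodes instead of one per node. Freeing individual nodes is deliberately left out; pools like this are typically released wholesale or kept for the programme's lifetime.

```c
#include <stdlib.h>

struct node {
    int value;
    struct node *next;
};

#define POOL_CHUNK 1024

static struct node *pool = NULL;       /* current chunk of nodes */
static size_t pool_used = POOL_CHUNK;  /* forces allocation on the first call */

/* Hand out the next node from the pool; malloc only when the chunk is spent. */
struct node *node_alloc(void)
{
    if (pool_used == POOL_CHUNK) {
        pool = malloc(POOL_CHUNK * sizeof *pool);
        if (pool == NULL)
            return NULL;
        pool_used = 0;
    }
    return &pool[pool_used++];
}
```

Consecutive allocations also land next to each other in memory, which is where the data-locality benefit comes from.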

Bounds checking in C is no more costly than in any other language. The difference is that the programmer has to implement bounds checking manually, and avoid fence-post bugs, which I have seen in code very often; this is one bug that I thoroughly check for in every bit of code I write or look at while maintaining.

In C we get to implement our own bounds checking, which in some cases may be faster than in other languages. If it is known that a certain range is available, we may be able to perform multiple accesses without explicitly checking the bounds each time, and without ever reaching the limits. Though one does have to take some care when doing such things.

Even in BASIC when using pointers you can reference beyond the end of a block that is being used, when doing indexed access off of a pointer. The same for Pascal when using Pointers.

Unfortunately many programmers do not bother to do any kind of bounds checking at all, and that can cause some interesting bugs. Bounds checking is as important as sanity checks.
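As a hedged illustration (the array and function are invented), this is the kind of manual check being described, combining a sanity check on the passed-in value with a loop condition that avoids the classic fence-post trap: writing `<=` where `<` was meant reads one element past the end.

```c
#include <stddef.h>

/* Sums arr[0..len-1], sanity-checking len against the array's capacity. */
static long sum_checked(const int *arr, size_t len, size_t cap)
{
    if (len > cap)       /* sanity check on the value passed in */
        return -1;
    long total = 0;
    /* '<', not '<=': i == len would be one past the end (fence-post bug). */
    for (size_t i = 0; i < len; i++)
        total += arr[i];
    return total;
}
```

The check on len costs one comparison per call, not one per element, which is the sort of place hand-rolled bounds checking can beat a language that checks every access.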

RPi = The best ARM based RISC OS computer around
More than 95% of posts made from RISC OS on RPi 1B/1B+ computers. Most of the rest from RISC OS on RPi 2B/3B/3B+ computers

Problem is human psychology too: tests show that people tend to judge a word by its first and last characters.

So people read "Heater" at once even when you write "Heaterar" or "Heatr" (especially if it is the subject), while "Aheate" is noticed as wrong fastest.

This is exactly why we have compilers that check variable definitions; we humans are able to overlook typos like these and can accidentally mistype variable names for that reason. I would say that's actually a weakness in languages like Python and JavaScript, where typos like that can be missed if the circumstances are right.

Problem is human psychology too: tests show that people tend to judge a word by its first and last characters.

So people read "Heater" at once even when you write "Heaterar" or "Heatr" (especially if it is the subject), while "Aheate" is noticed as wrong fastest.

What problem?

I have long suspected that when people are reading, after they get out of infant school, they are not looking at all the letters in a word one by one and figuring out what the word is. Rather, the shape of the word as a kind of "blob" suggests immediately what the word is, and given the context of what they are reading they know what the word is. Pretty much without thinking about the actual letters and words, they get the meaning out of the text just by its overall shape. Only when things make no sense at that high level do they have to go back and check the details.

Therefore, it makes sense to me to make variable and function names in programs familiar words and camel case.

For example:

One might have a function to calculate an error in a position measurement. Why not call it that:
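For instance (a hypothetical signature, not taken from any real code base), the name can simply say what the function does:

```c
/* No comment needed: the name carries the meaning. */
double calculateErrorInPositionMeasurement(double measured, double expected)
{
    return measured - expected;
}
```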

This is exactly why we have compilers that check variable definitions; we humans are able to overlook typos like these and can accidentally mistype variable names for that reason. I would say that's actually a weakness in languages like Python and JavaScript, where typos like that can be missed if the circumstances are right.

I kind of sort of agree. But not.

It's great to have compilers that check as many silly mistakes as possible.

But I observe the following:

1) If you have variable or function name typos in your code you will find out soon enough: either it does not compile (in C or whatever) or it fails badly when you run it for the first time (in Python, JS or whatever).

2) The compilers we have don't check most silly mistakes anyway, integer overflow, out of bounds array access, random pointer access etc. All of which are more of a problem than typos in symbols.

3) All code that matters should have unit tests and integration tests in place. If you have those then code with typos in Python or JS will show up very soon. If you don't have those you are producing untested junk anyway so who cares.

All in all I'm not sold on this compiler checking thing as a means of finding bugs. It is of course required so that they can actually make something that at least tries to run from your source.

Problem is human psychology too: tests show that people tend to judge a word by its first and last characters.

So people read "Heater" at once even when you write "Heaterar" or "Heatr" (especially if it is the subject), while "Aheate" is noticed as wrong fastest.

What problem?

I have long suspected that when people are reading, after they get out of infant school, they are not looking at all the letters in a word one by one and figuring out what the word is. Rather, the shape of the word as a kind of "blob" suggests immediately what the word is, and given the context of what they are reading they know what the word is. Pretty much without thinking about the actual letters and words, they get the meaning out of the text just by its overall shape. Only when things make no sense at that high level do they have to go back and check the details.

Therefore, it makes sense to me to make variable and function names in programs familiar words and camel case.

For example:

One might have a function to calculate an error in a position measurement. Why not call it that:
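What often turns up instead looks like this hypothetical snippet:

```c
/* calculate error in position measurement */
double cepm(double m, double e)
{
    return m - e;
}
```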

See what happened there? The function got a cryptic name that is hard to read. Then they added a comment, because the boss says comments are a good idea, that actually says what it is.

Well, why not delete the redundant comment and use the same text as the meaningful name itself?
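Something like this hypothetical rewrite, where the comment's text becomes the name itself:

```c
/* The old comment is gone; the name now does its job. */
double calculateErrorInPositionMeasurement(double measured, double expected)
{
    return measured - expected;
}
```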

I agree with you up to a point. Sometimes it is worth dropping vowels in names (or using common shortened versions of words), as names become unreadable, and more prone to typing errors, when they get longer than about 12 characters.