The initial information was to add the undocumented “UseRoaming no” to ssh_config, with no other information provided on the “upcoming” CVE. This occurred (at least 5 hours?) before a fix was committed.
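For reference, the stop-gap was a one-line client configuration change, something like the following in `~/.ssh/config` or `/etc/ssh/ssh_config` (exact placement is up to the admin):

```
# Disable the undocumented roaming feature for all hosts
Host *
    UseRoaming no
```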

And you seriously thought that no other information was going to be made available?

The “initial information” was in no way an attempt to “refuse to disclose it”. It was a heads-up to protect as many people as possible with a quick fix before a proper one (ripping out the code) could be made, which involved coordinating a release between a number of people in different countries.

“Full disclosure” does not mean “instantly release all details to everyone with no warning.” Most people involved in security would probably agree with that. Full disclosure can take time, as long as everything is eventually released.

Sure - my “NAS” is an old desktop. Anyone who is in the position of choosing which filesystem to use is probably middle rather than upper class - rich people will just buy an off-the-shelf NAS system, plug disks into it, and let it do its thing.

In general I think I understand what the author is saying, but I am entirely unconvinced that his replacement is much better. For example, he wants users to use a command line system and even a scripting language. I really don’t think he has seen real users.

As for his statement that using languages that are not memory-safe is engineering malpractice – I would tend to agree, if we had a truly mature software engineering field. But I don’t think we are anywhere close to that yet.

I always saw memory safety as an implementation problem rather than a language problem. There’s no reason a C implementation can’t guarantee that the application will immediately crash when out-of-bounds memory is accessed, like many languages with exceptions and such do. This is pretty similar to what some malloc implementations already do with guarding allocations and unmap upon free, although the ones that currently exist aren’t really silver bullets.

However, concerns about type safety are definitely more of a language problem…

There’s no reason a C implementation can’t guarantee that the application will immediately crash when out-of-bounds memory is accessed

I am afraid that the C type system is unable to deal with these issues, so it would be left to a runtime system check or a static analysis tool. The runtime system would be unacceptable because it would slow things down, and a static analysis tool cannot be guaranteed to find all instances.

An invalid array access is usually left as undefined behaviour for this very reason. I could see malloc being the basis for a runtime system, though.

Looking at what Rust or Haskell do with memory access is interesting: Rust tries to contain it through ownership, and Haskell just wraps it in a function call.

Maybe finding most instances is enough - being able to ask for one large object and treating it the way malloc treats an address space is a fact-of-life loophole in most general-purpose safe languages, and a common technique in areas like embedded runtimes and emulators. Runtime checks are also a common solution in a few popular languages, but I suppose the trade-off is more acceptable there. :)

Runtime checks are a great example of why the ‘culture’ of C++ is a benefit for performance: if you explicitly check the bounds in some manner yourself and then access the element, you definitely do not want the bounds checked again. So on a vector you can use .at(…) to check or operator[] to avoid the check.

I think what would be nice would be a way of proving you have checked the bounds for a static analyzer in the compiler, and it can assume a certain range is thus valid. That would be a great way to discount the areas you don’t need to worry about.

I love how he promotes the usage of VLAs only to give plenty of warnings later on how easily this can fail for larger objects and that a user can exploit that to crash your program.
At least with malloc, you can check whether the allocation failed. With VLAs, you just have to live with the stack overflow if you requested too much stack at once.

The only thing I agree on is the stdint.h usage. It greatly improves readability. Most of the other points are more of an experimental nature or don’t matter (personal taste).

Why is it “bad practice” to declare the variables at the top of the function?
Just because you can doesn’t mean you should. And if your functions grow too large, you might have to think about splitting them up a bit, not scattering your variable declarations all over the place. Otherwise you end up with more cruft in the end, not less.

2) “#pragma once”

Way to go for portability. If you only care about the gcc/clang monoculture, this may seem logical, but it’s non-standard, so don’t use it. :P
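For reference, the portable alternative is the classic include guard, which every conforming preprocessor handles (the header and macro names here are illustrative):

```c
/* my_header.h */
#ifndef MY_HEADER_H
#define MY_HEADER_H

int my_function(int x);

#endif /* MY_HEADER_H */
```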

3) “restrict-keyword”

If you do numerical mathematics, go ahead and use it; I use it for my work as well. Most people, however, don’t even know how restrict works exactly and just add it everywhere, thinking it’s safe. In most cases the speed benefit won’t matter anyway, because your program is stuck in I/O 99% of the time.

4) “Return Parameter Types”

The convention ‘0’ for success and ‘1’ for error is common knowledge. The bool proposal was kind of stupid, because you end up setting up conventions there as well. Does returning ‘true’ mean error or success?

5) “Never use malloc, use calloc”

Seriously? This can actually shadow bugs in your program (forgotten 0-terminators on dynamic strings) which can fuck things up later on. Also, it’s slower. If you use calloc everywhere, you basically admit that your data structures are messed up and you have let your program grow too much. Or that you have simply not understood the language/machine.

And the most important point: make up your own minds, people! If you prefer your own coding style, then use it. If it’s too weird, people may be reluctant to contribute, but in C you can’t go too wrong anyway.
Nevertheless, I like the gofmt approach. :)
Also, always take those “how-to’s” with a grain of salt. This is merely a reflection of the author’s opinion. Hell, take what I say with a grain of salt. Read the docs, read the standards(!) and inform yourself. C is simple enough that you can make up your own mind on these technical details.
If you are still thinking about using VLAs in your code, take a look at the GCC implementation.

Guides like this are the reason so many people are still writing bad code, because they let others think for them instead of informing themselves.

I don’t think 1 for error is that common a convention, though I agree non-zero is a relatively common way to signal failure. In a lot of the code I work on, almost everything returns an int that is 0 for success and -1 for failure.

I personally think it’s a “bad practice” (whatever that means – to be avoided, I guess) to declare variables outside the scope in which they are used. If you need a variable inside one arm of an if statement, put it in there, not at the top of the block. Inline loop counter declaration is essentially the same thing.

Regarding (1), declaring variables as needed instead of at the beginning of the block can help, in my experience. In ANSI C, it is easy to miss that a variable has never been initialized, or has actually vanished from the code. Also, patterns of variable reuse (“I am going to reuse i here…”) probably don’t emerge as often.

So it’s not necessarily that declaring variables at the top is bad; it is just nicer to declare them as you go.

In the end it doesn’t matter. I often reuse my loop variables, you probably don’t. I guess even if we worked on a project together this wouldn’t be too much of an issue, anything else is not important.

Way to go for portability. If you only care about the gcc/clang-monoculture, this may seem logical, but it’s non-standard, so don’t use it. :P

I actually think of this as a nice bonus! :P

Every time I have used some compiler other than gcc/clang, there have been horrible headaches in every corner (especially with IAR, damn it!). Although I must say that all my experiences outside the gcc/clang world have been with proprietary compilers. I might be somewhat biased.

I would be more likely to point to this with the disclaimer “You see this guy’s opinions? Do the opposite of what he says.”

Some of the compiler features he mentions are non-standard. This matters for me. I actually use a C compiler that isn’t GCC or clang on a regular basis (pcc). -march=native is often unacceptable for downstream distributors, and generally I’m annoyed when programs ignore my CFLAGS in favor of their own ridiculous optimizations. Usually I value a fast compilation far more than non-hot parts of the code being sprinkled with magic. As others have mentioned, “#pragma once” is also non-standard, and variable size arrays (i.e. alloca) can be a security risk.

No specific comments on types (though you should certainly use char to refer to UTF-8 octets, otherwise people who have to use your libraries or read your code will be annoyed). I use “unsigned” when I want an integer that’s at least 16 bits and don’t care about the specifics. That’s in line with the standard.

There are valid arguments for separating declarations from code, especially when you have resources you want to allocate and free. for loops are perhaps a case when this rule can be broken - not sure I have a strong opinion here.

Isn’t that at least half of any effective programming guide? Knowing how to write a program that compiles and runs in a given language is easy. Knowing how to write a good program that minimizes errors and maximizes readability, performance, security, and refactorability is hard.