As far as I can tell, there is no universal agreement on the definitions of strong or weak typing, and even if there were, the extent to which type annotations are mandatory or optional would be irrelevant.

In my view, a weakly-typed language must support implicit type coercion at least to some extent. C is a weakly-typed language which will, for example, happily add an integer to the ASCII representation of a character. Python is an example of a strongly-typed language which will complain loudly if you attempt the same. Yet type annotations in Python are entirely optional.

"In my view, a weakly-typed language must support implicit type coercion at least to some extent. C is a weakly-typed language which will, for example, happily add an integer to the ascii representation of a character. Python is an example of a strongly-typed language which will complain loudly if you attempt the same. But you'll never find type annotations in Python."

In my opinion, C doesn't have a character type (like a Pascal CHAR). C's 'char' is actually an integer type, which in most (all?) implementations happens to be an 8-bit byte. Because C doesn't have a string type either, we've become accustomed to representing strings as arrays of bytes (char). But that doesn't make a 'vector of chars' nominally equivalent to a 'string'; they are different concepts.

If you follow my reasoning, then you wouldn't say C has a construct for "adding an integer to the ASCII representation of a character". If C's 'char' had been named 'byte' instead, you would recognise the type for what it is: an integer type, not the character type C doesn't have.

Edit: If C did have a character type, I assume it wouldn't support direct arithmetic without a cast. Pascal doesn't support character arithmetic. .NET doesn't. JavaScript, which unifies characters and strings, doesn't. PHP does its own thing by re-interpreting the character as an ASCII digit.

'char' and 'unsigned char' are typically 8 bits; more precisely, they are by definition the smallest addressable integral unit on the system. The C standard pins sizeof(char) to exactly 1 and only guarantees that CHAR_BIT is at least 8, so 16-, 32-, or even 64-bit chars are legal.

At one point 7 bits was typical, hence ASCII is officially a 7-bit character-to-integer mapping; the familiar 8-bit "extended ASCII" code pages came later, and projects like Qt only rely on the 7-bit mapping.

Today, 8 bits is the norm, and we haven't gone above that, since having an 8-bit integral type is quite useful. However, there is nothing stopping anyone from building a machine with a 16- or 32-bit char/byte. On such a system UTF-16/32 would be the natural encoding, and ASCII wouldn't be supported as easily, because you would have to mask out a good number of the bits to get at the data.