No, because decoding a non-Unicode encoding requires table lookups, and UTF-8 requires multiple branches per byte. strnlen, by contrast, can be optimized to fewer than one branch per byte (typically; glibc's implementation is rather more complex than that), with no memory access beyond a linear scan of the string itself.

The way I usually write it is one switch on the byte ANDed with 0xF8, so there is one branch (aside from validation checks, which you still have to do when computing the length). Single-byte encodings can obviously be done in constant time, but I wouldn't expect them to be used much anyway.

The basic reason ArrayBuffer exists isn't related to strings at all but (as far as I understand) to things like letting JavaScript engines more easily optimize algorithmic code. So the original motivation for ArrayBuffer doesn't bear on this one way or the other.

I mean having both ArrayBuffer and Blob. To my mind (and, again, not having been involved in the discussion), I would have expected a single API for accessing binary data, with whether the data lives in RAM or may be spooled to disk being an implementation detail. Not wanting to offer a synchronous API for the spooled-to-disk case complicates that, of course.