Posted
by
Hemos
on Monday July 10, 2006 @09:46AM
from the like-doing-it-with-sandpaper dept.

lessthan0 writes "In 1995, Microsoft added long file name support to Windows, allowing more descriptive names than the limited 8.3 DOS format. Mac users scoffed, having had long file names for a decade and because Windows still stored a DOS file name in the background. Linux was born with long file name support four years before it showed up in Windows. Today, long file names are well supported by all three operating systems though key differences remain. "

Long filenames aren't all they are cracked up to be. I got made fun of once for using one. I can remember it so clearly now: we were in music theory class in high school and we had to use Finale on a Mac (OS 7 at the time) for our composition projects. I named one of my projects something like "Suso's Music Theory assignment number 4 for Mr. Becker 1993-9-24.mus" and saved it. A week later I was on the same Mac and noticed a file that wasn't mine called "Making fun of people who use really long filenames for their music theory assignments.mus". Nobody would admit to doing it but I knew who it was. I was devastated and never felt comfortable again in that class.

Now I'm scarred for life. I should have listened to my parents and gone with 8.3.

Technically incorrect: Mac filenames could be 255 chars, but at some revision of the Finder (I forget which), names were limited to 31 characters in practice. The underlying system remained capable.

HFS was limited to 31 characters.

HFS+, introduced in Mac OS 8.1, allows filenames of up to 255 characters, but Classic Mac OS never, for all intents and purposes, supported it.

If you're going to try to correct people, you should probably make sure you're correct yourself so you don't end up looking like an ass.

Even though others apparently claim you're joking, I personally am all for gratuitous words in file names. Sometimes I achieve this by gratuitously deep folder hierarchies, but usually I just randomly add keywords to files. I mostly use a GUI, so it doesn't stress me out too much, but it makes it much easier to find them two years later. (I also like my music files to accurately contain the name of the track, so a song like "Where is everybody?" becomes "(maybe the artist's name, album etc. -) 03 - Where is ever

I mostly use a GUI, so it doesn't stress me out too much, but makes it much easier to find them two years later.

Many shells now support the use of the "Tab" key to expand a filename (and list filenames that match what you have typed in so far). Also, if your music player is an iPod then you can format it with HFS+ and lose the FAT restrictions - though, I believe the iPod actually just mangles the filename and uses the ID tags.

Perhaps that's a bit long of a file name, but it's at least descriptive. I can't tell you how many times I've gotten files titled Agenda 01.doc when they should be more like Tech Committee Agenda 2006-05-01.doc -- it's not excessively long, but with a file name like that I know EXACTLY what's in that file.

And now you can burn it to an ISO 9660 CD-R and be sure of getting the right filename, every time, even on ancient versions of SunOS/Solaris that refused to read Joliet! (Time heals all wounds. Slashdot threads on cross-platform file naming conventions reopen them.)

The only extra quoting necessary is in commands with variable substitution. And (while it may seem confusing), that syntax works even when the filenames have quotes internally. The double quotes identify the contents to be treated as a single token, with interpolation to be performed before passing on to ``command'', which is what you wanted.

Also, the $() syntax is your friend. But remember to wrap it in double quotes too; you don't want the shell to expand it AND THEN tokenize it.
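The two comments above can be sketched in a few lines. This is a minimal illustration, and the file name used is made up for the example:

```shell
# Sketch: why "$f" and "$(...)" both want double quotes.
# (The file name here is hypothetical.)
f='my file with spaces.txt'
touch -- "$f"                  # quoted: the shell passes one argument
printf 'hello\n' > "$f"

# Quote the command substitution too, or its result gets re-tokenized:
contents="$(cat -- "$f")"
printf '%s\n' "$contents"
```

Unquoted, `$f` would be split into three words and `touch` would create three files.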

Ugh, I hate that behaviour. I wish it would use readline's default behaviour. The alternatives ('microsoft' and 'microsaucer') would be listed, and after that the original prompt completed up to 'micro' would appear.

(Readline is the line input library used by Bash and lots of other GNU/Linux software that presents a command-line interface).

Every time you create a file with a long file name, NTFS creates a second file entry that has a similar 8.3 short file name. A file with an 8.3 short file name has a file name containing 1 to 8 characters and a file name extension containing 1 to 3 characters. The file name and file name extension are separated by a period.

If you have a large number of files (300,000 or more) in a folder, and the files have long file names with the same initial characters, the time required to

Autocomplete via the Tab key was only made available with WinXP's cmd.exe. Prior to that, the tildes were the way to go on the command line, at least on boxes where I didn't just install the FreeDOS command.exe, which allowed tab completion.

OS X is not case sensitive by default. It is case preserving, meaning that "Foo.txt" will still be "Foo.txt" when you move it or whatever (unlike in Windows, where it could turn into "FOO.TXT"), but both names are still exactly the same file. Beware of this when copying files from a Linux (or otherwise case sensitive) filesystem to a Mac!

Now, OS X does have the option to use case sensitive HFS+ (or UFS, for that matter), but last I heard either is likely to cause problems if you try to use it as the root volume.

Linux always is, by default (I don't know if you can make it otherwise without a LOT of hacking).

Windows: it is case "retentive" by default (it remembers cases as typed) but not case sensitive. It (full case sensitivity) can be enabled through a registry hack or two, or by selecting the "enable case sensitivity" option when installing SFU, at the cost of possibly breaking backwards compatibility with many applications.

Mac: OS 9 (and earlier) were case retentive only. OS X is case retentive (not sensitive) by default; however, if you install on a UFS filesystem it will become case sensitive, and just as with Windows, possibly breaking backwards compatibility with many applications.

Tada! Two sentences. I imagine, were I a Perl coder, I could have done it in half of one, but there you go.

True enough, but the drawback of using Perl-style syntactic obfuscation to compact this /. story is that people would have to stare at the resulting half a sentence for a lengthy period of time before they managed to figure out what the hell you are trying to say.

Why are computer file names and conventions and protocols
so messed up? It's bizarre -- and Microsoft has been one of the
worst offenders with one of the most powerful positions and
opportunities to make it a better filename-naming world.

I had worked in the DOS world long ago, and
I'd always been frustrated with not only the restriction of the
8.3 naming convention, but the added imposition of:

the requirement the ".3" portion be satisfied, i.e., if
you didn't give a ".3" extension, it wasn't valid.

the semantic mapping of the extension to filetype, WTF?

the implied (don't remember if it was canonical) semantic
that no ".3" extension meant the file was a directory

the case insensitive nature of file names

etc. (or should I say, .etc)

Many years later, I had opportunity to consult in the
Windows/DOS world after having worked in the Unix world for over
a decade -- figured Microsoft had had enough time and money to
work out the kinks in what had obviously been an early-technology
constraint for the brain dead old DOS naming restrictions. Not.
Sigh.

And then the transition was a nightmare, whoever conjured up
the VFAT naming format and the "tilde" mapping backwards
compatibility to FAT names should have been shot. A golden
opportunity lost.

And then everything swings completely the other direction
where anything goes. This may curry favor with users, but wreaks
havoc on billions of lines of code which all of a sudden choke on
what had been simple parsing routines -- fixable, but at great
expense. I still think this was a paradigm shift that somehow
could have accommodated the user space/community but still
allowed some sanity in the machine world.

But layered on, or dovetailed into that quagmire is the
Microsoft insistence they "know better than thou"... and the
condescending insistence of dragging the ".3" extension nightmare
into the new rules for file naming. Would have been okay to
"allow" ".3" naming, but to impose the bizarre rules and
behaviors Microsoft has? (How many of you have files named
picture.jpg.jpg.jpg out there?)

Options to show extension, defaults to hide extensions, and
continued reliance and semantics applied to extensions continue
to make the filenaming world a landmine field.

And, Microsoft dares to allow mixed case naming, but does case
insensitive handling of file names... don't even get me started
about some of the bizarre results and buggy behavior I've traced
to that. I only wish I'd had a chargeback code for all of the
time I've spent fixing and debugging systems that all come back
to the file naming. Sigh, again.

All of this isn't to let Unix and Unix style file naming
skate. I've had problems, though fewer, there. But, at least
it's seemingly (to me) more consistent and predictable, though
there has been what I call "Windows" creep in that there have
appeared some apps that somehow think managing and imposing
"transparently" the extension to "file type" mapping is a good
thing (it's not).

(One of the funniest Unix debacles I experienced was debugging
a groups application -- they were moving files around and losing
all but one each processing cycle... turned out they were remote
copying from one Unix that had 14 (or more, can't remember) char
limit on file names to an old SunOS system that allowed only 11.
The remote copy that moved files from one system to the other for
subsequent processing did so without complaint, the receiving
side silently truncated the incoming files -- which were
identical in name through 11 chars... essentially copying the
incoming files over and over again on top of the same file...
Sigh and sheesh!)

You have some good points, but I really can't agree with two of your complaints...
"the semantic mapping of the extension to filetype, WTF"
It seems far better to me than mime-types or magic strings. Mime-types fail due to not being actually encoded on-filesystem, and magic strings require users to use a hex editor to try and identify an alien file type.
"the case insensitive nature of file names"
Case sensitivity is a big usability issue for people, so burdening the few (the programmers) so that the majority (the users) don't get confused, is a fair trade of IMHO.

Mime types could be encoded on-filesystem if FS designers chose to (Freedesktop.org has a specification for doing so in a cross-desktop fashion [freedesktop.org] if you're using a UNIX with extended attributes). In any case, mapping files to file types by extension has issues to do with user training and multiple extensions (in particular, if I send you Important.jpg.vbs, which extension are you going to pick for the filetype, and which one is the system going to use? The wrong answer results in unexpected behaviour, whic

The problem is that extensions are part of the filename, i.e. they are arbitrary. Mapping arbitrary data to meta information is stupid at best, dangerous usually and in combination with hidden extensions and automatic execution it is a blatant disregard of even the most basic security procedures.

Magic strings are the "right way", or at least close to it. Have you ever looked at the first 4 bytes of a Java .class file? It's CA FE BA BE. Guess what... even if it somehow gets named foobar.OMGWTFIsThisFileType, the JVM can still pick it out as a Java bytecode file. Why? How? All Java bytecode files always start with CAFEBABE. If it starts with CAFEBABE, the JVM can semi-safely assume that this is a valid bytecode file. But... what if some other file "collides" with that signature?
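The magic-number check the comment describes can be sketched from the shell. The file name below is deliberately meaningless, and the bytes are written in octal (`\312\376\272\276` is CA FE BA BE) for portability; the `file` utility does this kind of lookup for real against a whole database of magics:

```shell
# Sketch: identify a file by its magic number, ignoring the extension.
printf '\312\376\272\276' > foobar.OMGWTFIsThisFileType

magic=$(od -An -tx1 -N4 foobar.OMGWTFIsThisFileType | tr -d ' \n')
if [ "$magic" = cafebabe ]; then
    echo "smells like Java bytecode"
fi
```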

the requirement the ".3" portion be satisfied, i.e., if you didn't give a ".3" extension, it wasn't valid.
the semantic mapping of the extension to filetype, WTF?
the implied (don't remember if it was canonical) semantic that no ".3" extension meant the file was a directory

Not true. I used names with no extension for my Wordstar files back in DOS days. Since that's what most of my files were, I made that the simplest. Directories usually had no extension, but you could have if you wanted (some programs did that for their private data).

Windows 9x and above though do enforce rules on extensions; but worst of all, they hide some, or all, of them by default. Thus Anna-Kournikova.jpg.exe. The old Mac OS had it right: the filetype flags were not user-created or normally visible, though you could get tools to hack them if you wanted.

Not quite. I still remember the pain of trying to use standard formats like JPEG across multiple applications back in the system 6/7 days.

Mac OS had a separate file type, and file creator, code. So apps could share filetypes, but have distinct creators. But egomaniacal programmers often made their apps change the codes. That's when you needed things like FileTyper.

unix also has "anything goes", but a strong sense of "not everything is wise".

Under ext3, Linux, all of the following are valid filenames:

"foo bar"

"-rf"

"*"

"\ ?"

" "

"foo\bar.*"

Don't get me started on the havoc newbies manage to make when trying to deal with suchlike. In general, people know this will blow up the first time someone tries a naive script, and tend to avoid all of it. The only thing borderline common is filenames with spaces in them; even this breaks some scripts, but arguabl
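The names from the list above really are legal, and a naive script breaks on most of them. A minimal sketch of creating them and iterating safely (the directory name is invented; the `read -d ''` idiom is bash-specific):

```shell
# Sketch: legal-but-awkward ext3 names, handled with consistent quoting.
mkdir ext3-demo && cd ext3-demo
touch -- 'foo bar' '-rf' '*' ' '

# Safe iteration: let the shell glob, keep each name quoted.
# (A naive `for f in $(ls)` would word-split 'foo bar' into two tokens.)
for f in *; do printf 'entry: [%s]\n' "$f"; done

# NUL separators survive spaces and leading dashes in find pipelines:
find . -mindepth 1 -print0 | xargs -0 -I{} printf 'found: [%s]\n' {}
cd ..
```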

> the requirement the ".3" portion be satisfied, i.e., if you didn't give a ".3" extension, it wasn't valid.

Your memory is faulty here--that is not true; not even slightly.

> the semantic mapping of the extension to filetype, WTF?

Long predated MS. Found even in UNIX before MS existed. And still widely used even on UNIX/Linux/BSD. The big flaw that DOS had here (IMO) was making the extension determine whether a file was executable. Having an executable flag is a much better solution. But the approach that DOS took was widely used in other OSes at the time.

> the case insensitive nature of file names

There are plenty of arguments on both sides of this one. I'm more used to/more comfortable with/prefer case-sensitive filenames, but I can't bring myself to claim that one option is better than the other.

I thought VFAT was actually a fairly clever solution to the problem of providing backwards compatibility with the horrors of 8.3, and MS really had no choice but to provide backwards compatibility. I have a lot of complaints with the things MS has done over the years, but I actually kind of admire VFAT.

> defaults to hide extensions

This, on the other hand, is one of the biggest mistakes that MS ever made! Someone should have lost their job over this idiocy!

As a side note, I have to agree with everyone who says that the original article is terrible. The list of characters to avoid for portability is missing several, and the article completely overlooks one of the biggest and most headache-inducing issues--i18n and character encodings. This is one area where UNIX/Linux's ultra-flexibility actually gets it in trouble, since you can have file names with different encodings in the same directory. I actually had a mix of latin1 and utf8 filenames in my home directory for a while, and NOTHING would display them all correctly. And I bet it's even worse if you mix-and-match various CJK encodings. Windows, I'm told, forces everything to utf16, which would not have been my first choice, but at least it's consistent.

Try copying a 40-character file name from a Windows server to an OS X client. What happens? Well... it depends on whether you used AppleTalk or SMB to connect.

Which is always the issue. Windows is the weakest link. Services for Macintosh (now deprecated) is the thing that changes the names to be "mac safe" even though the idea of "mac safe" has long since changed since SfM was created. "Luckily" SfM is gone in Vista.

Windows file names can be up to 255 characters, but that includes the full path.

Holy shit is this true? That seems like a brain-dead limitation to have in the year 2006.

Oh and Mac users didn't really have support for long file names until OS X. HFS has always supported 255-character file names, but in OS 9 and earlier, the Finder would only recognize up to 31 characters for a file name, so it was basically impossible to have a file name greater than 31 characters even though the file system allowed it.

Yes and no. That was a limitation of Windows 9x (a holdover from DOS and Unix), and still exists in the ANSI versions of the NT APIs. However, the native NT Unicode APIs support 32k characters for the path. I don't know if there's a 255 char limit on individual names for NT, off the top of my head. Though it's possible that the number of programs still using the ANSI APIs (since the Unicode version only works on NT, but the ANSI version works on 9x as well) may impose an artificial limit of 255 char paths o

I think the biggest problem I had one day was when I was trying to remove a file in Linux whose first character was -

That is what the -- option is for. It signifies that there will be no further options, so anything following it that starts with '-' will be interpreted as a filename. rm -- -funny-named-file will do the trick.

Of course, the funny characters are usually expanded by the shell, not rm, so it still won't work in many cases. Unix rules sometimes, doesn't it?
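The two standard escapes from the leading-dash trap mentioned above, as a quick sketch (the file name is made up):

```shell
# Sketch: removing a file whose name starts with '-'.
touch -- '-funny-named-file'
rm -- '-funny-named-file'      # '--' ends option parsing

touch ./'-funny-named-file'    # recreate it for the second method
rm ./'-funny-named-file'       # a path prefix can't look like an option
```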

My favorite shell-expansion moment: when I was a new Unix user long, long ago (freshly coming over from VMS), I wanted to remove one funny-named file in a directory. I discovered that rm had this cool switch "-i" that would prompt for removal on each file. Great! I'd just say "yes" to the file named *, or whatever I'd accidentally created. So, being a VMS user (and thus used to switches that went anywhere on the command line), I typed this:

$ rm * -i

...and got the message "-i: No such file or directory". Ooops.... I learned a lot that day...

I've always considered this to be a borderline bug, since this also happens in wildcard expansion. If you do a "command *" in a directory where there are files beginning with a -, the wildcard will expand in a way that makes the command take the filename of file beginning with - as an option/argument. I've never found any "evil" way to exploit this, but it's always bothered me a bit.

I've got a CD-ROM that is unreadable under Windows XP because a Mac user put the files in a directory containing a '>' character.

If I can turn off Joliet comprehension I'll have access to the files in their original ISO9660 8.3 glory.

It's unfortunate that Microsoft's Joliet driver doesn't realize it's presenting names the OS can't tolerate. Otherwise it could replace the forbidden characters with % escapes before returning them to the OS, or, alternately, hand the ISO9660 name to the OS if the Joliet name was forbidden by Windows' rules.

So, your OS supports long filenames, huh? Then why doesn't the vendor use them for all the cryptically named shared libraries, scripts, etc. that clutter up any modern OS system directory?

The way I look at it, the day I look at something like "d3d8.dll" or whatever drek is fermenting in \WINDOWS\system32\ and it is actually named with a descriptive filename, then that OS will truly support long filenames.

Not sure where the Linux crowd compares, but OS X is getting better with each revision. Classic Mac OS had this one down (mostly) cold.

Why not simply follow the POSIX standard*? You can avoid a lot of hassles that way. Isn't that why we have standards?? I know, it doesn't resolve the conflict with Windows case "insensitivity", but... it does provide interoperability between POSIX-compliant OSes.
* upper/lower case alphabetic characters, numeric digits, underscore, dash, and period.
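A check against that POSIX portable set can be sketched in a few lines of shell. The helper name `is_portable` is ours, and the extra leading-hyphen rule is a common practical addition rather than part of the standard:

```shell
# Sketch: test a name against the POSIX portable filename character set
# (letters, digits, period, underscore, hyphen).
is_portable() {
    case $1 in
        '' | -*) return 1 ;;              # empty, or option-lookalike
        *[!A-Za-z0-9._-]*) return 1 ;;    # any char outside the set
        *) return 0 ;;
    esac
}

is_portable 'Tech_Committee_Agenda_2006-05-01.doc' && echo portable
is_portable 'foo bar!' || echo not-portable
```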

"Though the file system supports paths up to ca. 32,000 Unicode characters with each path component (directory or filename) up to 255 characters long, certain names are unusable, since NTFS stores its metadata in regular (albeit hidden and for the most part inaccessible) files; accordingly, user files cannot use these names."

The article incorrectly states "Windows file names can be up to 255 characters, but that includes the full path. A lot of characters are wasted if the default storage location is used: "C:\Documents and Settings\USER\My Documents\"." I will grant that this may have been a limitation in the past, but XP has had NTFS from the start, and NTFS is by far the most common Windows FS today.

PATH_MAX (strictly, NAME_MAX on POSIX) is what's supposed to cap the length of any single path segment. NT "the OS", and NTFS "the filesystem", support completely qualified path concatenations that are like 32k or so long.

You can, using CMD.EXE, create a directory 250ish chars long. Then you can go into that directory, and create child dirs with a similar length, and so on, for quite a while.

Now, what happens when you try and access that file you made?

It depends entirely on the application.

In XP and earlier, explorer.exe got pretty confused around 4096 chars. When you were viewing a DFS redirected share, explorer got confused even earlier.

In CLR 1.0, if you have relative directory traversal, you can access paths which are longer than 255 chars, but any of the "open by path" routines cap it at 255 chars (including the filename!). I filed a bug on this that the CLR guys closed "won't fix - we just do what Win32 does". (Gosh guys, I thought .NET was going to free us all from Win32. Guess not.)

So, the NT native APIs support enormous paths, NTFS supports them, but depending on which libraries your application uses, you probably can't do much better than 250 chars total - path _and_ filename.
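The nesting game described above works on POSIX systems too: each *component* is capped (NAME_MAX, usually 255 on Linux), but nothing stops the *total* path from growing far past 255. A minimal sketch, with made-up segment lengths:

```shell
# Sketch: each directory name is legal on its own, but the combined
# path is 600+ characters long.
base=$(mktemp -d)
seg=$(printf 'x%.0s' $(seq 1 200))      # one 200-character component
mkdir -p "$base/$seg/$seg/$seg"
printf 'total path length: %s\n' "$(printf '%s' "$base/$seg/$seg/$seg" | wc -c)"
```

Whether a given application can then open files at that depth is, as the parent says, entirely up to the application and its libraries.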

A lot of what makes Microsoft "good" is its commitment to backwards compatibility. And a lot of what makes Microsoft so lousy is its commitment to backwards compatibility :/

I was doing tech support for Win95 way back when it came out, I think it was in 1995 (duh). I had a customer who wanted larger fonts on the desktop. I explained how to change the size of the fonts for desktop icons. As soon as we did, "Network Neighborhood" turned into "Network Neighborho...". Of course, the guys on the phone got a kick out of that and it was known as "Net Ho" for at least a week after that.

They have a whole block on "Avoid using these characters for maximum portability".

But, where's the exclamation mark? TONS of Windows people (including me) use exclamation points as the first character to put files/directories to the top of the list. Linux constantly chokes on these characters. But, no mention of it at all in this article.

But, where's the exclamation mark? TONS of Windows people (including me) use exclamation points as the first character to put files/directories to the top of the list. Linux constantly chokes on these characters. But, no mention of it at all in this article.

No it doesn't. Linux (like most UNIXes) has no problem with exclamation marks. In fact, the only characters specifically disallowed are NUL (for C compatibility) and '/'.

Your shell however, assigns a special meaning to the "!" character, and that special m
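The point the reply is making can be shown directly: the filesystem is fine with '!'; it's interactive bash history expansion that mangles it, and single quotes (or `set +H`) sidestep that. The file name below is invented:

```shell
# Sketch: '!' is a legal filename character on Linux.
touch '!important'        # single quotes: no history expansion
ls -- '!important'

# In an interactive bash session you could also disable the feature:
#   set +H
```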

Windows here sucks the most. NTFS is good, but all the backward compatibility cruft just drags the FS down. Once under Windows, I spent about half an hour with Explorer refusing to copy one file. Explorer kept insisting "File Not Found". The text file was there and perfectly editable in Notepad. I needed about 30 minutes to observe that Explorer was giving the error on only one file of the whole directory, and that file had the longest name. ZOMG!!! They still have a cap of 255 bytes on path(!) length!!!

I want documents not files. Sometimes multiple files make up one document (webpage + stylesheet + media), sometimes there are multiple documents in one file (zip).

When will anyone come up with a persistent storage system which allows me to attach arbitrary tags to documents and groups of documents? Drop the folders and give me 'search queries' on content and tags. Automatically save all data and don't bother me with giving it a name... When it's important I will give it the proper tags; until then, just remember it for me.

The purpose of the "OS" (it's actually not the OS here, but let's use that term to make the following discussion clear) is to provide the set of tools needed to implement your "paradigm" (again, not true, but it will do): your way of thinking.

As it turns out, having multiple "files" composing a "document" is easily mapped in a hierarchical layout. As a simple idea, put all the files into a node and call that node the name of the document.

The "OS" should not impose upon the applications, but should provide read

OS X supports up to 255 characters and can use the same characters as Linux, except for a colon (:).

In Terminal.app, you can create file names with a colon, but the character is mapped to a forward slash when seen in the Finder. On the other hand, you can use a forward slash in the Finder, and it is mapped to a colon on the command line.

Historically, Mac OSes use colon to separate folder names in a path.

There is a subtle restriction in HFS+. All files in HFS+ have their names in normalized Unicode [unicode.org], and in order to normalize in the first place, file names must be in valid UTF-8 encoding. You cannot use an arbitrary byte string for file names.

There is no such restriction for UFS on Mac OS X. I think UFS supports roughly the same characters as in BSD and Linux and any other Unices. If you're transferring files from Linux with names in a legacy encoding, you can create a UFS disk image and convert file names to UTF-8 before copying them to HFS+.

There's a whole new dimension of fun when your file names include non-Roman characters, such as Japanese.

First of all, there is the matter of which encoding the file names are in. Lots of Japanese Windows installs and their utilities still use Shift-JIS for file names. OS X, on the other hand, uses Unicode, and typically expects UTF-8 for file names from programs. In fact, it not only expects it, it enforces it, returning an error when attempting to use a file name which is invalid UTF-8.

Many command-line utilities that deal with archive files utterly fail on OS X when given archives using Shift-JIS file names, and many others improperly translate them as 8-bit ISO Latin-1. A few (such as the command line RAR archiver) are actually smart enough to make a system call to translate the file name from Shift-JIS to UTF-8.
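The translation those smarter tools perform is essentially what `iconv` does. A minimal round-trip sketch (the file names are invented, and this assumes an iconv build with Shift-JIS support, which glibc and macOS both ship):

```shell
# Sketch: Shift-JIS <-> UTF-8 conversion with iconv.
printf 'テスト' > name-utf8.txt
iconv -f UTF-8 -t SHIFT-JIS name-utf8.txt > name-sjis.txt
iconv -f SHIFT-JIS -t UTF-8 name-sjis.txt > roundtrip.txt
cmp -s name-utf8.txt roundtrip.txt && echo 'round-trip OK'
```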

And then there is the issue of Shift-JIS MP3 tags. If you open those with iTunes, not only do they get interpreted as ISO Latin-1, but irreversibly so if you do something that writes them back to the .mp3 file. (They get written back as a UTF-8 representation of the ISO Latin.) I've had luck in the past using a hex editor and SimpleText in Classic to convert them with much work, but I'm not sure what I'll do with the new Intel Macs that don't support Classic.

I know this might sound a bit offtopic, but since the post mentioned windows filesystems, I felt it might be a good place to throw this question...
Not many people know or have even used this, but NTFS has support for multiple streams of data in a single file, which is something that borrows concepts from object-oriented filesystems. This is scarcely, if at all, included in the regular Windows documentation (it is documented in the MS knowledge base http://support.microsoft.com/kb/105763/ [microsoft.com]). I thought it to be a nice idea for, say, media files, to store the audio in one stream and the video in another, or adding subtitles or metainformation in different streams in a very standard way. But for some reason nobody used that, not even Microsoft who designed the feature.
Does anybody have a clue as to why this has not been used?

Did you somehow miss the link? It basically said to remove files with a preceding '-' (-filename) you do 'rm -- -filename' or 'rm ./-filename'. And to remove a file with unprintable characters try 'rm file?with?unprintable?characters'.

It also shows how to do an rm with a wildcard. If you are having issues, be sure to run it with ls first, and then change it to rm only after checking that it is what you want.

Also what was not mentioned in the article was the difference between a pathname and a filename. Yes, Windows does 255 as a filename. But the more limiting item is that MAX_PATH is only 260 for the whole path, while on a typical Unix PATH_MAX is 1024 or more. Basically, *nix paths can be much longer and go much deeper.

Is this anything other than an attempt to dis Windows for no other reason than 'Because'?

I think it is a valid issue. There are some files in a CVS module I simply cannot use on Windows because the filesystem chokes when CVS tries to write them in Windows and the rest of the CVS commit is aborted. It is a huge pain in the ass, even though these files do not contain any capital letters. This happens with every CVS client on Windows, even Cygwin. MS needs to get off their butts and fix this crap once and fo

File names aside, is there a good way to "tag" files (generic metadata)
on Windows or Linux?

On NTFS, you can use ADS (Alternate Data Streams)
to store metadata about a file, though I don't know of any
software that can read such data in a consistent manner - not to mention,
just about every malware scanner out there will flag such files as
suspicious.

On Linux, it very much depends on the FS you choose, though again,
support for file metadata remains about as standardized as snowflakes.

On windows, well behaved programs go in the aptly named "Program Files"

No, they go in "C:\Program Files" and the Registry and one or more users' "C:\Documents and Settings\%USERNAME%\Application Data" folder.

on OSX they go in "Applications"

No, they go in "/Applications" and "/Library" and one or more users' "~/Library". Also, by the way, OS X does have /bin and /usr/bin and all the other UNIX standard folders; they're just hidden from the Finder.

No, on Linux they go in "/usr/local/bin" and "/usr/local/etc" and one or more users' "~" only, because "/bin" and "/usr/bin" are reserved for bits of the OS itself (equivalent to "C:\Windows" and "/System").

Not fully correct. /usr/local/bin is for things installed manually. For example, if you download the Perl sources, compile, and 'make install', it'll go into /usr/local/bin by default. If you install the package, it'll go into /usr/bin.

Despite your naive assumption that something with "16" in its name is better than something with "8", the fact is that UTF-16 cannot handle as many characters. UTF-16 as originally designed handled 0x10000 code points.

Because that was not enough characters, UTF-16 was modified to have "surrogate pairs". Usually claimed to now handle 0x10ffff characters, but in fact they fail to subtract the surrogate half-characters (0x800). Also this deleted the only plausible claim that UTF-16 is better than UTF-8, in tha
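The subtraction the comment describes works out like this: surrogate pairs reach code points up to U+10FFFF (0x110000 values), but the 0x800 surrogate code points themselves can never stand alone.

```shell
# Sketch of the arithmetic: code points addressable by UTF-16 with
# surrogate pairs, minus the 0x800 reserved surrogate values.
echo $(( 0x110000 - 0x800 ))    # 1112064
```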