The file that I produce is not read properly by a text editor (Notepad, Vim, whatever), while the other file, the one I read, is. I have examined the binary of both files and the BOM is the same, as is the format of the Unicode character data. What should I do?

The wide file streams perform an implementation-defined conversion on I/O, which depends mostly on your locale. If you're using a Windows ANSI locale, which you most likely are, they will convert the internal UTF-16 to Windows-1252 on writing. On Linux, they will convert UTF-32 to ISO-8859-1, or to UTF-8 if you're using a UTF-8 locale.

All this adds up to the fact that C++'s character handling is quite useless.

When I wrote that, I was only thinking of Windows and MSVC, where wchar_t is 2 bytes and wide string literals are UTF-16LE. I keep forgetting that sizeof(wchar_t) and the character encodings of string literals are implementation-defined.