Imagine how glorious the neophyte would feel once he had finally figured out the right way to handle Unicode text in Perl after having slogged through the nearly impenetrable Perl Unicode documentation for hours. Then imagine how frustrating it would be for him to run the script and realize it doesn't work. It creates a badly broken and unusable text file (on Microsoft Windows, at least).

But our neophyte is patient and persistent. He Googles for help, and after several more hours of painstaking research and experimentation, he determines the following script works.

The chicanery needed just to read and write a Unicode file on Windows using Perl is absurd. It's much too arcane.

Can someone explain how this sequence of PerlIO layers works? Why must so many layers be used? Can these layers be specified using the open pragma? If so, how? If not, why not? And why has this ancient Perl bug still not been fixed in version 5.12.2?

In fact, the second, more elaborate version of the script is still wrong. The file named Input.txt has a byte order mark in it, so its encoding is actually UTF-16, not UTF-16LE. It seems there's no way to generate a UTF-16 file in little-endian byte order directly. To generate such a file, you have to specify the UTF-16LE CES (which is wrong) and add the byte order mark explicitly to make it UTF-16 instead of UTF-16LE.
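For what it's worth, the workaround described above can be sketched like this. The file name Output.txt is a placeholder, and the layer string follows the pattern discussed in this thread; treat it as a sketch, not gospel:

```perl
#!perl
# Ask for UTF-16LE (which by itself writes no BOM), then emit the BOM
# character U+FEFF by hand, so the result is a little-endian file that
# other tools will recognize as UTF-16.
use strict;
use warnings;

open my $out, '>:raw:encoding(UTF-16LE):crlf', 'Output.txt'
    or die "Cannot open Output.txt: $!";
print {$out} "\x{FEFF}";          # explicit byte order mark: FF FE
print {$out} "Hello, Unicode\n";  # :crlf writes this line ending as CR LF
close $out or die "Cannot close Output.txt: $!";
```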

But apparently the open pragma is broken and doesn't accept the same layer strings as binmode and open.

And why has this ancient Perl bug still not been fixed in 5.12.2?

I'm not a perl5-porter so I'm not sure, but it doesn't look like a bug exactly, and nobody's come up with a better way or reported a bug (that I could find).

It seems there's no way to generate a UTF-16 file in little-endian byte order directly. To generate such a file, you have to specify the UTF-16LE CES (which is wrong) and add the byte order mark explicitly to make it UTF-16 instead of UTF-16LE.

This thread is refreshing to read! As a Windows user who is somewhat new to Perl, I spent the past few hours trying to figure out why one of my 193 supplied XML files kept outputting as a bunch of Chinese (?) characters. Jim described exactly what I kept trying.

I finished my script. Everything else works - it does all my replaces beautifully. I have maybe spent 8 hours total on my script and it will save me about 3 days of work.

But, for now, I have to go to that specific XML file, open it in Notepad, and save it as 'ANSI' instead of 'Unicode' before my script will work right.

I have tried adding the layer string supplied in this thread, but I get this error:

Unknown PerlIO layer 'raw:perlio:encoding(UTF-16LE):crlf:utf8'

I really would like to create re-usable code out of my script, but I have yet to find the answer.

Should one use the same layers in the same order for both input and output? Also, do you know why it doesn't work with the open pragma?

I think you and others understand the point I'm making. If your text file is 40 years old and not EBCDIC, then it's ASCII, and writing a Perl script to handle it is easy. You're not forced to think about the character encoding of the text at all. But if you created the text file just now using Microsoft Notepad, writing a Perl script to do anything useful with the text in the file is beyond the capabilities of a neophyte Perl programmer. No one new to the language could arrive at this exceedingly arcane solution to the problem of handling a simple Unicode text file by reading any of the Perl documentation, especially PerlIO, or any books about the language. (PerlIO is incomprehensible to anyone who doesn't already know everything it documents.)

(although this isn't needed with newer versions, it doesn't do any harm either)

Without it, the strings would end up without the utf8 flag set
(upon reading), which means that Perl wouldn't treat them as
text/unicode strings in regex comparisons, etc., as it should.
Similarly for writing.
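To make the utf8-flag point concrete, here is a minimal sketch (the scratch file name is made up) showing the difference the decoding layer makes. Without it, a multibyte UTF-8 character arrives as separate octets, not as one character:

```perl
# Without :encoding(...), the bytes of a UTF-8 file arrive as raw
# octets with the utf8 flag off, so a single accented character looks
# like two "characters" to length(), regexes, etc.
use strict;
use warnings;

my $file = 'demo_utf8.txt';    # hypothetical scratch file
open my $out, '>:raw', $file or die "Cannot open $file: $!";
print {$out} "caf\xC3\xA9";    # "café" as raw UTF-8 bytes
close $out;

open my $raw, '<:raw', $file or die $!;
my $bytes = <$raw>;
close $raw;

open my $txt, '<:encoding(UTF-8)', $file or die $!;
my $chars = <$txt>;
close $txt;

printf "raw:     %d chars, utf8 flag %s\n",
    length($bytes), utf8::is_utf8($bytes) ? 'on' : 'off';   # 5, off
printf "decoded: %d chars, utf8 flag %s\n",
    length($chars), utf8::is_utf8($chars) ? 'on' : 'off';   # 4, on
unlink $file;
```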

I think this only goes to prove your point that this is way too arcane
for mere mortals... And, even though there is a "solution" to the
issue, the current behavior of the :crlf layer is definitely a
bug, IMHO. For one, it violates the principle of least surprise. Instead, the following straightforward approach (which anyone in his right mind would glean from the existing documentation) should work:

Should one use the same layers in the same order for both input and output?

They are processed from the file handle out when reading, and in the opposite direction when writing.
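The order can be seen in a small sketch (the scratch file name is made up). With '<:raw:encoding(UTF-16LE):crlf', incoming bytes are decoded from UTF-16LE first, and only then does :crlf collapse the CR LF pair into \n:

```perl
# Layer order in action on read: raw (nearest the handle), then
# encoding, then crlf. The CR LF pair survives decoding and is
# collapsed by the outermost :crlf layer.
use strict;
use warnings;

my $file = 'order_demo.txt';    # hypothetical scratch file
open my $out, '>:raw', $file or die "Cannot open $file: $!";
print {$out} "A\0\r\0\n\0";     # "A\r\n" encoded as UTF-16LE bytes
close $out;

open my $in, '<:raw:encoding(UTF-16LE):crlf', $file or die $!;
my $line = <$in>;
close $in;
print $line eq "A\n" ? "CRLF collapsed\n" : "CRLF survived\n";
unlink $file;
```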

Also, do you know why it doesn't work with the open pragma?

Maybe it does the equivalent of binmode, and binmode doesn't remove the existing layers. (:raw simply ends up disabling the crlf layer, then :crlf reenables the existing layer rather than adding a new layer.)
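That theory can be checked with the core PerlIO module: binmode() pushes layers on top of whatever is already there rather than replacing the stack. ($0, this script itself, is used only as a convenient readable file; the layer choices here are for demonstration, not a recommended order.)

```perl
# binmode() appends layers; it does not substitute them for the
# existing stack, which is consistent with the guess above about why
# the open pragma behaves differently from a layer string in open().
use strict;
use warnings;
use PerlIO ();

open my $fh, '<', $0 or die "Cannot open $0: $!";
my @before = PerlIO::get_layers($fh);

binmode $fh, ':crlf';                  # appended, not substituted
binmode $fh, ':encoding(UTF-16LE)';    # appended again, on top
my @after = PerlIO::get_layers($fh);

printf "before: %s\n", join ' -> ', @before;
printf "after:  %s\n", join ' -> ', @after;
close $fh;
```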

As far as I can see, the only reason you need :crlf is because you've specifically added
the UNIX line ending (\n) to your output. It would be better to use the platform-independent $/.
The :raw layer should preserve the line endings. So that reduces the chicanery somewhat.

Except for ASCII files, binmode($file_handle)
was required on MSWin32 systems. :raw performs the same function so, while perhaps appearing
to add to the chicanery, it certainly reduces the amount of code.

I don't have sufficient knowledge of UTF-16 to address that aspect of your post.
What I would suggest is that, after removing :crlf and changing \n to $/,
you try your test code without :perlio.
You may still need it but it wouldn't hurt to check.

I agree there's a lot of Unicode-related documentation; however, everything I've made reference to is available here: PerlIO.

As far as I can see, the only reason you need :crlf is because you've specifically added the UNIX line ending (\n) to your output.

:crlf is needed here to get the same platform-independent line-ending handling of plain text files Perl has always supported. Without it, the line-ending handling is badly broken. Half of the line-ending character pair CRLF is missed.

D:\>cat Demo.pl
#!perl
use strict;
use warnings;
open my $input_fh, '<:raw:perlio:encoding(UTF-16LE)', 'Input.txt'
    or die "Cannot open Input.txt: $!";
while (my $line = <$input_fh>) {
    chomp $line;
    print "There's an unexpected/unwanted CR at the end of the line\n"
        if $line =~ m/\r$/;
}
D:\>file Input.txt
Input.txt: Text file, Unicode little endian format
D:\>cat Input.txt
We the People of the United States, in Order to form a more perfect
Union, establish Justice, insure domestic Tranquility, provide for
the common defence, promote the general Welfare, and secure the
Blessings of Liberty to ourselves and our Posterity, do ordain and
establish this Constitution for the United States of America.
D:\>perl Demo.pl Input.txt
There's an unexpected/unwanted CR at the end of the line
There's an unexpected/unwanted CR at the end of the line
There's an unexpected/unwanted CR at the end of the line
There's an unexpected/unwanted CR at the end of the line
There's an unexpected/unwanted CR at the end of the line
D:\>

And as Anonymous Monk has already pointed out, \n is the mechanism Perl expressly provides to make line-ending handling platform-independent. It is defined not to mean the LF-only Unix line ending, but rather whatever line-ending character or character combination terminates lines of plain text files on the platform in use.

It would be better to use the platform-independent $/.

No it wouldn't. And even if it were better, how would someone new to Perl ever figure that out? I've been programming Perl for years, and I've never once seen $/ used in place of the usual and ordinary \n. chomp()-ing and "...\n"-ing are the long-lived and ubiquitous standard idioms.

Except for ASCII files, binmode($file_handle) was required on MSWin32 systems. :raw performs the same function so, while perhaps appearing to add to the chicanery, it certainly reduces the amount of code.

But this is the whole point. The file named Input.txt is not a binary file; it's a plain text file. All the Unicode files I want to manipulate on Microsoft Windows using Perl, the text-processing scripting language, are plain text files. binmode() and :raw are lies. Chicanery.

In my humble opinion, this should work on a Unicode UTF-16 file with a byte order mark.

It seems perfectly reasonable to me to expect the scripting language to determine the character encoding of the file all by its little lonesome — it only has to read the first two bytes of the file — and just to do the right thing.
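The "just do the right thing" behavior described above can be sketched by hand: peek at the first two bytes, pick the matching encoding layer, and hand back a handle with the BOM already consumed. open_text() is a hypothetical helper, not anything from the thread's scripts:

```perl
# Sketch of by-hand BOM detection: read the first two bytes, choose
# the PerlIO layers accordingly, and skip the BOM so callers never
# see U+FEFF. Files without a UTF-16 BOM fall back to default layers.
use strict;
use warnings;

sub open_text {
    my ($path) = @_;
    open my $probe, '<:raw', $path or die "Cannot open $path: $!";
    my $bom = '';
    read $probe, $bom, 2;
    close $probe;

    my $enc = $bom eq "\xFF\xFE" ? 'UTF-16LE'
            : $bom eq "\xFE\xFF" ? 'UTF-16BE'
            :                      undef;

    my $mode = defined $enc ? "<:raw:encoding($enc):crlf" : '<';
    open my $fh, $mode, $path or die "Cannot reopen $path: $!";
    getc $fh if defined $enc;    # discard the U+FEFF BOM character
    return $fh;
}
```

A caller would then write my $fh = open_text('Input.txt'); and read lines as usual, regardless of the file's byte order.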

The documentation on :raw says that CRLF conversion is turned off. It appears that \n in the print statement is converted to CRLF before the arguments to print enter the output stream, so \n can be used as normal.

Changing $/ to \n in my tests (not surprisingly) produces the same results.