NAME

perlebcdic - Considerations for running Perl on EBCDIC platforms

DESCRIPTION

An exploration of some of the issues facing Perl programmers
on EBCDIC based computers.

Portions of this document that are still incomplete are marked with XXX.

Early Perl versions worked on some EBCDIC machines, but that support
lapsed after v5.8.7, the last known version to run on EBCDIC, until
v5.22, when the Perl core again works on z/OS. Theoretically, it could
work on OS/400 or Siemens' BS2000 (or their successors), but this is
untested. In v5.22 and v5.24, not all the CPAN modules shipped with
core Perl work on z/OS.

If you want to use Perl on a non-z/OS EBCDIC machine, please let us know
by sending mail to perlbug@perl.org.

Writing Perl on an EBCDIC platform is really no different from writing
on an ASCII one, but with different underlying numbers, as we'll see
shortly. You'll have to know something about those ASCII platforms
because the documentation is biased and will frequently use example
numbers that don't apply to EBCDIC. There are also very few CPAN
modules that are written for EBCDIC and which don't work on ASCII;
instead the vast majority of CPAN modules are written for ASCII, and
some may happen to work on EBCDIC, while a few have been designed to
portably work on both.

If your code just uses the 52 letters A-Z and a-z, plus SPACE, the
digits 0-9, and the punctuation characters that Perl uses, plus a few
controls that are denoted by escape sequences like \n and \t, then
there's nothing special about using Perl, and your code may very well
work on an ASCII machine without change.

But if you write code that uses \005 to mean a TAB or \xC1 to mean
an "A", or \xDF to mean a "ÿ" (small "y" with a diaeresis),
then your code may well work on your EBCDIC platform, but not on an
ASCII one. That's fine to do if no one will ever want to run your code
on an ASCII platform; but the bias in this document will be towards writing
code portable between EBCDIC and ASCII systems. Again, if every
character you care about is easily enterable from your keyboard, you
don't have to know anything about ASCII, but many keyboards don't easily
allow you to directly enter, say, the character \xDF, so you have to
specify it indirectly, such as by using the "\xDF" escape sequence.
In those cases it's easiest to know something about the ASCII/Unicode
character sets. If you know that the small "ÿ" is U+00FF, then
you can instead specify it as "\N{U+FF}", and have the computer
automatically translate it to \xDF on your platform, and leave it as
\xFF on ASCII ones. Or you could specify it by name,
\N{LATIN SMALL LETTER Y WITH DIAERESIS}, and not have to know the
numbers. Either way works, but both require familiarity with Unicode.

COMMON CHARACTER CODE SETS

ASCII

The American Standard Code for Information Interchange (ASCII or
US-ASCII) is a set of
integers running from 0 to 127 (decimal) that have standardized
interpretations by the computers which use ASCII. For example, 65 means
the letter "A".
The range 0..127 can be covered by the various combinations of seven
bits, hence the set is sometimes referred to as "7-bit ASCII".
ASCII was described by the American National Standards Institute
document ANSI X3.4-1986. It was also described by ISO 646:1991
(with localization for currency symbols). The full ASCII set is
given in the table below as the first 128 elements.
Languages that
can be written adequately with the characters in ASCII include
English, Hawaiian, Indonesian, Swahili and some Native American
languages.

Most non-EBCDIC character sets are supersets of ASCII. That is, the
integers 0-127 mean what ASCII says they mean, but integers 128 and
above are specific to the character set.

Many of these fit entirely into 8 bits, using ASCII as 0-127, while
specifying what 128-255 mean, and not using anything above 255.
Thus, these are single-byte (or octet if you prefer) character sets.
One important one (since Unicode is a superset of it) is the ISO 8859-1
character set.

ISO 8859

The ISO 8859-$n are a collection of character code sets from the
International Organization for Standardization (ISO), each of which adds
characters to the ASCII set that are typically found in various
languages, many of which are based on the Roman, or Latin, alphabet.
Most are for European languages, but there are also ones for Arabic,
Greek, Hebrew, and Thai. There are good references on the web about
all these.

Latin 1 (ISO 8859-1)

A particular 8-bit extension to ASCII that includes grave and acute
accented Latin characters. Languages that can employ ISO 8859-1
include all the languages covered by ASCII as well as Afrikaans,
Albanian, Basque, Catalan, Danish, Faroese, Finnish, Norwegian,
Portuguese, Spanish, and Swedish. Dutch is covered albeit without
the ij ligature. French is covered too but without the oe ligature.
German can use ISO 8859-1 but must do so without German-style
quotation marks. This set is based on Western European extensions
to ASCII and is commonly encountered in world wide web work.
In IBM character code set identification terminology, ISO 8859-1 is
also known as CCSID 819 (or sometimes 0819 or even 00819).

EBCDIC

The Extended Binary Coded Decimal Interchange Code refers to a
large collection of single- and multi-byte coded character sets that are
quite different from ASCII and ISO 8859-1, and are all slightly
different from each other; they typically run on host computers. The
EBCDIC encodings derive from 8-bit byte extensions of Hollerith punched
card encodings, which long predate ASCII. The layout on the
cards was such that high bits were set for the upper and lower case
alphabetic
characters [a-z] and [A-Z], but there were gaps within each Latin
alphabet range, visible in the table below. These gaps can
cause complications.

Some IBM EBCDIC character sets may be known by character code set
identification numbers (CCSID numbers) or code page numbers.

Perl can be compiled on platforms that run any of three commonly used EBCDIC
character sets, listed below.

The 13 variant characters

Among IBM EBCDIC character code sets there are 13 characters that
are often mapped to different integer values. Those characters
are known as the 13 "variant" characters and are:

\ [ ] { } ^ ~ ! # | $ @ `

When Perl is compiled for a platform, it looks at all of these characters to
guess which EBCDIC character set the platform uses, and adapts itself
accordingly to that platform. If the platform uses a character set that is not
one of the three Perl knows about, Perl will either fail to compile, or
mistakenly and silently choose one of the three.

The Line Feed (LF) character is actually a 14th variant character, and
Perl checks for that as well.

EBCDIC code sets recognized by Perl

0037

Character code set ID 0037 is a mapping of the ASCII plus Latin-1
characters (i.e. ISO 8859-1) to an EBCDIC set. 0037 is used
in North American English locales on the OS/400 operating system
that runs on AS/400 computers. CCSID 0037 differs from ISO 8859-1
in 236 places; in other words they agree on only 20 code point values.

1047

Character code set ID 1047 is also a mapping of the ASCII plus
Latin-1 characters (i.e. ISO 8859-1) to an EBCDIC set. 1047 is
used under Unix System Services for OS/390 or z/OS, and OpenEdition
for VM/ESA. CCSID 1047 differs from CCSID 0037 in eight places,
and from ISO 8859-1 in 236.

POSIX-BC

The EBCDIC code page in use on Siemens' BS2000 system is distinct from
1047 and 0037. It is identified below as the POSIX-BC set.
Like 0037 and 1047, it is the same as ISO 8859-1 in 20 code point
values.

Unicode code points versus EBCDIC code points

In Unicode terminology a code point is the number assigned to a
character: for example, in EBCDIC the character "A" is usually assigned
the number 193. In Unicode, the character "A" is assigned the number 65.
All the code points in ASCII and Latin-1 (ISO 8859-1) have the same
meaning in Unicode. All three of the recognized EBCDIC code sets have
256 code points, and in each code set, all 256 code points are mapped to
equivalent Latin1 code points. Obviously, "A" will map to "A", "B" =>
"B", "%" => "%", etc., for all printable characters in Latin1 and these
code pages.

It also turns out that EBCDIC has nearly precise equivalents for the
ASCII/Latin1 C0 controls and the DELETE control. (The C0 controls are
those whose ASCII code points are 0..0x1F; things like TAB, ACK, BEL,
etc.) A mapping is set up between these ASCII/EBCDIC controls. There
isn't such a precise mapping between the C1 controls on ASCII platforms
and the remaining EBCDIC controls. What has been done is to map these
controls, mostly arbitrarily, to some otherwise unmatched character in
the other character set. Most of these are very very rarely used
nowadays in EBCDIC anyway, and their names have been dropped, without
much complaint. For example the EO (Eight Ones) EBCDIC control
(consisting of eight one bits = 0xFF) is mapped to the C1 APC control
(0x9F), and you can't use the name "EO".

The EBCDIC controls provide three possible line terminator characters,
CR (0x0D), LF (0x25), and NL (0x15). On ASCII platforms, the symbols
"NL" and "LF" refer to the same character, but in strict EBCDIC
terminology they are different ones. The EBCDIC NL is mapped to the C1
control called "NEL" ("Next Line"; here's a case where the mapping makes
quite a bit of sense, and hence isn't just arbitrary). On some EBCDIC
platforms, this NL or NEL is the typical line terminator. This is true
of z/OS and BS2000. On these platforms, the C compilers swap the
LF and NEL code points, so that "\n" is 0x15, and refers to NL. Perl
does that too; you can see it in the code chart below.
This makes things generally "just work" without you even having to be
aware that there is a swap.

Unicode and UTF

UTF stands for "Unicode Transformation Format".
UTF-8 is an encoding of Unicode into a sequence of 8-bit byte chunks, based on
ASCII and Latin-1.
The length of a sequence required to represent a Unicode code point
depends on the ordinal number of that code point,
with larger numbers requiring more bytes.
UTF-EBCDIC is like UTF-8, but based on EBCDIC.
They are enough alike that often, casual usage will conflate the two
terms, and use "UTF-8" to mean both the UTF-8 found on ASCII platforms,
and the UTF-EBCDIC found on EBCDIC ones.

You may see the term "invariant" character or code point.
This simply means that the character has the same numeric
value and representation when encoded in UTF-8 (or UTF-EBCDIC) as when
not. (Note that this is a very different concept from The 13 variant characters mentioned above. Careful prose will use the term "UTF-8
invariant" instead of just "invariant", but most often you'll see just
"invariant".) For example, the ordinal value of "A" is 193 in most
EBCDIC code pages, and also is 193 when encoded in UTF-EBCDIC. All
UTF-8 (or UTF-EBCDIC) variant code points occupy at least two bytes when
encoded in UTF-8 (or UTF-EBCDIC); by definition, the UTF-8 (or
UTF-EBCDIC) invariant code points are exactly one byte whether encoded
in UTF-8 (or UTF-EBCDIC), or not. (By now you see why people typically
just say "UTF-8" when they also mean "UTF-EBCDIC". For the rest of this
document, we'll mostly be casual about it too.)
In ASCII UTF-8, the code points corresponding to the lowest 128
ordinal numbers (0 - 127: the ASCII characters) are invariant.
In UTF-EBCDIC, there are 160 invariant characters.
(If you care, the EBCDIC invariants are those characters
which have ASCII equivalents, plus those that correspond to
the C1 controls (128 - 159 on ASCII platforms).)

A string encoded in UTF-EBCDIC may be longer (very rarely shorter) than
one encoded in UTF-8. Perl extends both UTF-8 and UTF-EBCDIC so that
they can encode code points above the Unicode maximum of U+10FFFF. Both
extensions are constructed to allow encoding of any code point that fits
in a 64-bit word.

UTF-EBCDIC is defined by
Unicode Technical Report #16
(often referred to as just TR16).
It is defined based on CCSID 1047, not allowing for the differences for
other code pages. This allows for easy interchange of text between
computers running different code pages, but makes it unusable, without
adaptation, for Perl on those other code pages.

The reason for this unusability is that a fundamental assumption of Perl
is that the characters it cares about for parsing and lexical analysis
are the same whether or not the text is in UTF-8. For example, Perl
expects the character "[" to have the same representation, no matter
whether the string containing it (or program text) is UTF-8 encoded or not.
To ensure this, Perl adapts UTF-EBCDIC to the particular code page so
that all characters it expects to be UTF-8 invariant are in fact UTF-8
invariant. This means that text generated on a computer running one
version of Perl's UTF-EBCDIC has to be translated to be intelligible to
a computer running another.

TR16 implies a method to extend UTF-EBCDIC to encode code points up
through 2**31 - 1. Perl uses this method for code points up through
2**30 - 1, but uses an incompatible method for larger ones, to
enable it to handle much larger code points than otherwise.

Using Encode

Starting from Perl 5.8 you can use the standard module Encode
to translate from EBCDIC to Latin-1 code points.
Encode knows about more EBCDIC character sets than Perl can currently
be compiled to run on.

For example, Encode's PerlIO encoding layers can be used to get four
files containing "Hello World!\n" in ASCII, CP 0037 EBCDIC,
ISO 8859-1 (Latin-1) (in this example identical to ASCII since only ASCII
characters were printed), and
UTF-EBCDIC (in this example identical to normal EBCDIC since only characters
that don't differ between EBCDIC and UTF-EBCDIC were printed). See the
documentation of Encode::PerlIO for details.
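The original recipe is not reproduced above; the following is a minimal sketch of this kind of use of Encode, runnable on an ASCII build. The encoding names "ascii", "cp37" (CCSID 0037), and "iso-8859-1" are Encode's own; UTF-EBCDIC is omitted here since Encode on ASCII platforms does not provide that encoding. Writing each buffer through a ':raw' filehandle would produce the files described.

```perl
use strict;
use warnings;
use Encode qw(encode);

my $text = "Hello World!\n";

# Encode the same text for several target character sets.  In CCSID
# 0037, "H" becomes 0xC8; in ASCII and Latin-1 it stays 0x48.
my %buf = map { $_ => encode($_, $text) } qw(ascii cp37 iso-8859-1);

printf "%-12s first byte: 0x%02X\n", $_, ord substr $buf{$_}, 0, 1
    for sort keys %buf;
```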

As the PerlIO layer uses raw IO (bytes) internally, all this totally
ignores things like the type of your filesystem (ASCII or EBCDIC).

SINGLE OCTET TABLES

The following tables list the ASCII and Latin 1 ordered sets including
the subsets: C0 controls (0..31), ASCII graphics (32..7e), delete (7f),
C1 controls (80..9f), and Latin-1 (a.k.a. ISO 8859-1) (a0..ff). In the
table names of the Latin 1
extensions to ASCII have been labelled with character names roughly
corresponding to The Unicode Standard, Version 6.1 albeit with
substitutions such as s/LATIN// and s/VULGAR// in all cases;
s/CAPITAL LETTER// in some cases; and s/SMALL LETTER ([A-Z])/\l$1/ in
some other cases. Controls are listed using their Unicode 6.2
abbreviations.
The differences between the 0037 and 1047 sets are flagged with **.
The differences between the 1047 and POSIX-BC sets are flagged with ##.
All ord() numbers listed are decimal. If you would rather see this
table listing octal values, then run the table (that is, the pod source
text of this document, since this recipe may not work with a
pod2_other_format translation) through:
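The original recipe is not reproduced here; a hypothetical filter of the kind meant would rewrite every run of decimal digits as 0-prefixed octal with sprintf, as in this sketch applied to a sample table row:

```perl
use strict;
use warnings;

# Rewrite every decimal number in a table line as 0-prefixed octal;
# applied line by line (e.g. in a "while (<>)" loop) this converts
# the whole table.
my $line  = "LATIN CAPITAL LETTER A   65  193";   # sample row, decimal ords
(my $octal = $line) =~ s/\b(\d+)\b/sprintf "%#o", $1/ge;
print "$octal\n";   # 65 becomes 0101, 193 becomes 0301
```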

Table in hex, sorted in 1047 order

Since this document was first written, the convention has become more
and more to use hexadecimal notation for code points. To do this with
the recipes and to also sort is a multi-step process, so here, for
convenience, is the table from above, re-sorted to be in Code Page 1047
order, and using hex notation.

IDENTIFYING CHARACTER CODE SETS

It is possible to determine which character set you are operating under.
But first you need to be really really sure you need to do this. Your
code will be simpler and probably just as portable if you don't have
to test the character set and do different things, depending. There are
actually only very few circumstances where it's not easy to write
straight-line code portable to all character sets. See
Unicode and EBCDIC in perluniintro for how to portably specify
characters.

But there are some cases where you may want to know which character set
you are running under. One possible example is doing
sorting in inner loops where performance is critical.

To determine if you are running under ASCII or EBCDIC, you can use the
return value of ord() or chr() to test one or more character
values. For example:
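The original tests are not reproduced above; the following pair is a plausible reconstruction of the kind meant (the variable name $is_ascii is assumed from the surrounding discussion):

```perl
use strict;
use warnings;

# First test: true on ASCII platforms, but also on 0037, 1047, and
# POSIX-BC EBCDIC platforms, where "\r" is chr(13) as well.
my $is_ascii = "\r" eq chr(13);

# Second test: true on most ASCII platforms, but the old Macintosh
# swapped the meanings of "\n" and "\r", so this fails there.
$is_ascii = "\n" eq chr(10);

print $is_ascii ? "looks like ASCII\n" : "not ASCII\n";
```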

Obviously the first of these will fail to distinguish most ASCII
platforms from either a CCSID 0037, a 1047, or a POSIX-BC EBCDIC
platform, since "\r" eq chr(13) under all of those coded character
sets. But note too that because "\n" is chr(13) and "\r" is
chr(10) on the old Macintosh (which is an ASCII platform), the second
$is_ascii test will lead to trouble there.

To determine whether or not perl was built under an EBCDIC
code page you can use the Config module like so:
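A minimal sketch (the Config entry "ebcdic" is 'define' on EBCDIC builds):

```perl
use strict;
use warnings;
use Config;

# $Config{ebcdic} is 'define' when perl was built for an EBCDIC
# code page, and 'undef' (or empty) otherwise.
if (defined $Config{ebcdic} && $Config{ebcdic} eq 'define') {
    print "perl was built on an EBCDIC platform\n";
}
else {
    print "perl was built on an ASCII platform\n";
}
```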

CONVERSIONS

utf8::unicode_to_native() and utf8::native_to_unicode()
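These two functions (available without loading any module) convert a single code point between Unicode and the platform's native character set; on ASCII platforms they are identity functions. A small sketch:

```perl
use strict;
use warnings;

# 65 is the Unicode code point for "A"; on an EBCDIC platform the
# native equivalent is 193, while on ASCII platforms it stays 65.
my $native_a = utf8::unicode_to_native(65);
print chr($native_a), "\n";                    # prints "A" everywhere

# And back again: the native ordinal of "A" maps to Unicode 65.
print utf8::native_to_unicode(ord "A"), "\n";  # prints 65 everywhere
```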

tr///

In order to convert a string of characters from one character set to
another, a simple list of numbers, such as in the right columns in the
above table, along with Perl's tr/// operator, is all that is needed.
The data in the table are in ASCII/Latin1 order, hence the EBCDIC columns
provide easy-to-use ASCII/Latin1 to EBCDIC operations that are also easily
reversed.

For example, to convert ASCII/Latin1 to code page 037, take the output
of the second numbers column from the output of recipe 2 (modified to
add "\" characters), and use it in tr/// like so:
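Because tr/// does not interpolate variables, such a generated table has to be spliced in with eval. A sketch, with an identity placeholder standing in for the real 0037 column (the variable name $cp_037 is assumed from the surrounding text):

```perl
use strict;
use warnings;

# Placeholder: in real use this would hold the 256 "\xNN" escapes of
# the CCSID 0037 column, listed in ASCII/Latin-1 order.
my $cp_037 = join '', map { sprintf '\x%02X', $_ } 0 .. 255;

my $string = "Hello World!";

# Splice the generated table into a tr/// via eval.
eval '$string =~ tr/\x00-\xFF/' . $cp_037 . '/;';
die $@ if $@;

print "$string\n";   # the identity placeholder leaves it unchanged
```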

Similarly one could take the output of the third numbers column from
recipe 2 to obtain a $cp_1047 table. The fourth numbers column of the
output from recipe 2 could provide a $cp_posix_bc table suitable for
transcoding as well.

If you wanted to see the inverse tables, you would first have to sort on the
desired numbers column as in recipes 4, 5 or 6, then take the output of the
first numbers column.

iconv

XPG operability often implies the presence of an iconv utility
available from the shell or from the C library. Consult your system's
documentation for information on iconv.

On OS/390 or z/OS see the iconv(1) manpage. One way to invoke the iconv
shell utility from within perl would be to:

# OS/390 or z/OS example

$ascii_data = `echo '$ebcdic_data' | iconv -f IBM-1047 -t ISO8859-1`;

or the inverse map:

# OS/390 or z/OS example

$ebcdic_data = `echo '$ascii_data' | iconv -f ISO8859-1 -t IBM-1047`;

For other Perl-based conversion options see the Convert::*
modules on CPAN.

C RTL

OPERATOR DIFFERENCES

The .. range operator treats certain character ranges with
care on EBCDIC platforms. For example the following array
will have twenty-six elements on either an EBCDIC platform
or an ASCII platform:

@alphabet = ('A'..'Z');  # $#alphabet == 25

The bitwise operators such as & ^ | may return different results
when operating on string or character data in a Perl program running
on an EBCDIC platform than when run on an ASCII platform. Here is
an example adapted from the one in perlop:
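The perlop example in question is the classic "JAPH" string XOR; because the bitwise string operators work on the underlying code points, the printed result differs on EBCDIC:

```perl
use strict;
use warnings;

# Bitwise string XOR combines the code points of corresponding
# characters, so the output depends on the platform's character set.
print "j p \n" ^ " a h";    # prints "JAPH\n" on ASCII platforms
```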

An interesting property of the 32 C0 control characters
in the ASCII table is that they can "literally" be constructed
as control characters in Perl, e.g. (chr(0) eq "\c@"),
(chr(1) eq "\cA"), and so on. Perl on EBCDIC platforms has been
ported to take "\c@" to chr(0) and "\cA" to chr(1), etc. as well, but
the characters that result depend on which code page you are
using. The table below uses the standard acronyms for the controls.
The POSIX-BC and 1047 sets are
identical throughout this range and differ from the 0037 set at only
one spot (21 decimal). Note that the line terminator character
may be generated by \cJ on ASCII platforms but by \cU on 1047 or
POSIX-BC platforms, and cannot be generated as a "\c<letter>" control
character on 0037 platforms. Note also that \c\ cannot be the final
element in a string or regex, as it will absorb the terminator; but
"\c\X" is a FILE SEPARATOR concatenated with X for all X.
The outlier \c? on ASCII, which yields the non-C0 control DEL,
yields the outlier control APC on EBCDIC, the one that isn't in the
block of contiguous controls. A subtlety of this is that
\c? on ASCII platforms is an ASCII character, while it isn't
equivalent to any ASCII character on EBCDIC platforms.

* Note: \c? maps to ordinal 127 (DEL) on ASCII platforms, but
since ordinal 127 is not a control character on EBCDIC machines,
\c? instead maps on them to APC, which is 255 in 0037 and 1047,
and 95 in POSIX-BC.

One must be careful with scalars and strings that are passed to
print that contain ASCII encodings. One common place
for this to occur is in the output of the MIME type header for
CGI script writing. For example, many Perl programming guides
recommend something similar to:
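A sketch of the sort of recommendation meant. The string literal produces native-encoded bytes, so on an EBCDIC host the "\r\n" pairs are not the ASCII CR/LF octets an HTTP client expects, unless the web server translates the output:

```perl
# Typical guide recommendation: fine when the host web server
# translates EBCDIC output to ASCII/Latin-1, but wrong when the
# bytes go straight out over a raw socket from an EBCDIC machine.
print "Content-type: text/html\r\n\r\n";
```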

REGULAR EXPRESSION DIFFERENCES

You can write your regular expressions just like someone on an ASCII
platform would do. But keep in mind that using octal or hex notation to
specify a particular code point will give you the character that the
EBCDIC code page natively maps to it. (This is also true of all
double-quoted strings.) If you want to write portably, just use the
\N{U+...} notation everywhere you would have used \x{...}, and don't
use octal notation at all.

Starting in Perl v5.22, this applies to ranges in bracketed character
classes. If you say, for example, qr/[\N{U+20}-\N{U+7E}]/, it means
the characters \N{U+20}, \N{U+21}, ..., \N{U+7E}. This range
is all the printable characters that the ASCII character set contains.

Prior to v5.22, you couldn't specify any ranges portably, except that
(starting in Perl v5.5.3) all subsets of the [A-Z] and [a-z]
ranges are specially coded not to pick up gap characters. For example,
characters such as "ô" (LATIN SMALL LETTER O WITH CIRCUMFLEX) that lie
between "I" and "J" would not be matched by the regular expression
range /[H-K]/. But if either of the range end points is explicitly
numeric (and neither is specified by \N{U+...}), the gap characters
are matched:

/[\x89-\x91]/

will match \x8E, even though \x89 is "i" and \x91 is "j",
and \x8E is a gap character, from the alphabetic viewpoint.

Another construct to be wary of is the inappropriate use of hex (unless
you use \N{U+...}) or octal constants in regular expressions. Consider
the following set of subs:
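The original subs are not reproduced above; the following are hypothetical subs of the kind meant (names and bodies are illustrative). Each tests a character's ordinal against hex constants that are only correct for ASCII-based character sets:

```perl
use strict;
use warnings;

# On ASCII platforms the C0 controls occupy 0x00-0x1F and the C1
# controls 0x80-0x9F; on EBCDIC the controls live elsewhere, so the
# same constants silently test the wrong characters there.
sub Is_c0 {
    my $ord = ord substr shift, 0, 1;
    return $ord <= 0x1F;
}

sub Is_c1 {
    my $ord = ord substr shift, 0, 1;
    return 0x80 <= $ord && $ord <= 0x9F;
}

print Is_c0("\t") ? "TAB is a C0 control here\n" : "not C0 here\n";
```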

That form may run into trouble in network transit (due to the
presence of 8-bit characters) or on non-ISO-Latin character sets, but
it does allow Is_c1 to be rewritten so that it works on Perls that
don't have 'unicode_strings' (earlier than v5.14):
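One plausible rewrite (a sketch, not the original): convert the native ordinal to Unicode first, so the hex constants keep their ASCII/Unicode meanings on every platform:

```perl
use strict;
use warnings;

# utf8::native_to_unicode() is an identity map on ASCII platforms and
# a real translation on EBCDIC ones, so 0x80..0x9F always means the
# C1 controls here.
sub Is_c1 {
    my $ord = utf8::native_to_unicode(ord substr shift, 0, 1);
    return 0x80 <= $ord && $ord <= 0x9F;
}

# NEL is U+0085, a C1 control, on every platform.
print Is_c1(chr utf8::unicode_to_native(0x85)) ? "NEL is C1\n" : "oops\n";
```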

SOCKETS

Most socket programming assumes ASCII character encodings in network
byte order. Exceptions can include CGI script writing under a
host web server where the server may take care of translation for you.
Most host web servers convert EBCDIC data to ISO-8859-1 or Unicode on
output.

SORTING

One big difference between ASCII-based character sets and EBCDIC ones
is the relative positions of the characters when sorted in native
order. Of most concern are the upper- and lowercase letters, the
digits, and the underscore ("_"). On ASCII platforms the native sort
order has the digits come before the uppercase letters, which come
before the underscore, which comes before the lowercase letters. On
EBCDIC, the underscore comes first, then the lowercase letters, then
the uppercase ones, and the digits last. Thus, if sorted on an
ASCII-based platform, the two-letter abbreviation for a physician
("Dr") comes before the two-letter abbreviation for drive ("dr");
that is:
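For example (output shown for an ASCII platform; an EBCDIC platform reverses the order):

```perl
use strict;
use warnings;

# Native sort order: uppercase before lowercase on ASCII,
# lowercase before uppercase on EBCDIC.
my @sorted = sort 'dr', 'Dr';
print "@sorted\n";   # "Dr dr" on ASCII platforms, "dr Dr" on EBCDIC
```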

The property of lowercase before uppercase letters in EBCDIC is
even carried to the Latin-1 EBCDIC pages such as 0037 and 1047.
An example would be that "Ë" (E WITH DIAERESIS, 203) comes
before "ë" (e WITH DIAERESIS, 235) on an ASCII platform, but
the latter (83) comes before the former (115) on an EBCDIC platform.
(Astute readers will note that the uppercase version of "ß"
(SMALL LETTER SHARP S) is simply "SS" and that the uppercase versions
of "ÿ" (small y WITH DIAERESIS) and "µ" (MICRO SIGN)
are not in the 0..255 range but are in Unicode, in a Unicode-enabled
Perl.)

The sort order will cause differences between results obtained on
ASCII platforms versus EBCDIC platforms. What follows are some suggestions
on how to deal with these differences.

Ignore ASCII vs. EBCDIC sort differences.

This is the least computationally expensive strategy. It may require
some user education.

Use a sort helper function

This is completely general, but the most computationally expensive
strategy. Choose one or the other character set and transform to that
for every sort comparison. Here's a complete example that transforms
to ASCII sort order:
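The original example is not reproduced here; a sketch of such a helper (the helper name is illustrative) maps every string through utf8::native_to_unicode(), so comparisons happen in ASCII/Unicode order on any platform:

```perl
use strict;
use warnings;

# Transform a native string to its Unicode (ASCII-order) equivalent:
# an identity map on ASCII platforms, a real translation on EBCDIC.
sub to_ascii_order {
    join '', map { chr(utf8::native_to_unicode(ord($_))) } split //, $_[0];
}

my @data   = qw(dr Dr _ 0);
my @sorted = sort { to_ascii_order($a) cmp to_ascii_order($b) } @data;
print "@sorted\n";   # "0 Dr _ dr" on both ASCII and EBCDIC platforms
```

A Schwartzian transform (computing each key once) would reduce the cost for large lists, at the price of extra memory.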

MONO CASE then sort data (for non-digits, non-underscore)

If performance is an issue, and you don't care if the output is in the
same case as the input, use tr/// to transform to the case most
employed within the data. If the data are primarily UPPERCASE
non-Latin-1, then apply tr/a-z/A-Z/ and then sort(). If the
data are primarily lowercase non-Latin-1 then apply tr/A-Z/a-z/
before sorting. If the data are primarily UPPERCASE and include Latin-1
characters then apply:

tr/a-z/A-Z/;

tr/àáâãäåæçèéêëìíîïðñòóôõöøùúûüýþ/ÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞ/;

s/ß/SS/g;

then sort(). If you have a choice, it's better to lowercase things,
to avoid the problems of the two Latin-1 characters whose uppercase is
outside Latin-1: "ÿ" (small y WITH DIAERESIS) and "µ"
(MICRO SIGN). If you do need to uppercase, you can; with a
Unicode-enabled Perl, do:

tr/ÿ/\x{178}/;

tr/µ/\x{39C}/;

Perform sorting on one type of platform only.

This strategy can employ a network connection. As such
it would be computationally expensive.

TRANSFORMATION FORMATS

There are a variety of ways of transforming data with an
intra-character-set mapping that serve a variety of purposes. Sorting
was discussed in the previous section, and a few of the other more
popular mapping techniques are discussed next.

URL decoding and encoding

Note that some URLs have hexadecimal ASCII code points in them in an
attempt to overcome character or protocol limitation issues. For
example, the tilde character is not on every keyboard, hence a URL of
the form:

http://www.pvhp.com/~pvhp/

may also be expressed as either of:

http://www.pvhp.com/%7Epvhp/

http://www.pvhp.com/%7epvhp/

where 7E is the hexadecimal ASCII code point for "~". Here is an example
of decoding such a URL in any EBCDIC code page:
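A sketch of such a decoder: the hex digits encode ASCII code points, so each is mapped through utf8::unicode_to_native() to the platform's own character (an identity map on ASCII platforms):

```perl
use strict;
use warnings;

my $url = 'http://www.pvhp.com/%7Epvhp/';

# Decode %XX escapes; the XX values are ASCII, so convert to native.
$url =~ s/%([[:xdigit:]]{2})/chr utf8::unicode_to_native(hex $1)/ge;

print "$url\n";   # http://www.pvhp.com/~pvhp/
```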

where a more complete solution would split the URL into components
and apply a full s/// substitution only to the appropriate parts.

uu encoding and decoding

The u template to pack() or unpack() will render EBCDIC data in
EBCDIC characters equivalent to their ASCII counterparts. For example,
the following will print "Yes indeed\n" on either an ASCII or EBCDIC
computer:
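The original example is not reproduced here; a round trip through the u template is a minimal sketch of the idea:

```perl
use strict;
use warnings;

# pack 'u' produces the same uuencoded characters on ASCII and EBCDIC
# platforms, so the round trip prints identically on both.
my $uu = pack 'u', "Yes indeed\n";
print unpack 'u', $uu;    # prints "Yes indeed\n"
```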

Quoted-Printable encoding and decoding

(Although in production code the substitutions might be done
in the EBCDIC branch with the function call and separately in the
ASCII branch without the expense of the identity map; in Perl v5.22,
the identity map is optimized out so there is no expense, but the
alternative above is simpler and is also available in v5.22.)

Such QP strings can be decoded with:

# This QP decoder is limited to ASCII only

$string =~ s/=([[:xdigit:]][[:xdigit:]])/chr hex $1/ge;

$string =~ s/=[\n\r]+$//;

Whereas a QP decoder that works on both ASCII and EBCDIC platforms
would look somewhat like the following:

$string =~ s/=([[:xdigit:]][[:xdigit:]])/
            chr utf8::native_to_unicode(hex $1)/xge;

$string =~ s/=[\n\r]+$//;

Caesarean ciphers

The practice of shifting an alphabet one or more characters for encipherment
dates back thousands of years and was explicitly detailed by Gaius Julius
Caesar in his Gallic Wars text. A single alphabet shift is sometimes
referred to as a rotation and the shift amount is given as a number $n after
the string 'rot' or "rot$n". Rot0 and rot26 would designate identity maps
on the 26-letter English version of the Latin alphabet. Rot13 has the
interesting property that applying it twice yields the identity map
(thus rot13 is its own non-trivial inverse in the group of 26 alphabet
rotations). Hence the following is a rot13 encoder and decoder that will
work on ASCII and EBCDIC platforms:
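A sketch of such an encoder: because the subranges used are all subsets of a-z and A-Z, tr/// skips the EBCDIC gap characters, so the same statement works on both families of platforms:

```perl
use strict;
use warnings;

my $string = "Hello, World!";

# rot13: a-m <-> n-z and A-M <-> N-Z; applying it twice is the identity.
(my $rot13 = $string) =~ tr/a-zA-Z/n-za-mN-ZA-M/;
print "$rot13\n";    # Uryyb, Jbeyq!

($string = $rot13) =~ tr/a-zA-Z/n-za-mN-ZA-M/;
print "$string\n";   # Hello, World!
```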

iconv is supported as both a shell utility and a C RTL routine.
See also the iconv(1) and iconv(3) manual pages.

locales

Locales are supported. There may be glitches when a locale is another
EBCDIC code page which has some of the
code-page variant characters in other
positions.

There aren't currently any real UTF-8 locales, even though some locale
names contain the string "UTF-8".

See perllocale for information on locales. The L10N files
are in /usr/nls/locale. $Config{d_setlocale} is 'define' on
OS/390 or z/OS.

POSIX-BC?

XXX.

BUGS

Not all shells will allow multiple -e string arguments to perl to
be concatenated together properly, as recipes 0, 2, 4, 5, and 6 in
this document might seem to imply.

There are a significant number of test failures in the CPAN modules
shipped with Perl v5.22 and v5.24. These are only in modules not
primarily maintained by the Perl 5 Porters. Some of these are failures
in the tests only: they don't realize that it is proper to get
different results on EBCDIC platforms. And some of the failures are
real bugs. If you compile and do a "make test" on Perl, all tests in
the /cpan directory are skipped.

In earlier Perl versions, when byte and character data were
concatenated, the new string was sometimes created by
decoding the byte strings as ISO 8859-1 (Latin-1), even if the
old Unicode string used EBCDIC.

HISTORY

15 April 2001: added UTF-8 and UTF-EBCDIC to main table, pvhp.

AUTHOR

Peter Prymmer pvhp@best.com wrote this in 1999 and 2000
with CCSID 0819 and 0037 help from Chris Leach and
André Pirard A.Pirard@ulg.ac.be as well as POSIX-BC
help from Thomas Dorner Thomas.Dorner@start.de.
Thanks also to Vickie Cooper, Philip Newton, William Raffloer, and
Joe Smith. Trademarks, registered trademarks, service marks and
registered service marks used in this document are the property of
their respective owners.