
Category archive: Standards

Java uses a nice pragmatic file format for simple configuration tasks and for the internationalization of applications. It is called Java properties file or simply „.properties file“. It contains simple key-value pairs. For most configuration tasks this is useful and easy to read and edit. Nested configurations can be expressed by simply using dots („.“) as part of the key. This was introduced as early as Java 1.0. For internationalization there is a simple way to create properties files with almost the same name, but with a language code just before the .properties suffix. The concept is called „resource bundle“. Whenever a language-specific string is needed, the program just knows a unique key and performs a lookup.
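As a small illustration, a resource bundle could consist of files like messages.properties and messages_de.properties (the names and keys here are just an example), containing lines like greeting=Hello and greeting=Hallo respectively, and the lookup in Java might look like this minimal sketch:

import java.util.Locale;
import java.util.ResourceBundle;

public class I18nDemo {
    public static void main(String[] args) {
        // picks messages_de.properties for German, falls back to messages.properties
        ResourceBundle bundle = ResourceBundle.getBundle("messages", Locale.GERMAN);
        System.out.println(bundle.getString("greeting"));
    }
}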

The unpleasant part is that these files are, in the style of the 1990s, encoded in ISO-8859-1, which covers only a few languages of western, central and northern Europe. For other languages, as a workaround, an \u followed by the four-digit hex code of the character can be used to express the UTF-16 code unit, but this is not in any way readable or easy to edit. Usually we want to use UTF-8 or in some cases real UTF-16, without this \u-hack.
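For example, a value like „schöne Grüße“ has to be written with \u-escapes like this in a classical .properties file (the key is just an example):

greeting=sch\u00f6ne Gr\u00fc\u00dfe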

A way to deal with this is the native2ascii converter, which can convert UTF-8 or UTF-16 to the format of properties files. By keeping some .uproperties files, which are UTF-8, and converting them to .properties files using native2ascii as part of the build process, this can be addressed. It is still a hack, but properly done it should not hurt too much, apart from the work it takes to get it working. I would strongly recommend making sure the converted and unconverted files never get mixed up. This is extremely important, because a mix-up is not easily detected in the case of UTF-8 with typical central European content, but it creates the ugly errors we are used to seeing, like „sch�ner Zeichensalat“ instead of „schöner Zeichensalat“. And we only discover it when the files are already quite messed up, because at least in German the umlaut characters are only a small fraction of the text, but still annoying when garbled. So I would recommend a distinct suffix to make this clear.
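Such a build step could look like this (the file names are just an example; native2ascii ships with the JDK and its -encoding option selects the source encoding):

native2ascii -encoding UTF-8 messages.uproperties messages.properties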

The bad thing is that most JVM languages have been kind of „lazy“ (which is usually a good thing) and have reused Java’s infrastructure for this, thus inheriting the problem from Java.

Another way to deal with this is to use XML files, which are by default in UTF-8 and which can be declared to be UTF-16. With some development work or a search for existing implementations there should be ways to do internationalization this way.
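Java even ships with built-in support for an XML variant of properties files, which is read as UTF-8 by default; a minimal sketch (the file name is an assumption):

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class XmlPropertiesDemo {
    public static void main(String[] args) throws IOException {
        Properties props = new Properties();
        // the XML header of the file declares the encoding, UTF-8 being the default
        try (FileInputStream in = new FileInputStream("messages.xml")) {
            props.loadFromXML(in);
        }
        System.out.println(props.getProperty("greeting"));
    }
}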

Typically some process needs to be added, because translators are often non-IT people who use some tool that displays the texts in the original language and accepts the translation. For good translations, the translator should actually use the software to see the context, but that is another topic for the future. Possibly there needs to be some conversion from the data provided by the translator into XML, .uproperties, .properties or whatever is used. This should be automated by scripts or even by the build process, merging new translations properly with existing ones.

Anyway, Java 9 will be helpful with this issue. Finally, properties files that are used as resource bundles for internationalization can be UTF-8.
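Until Java 9 is in place, one workaround is to construct the resource bundle from a Reader with an explicit encoding; a minimal sketch (file and key names are examples):

import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.PropertyResourceBundle;

public class Utf8BundleDemo {
    public static void main(String[] args) throws Exception {
        // read the properties file explicitly as UTF-8 instead of ISO-8859-1
        try (InputStream in = Utf8BundleDemo.class.getResourceAsStream("/messages.properties")) {
            PropertyResourceBundle bundle =
                new PropertyResourceBundle(new InputStreamReader(in, StandardCharsets.UTF_8));
            System.out.println(bundle.getString("greeting"));
        }
    }
}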

Some of us still remember the times when it was recommended to avoid „special characters“ when writing on the computer. Some keyboards in Germany did not contain „Umlaut“ characters and we fell back to the ugly, but generally understandable way of replacing the German special characters like this: ä->ae, ö->oe, ü->ue, ß->sz or ß->ss. This was due to the lack of decent keyboards and decent entry methods, but also due to transmission methods that stripped the upper bit. It did happen in emails that they were „enhanced“ like this: ä->d, ö->v, ü->|,… So we had to know our ways and sometimes use ae oe ue ss. Similar issues applied to other languages like the Scandinavian languages, Spanish, French, Italian, Hungarian, Croatian, Slovenian, Slovak, Serbian, the Baltic languages, Esperanto,… in short, to all languages that could regularly be written with the Latin alphabet but required some additional letters to be written properly.

When we wrote on paper, the requirement to write properly was more obvious, while email and other electronic communication via the internet of those days could be excused as being something like short wave radio: it worked globally, but with some degradation of quality compared to other media of the time. With TeX, for example, it was possible to write the German special letters (and others in a similar way) like this: ä->\"a, ö->\"o, ü->\"u, ß->\ss, and later even like this: ä->"a, ö->"o, ü->"u, ß->"s, which some people, including myself, even used for email and other electronic communication when the proper way was not possible. I wrote Emacs Lisp software that could put my Emacs in a mode where the Umlaut keys actually produced these combinations when typed, and I even figured out how to tweak an xterm window like that for the sake of using IRC, where Umlaut letters did not work at all and quick online typing was the way to go; there the combinations were typed automatically, because I used ten-finger touch typing, typing words, not characters.

On the other hand, TeX could be configured to process Umlaut characters properly (more or less, up to the issue of hyphenation) and I wrote macros to do this and provided them to CTAN, the repository for TeX-related software on the internet, around 1994 or so. Later a better and more generic solution became part of standard TeX and superseded this, which was a good thing. So TeX guys could type ä ö ü ß and I strongly recommended (and still recommend) to actually do so. It works, it is more user friendly, and in the end the computer should adapt to the humans, not vice versa.

The web could process Umlaut characters (and everything else) from day one. The transfer was not an issue; it could handle binary data like images, so no stripping of the high bit was happening and the umlaut characters just went through. For people having trouble finding them on the keyboard, transcriptions like these were created: ä->&auml; ö->&ouml; ü->&uuml; ß->&szlig;. I used them not knowing that they were not actually needed, but I relied on a Perl script to do the conversion, so it was possible to actually type the letters properly.

Now some languages like Russian, Chinese, Greek, Japanese, Georgian, Thai and Korean use a totally different alphabet, so they had to solve this earlier; others might know better how it was done in the early days. Probably it helped develop the technology. Even harder are languages like Arabic, Hebrew and Farsi, which are written right to left. It is still ugly when editing a Wikipedia page and the switching between left-to-right and right-to-left occurs correctly, but magically and unexpectedly.

While ISO-8859-x seemed to solve the issue for most European languages and ISO-8859-1 became the de-facto standard in many areas, this was eventually a dead end, because only Unicode provided a way to host all living languages, which is what we eventually wanted, because even in quite closed environments, excluding some combinations of languages in the same document will at some point prove to be a mistake. This has its issues. The most painful is that files and badly transmitted content carry no clear information about their encoding. The same applies to strings in some programming languages. We need to know from the context what it is. Now UTF-8 is becoming the predominant encoding, but in many areas ISO-8859-x or the weird cp1252 still prevail, and when we get the encoding or the implicit conversions wrong, the Umlaut characters or whatever we have get messed up. I would recommend working carefully, keeping IT systems, configurations and programs well maintained and documented, and moving to UTF-8 whenever possible. Falling back to ae oe ue for content is sixties and seventies technology.
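Converting existing files is the easy part; a minimal sketch in Java (the file names are examples):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class Recode {
    public static void main(String[] args) throws IOException {
        // read the bytes as ISO-8859-1 and write the same text back as UTF-8
        String content = new String(Files.readAllBytes(Paths.get("old-latin1.txt")),
                StandardCharsets.ISO_8859_1);
        Files.write(Paths.get("new-utf8.txt"), content.getBytes(StandardCharsets.UTF_8));
    }
}

The hard part is being sure about the source encoding, because applying such a conversion to a file that is already UTF-8 is exactly what creates the „Zeichensalat“ mentioned above.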

Now I still do see an issue with names that are both technical and human readable, like file names, names of attachments, or variable and function names in programming languages. While these very often do allow Umlaut characters, I would still prefer the ugly transcriptions in this area, because a mismatch of encoding becomes more annoying and harder to handle there than in plain text, where we as human readers are a bit more tolerant than computers, so tolerant that we would even read the ugly replacements of the old days.

But for content, we should insist on writing it with our proper alphabet. And move away from the ISO-8859-x encodings to UTF-8.

In the old days of the web, more than 20 years ago, we found a way to write German Umlaut letters and a lot of other letters and symbols using pure ASCII. These are called „entities“, by the way.

Many people, including myself, started writing web pages using these transcriptions, in the assumption that they were required. Actually, in the early days of the web there were rumors that some browsers did not properly understand the character encodings that contained these letters, which was kind of plausible, because the late 80s and the 90s were the transition period in which people discovered that computers are useful outside of the United States, and at least for non-IT people it was, or should have been, a natural requirement that computers understand their language, or at least can process, store and transmit texts using the proper characters of that language. In the case of German this was not so terrible, because there were transcriptions for the special characters (ä->ae, ö->oe, ü->ue, ß->ss) that were somewhat ugly, but widely understandable to native German speakers. Other languages like Russian, Greek, Arabic or East Asian languages were in more trouble, because they consist of „special characters“ only.

Anyway, this „&auml;“-transcription for web pages, which is actually superior to the „ae“, because the reader of the web page will read the correct „ä“, was part of the HTML-standard to support writing characters not on the keyboard. This was a useful feature in those days, but today we can find web pages that help use with the transliteration or just look up the word with the special characters in the web in order to write it correctly. Then we can as well copy it into our HTML-code, including all the special characters.

There could be some argument about UTF-8 vs. UTF-16 vs. ISO-8859-x as possible encodings of the web page. But in the area of the web this was never really an issue, because web pages have headers that should be present and inform the browser about the encoding, for example Content-Type: text/html; charset=utf-8. Now I recommend using UTF-8 as the default, because it includes all the potential characters that we might want to use sporadically. And then the Umlaut can just directly be part of the HTML content. I converted all my web pages to use Umlaut letters properly, where needed, without using entities, in the mid 90s.

Some entities are still useful:

„&lt;“ for „<“, because „<“ is used as part of the HTML-syntax

„&amp;“ for „&“ for the same reason

„&gt;“ for „>“ for consistency with „<„

„&nbsp;“ for no-break-space, to emphasize it, since most editors make it hard to tell the difference between regular space and no-break-space.

Whoever works with MS-Windows should know these black windows with CMD running in them, even though they are not really popular. The Unix and Linux guys hate them, because they are really primitive compared to their shells. Windows guys like to work graphically, or they prefer PowerShell or bash with cygwin. Linux and Unix have the equivalent of these windows, but usually they are white. Since the colors can be configured in any way on both systems, this is of no relevance.

NT-based MS-Windows systems (NT 3.x, 4.x, 2000, XP, Vista, 7, 8, 10) have several subsystems in which programs run, for example Win64, Win32 (or Wow64 on 64-bit systems), Win16, cygwin (if installed), DOS… Because programs for the DOS subsystem are typically started in a CMD window, and because some of the DOS commands have equally named and similarly behaving counterparts in the CMD window, the CMD window is sometimes called a DOS window, which is just incorrect. Actually this black window comes into existence in many situations. Whenever a program is started that has console input or output (stdin, stdout, stderr), a black window is provided around it, if no redirection is in place. This applies to CMD. Under Linux (and Unix) with X11 it is the other way round: you start the program that provides the window, and it automatically starts the default shell within that window, unless something else is stated.

Now I recommend an experiment. You just need an MS-Windows installation with any graphical editor like emacs, gvim, ultraedit, textpad, scite, or even notepad. And a cmd-window.

Please type these commands; do not use copy/paste.

In the cmd window, cd into a directory that you may write to.

echo "xäöüx" > filec.txt. Yes, there are ways to type these letters even with an American keyboard. 🙂

Open the file with a graphical editor. How do the Umlauts look?

Use the editor to create a second file, for example fileg.txt, in the same directory with contents like this: yäöüy.

View it in CMD:

type fileg.txt

How do the Umlauts look?

It is a feature, or a bug, that all common MS-Windows versions map the umlauts to different positions in the CMD window than the graphical editors do. If you know how to fix this, let me know.

What has happened? In the early 80s MS-DOS came into existence. At that time standards for character encoding were not very good. Only ASCII or ISO-646-IRV existed, which was at least a big step ahead of EBCDIC. But this standardized only the lower 128 characters (7 bit) and lacked some characters for almost any language other than English. There were attempts to put a small number of these additional letters into the positions of less relevant characters like „@“, „[“, „~“, „$“ etc. And software vendors started to make use of the upper 128 characters. Commodore, Atari, MS-DOS, Apple, NeXT, TeX and „any“ other software came up with a specific way to do that, often specific to a language region.

These solutions were incompatible with each other between different software systems, sometimes even between versions or language versions of the same software. Remember that at that time networks were unusual, and where they existed, they were proprietary to the operating system, with bridge solutions being extremely difficult to implement. Even floppy disks (the three-dimensional incarnations of the save button) had proprietary formats. So it did not hurt so much to have incompatible encodings.

But relatively early on, X11, which became the typical graphical system for Unix and later Linux, started to use standard encodings like the ISO-8859-x family, UTF-8 and UTF-16. Linux was already on ISO-8859-1 in version 0.99 in the early 90s and never tried to invent its own character encoding. Thank god for that…

Today all relevant systems have moved to the Unicode standard and standardized encodings like ISO-8859-x, UTF-8, UTF-16… But MS-Windows has done so only partially. The graphical system is using modern encodings, or at least Cp1252, which is a decent approximation. But the text-based system with the black window, in which CMD is running, is still using encodings from the MS-DOS times more than 30 years ago, like Cp850. This results in a break within the system, which is at least very annoying when working with cygwin or CMD windows.

Those who have a lot of courage can change this in the registry. Just change the entries for OEMCP and OEMHAL in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Nls\CodePage simultaneously. One of them is for input, the other one for output. So if you change only one, you will even get inconsistencies within the window… Sleep well with these nightmares. 🙂
Research on the internet has revealed that some have tried to change to UTF-8 (CP65001) and got a system that could not even boot as a result. Try it with a copy of a virtual system without too much risk, if you like… I have not verified this, so maybe it is just bad rumors meant to damage a great company that has brought us this interesting zoo of encodings within the same system. But anyway, try it at your own risk.
Maybe something like chcp and chhal can work as well. I have not tried that either…

The PDF format has experienced a success story on its way from being a quasi-proprietary format that could only be dealt with using Adobe tools to a format that is specified and standardized and can be handled with open source tools and tools from different vendors. It has become accepted that PDF is primarily a print format and that for web content HTML is the better choice. This was not clear 15 years ago, when people coming from print layout, who considered themselves trivially capable of adding the web to their portfolio, wanted to build whole web pages using PDF instead of HTML.

The format did change over time, and there are always PDF files that use specific features that do not work in certain PDF viewers.

But there are requirements for maintaining documents over a long period of time. Just consider long-term contracts that have a duration of 50 to 100 years. The associated documents usually need to be retained for that duration plus ten years. The issue of storing data for such a long time and being able to read it physically is a challenge in itself, but assuming this issue is addressed and files can still be read in 110 years, the file format should still be readable as well.

Now, companies disappear. A lot of them in 100 years, probably even big ones like Adobe, Apple, Microsoft, Oracle and others. We do not know which companies will disappear, only that it is very likely that some companies that are big now will disappear. Proprietary software may make it to another vendor when the company shuts down, to pay the salaries of the former employees for some more days. But it might eventually disappear as well. Open source software has a better chance of being available in 100 years, but that cannot be absolutely guaranteed either, unless special attention is given to that software over such a long time. And if software is not maintained, it is highly unlikely that it will run on the platforms that are common in 100 years.

So it is good to create a stable and simplified standard for long-term archiving. Software for accessing it can be written from scratch based on that specification. And it is more likely to remain available if a continuous need for it can be seen.

The idea is a format called PDF/A, where A stands for „archive“, which is an option for storing PDF files over a very long period of time. Many cool features of PDF have been removed from PDF/A, which makes it more robust and easier to handle. It is also important not to rely on additional data sources, for example for passwords of encrypted PDF files or for fonts. Encryption with password protection is a bad thing, because it is quite likely that the password will be gone in 100 years. Fonts need to be included, because finding them in 100 years might not be trivial. This usually means that proprietary fonts have to be avoided, unless the licensing allows inclusion of the fonts in the PDF file and unlimited reading. Including JavaScript, video, audio or forms is also a bad idea. Video should be archived separately, and it has the same issues as PDF when it comes to long-term archiving.

It is a nice thing to be able to use random access files and to have the possibility to efficiently move to any byte position for reading or writing.

This is even true for text files that have a fixed number of bytes per character, for example exactly one, exactly two or exactly four bytes per character. Maybe we should prefer the term „code point“ here.

Now the new standard for text files is UTF-8. For many languages, most characters are just one byte. But when moving to an arbitrary byte position, this may be in the middle of a byte sequence comprising a single character (code point) and not at the beginning of a character. How should that be handled without reading the whole file up to that position?

It is not that bad, because UTF-8 is self-synchronizing. It can be seen whether a particular byte is the first byte of a byte sequence or a successive one: first bytes start with 0 or 11, successive bytes start with 10. So by moving forward or backward a little bit, the start of a code point can be found. Going to a rough position is thus quite possible, when the average number of bytes per character for that language is known.
But when we do not want to go to a rough character position or to an almost exact byte position, but to an exact character position, which is usually the requirement we have, then things get hard.
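To illustrate the self-synchronization, here is a minimal sketch in Java that moves backwards from an arbitrary byte position to the start of the enclosing code point, relying only on the bit patterns described above:

public class Utf8Sync {
    // returns the start of the code point containing position pos:
    // only continuation bytes have the bit pattern 10xxxxxx
    static int codePointStart(byte[] data, int pos) {
        while (pos > 0 && (data[pos] & 0xC0) == 0x80) {
            pos--;
        }
        return pos;
    }

    public static void main(String[] args) {
        byte[] utf8 = "xäöüx".getBytes(java.nio.charset.StandardCharsets.UTF_8);
        // position 2 is the continuation byte of the two-byte sequence for „ä“
        System.out.println(codePointStart(utf8, 2)); // prints 1
    }
}

This finds a code point boundary, but it does not tell us which character position we have reached, which is exactly the problem described here.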

We either need to read the file from the beginning to be sure, or we need to use an indexing structure which helps start in the middle and only read a small section of the file completely. This can be done with extra effort as long as data is only appended at the end, but never overwritten in the middle of the file.

But dealing with UTF-8 and random access is much harder than dealing with plain bytes. Indexing structures need to be maintained in memory when accessing the file, or even as part of the file.

For about 20 years, the change to Unicode has been keeping us busy.

Why is it so difficult?

The most important problem is that it is hard to tell how the content of a file is to be interpreted. We do have some hacks that often allow recognizing this:
The suffix is helpful for common and well-defined file types, for example .jpg or .png. In other cases the content of the file is analyzed, and something like the following is found at the beginning of the file:
#!/usr/bin/ruby

From this it can be deduced that the file should be executed with ruby, more precisely with the ruby implementation that is found under /usr/bin/ruby. If it should be the ruby that comes first in the path, something like
#!/usr/bin/env ruby

could be used instead. On MS-Windows this works as well when using cygwin and cygwin’s Ruby, but not with the native Win32 or Win64 Ruby.

The next thing is quite annoying: which encoding is used for the file? It can be a useful agreement to assume UTF-8 or ISO-8859-1, but as soon as one team member forgets to configure the editor appropriately, a mess can be expected, because files appear that mix UTF-8 and ISO-8859-1 or other encodings, leading to obscure errors that are often hard to find and hard to fix.

Maybe it was a mistake, when C and Unix and libc were defined and developed, to understand files just as byte sequences without any meta information about the content. On the internet, MIME headers have proved useful for email, web pages and some other content. They allow the recipient of the communication to know how to interpret the content. It would have been good to have such meta information for files as well, allowing files to be renamed to anything with any suffix without losing readability. But in the seventies, when Unix and C and libc were initially created, such requirements were much less obvious, and it was part of the beauty to have a very simple concept of an I/O stream universally applicable to devices, files, keyboard input and some other kinds of I/O. MS-Windows has probably also been developed in C and has inherited this flaw. There were efforts to keep MS-Windows runnable on FAT file systems, which made it hard to benefit from the NTFS feature of having multiple streams in a file, where a second stream could have been used for the meta information. So as a matter of fact, suffixes are still used, text files are analyzed to guess the encoding, and magic bytes at the beginning of binary files are used to assume a certain type.

Of course some text formats like XML have ways of stating the encoding within the content. That requires iterating through several assumptions in order to read up to that encoding information, which is not as bad as it sounds, because usually only a few encodings have to be tried to find it out. It is a little bit annoying to deal with this when reading XML from a network connection instead of a file, which requires some smart caching mechanism.
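For XML this detection logic is already built into the standard libraries; a sketch using the StAX API that ships with Java (the file name is an example):

import java.io.FileInputStream;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamReader;

public class XmlEncodingDemo {
    public static void main(String[] args) throws Exception {
        try (FileInputStream in = new FileInputStream("document.xml")) {
            XMLStreamReader reader = XMLInputFactory.newInstance().createXMLStreamReader(in);
            // the encoding declared in the <?xml ...?> header, or null if there is none
            System.out.println(reader.getCharacterEncodingScheme());
        }
    }
}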

This is most dangerous with UTF-8 and ISO-8859-x (x=1,2,3,…), which are easy to mix up. The lower 128 characters are the same, and for many languages the vast majority of the content consists of these characters. So it is easy to combine two files with different encodings and not recognize it until the file is already somewhat in use and has undergone several conversion attempts to „fix“ the problem. Eventually this can lead to byte sequences that are not allowed in the encoding. Since spoken languages are usually quite redundant, it is usually possible to really fix such mistakes, but it can become quite expensive for large amounts of text. For UTF-16 this is easier, because files have to start with FFFE or FEFF (two bytes in hex notation), so it is relatively reliable to tell that a file is UTF-16 with a certain endianness. There is even such a magic sequence of three bytes to mark UTF-8, but it is not known by many people, not supported by the majority of software and not at all commonly used.
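Checking for these byte order marks is straightforward; a minimal sketch in Java (the file name is an example):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class BomCheck {
    public static void main(String[] args) throws IOException {
        byte[] b = Files.readAllBytes(Paths.get("some-file.txt"));
        if (b.length >= 2 && (b[0] & 0xFF) == 0xFE && (b[1] & 0xFF) == 0xFF) {
            System.out.println("UTF-16, big endian");
        } else if (b.length >= 2 && (b[0] & 0xFF) == 0xFF && (b[1] & 0xFF) == 0xFE) {
            System.out.println("UTF-16, little endian");
        } else if (b.length >= 3 && (b[0] & 0xFF) == 0xEF
                && (b[1] & 0xFF) == 0xBB && (b[2] & 0xFF) == 0xBF) {
            System.out.println("UTF-8 with the rarely used byte order mark");
        } else {
            System.out.println("no byte order mark, the encoding has to be guessed");
        }
    }
}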

In the MS-Windows world things are even more annoying, because the whole system is working with modern encodings, but this black CMD window is still using Cp850 or Cp437, which contain the same characters as ISO-8859-1, but in different positions. So an „ä“ might be displayed as a sigma character, for example. This incompatibility within the same system does have its disadvantages. In theory there should be ways to fix this by changing some settings in the registry, but actually almost nobody has done that, and messing with the registry is not exactly risk-free.

It is quite common that we need to enter a date into some software or read a date displayed by it. Often it is paired with a time, forming a timestamp. This might be a birthday or the deadline of an IT project nobody likes to be reminded of, or whatever. We deal with dates all the time.

It is a good idea to distinguish the internal representation of dates from the representation used for user interfaces. Unless legacy stands against it, the internal string representation of dates (for example in XML) should follow ISO 8601. Often purely binary or numerical representations are used internally, and they make sense. If legacy stands against this, it is a good moment to question and hopefully discard this legacy.

ISO 8601 is the date format <year>-<month>-<day>, for example 2012-11-16 (variants: 20121116, 121116, 12-11-16). I have been using this for the last 20 years, even on paper. What I like about it is that it is immediately obvious which part of the string stands for the year, the month and the day. How do we interpret the date 03/04/05? I also like that the ordering follows established standards: when writing numbers, the most significant digit comes first, which would be the year or the most significant digit of the year. For sorting, this format is also obviously better than any other string representation. Another advantage is that this is not the legacy date format of a major country, but is kind of international and neutral, thus acceptable to everybody. Since more and more software and web pages are using this date format even in their user interface, I would assume that everybody has seen it and is able to recognize and interpret it. By the way, I prefer writing it with the „-“ surrounding the month, so it is immediately clear to the human reader that this is a date, and it is easier to decompose. Eight digits are too much for most humans to grasp at once.
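The java.time API introduced with Java 8 uses ISO 8601 as its default string format, so parsing and printing it needs no explicit format pattern at all; a minimal sketch:

import java.time.LocalDate;
import java.time.LocalDateTime;

public class Iso8601Demo {
    public static void main(String[] args) {
        LocalDate date = LocalDate.parse("2012-11-16");
        System.out.println(date); // prints 2012-11-16
        System.out.println(LocalDateTime.parse("2012-11-16T17:33:12"));
    }
}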

For the user interface, good applications should respect user preferences. Most of the time a language setting is around, and applications should follow the conventions of this language setting. Now, the official date format for the German language in Switzerland, Austria and Germany is ISO 8601, i.e. 2012-11-10 for today, but people are still using the legacy format 10.11.2012 more often, and even most software is still clinging to the legacy formats. I would really let the user choose between the legacy format and the standard format, rather than imposing the legacy.

In principle, current operating systems provide mechanisms to set preferences, and applications should follow them. So it should be sufficient to set the preference for a certain date format together with the language settings once and retain it as long as the user account is valid. But these mechanisms are somehow flawed:

it can be hard to change the default date format

is it a setting of the user account, of the specific computer or of the combination of the two?

Most software of today ignores these settings anyway, at least partly.

It is quite common that some software uses the localized date format internally and only works if the user has chosen this format in his or her account settings. This one really hurts, but I have seen it.

I recommend testing software with really distant language settings, such as Arabic, Chinese or Uzbek (if that is not your native language), to see if it works with any settings, and even whether it is possible to use data produced under Chinese settings when running with Arabic ones, for example. This might help to eliminate such dependencies, not only those concerning the date format. I consider this useful even if the software is meant to be used only by local users who are supposed to have the same native language. Is that really true, with no foreigner around who runs his computer with his native language settings? And no future plans to use the software abroad that come up several years later?

For entering dates, it has become a best practice to provide a calendar where the right year and month can be chosen and the day of the month can be clicked. Everybody has already used that with good (web) applications. But it is useful to provide a shortcut by allowing the date to be entered as a string. Several string formats can be recognized, and nothing is wrong with recognizing the local legacy date format. But ISO 8601 should always be recognized. A good example of this is the Linux date command, which accepts dates in ISO 8601 formats.

In any case, the date needs to be translated internally into a canonical format, to ensure that 2012-11-16 and 16.11.2012 are recognized as the same date, as in the sketch below.
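A minimal sketch in Java of recognizing both the ISO 8601 format and, as a fallback, the legacy German format, translating both into the same internal representation (the fallback pattern is just an example):

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;

public class DateInput {
    static LocalDate parseDate(String input) {
        try {
            return LocalDate.parse(input); // ISO 8601, e.g. 2012-11-16
        } catch (DateTimeParseException e) {
            // fall back to the legacy German format, e.g. 16.11.2012
            return LocalDate.parse(input, DateTimeFormatter.ofPattern("d.M.uuuu"));
        }
    }

    public static void main(String[] args) {
        System.out.println(parseDate("2012-11-16")); // 2012-11-16
        System.out.println(parseDate("16.11.2012")); // 2012-11-16
    }
}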

A suggestion I would make for the POSIX, Unix and GNU tools: the „date“ program has been around on Unix and Linux (and probably MacOS X) systems for decades. Unfortunately the default is to use an obscure US format for output, ignoring locale settings. I am afraid that it is too late to change this, because too much software relies on this (mis-)behavior. But at least a companion like idate (for „international date“) could be provided that has the same functionality and the same options, but uses an ISO 8601 format as the default for its output. fgrep, grep and egrep are examples of providing slightly different default behaviors of basically the same program under three names, or as three variants of it. So I would like to have this:

$ idate
2012-11-16T17:33:12

Maybe I will suggest this to the developers of coreutils, which contains the GNU variant of date commonly used in the Linux world. ls is friendlier, because it recognizes the environment variable TIME_STYLE.

For now I suggest the following for .bashrc (even in cygwin, if you use MS-Windows):
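A minimal variant that produces the output shown above could be this alias (a sketch, assuming GNU date):

alias idate='date +%Y-%m-%dT%H:%M:%S'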