Introduction

In some cases, you need to know the best codepage (encoding) to use when transferring text over the internet or storing it in a text file. One could argue that Unicode always does the trick, but I needed the most efficient (byte-saving) way to transfer data.

Detecting a codepage from text is a very tricky task. But luckily, Microsoft provides the MLang API, whose IMultiLanguage3 interface is used for outbound encoding detection.

Similarly, the IMultiLanguage2 interface has a function to detect the encoding of an incoming byte array. This is very handy for codepage detection of text stored in files or received over the internet.

The EncodingTools class offers some easy-to-use functions to determine the best encoding for different scenarios.

Background

The Problem

I started this along with another component that constructs MIME-conformant emails. The body of the email is passed as a string, and the user had to provide the charset for the transfer encoding by hand. This is fine as long as you know the target character set or always assume Unicode, but it is definitely not a good solution for an end-user GUI application (most users do not even know what an "encoding" is).

I wondered if it is possible to detect the best encoding from the given text.

The Dirty Hack Attempt

My first attempt was a simple brute-force attack:

1. Build a list of suitable encodings (only ISO codepages and Unicode)
2. Iterate over all considered encodings
3. Encode the text using this encoding
4. Decode it back to Unicode
5. Compare the results for errors
6. If there are no errors, remember the encoding that produced the fewest bytes
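A sketch of this brute-force loop, assuming nothing beyond .NET's Encoding class (the candidate list here is illustrative; the round-trip check relies on the default '?' replacement fallback to expose lossy conversions):

```csharp
using System.Text;

static class BruteForceDetector
{
    // Illustrative candidate list: ASCII, one ISO codepage, and the
    // Unicode flavors. A real list would contain all the ISO codepages.
    static readonly Encoding[] Candidates =
    {
        Encoding.ASCII,
        Encoding.GetEncoding("iso-8859-1"),
        Encoding.UTF8,
        Encoding.Unicode,
        Encoding.UTF32,
    };

    // Returns the candidate that survives an encode/decode round trip
    // with the fewest bytes, or null if every conversion is lossy.
    public static Encoding FindSmallestLossless(string text)
    {
        Encoding best = null;
        int bestLength = int.MaxValue;
        foreach (Encoding enc in Candidates)
        {
            byte[] bytes = enc.GetBytes(text);
            // Lossy conversions replace characters with '?', so the
            // decoded text no longer matches the original.
            if (enc.GetString(bytes) == text && bytes.Length < bestLength)
            {
                best = enc;
                bestLength = bytes.Length;
            }
        }
        return best;
    }
}
```

As the next paragraph explains, a scheme like this can only separate ASCII, 8-bit, and the Unicode flavors; every single-byte candidate that round-trips produces the same byte count.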

This is not only ugly, it does not even work properly: all single-byte encodings are binary-identical in their encoding result. The codepage is only used to map each byte to the correct character for display.

So this method can only distinguish between ASCII (7-bit), single-byte (8-bit), and the different Unicode flavors (UTF-7, UTF-8, UTF-16, etc.).

Finding Something Better

Then I remembered the IMultiLanguage2::DetectInputCodepage method, introduced with Internet Explorer 5.0. It detects the encoding used in a text (Internet Explorer uses it for automatic codepage detection when a page's header specifies none). Still, input detection alone was not suitable for my problem, and I wondered whether there had been further development since version 5.0. A wrapper for DetectInputCodepage is provided in the EncodingTools class.

Internet Explorer 5.5 introduced a new interface exported from the MLang DLL: IMultiLanguage3. This is what MSDN says about it: "This interface extends IMultiLanguage2 by adding outbound text detection functionality to it."

Wow! This sounded more than promising! The interface has only two methods:

DetectOutboundCodePage (for strings)

DetectOutboundCodePageInIStream (for streams)

I chose to use the first one.

Using MLang

The MLang.dll is in the Windows\system32 directory. Along with some exported functions, it provides several COM classes, but it does not contain a type library, so the easy way (Add Reference in Visual Studio) does not work.

The MLang.idl is part of the Platform SDK and can be found in the include directory. To create an assembly from the IDL file, use the following commands from the Visual Studio Command Prompt:
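Assuming the Platform SDK's midl compiler and the .NET Framework's tlbimp tool are on the path, the sequence is roughly as follows (the output name and namespace match the MultiLanguage interop assembly used later in this article):

```shell
rem Compile the IDL into a type library (run from the SDK include directory)
midl mlang.idl /tlb mlang.tlb

rem Import the type library as a .NET interop assembly
tlbimp mlang.tlb /namespace:MultiLanguage /out:MultiLanguage.dll
```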

Then I added the source files to my project (no more MultiLanguage.dll assembly required).

Using IMultiLanguage3::DetectOutboundCodePage

Getting an instance of the COM class implementing IMultiLanguage3 is straightforward:

// get the IMultiLanguage3 interface
MultiLanguage.IMultiLanguage3 multilang3 =
    new MultiLanguage.CMultiLanguageClass();
if (multilang3 == null)
    throw new System.Runtime.InteropServices.COMException(
        "Failed to get IMultiLanguage3");

The next thing is to fill the parameters.

The first parameter, dwFlags, is a combination of the tagMLCPF flags. I chose always to set the MLDETECTF_VALID_NLS because the result will be used for conversion.

The MLDETECTF_PRESERVE_ORDER and MLDETECTF_PREFERRED_ONLY are used depending on the parameters passed to my detection method.

The next two parameters (lpWideCharStr and cchWideChar) are simply the string passed for detection and its length.

With the next two parameters (puiPreferredCodePages and nPreferredCodePages), the detection can be limited to a subset of all codepages. This is very useful if only certain codepages are acceptable in the result.

The last three parameters contain the result of detection after the method has completed successfully.
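Putting those parameters together, a wrapper might look roughly like the sketch below. It assumes a tlbimp-generated MultiLanguage interop namespace in which the codepage arrays are marshaled as IntPtr, and the flag names follow the tagMLCPF enumeration; verify the generated signatures before use, as they depend on how the assembly was built.

```csharp
using System;
using System.Runtime.InteropServices;

public static class EncodingDetection
{
    // Sketch of a DetectOutboundCodePage wrapper. All MultiLanguage.*
    // names are assumed to come from the tlbimp-generated interop
    // assembly; the exact marshaling may differ.
    public static int[] DetectOutgoingCodepages(
        string input, int[] preferredCodepages, bool preserveOrder)
    {
        MultiLanguage.IMultiLanguage3 multilang3 =
            new MultiLanguage.CMultiLanguageClass();

        // MLDETECTF_VALID_NLS: only report codepages valid for conversion.
        MultiLanguage.tagMLCPF flags =
            MultiLanguage.tagMLCPF.MLDETECTF_VALID_NLS;
        if (preserveOrder)
            flags |= MultiLanguage.tagMLCPF.MLDETECTF_PRESERVE_ORDER;
        if (preferredCodepages != null && preferredCodepages.Length > 0)
            flags |= MultiLanguage.tagMLCPF.MLDETECTF_PREFERRED_ONLY;

        uint detectedCount = 16;
        IntPtr pPreferred = IntPtr.Zero;
        IntPtr pDetected =
            Marshal.AllocCoTaskMem(sizeof(uint) * (int)detectedCount);
        try
        {
            uint preferredCount = 0;
            if (preferredCodepages != null && preferredCodepages.Length > 0)
            {
                preferredCount = (uint)preferredCodepages.Length;
                pPreferred = Marshal.AllocCoTaskMem(
                    sizeof(uint) * preferredCodepages.Length);
                Marshal.Copy(preferredCodepages, 0, pPreferred,
                    preferredCodepages.Length);
            }

            ushort specialChar = (ushort)'?';
            multilang3.DetectOutboundCodePage(
                (uint)flags, input, (uint)input.Length,
                pPreferred, preferredCount,
                pDetected, ref detectedCount, ref specialChar);

            // Copy the detected codepages back into a managed array.
            int[] result = new int[detectedCount];
            Marshal.Copy(pDetected, result, 0, (int)detectedCount);
            return result;
        }
        finally
        {
            if (pPreferred != IntPtr.Zero) Marshal.FreeCoTaskMem(pPreferred);
            Marshal.FreeCoTaskMem(pDetected);
            Marshal.FinalReleaseComObject(multilang3);
        }
    }
}
```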

Using IMultiLanguage2::DetectInputCodepage

After being able to choose the best encoding to send text over the internet, or save it to a stream, the next task was to detect the best encoding for incoming text if the sender (or storer) did not choose the best encoding.

The DetectInputCodepage has (at least) two practical uses. By default, Windows stores text files in the current default (UI) encoding. For example, on my system this is Windows-1252, while a user from Russia will write text using Windows-1251. Both codepages are single-byte and have no preamble, so a text file contains no information about the codepage used.

So if you open a text file created with a codepage different from the current UI codepage, a StreamReader will read the text as if it were stored in the current UI codepage. (The encoding detection of the StreamReader is mostly a preamble check, so it fails for almost any non-Unicode file, and for Unicode files without a BOM.) Most characters outside the common ASCII charset will be displayed incorrectly.
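The effect is easy to reproduce with two single-byte codepages. This sketch decodes Windows-1251 bytes as Windows-1252: every byte maps to some character, so no error is raised, but the result is garbage (on .NET Core / .NET 5+, the Windows codepages require the System.Text.Encoding.CodePages package):

```csharp
using System.Text;

class WrongCodepageDemo
{
    static void Main()
    {
        // Needed on .NET Core / .NET 5+ to make Windows codepages available.
        Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);

        // Russian text stored as Windows-1251 bytes (no BOM, no preamble).
        byte[] stored = Encoding.GetEncoding(1251).GetBytes("Привет");

        // A reader on a Windows-1252 system decodes the same bytes silently...
        string misread = Encoding.GetEncoding(1252).GetString(stored);
        System.Console.WriteLine(misread);   // "Ïðèâåò" — garbage, but no error

        // ...while the correct codepage restores the original text.
        System.Console.WriteLine(Encoding.GetEncoding(1251).GetString(stored));
    }
}
```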

This is where the DetectInputCodepage comes in handy. Its accuracy is not 100% but it is definitely better than the one from the StreamReader.

In the demo application, you can double-click on an encoding to test which method has the better result (see "Testing the DetectInputCodepage performance" below).

The other practical use is detecting the encoding of emails from badly implemented MIME mailers. Some weird mailers send emails in 8-bit encoding without specifying any character set in the header. In this case, DetectInputCodepage can help a lot.

As with the DetectOutboundCodePage method, I changed the method signature a little and added the MLDETECTCP enumeration. The resulting code looks like this:
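A sketch of such a wrapper, with the same caveat that the MultiLanguage interop types and signatures are assumed from a tlbimp-generated assembly and should be checked against the generated code:

```csharp
using System;
using System.Runtime.InteropServices;

public static class InputDetection
{
    // Sketch of a DetectInputCodepage wrapper. The flags parameter is a
    // tagMLDETECTCP combination (0 = MLDETECTCP_NONE); DetectEncodingInfo
    // comes from the generated interop assembly.
    public static int[] DetectInputCodepages(byte[] input, int maxResults,
        uint flags)
    {
        if (input == null || input.Length == 0)
            return new int[0];

        MultiLanguage.IMultiLanguage2 multilang2 =
            new MultiLanguage.CMultiLanguageClass();
        try
        {
            MultiLanguage.DetectEncodingInfo[] detected =
                new MultiLanguage.DetectEncodingInfo[maxResults];
            int scores = detected.Length;   // in: capacity, out: hit count
            int srcLen = input.Length;

            // The source buffer is passed by reference to its first byte.
            multilang2.DetectInputCodepage(flags, 0,
                ref input[0], ref srcLen, ref detected[0], ref scores);

            // Extract the codepage numbers from the detection records.
            int[] result = new int[scores];
            for (int i = 0; i < scores; i++)
                result[i] = (int)detected[i].nCodePage;
            return result;
        }
        finally
        {
            Marshal.FinalReleaseComObject(multilang2);
        }
    }
}
```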

My first tests were not that promising: I always had a COMException with E_FAIL thrown when I tried to detect a codepage.

DetectInputCodepage will fail on texts that are too short or that are not prefixed with a BOM (Byte Order Mark / encoding preamble). There are two kinds of failures: if the input data is very short (less than 60 bytes), there is a good chance that the wrong codepage will be detected; below 200 bytes, there is a good chance that DetectInputCodepage will return E_FAIL because it could not decide which codepage to use. For the latter problem, I implemented a nasty workaround: I simply replicated the input data until it reached 256 bytes. This seems to return reasonable results even for short strings.
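The workaround can be sketched as a small helper that repeats the input bytes until a minimum size is reached (the function name and threshold parameter are illustrative):

```csharp
using System;

static class DetectionPadding
{
    // Replicates short input until it reaches minSize bytes, so that
    // DetectInputCodepage no longer fails with E_FAIL on short texts.
    public static byte[] PadForDetection(byte[] input, int minSize = 256)
    {
        if (input.Length == 0 || input.Length >= minSize)
            return input;

        byte[] padded = new byte[minSize];
        for (int i = 0; i < padded.Length; i++)
            padded[i] = input[i % input.Length];  // repeat the original bytes
        return padded;
    }
}
```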


About the Author

Carsten started programming in Basic and Assembler back in the 80's when he got his first C64. After switching to an x86-based system, he started programming in Pascal and C, and began Windows programming with the arrival of Windows 3.0. After working for various internet companies, developing linguistic text analysis and classification software for 25hours communications, he is now working as a contractor.

Comments and Discussions

I was testing this and it seemed to be working fine in a test project. When I embedded it into my real project, it didn't detect things correctly, and I wondered why. I then realized that my real project is built for the x86 platform. I changed the test project to the x86 platform and BANG! It stopped working in the test project too. When I changed it back to x64 or Any CPU, it worked again.

I wonder what difference changing the target platform in the project settings makes? I am using it in VB.NET, .NET Framework 3.5, Visual Studio 2008.

Any help will be appreciated.

thanks,
Sameers

BTW, I tried to rebuild the EncodingTools for the x86 platform and also for Any CPU, but whenever the calling application's platform is x86, it doesn't read files with the correct encoding.

No, there is no error message; the characters are just not read properly.
It does read the file, just that the file contents are not read as they should be. Like
"Casque tour de cou - ?tanche - Rouge" is read instead of
"Casque tour de cou - étanche - Rouge"