Karl Wettin added a comment - 07/Mar/07 06:53

Foot note:

The difference between this and the Nutch n-gram-based language identifier is considerable. For a start, this calculates the vertices on full words, edge-grams, and bigrams where the two characters are the same. The frequency is normalized against the text size, and the same goes for the analysis at classification time. The n most important tokens (feature selection using ranked information gain) are selected for consideration by the classifier, currently 200 (out of 1000 per language) per registered language. So with the default test (5 languages) there are 1000 tokens. It is really speedy on my dual core.
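
As a rough illustration, that token model might look something like the following sketch (full words, edge-grams, and bigrams of doubled characters, with frequencies normalized by text size); the class name, the edge-gram lengths, and the tokenization regex are assumptions, not taken from the patch:

import java.util.HashMap;
import java.util.Map;

public class TokenExtractorSketch {

    // Extract full words, edge-grams, and doubled-character bigrams,
    // normalizing each token's frequency by the text length.
    public static Map<String, Double> extract(String text) {
        Map<String, Integer> counts = new HashMap<>();
        for (String word : text.toLowerCase().split("\\W+")) {
            if (word.isEmpty()) continue;
            add(counts, word);                                        // full word
            for (int n = 1; n <= Math.min(4, word.length()); n++) {   // edge-gram length is an assumption
                add(counts, "^" + word.substring(0, n));              // leading edge-gram
                add(counts, word.substring(word.length() - n) + "$"); // trailing edge-gram
            }
            for (int i = 0; i + 1 < word.length(); i++) {
                if (word.charAt(i) == word.charAt(i + 1)) {
                    add(counts, word.substring(i, i + 2));            // bigram of two identical characters
                }
            }
        }
        Map<String, Double> normalized = new HashMap<>();
        double size = Math.max(1, text.length());
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            normalized.put(e.getKey(), e.getValue() / size);          // normalize against text size
        }
        return normalized;
    }

    private static void add(Map<String, Integer> counts, String token) {
        counts.merge(token, 1, Integer::sum);
    }
}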

Karl Wettin added a comment - 08/Mar/07 06:33

Added support for all the larger modern Germanic, Balto-Slavic, and Romance languages, plus some others. I'll add the complete Indo-Iranian tree soon.

The test case will gather and classify random pages from Wikipedia in the target language. False positives only occur on articles that are too small (again, I maintain that 160 characters, one paragraph, is required) or on articles with very mixed language (for example an article that is mostly the discography of a non-native band).

Documents with mixed languages could probably be handled at paragraph level, reporting that the document is in language A but contains paragraphs (quotes, etc.) in languages B and C.
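
As a rough illustration of that paragraph-level idea (nothing like this is in the patch; the interface and the 160-character threshold are assumptions), one could wrap any document classifier like so:

import java.util.LinkedHashMap;
import java.util.Map;

public class ParagraphLanguageSketch {

    // Stand-in for whatever classifier is available, e.g. the SVM-backed one discussed here.
    interface Detector {
        String detect(String text);
    }

    // Classify each paragraph separately and report how many paragraphs fall in each language.
    public static Map<String, Integer> languageMix(Detector detector, String document) {
        Map<String, Integer> paragraphsPerLanguage = new LinkedHashMap<>();
        for (String paragraph : document.split("\\n\\s*\\n")) {
            if (paragraph.trim().length() < 160) continue;    // too short to classify reliably
            paragraphsPerLanguage.merge(detector.detect(paragraph), 1, Integer::sum);
        }
        return paragraphsPerLanguage;                         // dominant language plus quoted languages
    }
}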

Supported languages (35):

swedish
danish
norwegian
icelandic
faroese

dutch
afrikaans
frisian

low german
german

english

latvian
lithuanian

russian
ukrainian
belarusian

czech
slovak
polish

bosnian
croatian
macedonian
bulgarian
slovenian
serbian

italian
spanish
french
portuguese

armenian

greek

hungarian
finnish
estonian

modern persian (farsi)

Some languages in the training set also have problems with false positive classifications due to their low representation on Wikipedia:

Faroese, with its 80 paragraphs (the mean is 600), gets some 60% false positives.

Macedonian, with its 150 paragraphs, gets 45% false positives, most often classified as Serbian.

Croatian is often confused with Bosnian.

Also, some of these South Slavic languages can be written in either the Cyrillic or the Latin alphabet, which is something I should consider a bit.

All other languages are detected without any problems.

One simple way to reduce the false positives here is to manually check the training data. There are some <!-- html comments --> here and there; hopefully they are washed away by the feature selection.

Preparing the training data (downloading from Wikipedia, parsing, tokenizing) for all of these languages takes just a few minutes on my dual core, but the token feature selection (selecting the 7,000 most prominent tokens out of 65,000, in 20,000 paragraphs of text) takes 90 minutes and consumes something like 700 MB of heap.
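
For reference, ranked information-gain selection of that sort can be done with Weka's attribute selection filter roughly as follows; the ARFF path and the number of attributes to keep are placeholders, and this is not necessarily the exact code used here:

import java.io.BufferedReader;
import java.io.FileReader;
import weka.attributeSelection.InfoGainAttributeEval;
import weka.attributeSelection.Ranker;
import weka.core.Instances;
import weka.filters.Filter;
import weka.filters.supervised.attribute.AttributeSelection;

public class FeatureSelectionSketch {

    // Keep only the top-ranked tokens by information gain, e.g. 7000 overall.
    public static Instances selectTopTokens(String arffPath, int numToKeep) throws Exception {
        Instances data = new Instances(new BufferedReader(new FileReader(arffPath)));
        data.setClassIndex(data.numAttributes() - 1);   // the language label is the class attribute

        Ranker ranker = new Ranker();
        ranker.setNumToSelect(numToKeep);

        AttributeSelection filter = new AttributeSelection();
        filter.setEvaluator(new InfoGainAttributeEval());
        filter.setSearch(ranker);
        filter.setInputFormat(data);
        return Filter.useFilter(data, filter);          // data reduced to the selected tokens
    }
}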

Once the ARFF file is created, the classifier takes 10 minutes to compile (the support vectors), and once done it consumes no more than a handful of MB. It could probably be serialized and dumped to disk for faster loading at startup time.
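
Recent Weka versions ship a helper that makes the serialize-to-disk idea a couple of lines (the file name below is a placeholder):

import weka.classifiers.Classifier;
import weka.core.SerializationHelper;

public class ClassifierPersistenceSketch {

    // Dump a trained classifier to disk, e.g. "language-classifier.model".
    public static void save(Classifier trained, String path) throws Exception {
        SerializationHelper.write(path, trained);
    }

    // Load it back at startup instead of recompiling the support vectors.
    public static Classifier load(String path) throws Exception {
        return (Classifier) SerializationHelper.read(path);
    }
}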

The time it takes to classify a document will of course depend on its size; Wikipedia articles average out at about 500 ms.

For really speedy classification of very large texts one could switch to REPTree instead of SVM. It does the job 95% as well (given a big enough text), but in 1% of the time, about 2 ms per classification. I still focus on 160-character paragraphs though.
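
Swapping classifiers is cheap in Weka since both sit behind the same interface; something like this would switch between the SVM and REPTree (assuming SMO is the SVM implementation in use, which is not stated explicitly here):

import weka.classifiers.Classifier;
import weka.classifiers.functions.SMO;
import weka.classifiers.trees.REPTree;
import weka.core.Instances;

public class ClassifierChoiceSketch {

    public static Classifier build(Instances trainingData, boolean preferSpeed) throws Exception {
        Classifier classifier = preferSpeed
                ? new REPTree()   // roughly 95% of the quality on large texts, at a fraction of the time
                : new SMO();      // Weka's support vector machine
        classifier.buildClassifier(trainingData);
        return classifier;
    }
}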

The next step is optimization. The current training data for the 35 languages is 25,000 instances and 7,000 attributes. That is an insane amount of data, way too much.

I think the CPU performance and RAM requirements can be optimized quite a bit by simply making the number of training instances (paragraphs) more even, say 500 per language; the distribution is quite Gaussian right now, and that is wrong. Also, selecting 100 attributes (tokens) per language for use in the SVM rather than the current 200 does not hurt classification quality much, but would bring the time for creating the training data and building the classifier down to roughly the square root of what it is now.
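
Evening out the training set could be as simple as capping how many paragraphs are kept per language; a trivial sketch (the data representation is made up for illustration):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TrainingBalanceSketch {

    // Keep at most maxPerLanguage paragraphs per language, e.g. 500.
    public static List<String[]> cap(List<String[]> labeledParagraphs, int maxPerLanguage) {
        Map<String, Integer> kept = new HashMap<>();
        List<String[]> balanced = new ArrayList<>();
        for (String[] pair : labeledParagraphs) {          // pair[0] = language, pair[1] = paragraph
            int soFar = kept.getOrDefault(pair[0], 0);
            if (soFar < maxPerLanguage) {
                balanced.add(pair);
                kept.put(pair[0], soFar + 1);
            }
        }
        return balanced;
    }
}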

For now I run with my 6 languages. It takes just a minute to download the data from Wikipedia, tokenize, and build the classifier, and classification time is about 100 ms on average for a Wikipedia article.

Peter Taylor added a comment - 08/Nov/07 18:15

Just out of curiosity, which version of Weka are you using?

I ask because in newer versions of Weka, the LanguageClassifier.java source file has the following problem:

stringToWordVector.setDelimiters(";"); <-- the setDelimiters method has disappeared
stringToWordVector.setNormalizeDocLength(new SelectedTag(StringToWordVector.FILTER_NORMALIZE_ALL, StringToWordVector.TAGS_FILTER)); <-- this works

while in older versions of Weka:

stringToWordVector.setDelimiters(";"); <-- this works
stringToWordVector.setNormalizeDocLength(new SelectedTag(StringToWordVector.FILTER_NORMALIZE_ALL, StringToWordVector.TAGS_FILTER)); <-- older versions of the API simply expect a boolean value rather than a SelectedTag object as a parameter

Please advise.

Cheers,
Peter
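
One way to cope with both Weka API generations at runtime is to probe for the available signatures via reflection; a sketch (which exact Weka release changed each signature is not verified here, and the SelectedTag branch assumes the newer jar is on the compile classpath):

import java.lang.reflect.Method;
import weka.core.SelectedTag;
import weka.filters.unsupervised.attribute.StringToWordVector;

public class WekaCompatSketch {

    public static void configure(StringToWordVector filter) throws Exception {
        // setDelimiters(String) only exists in older releases.
        try {
            Method setDelimiters = filter.getClass().getMethod("setDelimiters", String.class);
            setDelimiters.invoke(filter, ";");
        } catch (NoSuchMethodException e) {
            // Newer releases dropped the method; delimiters are configured differently there.
        }

        // setNormalizeDocLength takes a SelectedTag in newer releases, a boolean in older ones.
        try {
            Method newStyle = filter.getClass().getMethod("setNormalizeDocLength", SelectedTag.class);
            newStyle.invoke(filter, new SelectedTag(
                    StringToWordVector.FILTER_NORMALIZE_ALL, StringToWordVector.TAGS_FILTER));
        } catch (NoSuchMethodException e) {
            Method oldStyle = filter.getClass().getMethod("setNormalizeDocLength", boolean.class);
            oldStyle.invoke(filter, true);
        }
    }
}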

Karl Wettin added a comment - 09/Nov/07 01:54

Peter Taylor - 08/Nov/07 10:15 AM:
> Just out of curiosity, which version of Weka are you using...

You can also check out the all-Lucene, no-dependencies Bayesian classifier in LUCENE-1039, with a spell checker in the test case.

I have 600 instances per class and 25 classes. I get great results with ^3-4, 3- and 3-5$ n-grams of context-sensitive 2-5 word sentences. Using a LUCENE-550 index is 4-5 times faster (100-300 ms) than a RAMDirectory (500-1600 ms) for classification.

Ken Krugler added a comment - 24/Jan/10 19:35

I think Nutch (and eventually Mahout) plan to use Tika for charset/mime-type/language detection going forward.

I've filed an issue, TIKA-369, about improving the current Tika code, which is a simplification of the Nutch code. When using it on lots of docs there were performance issues, and for small chunks of text the quality isn't very good.

It would be interesting if Karl could comment on the approach Ted Dunning took (many years ago, in 1994) versus what he did.

Karl Wettin added a comment - 26/Jan/10 13:34

Hi Ken,

it's hard for me to compare. I'll rant a bit about my experience with language detection though.

I still haven't found one strategy that works well on any text: a user query, a sentence, a paragraph, or a complete document. 1-5 grams using SVM or NB work pretty well for all of them, but you really need to train with the same sort of data you want to classify. Even when training with a mix of text lengths, it tends to perform a lot worse than having one classifier for each data type. And you still probably want to twiddle with the classifier knobs to make it work well with the data you are classifying and training with.

In some cases I've used 1-10 grams and other times I've used 2-4 grams. Sometimes I've used SVM and other times I've used a simple decision tree.

To sum it up, to achieve good quality I've always had to build a classifier for the specific use case. Weka has a great test suite for figuring out what to use: set it up, press play, and return a week later to find out the answer.
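
For completeness, the "set it up and press play" part can look roughly like this in Weka, cross-validating a handful of candidate classifiers on the same data (the candidate list and fold count are arbitrary):

import java.util.Random;
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.bayes.NaiveBayes;
import weka.classifiers.functions.SMO;
import weka.classifiers.trees.REPTree;
import weka.core.Instances;

public class ClassifierComparisonSketch {

    // 10-fold cross-validation of a few candidates; the highest pctCorrect wins.
    public static void compare(Instances data) throws Exception {
        data.setClassIndex(data.numAttributes() - 1);
        Classifier[] candidates = { new SMO(), new NaiveBayes(), new REPTree() };
        for (Classifier candidate : candidates) {
            Evaluation evaluation = new Evaluation(data);
            evaluation.crossValidateModel(candidate, data, 10, new Random(1));
            System.out.printf("%s: %.2f%% correct%n",
                    candidate.getClass().getSimpleName(), evaluation.pctCorrect());
        }
    }
}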

Jan Høydahl added a comment - 27/Jun/11 08:26

Reviving this issue - it would be interesting to arrive at a proposal for whether this code could replace Tika's existing LanguageIdentifier. We still need to solve the case of small texts. I'm thinking of a hybrid solution where we fall back to a dictionary-based detector for small texts, e.g. based on OOo dictionaries.
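
A minimal sketch of that hybrid idea, assuming a word-list lookup as the fallback (the interfaces, the 160-character threshold, and the word-list representation are all assumptions):

import java.util.Locale;
import java.util.Map;
import java.util.Set;

public class HybridDetectorSketch {

    // Stand-in for the profile/classifier based detector used for longer texts.
    interface Detector {
        String detect(String text);
    }

    public static String detect(String text, Detector profileBased,
                                Map<String, Set<String>> dictionaries) { // language -> word list, e.g. from OOo
        if (text.length() >= 160) {
            return profileBased.detect(text);
        }
        // Short text: pick the language whose dictionary covers the most words.
        String best = "unknown";
        int bestHits = 0;
        for (Map.Entry<String, Set<String>> entry : dictionaries.entrySet()) {
            int hits = 0;
            for (String word : text.toLowerCase(Locale.ROOT).split("\\W+")) {
                if (entry.getValue().contains(word)) {
                    hits++;
                }
            }
            if (hits > bestHits) {
                bestHits = hits;
                best = entry.getKey();
            }
        }
        return best;
    }
}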