[ https://issues.apache.org/jira/browse/LUCENE-2414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Robert Muir resolved LUCENE-2414.
---------------------------------
Resolution: Fixed
Committed revision 940447.
> add icu-based tokenizer for unicode text segmentation
> -----------------------------------------------------
>
> Key: LUCENE-2414
> URL: https://issues.apache.org/jira/browse/LUCENE-2414
> Project: Lucene - Java
> Issue Type: New Feature
> Components: contrib/*
> Affects Versions: 3.1
> Reporter: Robert Muir
> Assignee: Robert Muir
> Fix For: 3.1
>
> Attachments: LUCENE-2414.patch, LUCENE-2414.patch, LUCENE-2414.patch, LUCENE-2414.patch, LUCENE-2414.patch
>
>
> I pulled out the last part of LUCENE-1488, the tokenizer itself, and cleaned it up a bit.
> The idea is simple:
> * The first step is to divide the text along writing-system boundaries (scripts)
> * You supply an ICUTokenizerConfig (or just use the default), which lets you tailor segmentation on a per-writing-system basis.
> * This tailoring can be any BreakIterator: rule-based, dictionary-based, or your own.
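The script-boundary step above can be sketched as follows. This is an illustration only: Lucene's tokenizer uses ICU4J for script detection, while this sketch substitutes the JDK's Character.UnicodeScript, and the ScriptRuns class and runs method are invented names for the example.

```java
import java.lang.Character.UnicodeScript;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the "divide text into script runs" step.
// Lucene's implementation uses ICU4J; here the JDK's Character.UnicodeScript
// stands in, and the class/method names are invented for illustration.
public class ScriptRuns {
    public static List<String> runs(String text) {
        List<String> out = new ArrayList<>();
        UnicodeScript current = null;
        int start = 0;
        int i = 0;
        while (i < text.length()) {
            int cp = text.codePointAt(i);
            UnicodeScript s = UnicodeScript.of(cp);
            // COMMON/INHERITED characters (punctuation, combining marks)
            // attach to the surrounding run rather than starting a new one.
            boolean neutral = (s == UnicodeScript.COMMON || s == UnicodeScript.INHERITED);
            if (current != null && !neutral && s != current) {
                out.add(text.substring(start, i)); // script changed: close the run
                start = i;
                current = s;
            } else if (current == null && !neutral) {
                current = s; // first concrete script seen
            }
            i += Character.charCount(cp);
        }
        if (start < text.length()) {
            out.add(text.substring(start));
        }
        return out;
    }
}
```

Each run can then be handed to whichever BreakIterator the config supplies for that script.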
> The default implementation (if you do not customize) simply applies UAX#29, but with tailorings for writing systems with no clear word division:
> * Thai (uses dictionary-based word breaking)
> * Khmer, Myanmar, Lao (use custom rules for syllabification)
> Additionally, more as an example, I have a tailoring for Hebrew that treats punctuation specially. (People have asked before for ways to make StandardAnalyzer treat dashes differently, etc.)
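The BreakIterator-driven word segmentation at the heart of the tokenizer can be sketched as below. Note this is a minimal stand-in, not Lucene's actual code: the real tokenizer uses ICU4J BreakIterators chosen per script, while this sketch uses the JDK's java.text.BreakIterator, and the WordSegmenter class is a hypothetical helper.

```java
import java.text.BreakIterator;
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

// Minimal sketch of BreakIterator-based word segmentation.
// Lucene's ICUTokenizer uses ICU4J BreakIterators tailored per script;
// this stand-in uses the JDK's java.text.BreakIterator instead.
public class WordSegmenter {
    // Split text into word tokens, skipping whitespace/punctuation-only spans.
    public static List<String> words(String text, Locale locale) {
        BreakIterator bi = BreakIterator.getWordInstance(locale);
        bi.setText(text);
        List<String> tokens = new ArrayList<>();
        int start = bi.first();
        for (int end = bi.next(); end != BreakIterator.DONE; start = end, end = bi.next()) {
            String candidate = text.substring(start, end);
            // Keep only spans containing at least one letter or digit.
            if (candidate.codePoints().anyMatch(Character::isLetterOrDigit)) {
                tokens.add(candidate);
            }
        }
        return tokens;
    }
}
```

Swapping in a dictionary-based or custom rule-based BreakIterator for a given script (as the Thai and Khmer/Myanmar/Lao tailorings do) changes only which iterator is constructed, not this loop.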
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org