KANJIDIC

A Database of Information on the 6,355 Kanji in the JIS X 0208 Standard

Copyright (C) 2015 The Electronic Dictionary Research and Development Group.

(NB: This document has been converted quickly from plain text to HTML. As a result, some of the formatting has been left as it was in the original document. A more elegant version may be developed later.)

The KANJIDIC file contains comprehensive information about Japanese kanji. It
is a text file currently 6,355 lines long, with one line for each kanji in
the two levels of characters specified in the JIS X 0208-1990 set. (For
basic information about this set, see Appendix A.)

The file contains a mixture of ASCII characters and kana/kanji encoded using
the EUC (Extended Unix Code) coding.

Attention is drawn to the KANJIDIC LICENCE STATEMENT AND COPYRIGHT NOTICE
included below in this document.

A similar file, KANJD212, is available for the 5,801 supplementary kanji in
the JIS X 0212-1990 set.

From June 2003, the KANJIDIC file has been generated from a database
developed from KANJIDIC to support the KANJIDIC2 XML-format version.
The legacy KANJIDIC format file will continue to be distributed.

The first part of each line is of a fixed format, indicating which character
the line is for, while the rest is more free-format.

The first two bytes are the kanji itself. There is then a space, the 4-byte
ASCII representation of the hexadecimal coding of the two-byte JIS encoding,
and another space.

The rest of the line is composed of a combination of three kinds of fields
(which may be in any order and interspersed):

information fields, beginning with an identifying letter and ending with
a space. See below for more information about these fields.

readings (with '-' to indicate prefixes/suffixes, and '.' to indicate the
portion of the reading that is okurigana). ON-yomi are generally in
katakana and KUN-yomi in hiragana. An exception is the set of kokuji for
measurements such as centimetres, where the reading is in katakana.
There may be several classes of reading fields, with ordinary
readings first, followed by members of the other classes, if any. The current
other classes, and their tagging, are:

where the kanji has special "nanori" (i.e. name) readings,
these are preceded by the marker "T1";

where the kanji is a radical, and the radical name is not already
a reading, the radical name is preceded by the marker "T2".

(Other Tn classes may be created at a later date.)
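
The reading conventions above can be sketched in code. The following helper
(the function name and the romaji sample readings are mine, purely for
illustration; real KANJIDIC readings are kana in EUC encoding) interprets
the '.' okurigana separator and the '-' prefix/suffix markers:

```python
# Sketch of interpreting a single KANJIDIC reading token.
# '.' separates the reading of the kanji itself from its okurigana;
# a leading '-' marks a suffix reading, a trailing '-' a prefix reading.

def parse_reading(r):
    prefix = r.endswith('-')      # kanji used as a prefix
    suffix = r.startswith('-')    # kanji used as a suffix
    core = r.strip('-')
    stem, _, okurigana = core.partition('.')
    return {'stem': stem, 'okurigana': okurigana,
            'prefix': prefix, 'suffix': suffix}
```

For example, a reading written "tabe.ru" splits into the stem "tabe" and the
okurigana "ru", while "-ko" would be flagged as a suffix reading.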

English meanings. Each such field begins with an open brace '{' and ends
at the next close brace '}'.
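
Putting the pieces together, a line can be split into its fixed-format head
and the three kinds of fields. The sketch below is a minimal illustration
only: the sample line uses ASCII placeholders for the kanji and the kana
reading, and the helper name is mine, not part of any KANJIDIC tooling.

```python
# Minimal sketch of splitting the free-format part of a KANJIDIC line.
# Meanings are brace-delimited (and may contain spaces); everything
# else is a space-delimited token (an info field or a reading).

def split_fields(rest):
    meanings, tokens = [], []
    i = 0
    while i < len(rest):
        if rest[i] == '{':                      # English meaning field
            j = rest.index('}', i)
            meanings.append(rest[i + 1:j])
            i = j + 1
        elif rest[i] == ' ':
            i += 1
        else:                                   # info field or reading
            j = rest.find(' ', i)
            if j == -1:
                j = len(rest)
            tokens.append(rest[i:j])
            i = j
    return tokens, meanings

# Illustrative line: <kanji> <JIS hex> then the free-format fields.
line = "KANJI 3441 U4e80 B213 S11 kame {tortoise} {turtle}"
kanji, jis, rest = line.split(' ', 2)
tokens, meanings = split_fields(rest)
```

A real parser would then dispatch on the first character of each token to
recognize the B/C/F/G/S/U etc. information fields described below.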

There are currently a variety of predefined fields (programs using KANJIDIC
should not make any assumptions about the presence or absence of any of these
fields, as KANJIDIC is certain to be extended in the future):

B<num> -- the radical (Bushu) number. There is one per entry. As
far as possible, this is the radical number used in the Nelson
"Modern Japanese-English Character Dictionary" (i.e. the Classic,
not the New Nelson). Where the classical or
historical radical number differs from this, it is present as a
separate C<num> entry.

C<num> -- the historical or classical radical number, as recorded
in the KangXi Zidian (where this differs from the B<num> entry.) There
will be at most one of these.

F<num> -- the frequency-of-use ranking. At most one per line. The
2,501 most-used characters have a ranking; those characters that lack
this field are not ranked. The frequency is a number from 1 to 2,501
that expresses the relative frequency of occurrence of a character in
modern Japanese. The data is based on an analysis of word
frequencies in the Mainichi Shimbun over 4 years by Alexandre Girardi.
From this the relative frequencies have been derived. Note:

these frequencies are biased towards words and kanji used in newspaper
articles,

the relative frequencies for the last few hundred
kanji so graded are quite imprecise.

(Earlier editions of the KANJIDIC file used a frequency-of-use ranking
from the
National Language Research Institute (Tokyo), interpreted and adapted
by Jack Halpern.)

G<num> -- the "grade" of the kanji. At most one per line.

G1 to G6 indicates the grade level as specified by the Japanese
Ministry of Education for kanji that are to be taught in elementary school
(1,006 kanji). These are sometimes called the "Kyouiku" (education) kanji and
are part of the set of Jouyou (daily use) kanji;

G8 indicates the remaining Jouyou kanji that are to be taught in
secondary school (an additional 1,130 kanji);

G9 and G10 indicate Jinmeiyou ("for use in names") kanji which in
addition to the Jouyou kanji are approved for use in family name registers
and other official documents. G9 (649 kanji, of which 640 are in KANJIDIC)
indicates the kanji is a "regular" name kanji, and G10 (212 kanji of
which 130 are in KANJIDIC) indicates the kanji is a variant of a
Jouyou kanji;

J<num> -- the level (1-4) of the Japanese Language Proficiency Test
(JLPT) in which the kanji occurs. Note that the JLPT test levels
changed in 2010, with a new 5-level
system (N1 to N5) being introduced. No official kanji lists are
available for the new levels. The new levels are regarded as
being similar to the old levels except that the old level 2 is
now divided between N2 and N3.

H<num> -- the index number in the New Japanese-English Character
Dictionary (1990), edited by Jack Halpern. At most one allowed per line.
If not present, the character is not in Halpern.

N<num> -- the index number in the "Modern Reader's Japanese-English
Character Dictionary", edited by Andrew Nelson. At most one allowed
per line. If not present, the character is not in Nelson, or is
considered to be a non-standard version, in which case it may have a
cross-reference code in the form: XNnnnn. (Note that many kanji
currently used are what Nelson described as "non-standard" forms or
glyphs.)

V<num> -- the index number in The New Nelson Japanese-English
Character Dictionary, edited by John Haig.

D<code> -- the "D" codes will be progressively used for dictionary
based codes.

DAnnnn - the index numbers used in the 2011 edition of the
Kanji & Kana book, by Spahn & Hadamitzky. "nnnn" is the number of the kanji referenced in
that book.

DBnnn - the index numbers used in "Japanese For Busy People" vols I-III,
published by the AJLT. The codes are the volume.chapter.

DCnnnn - the index numbers used in "The Kanji Way to Japanese Language
Power" by Dale Crowley.

DFnnn - the index numbers used in the "Japanese Kanji Flashcards",
by Max Hodges and Tomoko Okazaki (White Rabbit Press).

DGnnn - the index numbers used in the "Kodansha Compact Kanji Guide".

DHnnnn - the index numbers used in the 3rd edition of
"A Guide To Reading and Writing Japanese" edited by Ken Henshall et al.

DJnnnn - the index numbers used in the "Kanji in Context" by
Nishiguchi and Kono.

DKnnnn/DLnnnn - the index numbers used by Jack Halpern in his Kanji
Learners Dictionary, published by Kodansha in 1999 with a second edition
in 2013. The numbers have been provided by Mr Halpern.

DMnnnn - the numbers in Yves Maniette's French adaptation of
Heisig's "Remembering The Kanji".

DNnnnn -- the index number used in "Remembering The Kanji,
6th Edition" by James Heisig.

DOnnnn - the index numbers used in P.G. O'Neill's "Essential Kanji".
The numbers have been provided by Glenn Rosenthal.

DPnnnn - the index numbers used by Jack Halpern in his Kodansha
Kanji Dictionary (2013), which is the revised version of the "New
Japanese-English Kanji Dictionary" of 1990.

DRnnnn - these are the codes developed by Father Joseph De Roo,
and published in his book "2001 Kanji" (Bonjinsha). Fr De Roo has
given his permission for these codes to be included.

DSnnnn - the index numbers used in the early editions of
"A Guide To Reading and Writing Japanese" edited by Florence Sakade.

DTnnn - the index numbers used in the Tuttle Kanji Cards, compiled
by Alexander Kask.

P<code> -- the SKIP pattern code. The <code> is of the form
"P<num>-<num>-<num>". The System of Kanji Indexing by Patterns
(SKIP) is a scheme for the classification and rapid retrieval of
Chinese characters on the basis of geometrical patterns. Developed
by Jack Halpern, it first appeared in the New Japanese-English
Character Dictionary (Kenkyusha, Tokyo 1990; NTC, Chicago 1993).
(A brief summary of the method is in Appendix C. See Appendix E for
some of the rules applied when counting strokes in some of the
radicals.)
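
A P field can be validated and unpacked with a simple pattern match. This
sketch (the function name and regular expression are mine, assuming the
"P<num>-<num>-<num>" form described above) returns the pattern number and
the two stroke counts:

```python
import re

# Sketch: validating and unpacking a SKIP field such as "P1-4-5"
# (pattern 1, 4 strokes in the first part, 5 in the second).
SKIP_RE = re.compile(r'^P([1-4])-(\d+)-(\d+)$')

def parse_skip(field):
    m = SKIP_RE.match(field)
    if not m:
        return None
    pattern, a, b = map(int, m.groups())
    return pattern, a, b
```

For pattern #4 (solid) characters, the third number is the solid subpattern
(1-4) rather than a stroke count, as Appendix C explains.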

S<num> -- the stroke count. At least one per line. If more than
one, the first is considered the accepted count, while subsequent
ones are common miscounts. (See Appendix E for some of the rules
applied when counting strokes in some of the radicals.)

U<hexnum> -- the Unicode encoding of the kanji. See Appendix B for
further information on this code. There is exactly one per line.

I<code> -- the index codes in the reference books by Spahn &
Hadamitzky. These codes take two forms:

for The Kanji Dictionary (Tuttle 1996), they are in the form
nxnn.n, e.g. 3k11.2, where the kanji has 3
strokes in the identifying radical, it is radical "k" in the S&H
classification system, there are 11 other strokes, and it is the 2nd
kanji in the 3k11 sequence. I am very grateful to Mark Spahn for
providing the (almost) full list of these descriptor codes for the
kanji in this file. At the time of writing some 800 kanji in the
file lack the SH descriptor. This is because the book used a
different glyph as the primary kanji. The gaps are gradually being
filled in. Where the JIS X 0208 glyph is the second kanji for a
particular descriptor code, it has a "-2" appended to the code.

for the Kanji & Kana book (Tuttle), they are in the form
INnnnn, where nnnn is the number of the kanji referenced in
that book (2nd edition.)

Qnnnn.n -- the "Four Corner" code for that kanji. This code was
invented by Wang Chen in 1928, and has since been widely used for
dictionaries in China and Japan. In some cases there are two of these
codes, as the system can be a little ambiguous, and Morohashi has some kanji
coded differently from their traditional Chinese codes. See Appendix
D for an overview of the Four Corner System. Christian Wittern,
who passed on these codes, comments that they are in need of
proof-reading and thus users are advised to be cautious using the
codes for serious scholarship.

MNnnnnnnn and MPnn.nnnn -- the index number and volume.page
respectively of the kanji in the 13-volume Morohashi Daikanwajiten.
In the MNnnn field, a terminal `P`, e.g. MN4879P, indicates that the
entry is the primed one, i.e. 4879' in the original. In some 500 cases,
the number is terminated with an `X`, to indicate that the kanji in Morohashi has a close, but
not identical, glyph to the form in the JIS X 0208 standard.
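
Separating these optional suffixes from the index number is straightforward.
A sketch (the helper name is mine; it assumes only the MN form with the
terminal `P` or `X` described above):

```python
# Sketch: splitting a Morohashi MN field into its index number and
# its optional terminal flag: 'P' (primed index in the original) or
# 'X' (close but not identical glyph in Morohashi).

def parse_morohashi(field):
    assert field.startswith('MN')
    body = field[2:]
    primed = body.endswith('P')
    variant_glyph = body.endswith('X')
    if primed or variant_glyph:
        body = body[:-1]
    return int(body), primed, variant_glyph
```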

Ennnn -- the index number used in "A Guide To Remembering Japanese
Characters" by Kenneth G. Henshall. There are 1945 kanji with these
numbers (i.e. the Jouyou subset.)

Knnnn -- the index number in the Gakken Kanji Dictionary ("A New
Dictionary of Kanji Usage"). Some of the numbers relate to the list
at the back of the book, jouyou kanji not contained in the
dictionary, and various historical tables at the end.

Lnnnn -- the index number used in "Remembering The Kanji" by James
Heisig.

Onnnn -- the index number in "Japanese Names", by P.G. O'Neill.
(Weatherhill, 1972) (A warning: some of the numbers end with 'A'. This
is how they appear in the book; it is not a problem with the file.)

Wxxxx -- the romanized form of the Korean reading(s) of the kanji.
Most of these kanji have one Korean reading, a few have two or more.
The readings are in the (Republic of Korea) Ministry of Education
style of romanization.

Yxxxxx -- the "Pinyin" of each kanji, i.e. the (Mandarin or Beijing)
Chinese romanization. About 6,000 of the kanji have these. Obviously
most of the native Japanese kokuji do not have Pinyin; however, at least
one does, as it was taken into Chinese at a later date.

Xxxxxxx -- a cross-reference code. An entry of, say, XN1234 will mean
that the user is referred to the kanji with the (unique) Nelson index
of 1234. XJnxxxx is a cross-reference to the kanji with
the JIS hexadecimal code of xxxx. n = (0,1,2) indicating the reference
is to a kanji in JIS X 0208, JIS X 0212 or JIS X 0213 respectively.
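
The two cross-reference forms can be recognized with a pair of patterns. In
this sketch the function name, the return shapes, and the set-name mapping
are mine, assuming only the XN and XJ forms described above:

```python
import re

# Sketch: recognizing the two cross-reference forms:
# XN<nelson-number>, and XJ<n><jis-hex> where n is 0, 1 or 2 for
# JIS X 0208, JIS X 0212 and JIS X 0213 respectively.
JIS_SETS = {0: 'JIS X 0208', 1: 'JIS X 0212', 2: 'JIS X 0213'}

def parse_xref(field):
    m = re.match(r'^XN(\d+)$', field)
    if m:
        return ('nelson', int(m.group(1)))
    m = re.match(r'^XJ([0-2])([0-9A-Fa-f]{4})$', field)
    if m:
        return (JIS_SETS[int(m.group(1))], m.group(2).upper())
    return None
```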

Zxxxxxx -- a mis-classification code. It means that this kanji is
sometimes mis-classified as having the xxxxxx coding. In the case of
the SKIP classifications, an extra letter code is used to indicate
the type of mis-classification. ZPPn-n-n, ZSPn-n-n and ZBPn-n-n
indicate mis-classification according to position, stroke-count, and
both position and stroke-count respectively. (ZRPn-n-n codes are where Jim
Breen &
Jack Halpern are having a [hopefully temporary] disagreement over the
number of strokes.)

If the final field of a line is not an English field, there is a final space.
Each reading and information field is therefore bracketed by space characters
(which makes it convenient for searches using programs like "grep".)
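
This space-bracketing makes exact-field searches reliable with plain
substring matching, grep-style. A sketch (the sample lines are illustrative
placeholders, not real KANJIDIC entries, and the helper name is mine):

```python
# Because every reading and information field is bracketed by spaces,
# searching for " <field> " cannot accidentally match inside another
# field (e.g. "G1" will not match "G10").

lines = [
    "K1 3021 U611b B61 G4 S13 ai {love}",
    "K2 3022 U60aa B61 G3 S11 waru.i {bad}",
]

def with_field(lines, field):
    """Return the lines containing the exact space-delimited field."""
    needle = ' ' + field + ' '
    return [ln for ln in lines if needle in ln + ' ']
```

Here `with_field(lines, "G3")` matches only the second sample line, just as
`grep ' G3 '` would on the real file.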

As far as possible all entries will have their yomikata and readings
attached, even if they are a recognized variant of another kanji. This is to
facilitate electronic searches using these fields as keys, and should not be
taken as a recommendation to use such obscure kanji.

KANJIDIC is used now to build the "kinfo.dat" file which is used by JDIC and
JREADER, and by Stephen Chung's JWP. "kinfo.dat" contains the identical
information, but in a compressed form and in a structure suitable for fast
indexed access.

KANJIDIC is also used in the XJDIC and MacJDic dictionary programs, and a
growing number of other programs such as KDRILL and KDIC.

KANJIDIC is now rather large, and contains information which is not of
much use to people who are not studying or researching Japanese
orthography. It is
still appropriate to maintain it as a useful freely-available compendium of
such information.

For people who only wish to use a subset of the information in KANJIDIC,
there is a program "kdfilt.c", also available as kdfilt.exe for MS-DOS, which
will strip out unwanted fields. Dan Crevier has also released a program
(kanjidicSplit) which does the same for MacJDic users. (For users of the JDIC
program, the KANJDFIX.EXE utility also strips out unwanted fields prior to
building the KINFO.DAT file.)

KANJIDIC began as two files: jis1detl.lst and jis2detl.lst, which were later
merged into a single file.

The first file was compiled initially from the file "kinfo.dat" supplied by
Stephen Chung, who in turn compiled his file from a file prepared by Mike
Erickson. I originally added about 1900 "meanings" by James Heisig keyed in
by Kevin Moore from the book "Remembering The Kanji". I later added the
meanings from Rik Smoody's files, compiled when he was working for Sony in
Japan. These appear to have been based on Nelson.

The second file was compiled from a complete JIS2 list with Bushu and stroke
counts kindly supplied to me by Jon Crossley, to which I added Nelson
numbers, yomikata and meanings extracted from Rik Smoody's file.

Theresa Martin provided early assistance with this file, particularly
in tracking down and correcting many mistranscribed yomikata (the old zu/dzu,
oo/ou, ji/dji, etc. problems).

Jeffrey Friedl did a major overhaul in September-October 1992, in which he
added the original frequency rankings, Halpern codes, SKIP patterns, updated the
grading ("G" fields) to reflect the modern Jouyou lists, corrected radical
numbers, corrected stroke counts and readings to fall in line with modern
usage.

Magnus Halldorsson corrected some erroneous Halpern numbers, and provided
them for a lot of the radicals. He provided the list of Heisig indices,
which he originally compiled himself, then verified and expanded using lists
from Richard Walters and Antti Karttunen. He also passed on to me the list of
Gakken indices compiled by Antti Karttunen.

Lee Collins provided the Unicode mappings (see Appendix B).

Iain Sinclair has provided the yomikata, meanings and S&H indices of many of
the obscure JIS2 kanji.

Christian Wittern, a Sinologist working at Kyoto University, sent me a
monster file prepared by Dr Urs App from Hanazono College. From this I have
extracted the Four Corner and Morohashi information. Christian also provided
the original Pinyin details, which were later replaced. I am very grateful
for these significant contributions.

In March 1994 the Morohashi indices were proof-read and corrected by
Christian.

Alfredo Pinochet supplied all the Henshall numbers.

Ingar Holst has provided considerable assistance in regularizing the Bnnn and
Cnnn radical classifications to remove some errors that were in the original
JIS2 file, and to make it all conform to Nelson's classification.

In mid-1993 I withdrew the SKIP codes from the distributed file as it
appeared that their presence violated Jack Halpern's copyright on these
codes. Jeffrey Friedl contacted Jack about this, and Jack obtained permission
from his publisher for the codes to be included subject (initially) to
copyright and
usage restrictions. In March 1994 the Halpern indices
and SKIP codes were checked against an extract from Jack's files, and the "Z"
mis-classification codes added, again from his files. Jack has also made a
lot of useful comments and suggestions about the content and format of the
file. I am most grateful to Jack for his permission and assistance, and also
to Jeffrey for making the contact.

In May 1995, a number of updates took place. Jeffrey Friedl established
contact with James Heisig, and obtained a further set of his indices. I
contacted Mark Spahn (via the "honyaku" mailing list) and he kindly provided
most of the missing S&H descriptors, and Jack Halpern released to me the SKIP
codes of the kanji not in the New Japanese-English Character Dictionary. For
all this material I am most grateful.

In August 1995, I added the O'Neill index numbers. These were compiled by
Jenny Nazak, David Rosenfeld and myself. Thanks to Jenny & David for their
assistance.

In January and February 1996 the Morohashi numbers were checked thoroughly
against two important sources: a file of Unicode-Morohashi data (Uni2Dict)
which was prepared by Koichi Yasuoka from the allocation in the JIS X 0221
standard, and the review draft of the proposed revision of the JIS X 0208
standard, which was prepared by the INSTAC Committee, and made available in
a text file, thus enabling comparisons. All the mismatches between the three
files were examined against the Morohashi text, and extensive corrections
made to all three files. I am grateful to Koichi Yasuoka and Masayuki
Toyoshima for their considerable assistance in this task.

In March 1996 the Korean readings were added. They were provided by Dr
Charles Muller of Toyo Gakuen University (acmuller@gol.com), to whom I
am most grateful. Chuck's compilation of Korean readings is extremely
thorough and scholarly, and I am pleased to be able to incorporate
them.

In April 1996 the readings of all the kanji were compared with those in the
JIS X 0208 draft, and a number of corrections and additions made.

In May 1996 I carried out a "unification" of the readings of the KANJIDIC
and KANJD212 files, wherein all the readings of the "itaiji" were brought
into line. The identification of these itaiji was drawn from a file posted
to the fj.kanji group by Taichi Kawabata (kawabata@is.s.u-tokyo.ac.jp),
which was compiled at the ETL from the itaiji identification in the
JIS X 0208 and JIS X 0212 standards. I corrected a few errors, and added
some extra sets which were indicated in the JIS X 0208-1996 draft.

In July 1996 the Pinyin details were completely replaced by a new set. The
original Pinyin were from an earlier compilation by Christian Wittern,
and contained many errors. Two more reliable sources had become available:
the Uni2Pinyin file compiled by Koichi Yasuoka, which is based in part on
the TONEPY.tit by Yongguang Zhang; and the PYCHAR set of readings of Big5
hanzi compiled by Christian Wittern. The Pinyin currently in the KANJIDIC
file is a combination of the two, following the order in the Uni2Pinyin
file.

In August 1996 I corrected a few more missing and erroneous Nelson numbers,
using a massive Nelson list prepared by Wolfgang Cronrath. He also flagged
the kokuji, so I added these to the readings fields as "{(kokuji)}".

Also in August 1996 I deleted the handful of former "XJxxxx" cross-references,
and replaced them with a much more comprehensive set, so that they now
represent all the recognized "itaiji". The file I used for this was the
corrected itaiji file mentioned above.

In April 1997 I corrected a large number of bushu codes. Many of these had
been identified as errors by Jean-Luc Leger (reiga@iria.mines.u-nancy.fr) who
analyzed and examined all the Nelson bushu. I also identified and added a large
number of missing Cnnn codes.

Also in April 1997 I added the S&H "Kanji & Kana" indices. These had been
keyed by Olivier Galibert (Olivier.Galibert@mines.u-nancy.fr). (There must
be an outbreak of kanji interest in Nancy.)

In February 1998, the long-awaited inclusion of the "New Nelson" numbers took
place. I had been waiting for the editor of the New Nelson, John Haig, to
supply a list (as he had agreed some years before), but in the meantime,
Jean-Luc Leger keyed a list, so they are now available.

Also between December 1997 and February 1998 a large number of Level 2
kanji had their stroke counts corrected to bring them into line with the
counting principles used in the Level 1 kanji. This usually aligned the
counts with those used in the New Nelson and in S&H. Appendix E of this
document was amended to reflect this. The leg-work in tracking this material
down was done by Wolfgang Cronrath.

During December 1998 & Jan 1999 I updated the stroke counts of many of the
Level 2 kanji, using an analysis of them carried out by Wolfgang Cronrath.
I also added the De Roo codes, which had been keyed by Jasmin Blanchette,
who also typed the explanatory material. I contacted Fr De Roo in Tokyo, who
readily agreed to the inclusion of the codes.

The extension of the S&H Kanji & Kana numbers to the 2nd edition was
done by Enrique Sanchez Rosa.

The Hangul versions of the Korean readings (which only appear in the
XML version) were provided by Francis Bond and Kyonghee Paik.

I did the Tuttle card numbers myself.

James Rose provided the numbers from Crowley's "The Kanji Way to Japanese
Language Power", Sakade's "A Guide To Reading and Writing Japanese", and
also for that book's 3rd Edition edited by Henshall, Seeley & De Groot.

In summary, KANJIDIC can be freely used provided satisfactory
acknowledgement is made, and a number of other conditions are met.

The following people have granted permission for material for which they hold
copyright to be included in the files, and distributed under the above
conditions, while retaining their copyright over that material:

Jack HALPERN: The SKIP codes in the KANJIDIC file.

With regard to the SKIP codes, Mr Halpern draws your attention to the
statement he has prepared on the matter, which is included at Appendix F.

Christian WITTERN and Koichi YASUOKA: The Pinyin information in the KANJIDIC
file.

Urs APP: the Four Corner codes and the Morohashi information in the KANJIDIC
file.

Mark SPAHN and Wolfgang HADAMITZKY: the kanji descriptors from their
dictionary.

For full information about JIS codes, please see Ken Lunde's "japan.inf"
file, or his book "Understanding Japanese Information Processing", O'Reilly
1993. The following is a brief extract from the "japan.inf" file.

"This standard was first established in 1978, modified for the first time in
1983 (character position swapping, glyph changes, and four kanji appended to
JIS Level 2), and modified again in 1990 (two kanji were appended to JIS
Level 2). This character set is widely implemented on a variety of platforms.
Encoding methods for JIS X 0208-1990 include Shift-JIS, EUC, and JIS."

The following information about Unicode was provided in 1992 by Lee
Collins at Taligent.

(The Unicode sequences are) "the final, official mapping to JIS of the
CJK-JRG's (Chinese, Japanese, Korean- Joint Research Group) "Unified
Repertoire and Ordering Version 2.0" which is the unified Han character set
of ISO 10646 and Unicode. All of the Unicode companies (Apple, IBM,
Microsoft, NeXT, Taligent, etc) are now using this mapping. There has been
some confusion because of difference in nomenclature. Unicode people call it
UniHan, the Chinese sometimes call it HCS (Han Character Set) and ISO calls
it "Ideographic CJK Character Unified Repertoire and Ordering". ISO can't use
the term "Han" character because Japan was very sensitive to this (even
though it is a direct translation of "Kanzi") and it can't be called a
character set because only ISO WG2 is empowered with the authority to encode
characters. Problems of naming aside, they are all the same thing.

The CJK-JRG was formed under the aegis of ISO in 1990 to investigate and
propose a unified Han character set for inclusion in ISO 10646. It brought
together various experts on Han characters from China, Hong Kong, Japan,
Korea, Taiwan and the United States selected by the national bodies
participating in ISO WG2.

Including the initial work in the US on Unicode and in China on GB 13000,
which were merged and became the basis for the URO, the task spanned about 4
years. The work was completed in April of this year. It contains 21,000 Han
characters from all of the major standards used in East Asia, including JIS X
0208-1990 and JIS X 0212-1990. The Unicode consortium provides a
cross-reference file for all of the source sets. To get a copy contact

Steve Greenfield
unicode-inc@HQ.M4.Metaphor.COM

For further details about the URO/UniHan, you might want to pick up a copy of
the "The Unicode Standard Version 1.0 Vol II". It's published by Addison
Wesley, ISBN 0-201-60845-6. It's been available in the USA for over a month
now. For a slightly different presentation of the characters, a copy of 10646
or of the "Ideographic CJK Character Unified Repertoire and Ordering Version
2.0" might be available through the Australian national body to ISO WG2."

[This document contains the text and examples from the covers of the "New
Japanese-English Character Dictionary" edited by Jack Halpern and published
by Kenkyusha and NTC. It is reproduced with Mr Halpern's kind permission.

The text on which this is based used four patterns which are not able to be
reproduced in this document. They are referred to below as #1 through #4,
and relate to the following shapes in the NJECD:
. (four schematic shapes, not reproducible in this conversion,
. illustrating the patterns:)
. #1 LEFT-RIGHT   #2 TOP-BOTTOM   #3 ENCLOSURE   #4 SOLID]
. HOW TO LOCATE AN ENTRY
A. Determine the SKIP number of your character.
STEP 1 IDENTIFY PATTERN
Determine to which of the four PATTERNS your character belongs to get the
first part of the SKIP number (the PATTERN NUMBER).
If your character belongs to pattern #1, #2 or #3 (相→#1), carry out the
steps in the left column; if it belongs to pattern #4 (下→#4), carry out the
steps in the right column. (REF: R4. How to Identify the Pattern)
STEP 2
Patterns #1-#3: DIVIDE CHARACTER. Divide the character into two parts
at the first division point. [相=木+目]
REF: R5. How to Divide the Character
Pattern #4: OMIT. Since solid characters cannot be divided, go to
STEP 3.
STEP 3
Patterns #1-#3: COUNT STROKES OF SHADED PART. Count the strokes of the
SHADED PART to get the second part of the SKIP number. [相 #1 1-4-]
Pattern #4: DETERMINE TOTAL STROKE-COUNT. Determine the total
stroke-count of your character to get the second part of the SKIP
number. [下 #4 4-3-]
REF: Appendix 2. How to Count Strokes
STEP 4
Patterns #1-#3: COUNT STROKES OF BLANK PART. Count the strokes of the
BLANK PART to get the third part of the SKIP number. [相 #1 1-4-5]
REF: Appendix 2. How to Count Strokes
Pattern #4: IDENTIFY SOLID SUBPATTERN. Determine to which of the four
SOLID SUBPATTERNS your character belongs to get the third part of the
SKIP number. Select from: `￣' 1, `＿' 2, `|' 3, or `■' 4. [下 #4 4-3-1]
REF: R6. How to Subclassify the Solid Pattern
After determining the SKIP number of your character, locate your character
entry in one of two ways:
1. Determine the entry number in the Pattern Index beginning on p. 1952 then
locate your character entry in the main part of the dictionary. See R3.1.2
Index Method for details.
2. Locate your character entry directly (without referring to the Pattern
Index) from its SKIP number. See R3.1.3 Direct Method for details.
NOTE: All references preceded by a section mark (R) refer to SYSTEM OF KANJI
INDEXING BY PATTERNS beginning on p. 106a
HOW TO IDENTIFY THE PATTERN
DETERMINE TO WHICH OF THE FOUR PATTERNS YOUR CHARACTER BELONGS
#1 Characters that can be divided into left and right parts
RIGHT: Áê 4-5 È¬ 1-1 ½ç 1-11 °· 3-3
WRONG: ÊÒ 1-3 ÍÑ 1-4 ²Ä 3-2 Â¿ 3-3
#2 Characters that can be divided into top and bottom parts
RIGHT: Æó 1-1 »û 3-3 ¸Å 2-3 ½Õ 5-4
WRONG: Ëü 1-2 ¹Í 4-2 ´Ö 8-4 ºÁ 4-3
#3 Characters that can be divided by an enclosure element
RIGHT: ¿Ê 3-8 ¹­ 3-2 Ìä 8-3 ¹ñ 3-5
WRONG: Æþ 1-1 ¸â 4-3 Ì¾ 3-3 °Ù 5-4
#4 Characters that cannot be classified under patterns #1, #2, or #3
RIGHT: ±« 8-1 Ê¼ 5-2 Ãæ 4-3 Í¿ 3-4
WRONG: Åá 2-1 Æü 4-1 ¿å 4-3
IF A CHARACTER CAN BE CLASSIFIED UNDER MORE THAN ONE PATTERN, SELECT THE ONE
THAT FOLLOWS THE NATURAL CONSTRUCTION OF THE CHARACTER
RIGHT: »ù 2-5-2 È¢ 2-6-9
WRONG: »ù 1-2-5 È¢ 1-7-8
HOW TO DIVIDE THE CHARACTER
DIVIDE THE CHARACTER INTO TWO PARTS AT THE FIRST DIVISION POINT
#1 Going from left to right, divide at the first space
RIGHT: ÌÀ 4-4 ¾® 1-2 °· 3-3
WRONG: ¾® 2-1 ³¹ 9-3
#2 Going from top to bottom, divide at the first space, horizontal line, or
frame element, whichever comes first
RIGHT: »° 1-2 ¶¼ 2-8 ÀÖ 3-4 ¸Å 2-3
WRONG: »° 2-1 ¶¼ 6-4 ÀÖ 2-5 ²¼ 1-2
#3 Going from the outside toward the inside, divide after the first enclosure
element
RIGHT: ÅÙ 3-6 ¿Ê 3-8 ÊÄ 8-3 ÌÜ 3-2
WRONG: ÅÙ 7-2 Ëá 11-5
DO NOT VIOLATE THE PRINCIPLE OF ELEMENT INTEGRITY
. 1. Never break through strokes
. RIGHT: ¶§ 3-2-2 WRONG: ¶§ 1-1-4
. 2. Never break through indivisible units
. RIGHT: ¾ð 1-3-8 WRONG: ¾ð 1-1-10
. 3. Never make unnatural divisions
. RIGHT: µ¤ 3-4-2 WRONG: µ¤ 2-2-4
HOW TO SUBCLASSIFY THE SOLID PATTERN
A. DETERMINE TO WHICH OF THE FOUR SOLID SUBPATTERNS YOUR CHARACTER BELONGS
1. Characters that contain a top line
RIGHT: ±« 8-1 ²¼ 3-1 ¼ª 6-1 ²Ì 8-1
WRONG: Åá 2-1 Àé 3-2 ¿â 8-1 Ê¼ 5-1
2. Characters that contain a bottom line
RIGHT: ¾å 3-2 Ê¼ 5-2 ¿â 8-2
WRONG: »³ 3-2 Êñ 5-2 ¼Ô 8-2
3. Characters that contain a through line
RIGHT: Ãæ 4-3 Åì 8-3 ÌÓ 4-3
WRONG: ¿å 4-3 À£ 3-3 ¸á 4-3 Äï 7-3
4. Characters that do not contain a top line, bottom line, or through line
RIGHT: Í¿ 3-4 Âç 3-4 ¼÷ 7-4
WRONG: »å 6-4 µ× 3-4 Í§ 4-4 Îô 6-4
B. IF A CHARACTER CAN BE CLASSIFIED UNDER MORE THAN ONE SUBPATTERN, THE
SUBPATTERN WITH THE SMALLEST NUMBER TAKES PRECEDENCE
RIGHT: ²¦ 4-1 ¸Ê 3-1 ÆÓ 7-1 ²Ì 8-1 ½Ð 5-2 À¸ 5-2 ¹Ã 5-1
WRONG: ²¦ 4-2 ¸Ê 3-2 ÆÓ 7-2 ²Ì 8-3 ½Ð 5-3 À¸ 5-3 ¹Ã 5-3

The Four Corner System has been used for many years in China and Japan for
classifying kanji. In China it is losing popularity in favour of Pinyin
ordering. Some Japanese dictionaries, such as the Morohashi
Daikanwajiten, have a Four Corner Index.

The following overview of the system has been condensed from the article "The
Four Corner System: an introduction with exercises" by Dr Urs App, which
appeared in the Electronic Bodhidharma No 2, February 1992, published by the
International Research Institute for Zen Buddhism, Hanazono College. (More
examples will be added from that article in due course.)

to the stroke counts in the SKIP codes. Where this results in a SKIP
which differs from that in the NJECD, or in the non-NJECD SKIPs
provided by Jack Halpern, the Jack Halpern version is included prefixed
with "ZR".
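As a sketch of how the SKIP fields and their ZR-prefixed alternatives can be pulled off a line, assuming SKIP codes appear in the usual "P" information field (the sample field list below is hypothetical, not taken from the file):

```python
import re

# A SKIP field is "P" plus three dash-separated numbers; the alternative
# (Jack Halpern) version carries a "ZR" prefix, per the note above.
SKIP_FIELD = re.compile(r'^(ZR)?P(\d+-\d+-\d+)$')

def skip_codes(fields):
    """Return (primary, alternates): the P-field SKIP code and any
    ZR-prefixed alternative SKIP codes among a line's fields."""
    primary, alternates = None, []
    for field in fields:
        m = SKIP_FIELD.match(field)
        if m:
            if m.group(1):              # "ZR" prefix present
                alternates.append(m.group(2))
            else:
                primary = m.group(2)
    return primary, alternates

# Hypothetical field list from one KANJIDIC line:
print(skip_codes(["B7", "S7", "P4-7-1", "ZRP4-7-2"]))
# → ('4-7-1', ['4-7-2'])
```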

RADICALS

The radicals listed below are those where the various references take
differing approaches to stroke counting. The stroke counting in this file
does not strictly follow any single reference, but tends to be most closely
aligned with Halpern.

B54 ENNYOU - ×®. Traditionally counted as 3 strokes, but more recently
often counted as 2. S&H count this as 2; Nelson, Halpern, Koujien, etc.,
count it as 3. I treat it as 3.

B97 URI - ±». Traditionally counted as 5 strokes, as the middle
portion looks like a katakana ¥à. Modern glyphs invariably make it look
like 6 strokes. Nelson says it is 5 strokes. Halpern does too, but
then counts the shape as 6 in other kanji. Koujien says 6, as do S&H. I
treat it as 6.

B113 SHIMESU, e.g. Îé, is counted as 4 strokes in that form, and 5 strokes
in its older form, ã«. 18 kanji are in the 4-stroke form and 20 are in
the 5-stroke form. (Nelson and S&H count it as 4; Halpern counts it as 4
or 5. [See Note 1.])

B131 SHIN/KERAI ¿Ã. Counted as 7 strokes. (Nelson counts it as 6, Halpern
as 7 in the book, and S&H as both, for different kanji.)

B136 MAI ASHI Á¤. Counted as 7 strokes here (traditionally counted as 6, in
accordance with the older writing of `¥ð'). Nelson counts it as 6, S&H as
7, and Halpern as 7 for the ¾ïÍÑ and ¿ÍÌ¾ÍÑ´Á»ú and 6 for the rest. Note
that this count is also applied to å¬ and to kanji with the ðê pattern.

B140 KUSA-KANMURI, e.g. ²×, is always counted as 3 strokes. (Halpern counts
it as 4 strokes for the (mostly level 2) kanji where the older form is
often printed.) Note that this has been carried through to kanji where
this element is not the indexing radical, such as Û¯.

B199 MUGI Çþ is always counted as 7 strokes, except for óÎ & óÏ, where it
is counted as 11. (Nelson and Halpern do the same; S&H avoid treating
it as a radical, but count it as 12 in the remainder.)
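The counts adopted in the notes above can be collected into a small lookup table. This is only a convenience summary of the choices stated in this section, not part of the file format:

```python
# Stroke counts used in this file for the contested radicals above
# (classical radical number → strokes as counted here).
RADICAL_STROKES = {
    54:  3,  # ENNYOU: treated as 3
    97:  6,  # URI: treated as 6
    113: 4,  # SHIMESU: 4 in the modern form (5 in the older form)
    131: 7,  # SHIN/KERAI: counted as 7
    136: 7,  # MAI ASHI: counted as 7
    140: 3,  # KUSA-KANMURI: always 3
    199: 7,  # MUGI: 7, except two kanji counted as 11
}

print(RADICAL_STROKES[54])   # → 3
```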

The ROO or OI radical (Ï·) has a variant consisting of the top 4 strokes.
For example, it is in ¼Ô. Traditionally, this variant had an extra dot,
and was counted as 5 strokes. I'm counting it as 4 throughout.

OTHER STROKE PATTERNS

While the pattern ±± is a 6-stroke radical, the top half of Ò× is made up
of three distinct parts totalling 8 strokes. Note that this also is the
case with Õ¿, Þì, çÛ and Áé despite the simplification in the JIS glyphs.

²ç (KIBA HEN) is a problem. It is classically counted as 4 strokes, but
these days has a flick that makes it effectively 5. Halpern, Nelson and
S&H usually have it as 5 strokes, so I'm standardizing on that.

Another little horror is ÚÜ (MU or NASHI), which is classically counted
as 4 strokes. The most common variant has 5 strokes, but looks like 6.
Halpern, S&H and the Classical Nelson count this as 4 strokes, and the New
Nelson as 5. I'm making it 5 too.

The JUU or ASHIATO radical is at the bottom of ¶Ù and ã¼. It is
traditionally counted as 5 strokes, although sometimes it looks like 4.
I'm using 5 throughout.

A related shape is ¥à, as in ±», ¸É, ¸Ì, etc. This is sometimes counted
as two strokes (both Nelsons) and sometimes as three strokes (Halpern, S&H).
Classically it is regarded as two strokes. I am using 6 strokes for ±».

The pattern to the left of ÚÉ, which appears in several kanji, e.g.
Ê¾ and ÊÍ, has 8 strokes. (There are 3 strokes at the top as in ¾°.)

The "east" pattern (Åì) has 8 strokes. There is an older form in which
there are two strokes in the box (ÛË). It is counted as 8
strokes here in the Åì form (e.g. ´Ò) and 9 in the ÛË form, as in ëÝ.

The pattern at the bottom of ð´ is counted as 4 strokes in modern
dictionaries, although traditionally it was 5.

The pattern ´¬, which appears in several kanji, is counted as 9 strokes.
Several dictionaries count it as either 8 or 9.

The pattern on the left of ¼ý is variously handled as 2 strokes or 3
strokes. As more recent dictionaries make it 4, I will do so too.

The Ú¾ pattern has 3- and 4-stroke versions, and sometimes the glyphs can
be confusing as to which is used. In the åÌ kanji, for example, it is
traditionally counted as 3, but Spahn & Hadamitzky count it as 4 and the
Nelsons include both.

[This document contains the text found in the second edition of "2001 Kanji"
edited by Joseph R. De Roo and published by Bonjinsha.]

The system used in "2001 Kanji" is intended for the beginner who encounters
a kanji and wants to look it up without knowing its radical, its
pronunciation, or its exact number of strokes. The method consists of
looking at the top of the kanji, and then at its bottom, disregarding its
other parts.

"2001 Kanji" provides drawings for all graphic elements. That information
cannot be reproduced here. However, an attempt has been made to describe
each element as fully as the constraints of a computer text file allow, and
examples of characters possessing the element are always given.
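The lookup method described above, top element plus bottom element with everything in between ignored, can be sketched as a two-key index. The element labels and entries below are invented for illustration; they are not De Roo's actual element codes:

```python
from collections import defaultdict

# Index characters by (top element, bottom element), ignoring the middle.
index = defaultdict(list)

def register(kanji, top, bottom):
    """File a character under its top and bottom elements."""
    index[(top, bottom)].append(kanji)

def look_up(top, bottom):
    """Return all characters whose top and bottom elements match."""
    return index[(top, bottom)]

# Hypothetical entries with made-up element labels:
register("KANJI-A", "grass", "tree")
register("KANJI-B", "grass", "mouth")

print(look_up("grass", "mouth"))   # → ['KANJI-B']
```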