
Systems and processes are disclosed for predicting words using a
categorical stem and suffix word n-gram language model. A word prediction
includes determining a stem probability using a stem language model. The
word prediction also includes determining a suffix probability using a
suffix language model decoupled from the stem model, in view of one or
more stem categories. The word prediction also includes determining a
probability of the stem belonging to the stem category. A joint
probability is determined based on the foregoing, and one or more word
predictions having sufficient likelihood are provided. In this way, the
categorical stem and suffix language model constrains predicted suffixes
to those that would be grammatically valid with predicted stems, thereby
producing word predictions with grammatically valid stem and suffix
combinations.

This application claims priority from U.S. Provisional Ser. No.
62/058,060, filed on Sep. 30, 2014, entitled "Parsimonious Handling of
Word Inflection via Categorical Stem+Suffix N-Gram Language Models,"
which is hereby incorporated by reference in its entirety for all
purposes.

This application also relates to the following applications: U.S. patent
application Ser. No. 62/005,837, entitled "Device, Method, and Graphical
User Interface for a Predictive Keyboard," filed May 30, 2014; U.S.
patent application Ser. No. 14/713,420, "Entropy-Guided Text Prediction
Using Combined Word and Character N-gram Language Models," filed May 15,
2015; U.S. patent application Ser. No. 14/724,641, "Text Prediction Using
Combined Word N-gram and Unigram Language Models," filed May 28, 2015;
and U.S. patent application Ser. No. 14/719,163, "Canned Answers in
Messages," filed May 21, 2015, all of which are hereby incorporated by
reference in their entirety for all purposes.

Claims

What is claimed is:

1. A method for predicting words, the method comprising: at an electronic device: receiving input from a user; determining, using an n-gram language model, a probability of a predicted word based on a previously-input word in the received input, wherein the predicted word comprises a stem and a suffix; determining a probability of the suffix being grammatically valid for the stem; determining an integrated probability of the predicted word based on the probability of the predicted word and the probability of the suffix being grammatically valid for the stem; and providing output of the predicted word, based on the integrated probability.

2. The method of claim 1, wherein the stem is associated with a stem category, and wherein determining the probability of the predicted word comprises determining a probability of the suffix based on the stem category and the previously-input
word.

3. The method of claim 2, wherein the stem is associated with a stem category, and wherein determining the probability of the suffix being grammatically valid for the stem comprises determining a probability of the stem being associated with
the stem category.

4. The method of claim 1, wherein determining the probability of the predicted word using the n-gram language model comprises: determining, using a first n-gram language model, a probability of the stem based on the previously-input word in the
received input; and determining, using a second n-gram language model, a probability of the suffix based on the previously-input word in the received input, wherein the first and the second n-gram language models are different language models.

5. The method of claim 4, wherein the first and the second n-gram language models are decoupled.

6. The method of claim 4, wherein the first n-gram language model is a word stem n-gram language model and the second n-gram language model is a word suffix n-gram language model.

7. The method of claim 3, wherein determining the probability of the stem being associated with the stem category is based on a spelling of the stem.

8. The method of claim 3, wherein determining the probability of the stem being associated with the stem category is based on a plurality of consecutive characters at an end of the stem.

9. The method of claim 1, wherein determining the probability of the suffix being grammatically valid for the stem comprises: assigning a probability of zero to the determination if the stem does not belong to a stem category.

10. The method of claim 1, further comprising: determining whether the stem is a regular verb, and in accordance with a determination that the stem is not a regular verb, forgoing output of the predicted word based on the integrated probability.

11. The method of claim 1, wherein determining the probability of the predicted word comprises determining, using the n-gram language model, the probability of the predicted word based on a plurality of words in the received input.

12. The method of claim 11, wherein the plurality of words comprises a string of recently entered words.

13. The method of claim 1, wherein the received input is typed input.

14. The method of claim 1, wherein the received input is verbal input.

16. The method of claim 1, wherein providing output of the predicted word comprises audible playback of the predicted word.

17. A non-transitory computer-readable storage medium comprising computer-readable instructions, which when executed by one or more processors, cause the one or more processors to: receive input from a user; determine, using an n-gram language model, a probability of a predicted word based on a previously-input word in the received input, wherein the predicted word comprises a stem and a suffix; determine a probability of the suffix being grammatically valid for the stem; determine an integrated probability of the predicted word based on the probability of the predicted word and the probability of the suffix being grammatically valid for the stem; and provide output of the predicted word, based on the integrated probability.

18. A system comprising: one or more processors; memory storing one or more programs, wherein the one or more programs include instructions, which when executed by the one or more processors, cause the one or more processors to: receive input from a user; determine, using an n-gram language model, a probability of a predicted word based on a previously-input word in the received input, wherein the predicted word comprises a stem and a suffix; determine a probability of the suffix being grammatically valid for the stem; determine an integrated probability of the predicted word based on the probability of the predicted word and the probability of the suffix being grammatically valid for the stem; and provide output of the predicted word, based on the integrated probability.

19. The system of claim 18, wherein the stem is associated with a stem category, and wherein determining the probability of the predicted word comprises determining a probability of the suffix based on the stem category and the previously-input
word.

20. The system of claim 19, wherein: the stem is associated with a stem category, and determining the probability of the suffix being grammatically valid for the stem comprises determining a probability of the stem being associated with the
stem category.

21. The system of claim 19, wherein determining the probability of the predicted word using the n-gram language model comprises: determining, using a first n-gram language model, a probability of the stem based on the previously-input word in
the received input; and determining, using a second n-gram language model, a probability of the suffix based on the previously-input word in the received input, wherein the first and the second n-gram language models are different language models.

22. The system of claim 21, wherein the first n-gram language model is a word stem n-gram language model and the second n-gram language model is a word suffix n-gram language model.

23. The system of claim 18, wherein determining the probability of the suffix being grammatically valid for the stem comprises: assigning a probability of zero to the determination if the stem does not belong to a stem category.

24. The system of claim 18, wherein determining the probability of the predicted word comprises determining, using the n-gram language model, the probability of the predicted word based on a plurality of words in the received input.

25. The system of claim 24, wherein the plurality of words comprises a string of recently entered words.

Electronic devices and the ways in which users interact with them are evolving rapidly. Changes in size, shape, input mechanisms, feedback mechanisms, functionality, and the like have introduced new challenges and opportunities relating to how
a user enters information, such as text. Statistical language modeling can play a central role in input prediction and/or recognition, such as keyboard input prediction and speech (or handwriting) recognition. Effective language modeling can thus play
a critical role in the overall quality of an electronic device as perceived by the user.

In some examples, statistical language modeling is used to estimate the probability of occurrence, in the language, of possible strings of n words. Given a vocabulary of interest for an expected domain of use, the probability of occurrence of possible strings of n words can be determined using a word n-gram model, trained to provide the probability of the current word given the n-1 previous words. Training data can be obtained from machine-readable text databases having representative documents in the expected domain.
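To make the n-gram formulation concrete, the following is a minimal sketch of a maximum-likelihood word n-gram model of the kind described above; the class name WordNGramLM, the toy corpus, and the absence of smoothing are illustrative assumptions rather than details from the disclosure.

```python
from collections import defaultdict

class WordNGramLM:
    """Minimal maximum-likelihood word n-gram model (no smoothing)."""

    def __init__(self, n=3):
        self.n = n
        self.context_counts = defaultdict(int)  # counts of (n-1)-word histories
        self.ngram_counts = defaultdict(int)    # counts of full n-grams

    def train(self, sentences):
        for sentence in sentences:
            words = ["<s>"] * (self.n - 1) + sentence.split() + ["</s>"]
            for i in range(len(words) - self.n + 1):
                context = tuple(words[i:i + self.n - 1])
                self.context_counts[context] += 1
                self.ngram_counts[context + (words[i + self.n - 1],)] += 1

    def prob(self, word, history):
        """Pr(word | last n-1 words of history)."""
        context = tuple(history[-(self.n - 1):])
        if self.context_counts[context] == 0:
            return 0.0
        return self.ngram_counts[context + (word,)] / self.context_counts[context]

lm = WordNGramLM(n=3)
lm.train(["he talked fast", "he talked slowly"])
print(lm.prob("fast", ["he", "talked"]))  # 0.5 on this toy corpus
```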

Due to the finite size of such databases, however, many n-word strings are observed infrequently, yielding unreliable prediction results for all but the smallest values of n. Relatedly, sometimes it is cumbersome or impractical to
gather a sufficiently large amount of training data. Further, the sizes of resulting language models may exceed what can reasonably be deployed onto portable electronic devices. Though it is possible to prune training data sets and/or n-gram language
models to an acceptable size, pruned models tend to have reduced predictive power. Grammatically incorrect predictions are particularly problematic, as bad predictions are often more distracting than the lack of a prediction.

SUMMARY

A compact and robust language model that can provide accurate input prediction and/or input recognition is desirable. Systems and processes are disclosed for predicting words using decoupled stem and suffix language models, and further
constraining the predicted word stem and suffix using a categorical stem and suffix language model, thereby limiting word predictions to grammatically valid stem and suffix combinations.

In some embodiments, input is received from a user. Using an n-gram word language model (e.g., an n-gram stem language model in combination with an n-gram suffix language model), the probability of a predicted word is determined based on a
previously-input word in the received input. The predicted word contains a predicted stem and a predicted suffix. Using a categorical stem and suffix language model, the probability that the predicted suffix is grammatically valid when conjoined with
the predicted stem is determined. An integrated probability of the predicted word is determined based on the probabilities produced by the stem language model, suffix language model, and the categorical stem and suffix language model. One or more
candidate words--for example, the most probable word, out of multiple predicted words, based on integrated probabilities--are determined. The candidate word(s) may be displayed and/or played back. A graphical user interface can allow the user to select a candidate word without having to manually input the entire word. In this way, the efficiency of the man-machine interaction and the user's overall experience are improved.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 illustrates an exemplary system for constraining word predictions based on a categorical stem and suffix language model.

FIG. 2 illustrates an exemplary process for constraining word predictions based on a categorical stem and suffix language model.

FIG. 3 illustrates a functional block diagram of an electronic device configured to constrain word predictions based on a categorical stem and suffix language model.

DETAILED DESCRIPTION

In the following description, reference is made to the accompanying drawings, in which are shown, by way of illustration, specific examples that can be practiced. It is to be understood that other examples can be used and structural changes can be made without departing from the scope of the various examples.

It is useful for an electronic device to provide predictive text input based on input already entered by a user. For example, as a user enters text into a draft e-mail message, the electronic device may suggest possible next words for user
selection to reduce the amount of manual typing that is needed. Based on the user's previous input, the electronic device can calculate possible next words using word n-gram language models, and determine probabilities of different possible next words.
One or more of the possible next words--such as a subset having the highest prediction probabilities--can be displayed on-screen for user selection. In this way, the electronic device can permit user entry of one or more words without requiring the user to manually enter each character of each word. In order to be truly helpful, however, word predictions need to be grammatically valid.

The occurrence of word inflection raises certain challenges in the context of word predictions using word n-gram language models. Word inflection refers to the modifying of words to encode grammatical information such as tense, number, gender, and so forth. For example, English inflects regular verbs for past tense using the suffix "_ed" (as in "talk" → "talked"). Other languages can exhibit higher levels of word inflection: Romance languages such as French have more overt inflection due to complex verb conjugation and gender declension. Agglutinative languages such as Finnish have even higher levels of inflection, as a separate inflected form may be needed for each grammatical category.

In n-gram language modeling, word inflection generally increases the size of the underlying vocabulary needed for word prediction, as each inflected form of a word (e.g., "talks", "talked", "talking") can be thought of as its own word by the
language model. This increase in vocabulary leads to attendant problems such as difficulties in obtaining sufficient training data and resulting language models that are larger than ideal for deployment onto portable electronic devices. For these
reasons, a brute force approach to handling word inflections, while possible, is not desirable.

Attention is now directed to the possibility of breaking words into stem and suffix forms, and training decoupled language models on the resulting stem data and suffix data for purposes of n-gram language modeling. In general, an inflected word can be broken into a stem and a suffix, and one language model (a "stem LM") can be trained on the stem and suffix data expurgated of all suffixes, while another language model (a "suffix LM") can be trained on the stem and suffix data expurgated of all stems.

Consider the sentence "he talked fast": a stem LM can be trained based on (among others) the trigram of ("he", "talk_", and "fast"), and a suffix LM can be trained based on (among others) the trigram of ("he", "_ed", and "fast"). Under this approach, it is possible to predict a sentence like "he arrived fast" even if the stem and suffix language models have not been previously trained on this particular 3-word string, so long as a related string was observed, such as "he arrives fast". This ability to, in effect, substitute one stem for another (or, equivalently, one suffix for another) produces robust predictions while requiring feasible amounts of training data, and translates into language models suitable for deployment, particularly in terms of size.
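As a rough illustration of this decoupling, the sketch below splits each training sentence into a stem stream and a suffix stream (uninflected words pass through unchanged, matching the trigram example above) and trains one n-gram model on each, reusing the WordNGramLM sketch from the background discussion. The fixed suffix list and the underscore markers are illustrative assumptions; the disclosure does not prescribe a particular morphological analyzer.

```python
SUFFIXES = ("ing", "ed", "es", "s")  # illustrative English suffix inventory

def split_word(word):
    """Split a word into (stem, suffix); returns (word, None) if uninflected."""
    for suf in SUFFIXES:
        if word.endswith(suf) and len(word) > len(suf) + 1:
            return word[:-len(suf)] + "_", "_" + suf
    return word, None

def decouple(sentence):
    """Produce the suffix-expurgated and stem-expurgated versions of a sentence."""
    stems, suffixes = [], []
    for word in sentence.split():
        stem, suffix = split_word(word)
        stems.append(stem)
        suffixes.append(suffix if suffix else word)  # uninflected words pass through
    return " ".join(stems), " ".join(suffixes)

print(decouple("he talked fast"))  # ('he talk_ fast', 'he _ed fast')

stem_lm, suffix_lm = WordNGramLM(n=3), WordNGramLM(n=3)
for sentence in ["he talked fast", "he arrives fast"]:
    stem_text, suffix_text = decouple(sentence)
    stem_lm.train([stem_text])
    suffix_lm.train([suffix_text])
```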

However, under this approach involving decoupled stem and suffix language models, it is still possible to predict a sentence like "he speaked fast", given a prior observation such as "he speaks fast". This prediction is undesirable as the word
"speaked" is grammatically incorrect; a more grammatically proper prediction would have been "he spoke fast". To address this issue of spurious predictions of inflected words, a categorical stem and suffix n-gram language model can be devised to enforce
the necessary stem-to-suffix constraints during word prediction.

FIG. 1 illustrates exemplary system 100 for predicting words using a categorical stem and suffix n-gram language model component. Exemplary system 100 includes user device 102 (or multiple user devices 102) that can provide a user input
interface or environment. User device 102 can include any of a variety of devices, such as a cellular telephone (e.g., smartphone), tablet computer, laptop computer, desktop computer, portable media player, wearable digital device (e.g., digital
glasses, wristband, wristwatch, brooch, armbands, etc.), television, set top box (e.g., cable box, video player, video streaming device, etc.), gaming system, or the like. User device 102 can have display 116. Display 116 can be any of a variety of
displays, and can also include a touchscreen, buttons, or other interactive elements. In some examples, display 116 is incorporated within user device 102 (e.g., as in a touchscreen, integrated display, etc.). In some examples, display 116 is external
to--but communicatively coupled to--user device 102 (e.g., as in a television, external monitor, projector, etc.).

User device 102 can include or be communicatively coupled to keyboard 118, which can capture user-entered text (e.g., characters, words, symbols, etc.). Keyboard 118 can include any of a variety of text entry mechanisms and devices, such as a
stand-alone external keyboard, a virtual keyboard, a remote control keyboard, a handwriting recognition system, or the like. For example, keyboard 118 can be a virtual keyboard on a touchscreen capable of receiving text entry from a user (e.g.,
detecting character selections from touch). In another example, keyboard 118 can be a virtual keyboard shown on a display (e.g., display 116), and a pointer or other indicator is used to indicate character selection (e.g., indicating character selection
using a mouse, remote control, pointer, button, gesture, eye tracker, etc.). In yet another example, keyboard 118 can include a touch sensitive device capable of recognizing handwritten characters. In still other examples, keyboard 118 can include
other mechanisms and devices capable of receiving text entry from a user.

User device 102 can also include processor 104, which can receive text entry from a user (e.g., from keyboard 118) and interact with other elements of user device 102 as shown. In one example, processor 104 can be configured to perform any of
the methods discussed herein, such as predicting words using a categorical stem and suffix n-gram language model. In other examples, processor 104 can cause data (e.g., entered text, user data, etc.) to be transmitted to server system 122 through
network 120. Network 120 can include any of a variety of networks, such as a cellular telephone network, WiFi network, wide area network, local area network, the Internet, or the like. Server system 122 can include a server, storage devices, databases, and the like, and can be used in conjunction with processor 104 to perform any of the methods discussed herein. For example, processor 104 can cause an interface to be provided to a user for text entry, can receive entered text, can transmit some or all of the entered text to server system 122, and can cause predicted words to be displayed on display 116.

In some examples, user device 102 can include storage device 106, memory 108, word stem n-gram language model 110, word suffix n-gram language model 112, and stem category database 114. In some examples, language models 110 and 112, and
database 114 are stored on storage device 106, and can be used to predict words and determine probabilities according to the methods discussed herein. Language models 110 and 112 can be trained on any of a variety of text data, and can include
domain-specific models for use in particular applications, as will be appreciated by one of ordinary skill in the art.

The functions or methods discussed herein can be performed by a system similar or identical to system 100. It should be appreciated that system 100 can include instructions stored in a non-transitory computer readable storage medium, such as
memory 108 or storage device 106, and executed by processor 104. The instructions can also be stored and/or transported within any non-transitory computer readable storage medium for use by or in connection with an instruction execution system,
apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In the context of this
document, a "non-transitory computer readable storage medium" can be any medium that can contain or store the program for use by or in connection with the instruction execution system, apparatus, or device. The non-transitory computer readable storage
medium can include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, a portable computer diskette (magnetic), a read-only memory (ROM), an erasable programmable read-only
memory (EPROM), a portable optical disc such as CD, CD-R, CD-RW, DVD, DVD-R, or DVD-RW, or flash memory such as compact flash cards, Secure Digital cards, USB memory devices, memory sticks, and the like.

It should be understood that system 100 is not limited to the components and configuration of FIG. 1, but can include other or additional components in multiple configurations according to various examples. For example, user device 102 can
include a variety of other mechanisms for receiving input from a user, such as a microphone, optical sensor, camera, gesture recognition sensor, proximity sensor, ambient light sensor, or the like. Additionally, the components of system 100 can be
included within a single device, or can be distributed among multiple devices. For example, although FIG. 1 illustrates language models 110 and 112, and stem category database 114, as part of user device 102, it should be appreciated that, in other
examples, the functions of processor 104 can be performed by server system 122, and/or one or more of entities 110, 112, and 114 can be stored remotely as part of server system 122 (e.g., in a remote storage device). In still other examples, language
models and other data can be distributed across multiple storage devices, and many other variations of system 100 are also possible.

At block 202 of process 200, input is received from a user. The input can be received in any of a variety of ways, such as from keyboard 118 in system 100 (FIG. 1) discussed above. The input also can be voice input received through a
microphone or a touchscreen of system 100 (FIG. 1). The input can include a single typed character, such as a letter or symbol. The typed input can also include a string of characters, a word, multiple words, multiple sentences, or the like. The input
received at block 202 can be directed to various types of interfaces or environments on an electronic device. For example, such an interface could be configured for typing text messages, emails, web addresses, documents, presentations, search queries,
media selections, commands, form data, calendar entries, notes, or the like.

The input received at block 202 is used to predict a word. In some embodiments, the input is used to predict one or more of: a subsequent word likely to be entered following previously-entered words; the likely completion of a partially-entered
word; and/or a group of words likely to be entered following previously-entered words.

Previously-entered characters or words can be considered as observed context that can be used to make predictions. For reference, let:

$$W_{q-n+1}^{q} = w_{q-n+1} w_{q-n+2} \ldots w_{q-1} w_q \qquad (1)$$

denote the string of n words relevant to the prediction of the current word $w_q$, and assume that $w_q$ can be decomposed into a stem $s_q$ and a suffix $f_q$. The n words may be one or more words in the received input.

At block 204, the probability of a current word $w_q$ is determined, using a word n-gram language model, based on the available word history (e.g., $W_{q-n+1}^{q}$). As one of ordinary skill would appreciate, some morpheme-based word n-gram language models compute the probability of a current word $w_q$ as follows:

$$\Pr(w_q \mid W_{q-n+1}^{q-1}) = \Pr(f_q \mid s_q W_{q-n+2}^{q-1}) \Pr(s_q \mid W_{q-n+1}^{q-1}) \qquad (2)$$

where $W_{q-n+1}^{q-1}$ refers to the relevant string of n-1 words used for stem prediction, and $W_{q-n+2}^{q-1}$ refers to the truncated-by-one (e.g., q-n+2 instead of q-n+1) history used for suffix prediction. The overall prediction of $w_q$ is thus a joint prediction of a stem $s_q$ and a suffix $f_q$.

In contrast to the standard morpheme-based prediction model, in some embodiments, the n-gram language model has a stem LM (e.g., language model 110 in FIG. 1) and a suffix LM (e.g., language model 112 in FIG. 1) decoupled from the stem LM. In these embodiments, while the stem prediction remains $\Pr(s_q \mid W_{q-n+1}^{q-1})$, the suffix model becomes $\Pr(f_q \mid C\,W_{q-n+1}^{q-1})$, where C denotes a generic stem category accounted for in the suffix language model. The probability of the current word $w_q$ thus can be computed as:

$$\Pr(w_q \mid W_{q-n+1}^{q-1}) = \Pr(f_q \mid C\,W_{q-n+1}^{q-1}) \Pr(s_q \mid W_{q-n+1}^{q-1}) \qquad (3)$$

Because a stem is always present before a suffix, the generic category C has no impact on the word history that is available for suffix prediction. As such, the scope of conditioning is identical for both the stem and the suffix language models, e.g., both are based upon $W_{q-n+1}^{q-1}$. Accordingly, suffix prediction in Equation (3) no longer relies on a truncated-by-one word history, thereby leading to more robust word predictions.

Despite its increased robustness, Equation (3) can still generate spurious linguistic events because stem and suffix consistency is not guaranteed, meaning it is possible for Equation (3) to predict "he speaked fast" given the training
observation "he speaks fast".

To further enhance the n-gram language model, in some embodiments, the n-gram language model constrains stem and suffix predictions by accounting for stem categories and association of those stem categories with suffixes deemed grammatically
valid for conjunction with stems of particular categories. For example, French has (among others) regular verbs ending in "_er" and "_ir". An "_er" stem category and an "_ir" stem category can be defined. The categories can then be associated with the
set of inflectional morphemes called for by the particular stem category. For example, the stem category of "_er" may be associated with a list of suffixes that begin with the letter "_e".

In some embodiments, stem categorization is based on the type of verb, such as whether a verb is a regular or irregular verb. In some embodiments, some types of verbs are not predicted using stem and suffix constraints. For example, irregular
verbs can be predicted using alternate language models while stem categorization is performed for regular verbs. In some embodiments, stem categorization is based on stem spelling. For example, stem categorization in French can be based on stem spelling, particularly the consecutive characters at the end of the stem (e.g., "_ir").

The defining of stem categories and the association of suffixes with defined stem categories are deterministic, and can be performed a priori, before prediction time. For example, in French an "_er" stem category may be defined and associated with suffixes beginning with "_e". The context in which an "_er" verb is used in French need not alter the underlying constraint. As the associated stem category for a particular word may be independent of the context in which the word appears, the association may be created a priori in the underlying categorical stem and suffix n-gram language model.
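A sketch of what these a priori tables might look like for the French example is given below. The category names, the suffix lists, the example stems, and the uniform split of Pr(C_k | s_q) across a stem's categories are all illustrative assumptions; the disclosure only requires that category membership and the category-to-suffix association be fixed before prediction time.

```python
# Suffixes deemed grammatically valid for each stem category
# (illustrative French fragment, not an exhaustive conjugation table).
CATEGORY_SUFFIXES = {
    "er_verb": {"_e", "_es", "_ent", "_er", "_ez"},
    "ir_verb": {"_is", "_it", "_issons", "_ir"},
}

# A priori stem-category membership, e.g., derived from stem spelling.
STEM_CATEGORIES = {
    "parl_": {"er_verb"},  # parler: "_er" regular verb
    "fin_": {"ir_verb"},   # finir:  "_ir" regular verb
}

def category_prob(category, stem):
    """Pr(C_k | s_q): uniform over the stem's categories, zero if not a member."""
    cats = STEM_CATEGORIES.get(stem, set())
    return 1.0 / len(cats) if category in cats else 0.0

def suffix_valid(category, suffix):
    """True if the suffix is associated with the stem category."""
    return suffix in CATEGORY_SUFFIXES.get(category, set())

print(category_prob("er_verb", "parl_"))  # 1.0
print(category_prob("ir_verb", "parl_"))  # 0.0
```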

For reference, let stem categories be defined as $\{C_k\}$, $1 \le k \le K$, where K represents the total number of stem categories accounted for in the language model. The probability of the current word $w_q$ based on previous user input, accounting for categorical stem and suffix constraints, is computed as:

$$\Pr(w_q \mid W_{q-n+1}^{q-1}) = \sum_{k=1}^{K} \Pr(f_q \mid C_k\,W_{q-n+1}^{q-1}) \Pr(C_k \mid s_q) \Pr(s_q \mid W_{q-n+1}^{q-1}) \qquad (4)$$

The derivation of Equation (4)--particularly in the last step--takes advantage of the fact that conditioning on the stem category in $\Pr(f_q \mid C_k s_q W_{q-n+1}^{q-1})$ subsumes conditioning on the actual stem. Thus, no approximation is needed in the derivation of Equation (4). Notably, although Equation (4) resembles Equation (3), closer inspection of Equation (4) reveals that the underlying language modeling considers multiple categories $C_k$ to enforce stem and suffix consistency in word predictions.

Referring again to block 204 of process 200 (FIG. 2), in embodiments utilizing Equation (4) (or a similar probability function), the probability of a current word $w_q$ is determined by, among other things, determining the probability of stem $s_q$, $\Pr(s_q \mid W_{q-n+1}^{q-1})$, and the probability of suffix $f_q$ in view of stem category $C_k$, $\Pr(f_q \mid C_k W_{q-n+1}^{q-1})$, for $\{C_k\}$, $1 \le k \le K$, where K represents the total number of stem categories accounted for in the language model.

At block 206, the probability of predicted suffix $f_q$ being grammatically valid for a predicted stem $s_q$ is determined. In some embodiments this probability is determined based on $\Pr(C_k \mid s_q)$ as shown in Equation (4) or a similar probability function. $\Pr(C_k \mid s_q)$ provides the probability of stem $s_q$ corresponding to a stem category $C_k$. Notably, the expression $\Pr(C_k \mid s_q)$ produces a zero value if the predicted stem $s_q$ is not associated with stem category $C_k$. This effect of $\Pr(C_k \mid s_q)$ is further discussed with respect to block 208, below.

At block 208, an integrated (e.g., joint) probability of the predictions from blocks 204 and 206 is determined. In some embodiments this integrated probability is based on the integrated probability produced from Equation (4) or a similar probability function. As $\Pr(C_k \mid s_q)$ produces zero if a predicted stem $s_q$ is not associated with stem category $C_k$, the inclusion of $\Pr(C_k \mid s_q)$ in Equation (4) effectively zeroes out the probability of $w_q$ having suffix $f_q$--even if $\Pr(f_q \mid C_k W_{q-n+1}^{q-1})$ provides a positive probability--where the predicted stem $s_q$ is not associated with stem category $C_k$. Further, the integrated probability may include a summation of probabilities for each k where $1 \le k \le K$, as a given predicted stem $s_q$ (and suffix $f_q$) can be associated with more than one stem category $C_k$. In this way, the joint probability constrains possible word predictions $w_q$ to those having non-zero $\Pr(C_k \mid s_q)$ for at least one $C_k$ where $1 \le k \le K$. For example, a word prediction $w_q$ (comprising stem $s_q$ and suffix $f_q$) is possible where both the probability of stem $s_q$ being associated with stem category $C_k$ and the probability of suffix $f_q$ in view of $C_k$ are non-zero.
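A sketch of how blocks 204-208 might compose into the integrated probability of Equation (4) follows, building on the WordNGramLM, decouple, and category_prob sketches above. Realizing the category-conditioned suffix model $\Pr(f_q \mid C_k W_{q-n+1}^{q-1})$ as one suffix LM per category is an illustrative choice; the disclosure leaves the realization open.

```python
def integrated_prob(stem, suffix, history, stem_lm, suffix_lms, categories):
    """Equation (4): sum over C_k of Pr(f_q|C_k, W) * Pr(C_k|s_q) * Pr(s_q|W).

    suffix_lms maps each category name to its own suffix n-gram model,
    one possible realization of the category-conditioned suffix LM.
    """
    stem_p = stem_lm.prob(stem, history)  # Pr(s_q | W), block 204
    total = 0.0
    for cat in categories:
        cat_p = category_prob(cat, stem)  # Pr(C_k | s_q), block 206
        if cat_p == 0.0:
            continue  # zeroes out grammatically invalid stem/suffix pairings
        suffix_p = suffix_lms[cat].prob(suffix, history)  # Pr(f_q | C_k, W)
        total += suffix_p * cat_p * stem_p  # block 208: joint term for this C_k
    return total
```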

At block 210, an output of the predicted word is provided, based on the integrated probability determined at block 208. In some embodiments, an output predicted word has a non-zero integrated probability as determined at block 208. In some embodiments, block 210 outputs the one or more predicted words having the highest integrated probabilities among the candidate words. In some embodiments, block 210 determines whether the integrated probability for any predicted word $w_q$ exceeds a predetermined threshold probability value. In these embodiments, block 210 may output a predicted word $w_q$ if its probability exceeds the threshold, and may forgo output of predicted word(s) if no predicted word $w_q$ exceeds the predetermined threshold. When this is the case, process 200 can return to block 202 to await further input from a user. Blocks 202, 204, 206, and 208 can be repeated with the addition of each new word entered by a user, and a determination can be made for each new word whether a predicted word should be displayed based on newly determined integrated probabilities of candidate words.
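Continuing the sketch, the thresholded output of block 210 might look as follows; the threshold value, the candidate enumeration, and the way a stem and suffix are re-joined into a surface word are all illustrative assumptions.

```python
THRESHOLD = 0.05  # predetermined threshold; the value here is purely illustrative

def predict(history, candidates, stem_lm, suffix_lms, categories):
    """Score candidate (stem, suffix) pairs; return surface words over the threshold, best first."""
    scored = []
    for stem, suffix in candidates:
        p = integrated_prob(stem, suffix, history, stem_lm, suffix_lms, categories)
        if p > THRESHOLD:
            word = stem.rstrip("_") + suffix.lstrip("_")  # e.g., "parl_" + "_e" -> "parle"
            scored.append((word, p))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)  # [] means forgo output
```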

The outputting of the predicted word can include displaying the one or more predicted words. In some embodiments, the outputting of a predicted word includes displaying a user-selectable affordance representing the predicted word, such that the word can be selected by the user without the user having to individually and completely enter all the characters of the word. The outputting of the predicted word may include playback of the one or more predicted words. In some embodiments, outputting a predicted word includes passing the predicted word to an input recognition sub-routine (e.g., a handwriting recognition or voice recognition sub-routine) such that further output can be provided to the user by the downstream sub-routine. For example, a handwriting recognition sub-routine can display an image of the predicted word that resembles handwriting, based on the word prediction. For example, a voice recognition sub-routine can provide a speech-to-text and/or speech-to-speech output, based on the word prediction. The audio output may be determined with the assistance of a voice-based assistant, such as Siri® by Apple Inc. of Cupertino, Calif.

The above-described approach to predicting words, particularly inflected words, combines the benefits of using decoupled stem and suffix language models (e.g., improved size and accuracy) while reducing ungrammatical word predictions based on
categorical stem and suffix constraints (e.g., avoiding spurious predictions such as "he speaked fast"). An electronic device employing these techniques for predicting words can permit user input without requiring the user to individually and manually
enter each character and/or word associated with an input string, while limiting the occurrence of spurious predictions. In this way, the efficiency of the man-machine interaction and the user's overall experience with the electronic device are both improved.

FIG. 3 shows a functional block diagram of exemplary electronic device 300 configured in accordance with the principles of the various described examples. The functional blocks of the device can be implemented by hardware, software, or a
combination of hardware and software to carry out the principles of the various described examples, including those described with reference to process 200 of FIG. 2. It is understood by persons of skill in the art that the functional blocks described
in FIG. 3 can be combined or separated into sub-blocks to implement the principles of the various described examples. Therefore, the description herein optionally supports any possible combination or separation or further definition of the functional
blocks described herein.

Processing unit 306 can be configured to receive input from a user (e.g., from input receiving unit 304). Predicted word determining unit 308 can be configured to determine, using n-gram language models, the probability of a predicted word
(having a stem and suffix) based on one or more previously entered words in the typed input. Stem category unit 310 can be configured to aid predicted word determining unit 308 in determining the probability of a predicted suffix being grammatically
valid with a predicted stem. Integrated probability determining unit 312 can be configured to determine a joint probability of the predicted word based on the probability of the predicted stem, the probability of the predicted suffix, and the
probability of the predicted suffix being grammatically valid for the predicted stem. Processing unit 306 can be further configured to cause the predicted word to be displayed (e.g., using display unit 302) based on the integrated probability.

Processing unit 306 can be further configured to determine (e.g., using predicted word determining unit 308) the probability of the predicted word based on a plurality of words in the typed input. In some examples, the plurality of words
comprises a string of recently entered words. For example, recently entered words can include words entered in a current input session (e.g., in a current text message, a current email, a current document, etc.). For predicting words, the recently
entered words can include the last n words entered (e.g., the last three words, the last four words, the last five words, or any other number of words).

Although examples have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art (e.g., modifying any of the systems or processes
discussed herein according to the concepts described in relation to any other system or process discussed herein). Such changes and modifications are to be understood as being included within the scope of the various examples as defined by the appended
claims.